Leveraging Deep Learning Techniques on Collaborative Filtering Recommender Systems
With the exponentially increasing volume of online data, searching for and
finding the required information has become an extensive and time-consuming
task. Recommender Systems, a subclass of information retrieval and decision
support systems, help users access what they need more efficiently by
providing personalized suggestions. Among the different techniques for
building a recommender system, Collaborative Filtering (CF) is the most
popular and widespread approach. However, cold start and data sparsity are
the fundamental challenges in implementing an effective CF-based recommender.
Recent successes in enhancing and implementing deep learning architectures
have motivated many studies to propose deep learning-based solutions to these
weak points. In this research, unlike past surveys of deep learning
architectures in recommender systems, which cover the different techniques
only generally, we specifically provide a comprehensive review of deep
learning-based collaborative filtering recommender systems. This in-depth
filtering gives a clear overview of the level of popularity, the gaps, and
the overlooked areas in leveraging deep learning techniques to build CF-based
systems, the most influential class of recommenders.
Comment: 24 pages, 14 figures
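The survey above centres on Collaborative Filtering. As a minimal, hedged sketch of the classic memory-based CF idea it reviews (not any specific surveyed model), the following predicts a missing rating as a similarity-weighted average of other users' ratings; the toy matrix (with zeros for unrated items) also illustrates the data-sparsity problem the abstract mentions:

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
# Most real matrices are far sparser than this hypothetical example.
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])

def cosine_sim(a, b):
    """Cosine similarity computed over co-rated items only."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(R, user, item):
    """Predict a rating as a similarity-weighted average of the
    ratings that other users gave this item."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        s = cosine_sim(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

print(round(predict(R, user=1, item=1), 2))
```

The cold-start problem the abstract names shows up directly here: a new user or item has no overlapping ratings, so every similarity (and hence the prediction) collapses to zero.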
Sustainable Transparency in Recommender Systems: Bayesian Ranking of Images for Explainability
Recommender Systems have become crucial in the modern world, commonly guiding
users towards relevant content or products, and having a large influence over
the decisions of users and citizens. However, ensuring transparency and user
trust in these systems remains a challenge; personalized explanations have
emerged as a solution, offering justifications for recommendations. Among the
existing approaches for generating personalized explanations, using visual
content created by the users is one particularly promising option, showing a
potential to maximize transparency and user trust. Existing models for
explaining recommendations in this context face limitations: sustainability has
been a critical concern, as they often require substantial computational
resources, leading to significant carbon emissions comparable to those of the
Recommender Systems into which they would be integrated. Moreover, most models
employ surrogate learning goals that do not align with the objective of ranking
the most effective personalized explanations for a given recommendation,
leading to a suboptimal learning process and larger model sizes. To address
these limitations, we present BRIE, a novel model designed to tackle the
existing challenges by adopting a more suitable learning goal based on
Bayesian Pairwise Ranking. This enables it to achieve consistently superior
performance to state-of-the-art models on six real-world datasets, while
exhibiting remarkable efficiency, emitting up to 75% less CO₂ during training
and inference with a model up to 64 times smaller than previous approaches.
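BRIE's learning goal is Bayesian Pairwise Ranking. As an illustrative sketch of that objective in its generic form (BPR over matrix-factorization scores; the sizes, learning rate, and regularization below are hypothetical and not BRIE's actual architecture), one SGD step pushes a user's preferred item above a non-preferred one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 6, 8
U = rng.normal(scale=0.1, size=(n_users, dim))  # user latent factors
V = rng.normal(scale=0.1, size=(n_items, dim))  # item latent factors

def bpr_loss(u, i, j):
    """-log sigmoid(x_ui - x_uj): penalizes ranking the non-preferred
    item j above the preferred item i for user u."""
    x = U[u] @ V[i] - U[u] @ V[j]
    return float(-np.log(1.0 / (1.0 + np.exp(-x))))

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One SGD step on a (user, preferred item, other item) triple."""
    uu, vi, vj = U[u].copy(), V[i].copy(), V[j].copy()
    x = uu @ vi - uu @ vj
    g = 1.0 / (1.0 + np.exp(x))  # sigmoid(-x): magnitude of the loss gradient
    U[u] += lr * (g * (vi - vj) - reg * uu)
    V[i] += lr * (g * uu - reg * vi)
    V[j] += lr * (-g * uu - reg * vj)

# One observed preference: user 0 prefers item 2 over item 5.
before = bpr_loss(0, 2, 5)
for _ in range(100):
    bpr_step(0, 2, 5)
after = bpr_loss(0, 2, 5)
print(after < before)  # the ranking loss shrinks on the trained triple
```

Unlike the surrogate pointwise goals the abstract criticizes, this loss directly optimizes the relative ordering of explanations (or items), which is the quantity actually ranked at serving time.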
Memory-Aware Attentive Control for Community Question Answering With Knowledge-Based Dual Refinement
Exploring research trends with Rexplore
Current systems for exploring scholarly data exhibit a number of shortcomings in their ability to facilitate the identification of research trends and of 'interesting' connections between researchers. To address these issues we have developed Rexplore, a novel system which combines statistics, human-computer interaction, and semantic technologies to support knowledge-based exploration and visualization of scholarly data. In this paper we focus on the functionalities provided by Rexplore for visualizing research trends, and we use as an example research in "Social Networks", which experienced dramatic growth in the years 2000-2010.
Explainable recommender with geometric information bottleneck
Explainable recommender systems can explain their recommendation decisions, enhancing user trust in the systems. Most explainable recommender systems either rely on human-annotated rationales to train models for explanation generation or leverage the attention mechanism to extract important text spans from reviews as explanations. The extracted rationales are often confined to an individual review and may fail to identify the implicit features beyond the review text. To avoid the expensive human annotation process and to generate explanations beyond individual reviews, we propose to incorporate a geometric prior learnt from user-item interactions into a variational network which infers latent factors from user-item reviews. The latent factors from an individual user-item pair can be used for both recommendation and explanation generation, which naturally inherit the global characteristics encoded in the prior knowledge. Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender using the Wasserstein distance, while achieving performance comparable to existing content-based recommender systems in terms of recommendation behaviours.
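The abstract above uses the Wasserstein distance to compare latent distributions. As a minimal, self-contained illustration of the metric itself (not of the paper's model), the Wasserstein-1 distance between two equal-size 1-D empirical samples reduces to matching sorted samples:

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical samples:
    the optimal transport plan simply matches sorted order."""
    a, b = np.sort(a), np.sort(b)
    return float(np.mean(np.abs(a - b)))

# Hypothetical 1-D "latent factor" samples: a slightly shifted
# distribution is closer to the base than a strongly shifted one.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, 1000)
near = rng.normal(0.1, 1.0, 1000)
far = rng.normal(2.0, 1.0, 1000)

print(wasserstein_1d(base, near) < wasserstein_1d(base, far))
```

Unlike KL divergence, this distance stays finite and informative even when the two distributions have little or no overlapping support, which is one common reason to prefer it for comparing learned latent spaces.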
Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March of 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.
GPT-4V(ision) as A Social Media Analysis Engine
Recent research has offered insights into the extraordinary capabilities of
Large Multimodal Models (LMMs) in various general vision and language tasks.
There is growing interest in how LMMs perform in more specialized domains.
Social media content, inherently multimodal, blends text, images, videos, and
sometimes audio. Understanding social multimedia content remains a challenging
problem for contemporary machine learning frameworks. In this paper, we explore
GPT-4V(ision)'s capabilities for social multimedia analysis. We select five
representative tasks, including sentiment analysis, hate speech detection, fake
news identification, demographic inference, and political ideology detection,
to evaluate GPT-4V. Our investigation begins with a preliminary quantitative
analysis for each task using existing benchmark datasets, followed by a careful
review of the results and a selection of qualitative samples that illustrate
GPT-4V's potential in understanding multimodal social media content. GPT-4V
demonstrates remarkable efficacy in these tasks, showcasing strengths such as
joint understanding of image-text pairs, contextual and cultural awareness, and
extensive commonsense knowledge. Despite the overall impressive capacity of
GPT-4V in the social media domain, there remain notable challenges. GPT-4V
struggles with tasks involving multilingual social multimedia comprehension and
has difficulties in generalizing to the latest trends in social media.
Additionally, it exhibits a tendency to generate erroneous information in the
context of evolving celebrity and politician knowledge, reflecting the known
hallucination problem. The insights gleaned from our findings underscore a
promising future for LMMs in enhancing our comprehension of social media
content and its users through the analysis of multimodal information.