Comparing Neural and Attractiveness-based Visual Features for Artwork Recommendation
Advances in image processing and computer vision in recent years have
enabled the use of visual features in artwork recommendation. Recent
works have shown that visual features obtained from pre-trained deep neural
networks (DNNs) perform very well for recommending digital art. Other recent
works have shown that explicit visual features (EVF) based on attractiveness
can perform well in preference prediction tasks, but no previous work has
compared DNN features versus specific attractiveness-based visual features
(e.g. brightness, texture) in terms of recommendation performance. In this
work, we study and compare the performance of DNN and EVF features for the
purpose of physical artwork recommendation using transactional data from
UGallery, an online store of physical paintings. In addition, we perform an
exploratory analysis to understand whether DNN embedded features are related
to certain EVF. Our results show that DNN features outperform EVF, that
certain EVF are better suited than others for physical artwork recommendation
and, finally, that certain neurons in the DNN might be partially encoding
visual features such as brightness, providing an opportunity for explaining
recommendations based on visual neural models.

Comment: DLRS 2017 workshop, co-located at RecSys 2017
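The attractiveness-based explicit visual features the abstract refers to can be illustrated with a minimal sketch. The paper's exact EVF set is not given here; the brightness and texture formulas below are common, assumed choices:

```python
import numpy as np

def explicit_visual_features(image):
    """Compute two simple attractiveness-based features from an RGB image
    (H x W x 3, values in [0, 1]). These are illustrative stand-ins for
    EVF such as brightness and texture, not the paper's exact definitions."""
    # Brightness: mean luminance, using the Rec. 601 channel weights.
    luminance = (0.299 * image[..., 0]
                 + 0.587 * image[..., 1]
                 + 0.114 * image[..., 2])
    brightness = float(luminance.mean())
    # Texture proxy: spread of local gradient magnitudes over the image.
    gy, gx = np.gradient(luminance)
    texture = float(np.sqrt(gx ** 2 + gy ** 2).std())
    return {"brightness": brightness, "texture": texture}

# Toy check: a flat mid-gray image has brightness 0.5 and no texture.
flat = np.full((8, 8, 3), 0.5)
feats = explicit_visual_features(flat)
```

In contrast, DNN features would be taken from an internal layer of a pre-trained network, which is what makes the abstract's neuron-level comparison against EVF interesting.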
On Popularity Bias of Multimodal-aware Recommender Systems: a Modalities-driven Analysis
Multimodal-aware recommender systems (MRSs) exploit multimodal content (e.g.,
product images or descriptions) as items' side information to improve
recommendation accuracy. While most of such methods rely on factorization
models (e.g., MFBPR) as base architecture, it has been shown that MFBPR may be
affected by popularity bias, meaning that it inherently tends to boost the
recommendation of popular (i.e., short-head) items to the detriment of niche
(i.e., long-tail) items in the catalog. Motivated by this observation, in this
work, we provide one of the first analyses of how multimodality in
recommendation could further amplify popularity bias. Concretely, we evaluate
the performance of four state-of-the-art MRS algorithms (i.e., VBPR, MMGCN,
GRCN, LATTICE) on three datasets from Amazon by assessing, along with
recommendation accuracy metrics, performance measures accounting for the
diversity of recommended items and the portion of retrieved niche items. To
better investigate this aspect, we decide to study the separate influence of
each modality (i.e., visual and textual) on popularity bias in different
evaluation dimensions. Results, which demonstrate how a single modality may
amplify the negative effect of popularity bias, highlight the importance of
a more rigorous analysis of the performance of such models.
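The diversity and niche-item measures the abstract mentions can be sketched with a minimal long-tail analysis. The 20% short-head cutoff below is a common convention, assumed here rather than taken from the paper:

```python
from collections import Counter

def long_tail_metrics(recommendations, train_interactions, head_fraction=0.2):
    """Given per-user recommendation lists and training interactions,
    compute catalog coverage and the share of recommended long-tail items.
    The head_fraction split into short-head vs long-tail is an assumption."""
    # Rank items by training popularity; the top fraction is the short head.
    popularity = Counter(item
                         for items in train_interactions.values()
                         for item in items)
    ranked = [item for item, _ in popularity.most_common()]
    head = set(ranked[: max(1, int(len(ranked) * head_fraction))])
    # Pool all recommended items across users.
    recommended = [item for recs in recommendations.values() for item in recs]
    coverage = len(set(recommended)) / len(ranked)
    tail_share = sum(1 for item in recommended if item not in head) / len(recommended)
    return coverage, tail_share

# Toy example: item "a" is the short head; "b" and "c" are long-tail recs.
recs = {"u1": ["a", "b"], "u2": ["a", "c"]}
train = {"u1": ["a", "b"], "u2": ["a", "c"], "u3": ["a", "d"]}
coverage, tail_share = long_tail_metrics(recs, train)
```

Comparing these measures across visual-only, textual-only, and combined runs is the kind of modality-wise breakdown the abstract describes.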
Reducing the Visual Signature of the M4A1 Rifle
The Maneuver Center of Excellence (MCoE) issued a directive to reduce the visual signature of small arms weapons by altering the color of the M4A1 rifle from its traditional black. This research utilizes the Systems Decision Process (SDP) to develop and analyze alternatives to create a feasible and permanent solution to reduce the weapon’s visual signature. The research consisted of an extensive stakeholder and functional analysis to develop a value model and framework that provides a values-based recommendation. The model establishes an optimal color change process that accounts for the design and performance characteristics of the weapon system and the stakeholders’ values. The research also analyzes the potential integration of short wave infrared (SWIR) mitigation into the new color of the weapon. This analysis will establish a baseline methodology for weapon color change for all Army small arms weapons.
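The value model the SDP produces is typically a weighted additive score over the stakeholder-derived measures. The sketch below shows the general form; the measures, weights, and value functions are hypothetical, not those elicited in the study:

```python
def total_value(alternative, weights, value_functions):
    """Weighted additive value model: sum of weight * single-measure value.
    Weights should sum to 1; value functions map raw scores to [0, 100]."""
    return sum(w * value_functions[m](alternative[m])
               for m, w in weights.items())

# Hypothetical measures for one color-change alternative.
weights = {"visual_signature": 0.5, "durability": 0.3, "cost": 0.2}
value_functions = {
    "visual_signature": lambda x: x,   # already scored 0-100 by analysts
    "durability": lambda x: x,
    "cost": lambda dollars: max(0.0, 100.0 - dollars / 10.0),  # cheaper is better
}
alt = {"visual_signature": 80, "durability": 70, "cost": 200}
score = total_value(alt, weights, value_functions)
```

Scoring every alternative this way and ranking by total value is what yields the values-based recommendation the abstract describes.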
Hierarchical Attention Network for Visually-aware Food Recommendation
Food recommender systems play an important role in helping users identify
the food they want to eat. Deciding what food to eat is a complex and
multi-faceted process, influenced by many factors such as the
ingredients, the appearance of the recipe, the user's personal preference for
food, and various contexts like what was eaten in past meals. In this work,
we formulate the food recommendation problem as predicting user preference
for recipes based on three key factors that determine a user's choice of food,
namely, 1) the user's (and other users') history; 2) the ingredients of a
recipe; and 3) the descriptive image of a recipe. To address this challenging
problem, we develop a dedicated neural network-based solution, Hierarchical
Attention based Food Recommendation (HAFR), which is capable of: 1) capturing
the collaborative filtering effect like what similar users tend to eat; 2)
inferring a user's preference at the ingredient level; and 3) learning user
preference from the recipe's visual images. To evaluate our proposed method, we
construct a large-scale dataset consisting of millions of ratings from
AllRecipes.com. Extensive experiments show that our method outperforms several
competing recommender solutions like Factorization Machine and Visual Bayesian
Personalized Ranking with an average improvement of 12%, offering promising
results in predicting user preference for food. Code and dataset will be
released upon acceptance.
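The ingredient-level preference inference the abstract describes can be illustrated with a generic attention-pooling sketch. This is a minimal stand-in conditioned on a user embedding, not the exact HAFR architecture:

```python
import numpy as np

def attention_pool(ingredient_vecs, user_vec):
    """Soft attention over a recipe's ingredient embeddings, conditioned on
    the user embedding: score each ingredient by its dot product with the
    user, softmax the scores, and return the weighted sum as the recipe's
    user-specific ingredient representation."""
    scores = ingredient_vecs @ user_vec          # one score per ingredient
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    return weights @ ingredient_vecs             # weighted sum, shape (dim,)

# Toy example: 5 ingredients with 8-dimensional embeddings.
rng = np.random.default_rng(0)
ingredients = rng.normal(size=(5, 8))
user = rng.normal(size=8)
pooled = attention_pool(ingredients, user)
```

Stacking such attention layers, with a second level attending over the pooled ingredient, image, and history representations, is what would make the network hierarchical in the sense the title suggests.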