10,038 research outputs found
IceBreaker: Solving Cold Start Problem for Video Recommendation Engines
The Internet has brought about a tremendous increase in content of all forms, and video
constitutes the backbone of what is published as well as watched. It thus
becomes imperative for video recommendation engines such as Hulu to find novel
and innovative ways to recommend newly added videos to their users. The problem
with new videos, however, is that they lack the metadata and user interactions
needed to rate them for consumers. To this effect, this paper introduces the
techniques we developed for the Content Based Video Relevance Prediction
(CBVRP) Challenge hosted by Hulu at the ACM Multimedia Conference 2018. We
employ different architectures on the CBVRP dataset to make use of the provided
frame- and video-level features and generate predictions of videos that are
similar to other videos. We also implement several ensemble strategies to
explore the complementarity between the two types of provided features. The
obtained results are encouraging and should push the boundaries of research on
multimedia-based video recommendation systems.
How to combine visual features with tags to improve movie recommendation accuracy?
Previous work has shown the effectiveness of using stylistic visual features, indicative of a movie's style, in content-based movie recommendation. However, it has mainly focused on a particular recommendation scenario, i.e., when a new movie is added to the catalogue and no information is available for that movie (New Item scenario). The stylistic visual features can also be used when other sources of information are available (Existing Item scenario). In this work, we address the second scenario and propose a hybrid technique that exploits not only the typical content available for movies (e.g., tags), but also the stylistic visual content extracted from the movie files, and fuses them by applying a fusion method called Canonical Correlation Analysis (CCA). Our experiments on a large catalogue of 13K movies show very promising results, which indicate a considerable improvement in recommendation quality from a proper fusion of the stylistic visual features with other types of features.
Who is the director of this movie? Automatic style recognition based on shot features
We show how low-level formal features, such as shot duration, i.e., the length
of camera takes, and shot scale, i.e., the distance between the camera and the
subject, are distinctive of a director's style in art movies. So far such
features were thought not to have enough variety to be distinctive of
an author. However, our investigation of the full filmographies of six different
authors (Scorsese, Godard, Tarr, Fellini, Antonioni, and Bergman), a total
of 120 movies analysed second by second, confirms that these
shot-related features do not appear as random patterns in movies from the same
director. For feature extraction we adopt methods based on both conventional
and deep learning techniques. Our findings suggest that sequential feature
patterns, i.e., how features evolve in time, are at least as important as the
related feature distributions. To the best of our knowledge this is the first
study dealing with automatic attribution of movie authorship, which opens up
interesting lines of cross-disciplinary research on the impact of style on the
aesthetic and emotional effects on viewers.
Exploring the Semantic Gap for Movie Recommendations
In recent years, much attention has been given to the semantic gap problem in multimedia retrieval systems. Much effort has been devoted to bridging this gap by building tools for the extraction of high-level, semantics-based features from multimedia content, as low-level features are not considered useful: they primarily represent the perceived content rather than its semantics.
In this paper, we explore a different point of view by leveraging the gap between low-level and high-level features. We experiment with a recent approach for movie recommendation that extracts low-level mise-en-scène features from multimedia content and combines them with high-level features provided by the wisdom of the crowd.
To this end, we first performed an offline performance assessment by implementing a pure content-based recommender system with three different versions of the same algorithm, based on (i) conventional movie attributes, (ii) mise-en-scène features, and (iii) a hybrid method that interleaves recommendations based on movie attributes and mise-en-scène features. In a second study, we designed an empirical study involving 100 subjects and collected data on the quality perceived by users. Results from both studies show that introducing mise-en-scène features in conjunction with traditional movie attributes improves both the offline and online quality of recommendations.
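The interleaved hybrid mentioned in variant (iii) above can be sketched as a simple alternation between two ranked lists. This is one plausible reading of "interleaves", not the authors' code; the function and item names are illustrative:

```python
# Illustrative sketch: alternate items from two rankings, skipping duplicates.
def interleave(ranking_a, ranking_b, k):
    """Merge two ranked lists by alternation, keeping the first k unique items."""
    out, seen = [], set()
    for a, b in zip(ranking_a, ranking_b):
        for item in (a, b):
            if item not in seen:
                seen.add(item)
                out.append(item)
            if len(out) == k:
                return out
    return out

by_attributes = ["m1", "m2", "m3", "m4"]      # e.g., ranked by movie attributes
by_mise_en_scene = ["m9", "m2", "m7", "m5"]   # e.g., ranked by visual features
print(interleave(by_attributes, by_mise_en_scene, 5))
# ['m1', 'm9', 'm2', 'm3', 'm7']
```

Alternation gives both rankers equal exposure in the top-k, which is one simple way a hybrid can balance two feature sources.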
Movies and meaning: from low-level features to mind reading
When dealing with movies, closing the tremendous discontinuity between low-level features and the richness of semantics in viewers' cognitive processes requires a variety of approaches and different perspectives. For instance, when attempting to relate movie content to users' affective
responses, previous work suggests that a direct mapping of audio-visual properties to elicited emotions is difficult, due to the high variability of individual reactions. To reduce the gap between the objective level of features and the subjective sphere of emotions, we exploit the intermediate
representation of the connotative properties of movies: the set of shooting and editing conventions that help transmit meaning to the audience. One of these stylistic features, the shot scale, i.e., the distance of the camera from the subject, effectively regulates theory of mind: increasing
spatial proximity to the character triggers a higher occurrence of mental-state references in viewers' story descriptions. Movies are also becoming an important stimulus in neural decoding, an ambitious line of research within contemporary neuroscience aiming at "mind reading".
In this field we address the challenge of producing decoding models for the reconstruction of perceptual contents by combining fMRI data and deep features in a hybrid model able to predict specific video object classes.
Personalized Video Recommendation Using Rich Contents from Videos
Video recommendation has become an essential way of helping people explore
massive video collections and discover the ones that may interest them. Existing
video recommender systems make recommendations based on user-video interactions
and single, specific content features. When those specific content features are
unavailable, the performance of existing models seriously deteriorates. Inspired
by the fact that rich contents (e.g., text, audio, motion, and so on) exist in
videos, in this paper we explore how to use these rich contents to overcome the
limitations caused by the unavailability of the specific ones. Specifically, we
propose a novel general framework that incorporates an arbitrary single content
feature with user-video interactions, named the collaborative embedding
regression (CER) model, to make effective video recommendations in both
in-matrix and out-of-matrix scenarios. Our extensive experiments on two
real-world large-scale datasets show that CER beats existing recommender models
with any single content feature and is more time-efficient. In addition, we
propose a priority-based late fusion (PRI) method to gain the benefit of
integrating multiple content features. The corresponding experiment shows that
PRI brings real performance improvement over the baseline and outperforms
existing fusion methods.
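One way to picture priority-based late fusion is to merge per-feature score lists in priority order, letting a higher-priority feature's score stand for any item it covers. This is an illustrative guess at the general idea, not the paper's exact PRI algorithm; all names and scores are made up:

```python
# Illustrative sketch: merge per-feature score dicts, highest priority first.
def priority_fuse(score_dicts):
    """score_dicts: list of {item: score}, ordered from highest to lowest priority."""
    fused = {}
    for scores in score_dicts:
        for item, s in scores.items():
            # setdefault keeps the score from the first (highest-priority)
            # feature that covered this item.
            fused.setdefault(item, s)
    # Return items ranked by their fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)

text_scores = {"v1": 0.9, "v2": 0.4}   # e.g., scores from a text-based model
audio_scores = {"v2": 0.8, "v3": 0.6}  # e.g., scores from an audio-based model
print(priority_fuse([text_scores, audio_scores]))
# ['v1', 'v3', 'v2']
```

Because fusion happens on scores after each per-feature model has run, any model can drop out (e.g., when its feature is missing) without retraining the others, which is the usual appeal of late fusion.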
- …