
    Capturing User Interests for Content-based Recommendations

    Nowadays, most recommender systems provide recommendations either by exploiting feedback given by similar users, referred to as collaborative filtering, or by identifying items with similar properties, referred to as content-based recommendation. Focusing on the latter, this keynote presents various examples and case studies that illustrate both the strengths and the weaknesses of content-based recommendation.

    Interactive Video Search

    With an increasing amount of video data in our daily life, the need for content-based search in videos increases as well. Although much research effort has been devoted to video retrieval tools and methods that allow automatic search in videos through content-based queries, the performance of automatic video retrieval is still far from optimal. In this tutorial we discussed (i) proposed solutions for improved video content navigation, (ii) typical interaction with content-based querying features, and (iii) advanced video content visualization methods. Moreover, we discussed interactive video search systems and ways to evaluate their performance.

    Beyond single-shot text queries: bridging the gap(s) between research communities

    This workshop brings together researchers from different streams and communities that deal with information access in the widest sense. The general goal is to foster collaboration between these communities and to showcase research that sits at the boundaries between different areas of research.

    Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos

    When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher-level representations based on these low-level features. In this work, we propose to use deep learning methods, in particular convolutional neural networks (CNNs), in order to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense trajectory based motion features in order to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher-level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
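
    As a rough illustration of the multi-modal late-fusion step described above, the sketch below trains one probabilistic multi-class SVM per modality and averages the per-class probabilities before picking a quadrant. It is a minimal sketch only: the feature matrices, dimensions, and the averaging scheme are hypothetical stand-ins (the paper's exact features, fusion mechanisms, and hyper-parameters are not reproduced here), and scikit-learn is used purely for convenience.

        import numpy as np
        from sklearn.svm import SVC

        # Hypothetical mid-level features per modality (e.g. CNN codes learned
        # from MFCCs and HSV frames) plus dense-trajectory motion features.
        rng = np.random.default_rng(0)
        X_audio = rng.random((200, 64))
        X_visual = rng.random((200, 64))
        X_motion = rng.random((200, 128))
        y = rng.integers(0, 4, size=200)  # four Valence-Arousal quadrants

        def train(X, y):
            # One probabilistic multi-class SVM per modality.
            return SVC(kernel="rbf", probability=True).fit(X, y)

        models = [train(X, y) for X in (X_audio, X_visual, X_motion)]

        def late_fusion(models, Xs):
            # Average per-class probability scores across modalities
            # (one simple fusion mechanism among several possible ones).
            probs = np.mean([m.predict_proba(X) for m, X in zip(models, Xs)], axis=0)
            return probs.argmax(axis=1)

        pred = late_fusion(models, (X_audio, X_visual, X_motion))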

    Personalised video retrieval: application of implicit feedback and semantic user profiles

    A challenging problem in the user profiling domain is to create profiles of users of retrieval systems. This problem is exacerbated even further in the multimedia domain. Due to the Semantic Gap, the difference between the low-level data representation of videos and the higher-level concepts users associate with them, it is not trivial to understand the content of multimedia documents and to find other documents that the users might be interested in. A promising approach to easing this problem is to set multimedia documents into their semantic contexts. The semantic context can lead to a better understanding of personal interests. Knowing the context of a video is useful for recommending videos to users that match their information need. By exploiting these contexts, videos can also be linked to other, contextually related videos. From a user profiling point of view, these links can be of high value for recommending semantically related videos, hence creating a semantic-based user profile. This thesis introduces a semantic user profiling approach for news video retrieval, which exploits a generic ontology to put news stories into their context.

    Major challenges which inhibit the creation of such semantic user profiles are the identification of users' long-term interests and the adaptation of retrieval results based on these personal interests. Most personalisation services rely on users explicitly specifying preferences, a common approach in the text retrieval domain. By giving explicit feedback, users are forced to update their need, which can be problematic when their information need is vague. Furthermore, users tend not to provide enough feedback on which to base an adaptive retrieval algorithm. Deviating from the method of explicitly asking the user to rate the relevance of retrieval results, implicit feedback techniques help by learning user interests unobtrusively. The main advantage is that users are relieved from providing feedback; a disadvantage is that information gathered using implicit techniques is less accurate than information based on explicit feedback.

    In this thesis, we focus on three main research questions. First, we study whether implicit relevance feedback, which is provided while interacting with a video retrieval system, can be employed to bridge the Semantic Gap. We therefore first identify implicit indicators of relevance by analysing representative video retrieval interfaces. Studying whether these indicators can be exploited as implicit feedback within short retrieval sessions, we recommend video documents based on implicit actions performed by a community of users. Secondly, implicit relevance feedback is studied as a potential source for building user profiles and hence for identifying users' long-term interests in specific topics. This includes studying the identification of different aspects of interest and storing these interests in dynamic user profiles. Finally, we study how this feedback can be exploited to adapt retrieval results or to recommend related videos that match the users' interests. We analyse our research questions by performing both simulation-based and user-centred evaluation studies. The results suggest that implicit relevance feedback can be employed in the video domain and that semantic-based user profiles have the potential to improve video exploration.
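
    As a toy illustration of how implicit actions could be aggregated into a long-term topical profile, the sketch below assumes a fixed mapping from interface actions to relevance weights. The action names and weights are purely illustrative assumptions, not the indicators identified or evaluated in the thesis.

        from collections import defaultdict

        # Hypothetical implicit indicators of relevance and their weights.
        ACTION_WEIGHTS = {
            "play": 1.0,              # playing a video: strong positive signal
            "browse_keyframes": 0.5,  # inspecting keyframes: moderate signal
            "view_metadata": 0.3,     # reading metadata: weak signal
            "skip": -0.5,             # quickly moving on: negative signal
        }

        def update_profile(profile, video_topics, action):
            # Spread the weight of one implicit action over the video's topics.
            weight = ACTION_WEIGHTS.get(action, 0.0)
            for topic in video_topics:
                profile[topic] += weight / len(video_topics)

        profile = defaultdict(float)
        update_profile(profile, ["politics", "economy"], "play")
        update_profile(profile, ["sports"], "skip")

        # Topics ranked by inferred long-term interest.
        interests = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)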

    Increasing Engagement with the Library via Gamification

    One of the main challenges faced by providers of interactive information access systems is to engage users in the use of their systems. The library sector in particular can benefit significantly from increased user engagement. In this short paper, we present a preliminary analysis of a university library system that aims to trigger users' extrinsic motivation in order to increase their interaction with the system. Results suggest that different user groups react in different ways to such 'gamified' systems.

    Third International Workshop on Gamification for Information Retrieval (GamifIR'16)

    Stronger engagement and greater participation are often crucial to reaching a goal or solving an issue, such as the emerging employee engagement crisis, insufficient knowledge sharing, or chronic procrastination. In many cases we need, and search for, tools to beat procrastination or to change people's habits. Gamification is the approach of learning from games, which are often fun, creative, and engaging. In principle, it is about understanding games and applying game design elements in non-gaming environments. This offers possibilities for improvements in a wide range of areas, for example more accurate work, better retention rates, and more cost-effective solutions, since the motivation for participating is more intrinsic than with conventional methods. In the context of Information Retrieval (IR), it is not hard to imagine that many tasks could benefit from gamification techniques. Besides several manual annotation tasks of data sets for IR research, user participation is important in order to gather implicit or even explicit feedback to feed the algorithms. Gamification, however, comes with its own challenges, and its adoption in IR is still in its infancy. Given the enormous response to the first and second GamifIR workshops, both co-located with ECIR, and the broad range of topics discussed, we organized the third workshop at SIGIR 2016 to address a range of emerging challenges and opportunities.

    Real-Time Recommendation of Streamed Data

    This tutorial addressed two trending topics in the field of recommender systems research, namely A/B testing and real-time recommendation of streamed data. Focusing on the news domain, participants learned how to benchmark the performance of stream-based recommendation algorithms both in a live recommender system and in a simulated environment.
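
    For a flavour of the A/B testing setup such a benchmark involves, here is a minimal sketch: a deterministic hash split assigns each user to one of two variants, and click-through rate is tracked per variant. The two toy recommenders, the hashing scheme, and the click bookkeeping are illustrative assumptions, not the tutorial's actual benchmarking framework.

        import hashlib

        def most_recent(stream):
            # Trivial baseline: recommend the newest item in the stream window.
            return stream[-1]

        def most_popular(stream):
            # Trivial baseline: recommend the most frequent item seen so far.
            return max(set(stream), key=stream.count)

        VARIANTS = [most_recent, most_popular]
        clicks = [0, 0]
        impressions = [0, 0]

        def assign_bucket(user_id, n_buckets=2):
            # Deterministic hash split: each user always sees the same variant.
            return int(hashlib.md5(user_id.encode()).hexdigest(), 16) % n_buckets

        def serve(user_id, stream):
            bucket = assign_bucket(user_id)
            impressions[bucket] += 1
            return bucket, VARIANTS[bucket](stream)

        def record_click(bucket):
            clicks[bucket] += 1

        bucket, item = serve("user-42", ["a", "b", "b", "c"])
        record_click(bucket)
        # Click-through rate per variant, the usual online benchmark.
        ctr = [c / i if i else 0.0 for c, i in zip(clicks, impressions)]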

    Detecting complex events in user-generated video using concept classifiers

    Automatic detection of complex events in user-generated video (UGV) is a challenging task due to its new characteristics, which differ from those of broadcast video. In this work, we first summarize the new characteristics of UGV and then explore how concept classifiers can be utilized to recognize complex events in UGV content. The method starts by manually selecting a variety of relevant concepts, followed by constructing classifiers for these concepts. Finally, complex event detectors are learned by using the concatenated probabilistic scores of these concept classifiers as features. Furthermore, we compare three fusion operations over the probabilistic scores, namely Maximum, Average, and Minimum fusion. Experimental results suggest that our method provides promising results. They also show that Maximum fusion tends to give better performance for most complex events.
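
    As a rough sketch of the fusion comparison, the snippet below simulates concept-classifier scores and applies Maximum, Average, and Minimum fusion before learning an event detector on the fused features. Everything here is assumed for illustration: the scores are random, fusing shot-level scores per video is one plausible reading of the fusion step, and logistic regression merely stands in for the paper's actual event detectors.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical probabilistic scores of K concept classifiers over
        # N videos with S shots each.
        N, S, K = 100, 5, 20
        rng = np.random.default_rng(0)
        shot_scores = rng.random((N, S, K))
        y = rng.integers(0, 2, size=N)  # presence of one complex event

        # Fuse shot-level concept scores into one K-dim feature per video.
        fusions = {
            "Maximum": shot_scores.max(axis=1),
            "Average": shot_scores.mean(axis=1),
            "Minimum": shot_scores.min(axis=1),
        }

        # An event detector learned on each set of fused concept-score features.
        for name, X in fusions.items():
            detector = LogisticRegression(max_iter=1000).fit(X, y)
            print(name, detector.score(X, y))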

    A visualization tool for violent scenes detection

    We present a browser-based visualization tool that allows users to explore movies and online videos based on their violence level. The system offers visualizations of annotations and results from the MediaEval 2012 Affect Task, and it can interactively download and analyze content from video hosting sites such as YouTube.