
    Video summarisation: A conceptual framework and survey of the state of the art

    Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation, derived from the research literature and used as a means of surveying it. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream into a summary) and video summaries (the outputs of those techniques). Video summarisation techniques fall into three broad categories: internal (analysing information sourced directly from the video stream), external (analysing information not sourced directly from the video stream) and hybrid (analysing a combination of internal and external information). Video summaries are characterised by the type of content they are derived from (object, event, perception or feature based) and by the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly unobtrusively sourced user-based information, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries with greater relevance to individual users.
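
    To make the "internal" category of the taxonomy concrete, here is a minimal sketch, assuming OpenCV, of one classic internal technique: keyframe selection by inter-frame colour-histogram difference. The threshold and bin count are illustrative choices, not values from the paper.

```python
# A hedged sketch of an "internal" summarisation technique: keyframes are
# selected whenever the colour histogram drifts far from the last keyframe.
# threshold=0.4 and bins=32 are illustrative assumptions.
import cv2
import numpy as np

def keyframes(video_path: str, threshold: float = 0.4, bins: int = 32):
    """Yield (frame_index, frame) pairs whose histogram differs
    markedly from the previously selected keyframe."""
    cap = cv2.VideoCapture(video_path)
    prev_hist = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [bins] * 3, [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is None or np.linalg.norm(hist - prev_hist, ord=1) > threshold:
            prev_hist = hist
            yield index, frame
        index += 1
    cap.release()
```

    An "external" or "hybrid" technique would additionally weight candidate frames by information from outside the stream, such as user viewing behaviour.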

    A Dataset for Movie Description

    Descriptive video service (DVS) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed DVS that is temporally aligned to full-length HD movies. In addition, we also collected the aligned movie scripts, which have been used in prior work, and compare the two sources of descriptions. In total, the Movie Description dataset contains a parallel corpus of over 54,000 sentences and video snippets from 72 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing DVS to scripts, we find that DVS is far more visual and describes precisely what is shown rather than what should happen according to the scripts created prior to movie production.
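
    A parallel corpus of this kind pairs each sentence with a time span in a movie. Below is a minimal sketch of how one such aligned record could be represented; the field names are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical record type for a sentence/video-snippet alignment.
from dataclasses import dataclass

@dataclass
class AlignedDescription:
    movie_id: str      # identifier for one of the 72 HD movies (assumed naming)
    start_sec: float   # snippet start within the full-length movie
    end_sec: float     # snippet end
    sentence: str      # transcribed DVS (or script) sentence
    source: str        # "DVS" or "script", to compare the two sources

clip = AlignedDescription("movie_001", 512.3, 518.9,
                          "She hurries down the rain-soaked street.", "DVS")
print(clip.source, round(clip.end_sec - clip.start_sec, 1), "seconds")
```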

    GECKA3D: A 3D Game Engine for Commonsense Knowledge Acquisition

    Commonsense knowledge representation and reasoning is key for fields such as artificial intelligence and natural language understanding. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. In this paper, we introduce a novel 3D game engine for commonsense knowledge acquisition (GECKA3D) which aims to collect commonsense from game designers through the development of serious games. GECKA3D integrates the potential of serious games and games with a purpose. This provides a platform for the acquisition of re-usable and multi-purpose knowledge, and also enables the development of games that offer entertainment value while teaching players something meaningful about the world they live in.
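
    Knowledge acquired this way is commonly stored as relational assertions with provenance. The sketch below assumes ConceptNet-style relation names and a simple vote counter; it is an illustration of the general approach, not GECKA3D's internal design.

```python
# Hypothetical store for commonsense assertions gathered through gameplay:
# repeated, independent contributions of the same triple raise confidence.
from collections import Counter
from typing import NamedTuple

class Assertion(NamedTuple):
    subject: str
    relation: str   # ConceptNet-style relation name (assumed convention)
    obj: str

votes: Counter = Counter()

def record(assertion: Assertion) -> None:
    """Count one vote per contribution implying this assertion."""
    votes[assertion] += 1

record(Assertion("key", "UsedFor", "opening a door"))
record(Assertion("key", "UsedFor", "opening a door"))
print(votes.most_common(1))
```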

    Crowdsourcing Emotions in Music Domain

    An important source of intelligence for music emotion recognition today comes from user-provided community tags about songs or artists. Recent crowdsourcing approaches, such as harvesting social tags, designing collaborative games and web services, or using Mechanical Turk, are becoming popular in the literature. They provide a cheap, quick and efficient method, in contrast to professional labeling of songs, which is expensive and does not scale to large datasets. In this paper we discuss the viability of various crowdsourcing instruments, providing examples from research works. We also share our own experience, illustrating the steps we followed using tags collected from Last.fm to create two music mood datasets, which we have made public. While processing affect tags from Last.fm, we observed that they tend to be biased towards positive emotions; the resulting datasets thus contain more positive songs than negative ones.
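
    For the tag-harvesting step, the public Last.fm web API exposes a track.getTopTags method. Below is a minimal sketch of fetching tags and scoring a track's mood polarity; the API key placeholder and the mood word lists are assumptions for illustration, not the paper's actual vocabulary or pipeline.

```python
# Hedged sketch: harvest affect tags for one track via the Last.fm API and
# score polarity. LASTFM_API_KEY is a placeholder; POSITIVE/NEGATIVE word
# lists are illustrative assumptions.
import requests

LASTFM_API_KEY = "YOUR_API_KEY"
API_URL = "http://ws.audioscrobbler.com/2.0/"

def top_tags(artist: str, track: str) -> list[tuple[str, int]]:
    params = {"method": "track.gettoptags", "artist": artist,
              "track": track, "api_key": LASTFM_API_KEY, "format": "json"}
    tags = requests.get(API_URL, params=params).json()["toptags"]["tag"]
    return [(t["name"].lower(), int(t["count"])) for t in tags]

POSITIVE = {"happy", "upbeat", "joyful", "cheerful"}
NEGATIVE = {"sad", "melancholy", "gloomy", "depressing"}

def mood(artist: str, track: str) -> str:
    pos = neg = 0
    for name, count in top_tags(artist, track):
        if name in POSITIVE:
            pos += count
        elif name in NEGATIVE:
            neg += count
    return "positive" if pos >= neg else "negative"
```

    The positive bias the authors observe would show up here as POSITIVE tags dominating the vote counts across a large sample of tracks.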

    SportsAnno: what do you think?

    The automatic summarisation of sports video is of growing importance with the increased availability of on-demand content. Consumers who are unable to view events live often have a desire to watch a summary which allows them to quickly come to terms with all that has happened during a sporting event. Sports forums show that it is not only summaries that are desirable, but also the opportunity to share one's own point of view and discuss opinions with a community of similar users. In this paper we give an overview of the ways in which annotations have been used to augment existing visual media. We present SportsAnno, a system developed to summarise World Cup 2006 matches and provide a means for open discussion of events within these matches.
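
    A system that couples summaries with discussion needs annotations anchored to points in the match video. The sketch below shows one plausible data model for such threaded, time-anchored comments; the field names are assumptions, not SportsAnno's actual design.

```python
# Hypothetical model: a comment anchored to a video timestamp, with replies.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str
    time_sec: float   # offset into the match video the comment refers to
    text: str
    replies: list["Annotation"] = field(default_factory=list)

thread = Annotation("fan42", 2711.0, "Clear penalty, surely?")
thread.replies.append(Annotation("neutral_eye", 2711.0, "Replay says otherwise."))
print(len(thread.replies), "reply")
```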

    Semantic annotation of digital music

    Digital music items on the internet now form a vast information space in which consumers try to locate the music of their choice by means of search engines. The current practice of searching for music via consumers' keywords/tags is unable to provide satisfactory search results. It is argued that search and retrieval of music can be significantly improved provided end-users' tags are associated with semantic information in the form of acoustic metadata, the latter being easy to extract automatically from digital music items. This paper presents a lightweight ontology that will enable music producers to annotate music against MPEG-7 descriptions (with their acoustic metadata), and the generated annotations may in turn be used to deliver meaningful search results. Several potential multimedia ontologies have been explored, and a music annotation ontology, named mpeg-7Music, has been designed so that it can be used as a backbone for annotating music items.
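
    To illustrate the general idea of linking user tags with acoustic metadata in an annotation graph, here is a minimal sketch using rdflib; the namespace URI and property names are illustrative stand-ins, not the actual mpeg-7Music ontology terms.

```python
# Hedged sketch: an RDF annotation tying a user tag and an acoustic feature
# to the same music item. The M7M namespace and properties are placeholders.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

M7M = Namespace("http://example.org/mpeg7music#")  # placeholder namespace
g = Graph()

track = M7M["track/42"]
g.add((track, RDF.type, M7M.MusicItem))
g.add((track, M7M.userTag, Literal("mellow")))                    # end-user tag
g.add((track, M7M.tempoBPM, Literal(78, datatype=XSD.integer)))   # acoustic metadata

print(g.serialize(format="turtle"))
```

    A search engine could then resolve a fuzzy tag query like "mellow" to items whose acoustic metadata (tempo, timbre descriptors) is consistent with that mood.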