
    Personalized retrieval of sports video


    Multimedia Retrieval


    An investigation into weighted data fusion for content-based multimedia information retrieval

    Content Based Multimedia Information Retrieval (CBMIR) is characterised by the combination of noisy sources of information which, in unison, are able to achieve strong performance. In this thesis we focus on combining the ranked results from the independent retrieval experts which comprise a CBMIR system through linearly weighted data fusion. The independent retrieval experts are low-level multimedia features, each of which contains an indexing function and a ranking algorithm. This thesis is comprised of two halves. In the first half, we perform a rigorous empirical investigation into the factors which impact upon performance in linearly weighted data fusion. In the second half, we leverage these findings to create a new class of weight generation algorithms for data fusion which are capable of determining weights at query time, such that the weights are topic dependent.
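The linearly weighted fusion of ranked expert results described above can be sketched in a few lines. This is a minimal CombSUM-style illustration, not the thesis's exact formulation: the expert names, scores, and the min-max normalisation step are illustrative assumptions.

```python
# Sketch of linearly weighted data fusion over ranked lists, assuming each
# retrieval expert returns (doc_id, score) pairs. Min-max normalisation is
# an illustrative choice to make expert scores comparable.

def normalise(results):
    """Min-max normalise an expert's scores into [0, 1]."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in results}

def weighted_fusion(expert_results, weights):
    """Fuse ranked lists: fused(d) = sum_i w_i * norm_score_i(d)."""
    fused = {}
    for results, w in zip(expert_results, weights):
        for doc, s in normalise(results).items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical low-level feature experts ranking the same documents.
colour_expert = [("d1", 0.9), ("d2", 0.5), ("d3", 0.1)]
edge_expert = [("d2", 0.8), ("d3", 0.6), ("d1", 0.2)]
ranking = weighted_fusion([colour_expert, edge_expert], weights=[0.7, 0.3])
```

A query-time weight generation algorithm, as investigated in the second half of the thesis, would replace the fixed `weights=[0.7, 0.3]` with weights computed per topic.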

    An audio-visual approach to web video categorization

    In this paper we address the issue of automatic video genre categorization of web media using an audio-visual approach. To this end, we propose content descriptors which exploit audio, temporal structure and color information. The potential of our descriptors is validated experimentally, both from the perspective of a classification system and as an information retrieval approach. Validation is carried out in a real scenario, namely on more than 288 hours of video footage and 26 video genres specific to the blip.tv media platform. Additionally, to reduce the semantic gap, we propose a new relevance feedback technique based on hierarchical clustering. Experimental tests prove that retrieval performance can be significantly increased in this case, becoming comparable to that obtained with high-level semantic textual descriptors.
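The core idea of cluster-based relevance feedback can be sketched as follows: items whose descriptors fall in the same cluster as a user's positive example are promoted as relevant. This is a hedged illustration only; the descriptors, distance threshold, and single-linkage merging rule here are assumptions, not the authors' exact method.

```python
# Sketch of relevance feedback via hierarchical (single-linkage) clustering:
# positive feedback on one item is expanded to every item in its cluster.
from math import dist

def single_linkage_clusters(points, threshold):
    """Merge clusters while any inter-cluster pair is closer than threshold."""
    clusters = [{i} for i in range(len(points))]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if any(dist(points[i], points[j]) < threshold
                       for i in clusters[a] for j in clusters[b]):
                    clusters[a] |= clusters.pop(b)
                    merged = True
                    break
            if merged:
                break
    return clusters

def expand_feedback(points, positives, threshold):
    """Return indices of all items clustered with any positive example."""
    relevant = set()
    for cluster in single_linkage_clusters(points, threshold):
        if cluster & set(positives):
            relevant |= cluster
    return sorted(relevant)

# Hypothetical 2-D audio-visual descriptors for four videos.
descriptors = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 5.0)]
feedback = expand_feedback(descriptors, positives=[0], threshold=1.0)
```

With these toy descriptors, marking video 0 as relevant also promotes video 1, its near neighbour in descriptor space.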

    Many-screen viewing: collaborative consumption of television media across multiple devices

    The landscape of television is changing. Modern Internet-enabled sets are now capable computing devices, offering new forms of connectivity and interaction to viewers. One development enabled by this transition is the distribution of auxiliary content to a portable computing device, such as a mobile phone or tablet, working in concert with the television. These configurations are enabled by second-screen applications that provide relevant content in synchronisation with the programme on a nearby television set. This thesis extends the notion of the second screen to arrangements that incorporate multiple mobile devices working with the television, utilised by collocated groups of participants. Herein these arrangements are referred to as ‘many-screen’ television. Two many-screen applications were developed for the augmentation of sports programming in preparation of this thesis: the Olympic Companion and MarathOn Multiscreen Applications. Both applications were informed by background literature on second-screen television and wider issues in HCI multiscreen research. In addition, the design of both applications was inspired by the needs of traditional and online broadcasters, through an internship with BBC Research and Development and involvement in a YouTube-sponsored project. Both applications were evaluated by collocated groups of users in formative user studies. These studies centred on how users share and organise what to watch, how activity is incorporated within the traditionally passive television viewing experience, and the integration of user-generated video content in a many-screen system. The primary contribution of this thesis is a series of industry-validated guidelines for the design of many-screen applications. The guidelines highlight issues around user awareness of devices, content and other users’ actions, the balance between communal and private viewing, and the appropriation of user-generated content in many-screen watching.


    Novelty detection in video retrieval: finding new news in TV news stories

    Novelty detection is defined as the detection of documents that provide "new" or previously unseen information. "New information" in a search result list is defined as the incremental information found in a document based on what the user has already learned from reviewing previous documents in a given ranked list. It is assumed that, as a user views a list of documents, their information need changes or evolves, and their state of knowledge increases as they gain new information from the documents they see. The automatic detection of "novelty", or newness, as part of an information retrieval system could greatly improve a searcher's experience by presenting documents in order of how much extra information they add to what is already known, instead of how similar they are to the user's query. This could be particularly useful in applications such as the search of broadcast news and automatic summary generation. There are many different aspects of information management; this thesis presents research into novelty detection within the content-based video domain. It explores the benefits of integrating the many multimodal resources associated with video content (low-level feature detection evidence such as colour and edge; automatic concept detections such as face, commercials and anchorperson; automatic speech recognition transcripts; and manually annotated MPEG-7 concepts) into a novelty detection model. The effectiveness of this novelty detection model is evaluated on a collection of TV news data.
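The idea of ranking by incremental information rather than query similarity can be illustrated with a deliberately simple text-only sketch: score each document in a ranked list by the fraction of its terms not already seen in earlier documents. The term-overlap measure is an illustrative assumption; the thesis itself fuses many multimodal evidences rather than word overlap alone.

```python
# Sketch of novelty scoring over a ranked list: a document's score is the
# proportion of its terms the user has not yet encountered in the list.

def novelty_scores(ranked_docs):
    """Return (doc, score) pairs; score = fraction of previously unseen terms."""
    seen = set()
    scored = []
    for doc in ranked_docs:
        terms = set(doc.lower().split())
        new_terms = terms - seen
        scored.append((doc, len(new_terms) / len(terms)))
        seen |= terms
    return scored

docs = [
    "election results announced today",
    "election results announced today again",  # only "again" is new
    "storm hits coastal towns",
]
scored = novelty_scores(docs)
```

The second, near-duplicate story scores low (1 new term out of 5), while the unrelated storm story scores as fully novel, which is the behaviour a novelty-aware broadcast-news search would want.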