
    Detection of setting and subject information in documentary video

    Interpretation of video information is a difficult task for computer vision and machine intelligence. In this paper we examine the utility of a non-image-based source of information about video contents, namely the shot list, and study its use in aiding image interpretation. We show how the shot list may be analysed to produce a simple summary of the 'who and where' of a documentary or interview video. In order to detect the subject of a video, we use the notion of a 'shot syntax' of a particular genre to isolate the actual interview sections.
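
    A minimal sketch of the shot-syntax idea under assumed inputs: the Shot type, the shot-kind labels, and the min_run heuristic below are illustrative stand-ins, not the paper's actual method. It scans a shot list for sustained runs of interview-type shots and reports their spans.

        from dataclasses import dataclass

        @dataclass
        class Shot:
            start: float  # seconds from the start of the programme
            end: float
            kind: str     # e.g. "interviewer", "interviewee", "cutaway" (assumed labels)

        def interview_sections(shots, min_run=3):
            """Return (start, end) spans where interview-type shots run uninterrupted.

            A run of at least min_run interviewer/interviewee shots stands in for
            one genre's 'shot syntax' of an interview section.
            """
            sections, run = [], []
            for shot in shots:
                if shot.kind in ("interviewer", "interviewee"):
                    run.append(shot)
                else:
                    if len(run) >= min_run:
                        sections.append((run[0].start, run[-1].end))
                    run = []
            if len(run) >= min_run:
                sections.append((run[0].start, run[-1].end))
            return sections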

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape in multimedia search engines, we identified and analysed gaps in the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: first, a concerted vision of the functional breakdown of a generic multimedia search engine, and second, a set of representative use-case descriptions with the related discussion of requirements for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Toward automatic extraction of expressive elements from motion pictures : tempo

    This paper addresses the challenge of bridging the semantic gap between the simplicity of features that can currently be computed in automated content-indexing systems and the richness of semantics in user queries posed for media search and retrieval. It proposes a unique computational approach to the extraction of expressive elements of motion pictures for deriving high-level semantics of the stories portrayed, thus enabling rich video annotation and interpretation. This approach, motivated and directed by the existing cinematic conventions known as film grammar, uses the attributes of motion and shot length, as a first step toward demonstrating its effectiveness, to define and compute a novel measure of the tempo of a movie. Tempo flow plots are defined and derived for a number of full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signalled by their unique tempo. The results confirm tempo as a useful high-level semantic construct in its own right and a promising component of others such as the rhythm, tone, or mood of a film. In addition to the development of this computable tempo measure, a study is conducted into the usefulness of biasing it toward either of its constituents, namely motion or shot length. Finally, a refinement is made to the shot-length normalising mechanism, driven by the peculiar characteristics of the shot-length distribution exhibited by movies. Results of these additional studies, and possible applications and limitations, are discussed.
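
    A hedged sketch of one way such a tempo measure could be computed from the two constituents the abstract names: the weights, standardisation, and smoothing window below are assumptions, not the paper's published formula. Shorter shots and more motion both push tempo up.

        import numpy as np

        def tempo_flow(shot_lengths, shot_motion, alpha=0.5, beta=0.5, window=5):
            """Per-shot tempo T(n) from shot length s(n) and motion m(n).

            Each term is standardised over the whole film, then the flow is
            smoothed so that edges in the resulting plot mark the boundaries
            of dramatic story sections.
            """
            s = np.asarray(shot_lengths, dtype=float)
            m = np.asarray(shot_motion, dtype=float)
            t = alpha * (s.mean() - s) / s.std() + beta * (m - m.mean()) / m.std()
            return np.convolve(t, np.ones(window) / window, mode="same")

    Edge analysis then reduces to thresholding the first difference of the smoothed flow, with each surviving peak marking a candidate story event.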

    Smart Video Text: An Intelligent Video Database System


    STRG-QL: Spatio-Temporal Region Graph Query Language for Video Databases

    Copyright 2008 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

    In this paper, we present a new graph-based query language and its query processing for a Graph-based Video Database Management System (GVDBMS). Although extensive research has proposed various query languages for video databases, most are limited to a specific data model, query type, or application, and so cannot handle general-purpose video queries. To develop a general-purpose video query language, we first produce a Spatio-Temporal Region Graph (STRG) for each video, which represents the spatial and temporal information of video objects. An STRG data model is generated from the STRG by exploiting an object-oriented model. Based on the STRG data model, we propose a new graph-based query language named STRG-QL, which supports various types of video query. To process the proposed STRG-QL, we introduce a rule-based query optimization that considers the characteristics of video data, i.e., the hierarchical correlations among video segments. The results of our extensive experimental study show that the proposed STRG-QL is promising in terms of accuracy and cost.

    http://dx.doi.org/10.1117/12.76553
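
    The STRG-QL syntax itself is not reproduced in the abstract, so the sketch below illustrates only the underlying data model as described: region nodes per frame, spatial edges between co-occurring regions, and temporal edges tracking a region across frames. All names and the query helper are assumptions, not the paper's API.

        import networkx as nx

        def build_strg(frames):
            """Build a spatio-temporal region graph.

            frames: list (one entry per frame) of lists of
            (region_id, bbox, label) tuples for the segmented regions.
            """
            g = nx.DiGraph()
            for t, regions in enumerate(frames):
                for rid, bbox, label in regions:
                    g.add_node((t, rid), bbox=bbox, label=label, frame=t)
                # spatial edges between regions co-occurring in the same frame
                for i, (rid_a, *_) in enumerate(regions):
                    for rid_b, *_ in regions[i + 1:]:
                        g.add_edge((t, rid_a), (t, rid_b), kind="spatial")
                # temporal edges to the same region id in the next frame
                if t + 1 < len(frames):
                    next_ids = {rid for rid, *_ in frames[t + 1]}
                    for rid, *_ in regions:
                        if rid in next_ids:
                            g.add_edge((t, rid), (t + 1, rid), kind="temporal")
            return g

        def query_label(g, label):
            """A stand-in for a simple selection query: frames containing a label."""
            return sorted({d["frame"] for _, d in g.nodes(data=True) if d["label"] == label})

    A declarative query such as "frames in which a person appears" would then compile to a graph traversal like query_label(g, "person"), which is where a rule-based optimizer can exploit the hierarchical correlations among video segments.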

    Multimedia Retrieval


    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD thesis. Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured intelligently, relying on their content and the rich semantics involved. Current content-based video indexing and retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on efficiency and scalability in video indexing and retrieval, to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed, based on prediction information extracted directly from compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining high precision and recall in the detection task. Adaptive key-frame extraction and summarisation give a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
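
    A simplified sketch of the pipeline's first stage only: the thesis works on prediction information in the compressed domain, whereas this stand-in uses colour-histogram differences on decoded frames; the distance measure and threshold are assumptions.

        import numpy as np

        def shot_boundaries(histograms, threshold=0.4):
            """histograms: (n_frames, n_bins) array of normalised colour histograms."""
            h = np.asarray(histograms, dtype=float)
            # L1 distance between consecutive frames; a peak above the
            # threshold is treated as a cut.
            d = np.abs(np.diff(h, axis=0)).sum(axis=1)
            return [i + 1 for i, v in enumerate(d) if v > threshold]

        def key_frames(histograms, boundaries):
            """Pick, per shot, the frame closest to that shot's mean histogram."""
            h = np.asarray(histograms, dtype=float)
            cuts = [0, *boundaries, len(h)]
            keys = []
            for a, b in zip(cuts, cuts[1:]):
                mean = h[a:b].mean(axis=0)
                keys.append(a + int(np.argmin(np.abs(h[a:b] - mean).sum(axis=1))))
            return keys

    The resulting key frames are what the later stages would quantise hierarchically and feed to the genre classifier.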