
    Dialogue scene detection in movies using low and mid-level visual features

    This paper describes an approach for detecting dialogue scenes in movies. The approach uses automatically extracted low- and mid-level visual features that characterise the visual content of individual shots, which are then combined using a state transition machine that models the shot-level temporal characteristics of the scene under investigation. The choice of visual features is motivated by a consideration of formal film syntax. The system is designed so that the analysis may be applied to detect different types of scenes, although in this paper we focus on dialogue sequences, as these are the most prevalent scenes in the movies considered to date.
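
    The abstract does not spell out the state machine itself; purely as an illustration (the shot labels, the states, and the alternation threshold below are invented), a minimal dialogue detector over per-shot labels might look like this:

```python
# Hypothetical sketch: a state machine over per-shot labels that flags
# dialogue scenes once enough alternating close-up shots are seen.
# The labels ("closeup_A", "closeup_B", "other") stand in for the
# low/mid-level visual features described in the paper.

def detect_dialogue(shot_labels, min_alternations=3):
    """Return (start, end) shot-index ranges judged to be dialogue."""
    scenes, state, start, alternations = [], "IDLE", 0, 0
    prev = None
    for i, label in enumerate(shot_labels):
        if label.startswith("closeup") and label != prev:
            if state == "IDLE":
                state, start, alternations = "CANDIDATE", i, 1
            else:
                alternations += 1
        elif not label.startswith("closeup"):
            if state == "CANDIDATE" and alternations >= min_alternations:
                scenes.append((start, i - 1))
            state, alternations = "IDLE", 0
        prev = label
    if state == "CANDIDATE" and alternations >= min_alternations:
        scenes.append((start, len(shot_labels) - 1))
    return scenes

shots = ["other", "closeup_A", "closeup_B", "closeup_A", "closeup_B", "other"]
print(detect_dialogue(shots))  # [(1, 4)]
```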

    Identifying Video Content Consistency by Vector Quantization

    Many post-production videos such as movies and cartoons present well-structured story-lines organized into separate visual scenes. Accurate grouping of shots into these logical segments could lead to semantic indexing of scenes for interactive multimedia retrieval and video summaries. In this paper we introduce a novel shot-based analysis approach which aims to cluster together shots with similar visual content. We demonstrate that the use of codebooks of visual codewords (generated by a vector quantization process) is an effective method to identify clusters containing shots with similar long-term consistency of chromatic composition. The clusters, obtained by a single-link clustering algorithm, allow the further use of the well-known scene transition graph framework for logical story unit detection and pattern investigation.
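
    A minimal sketch of the two stages named here, assuming per-frame color vectors as input (the codebook size, the codebook distance, and the merge threshold are illustrative choices, not the paper's):

```python
import numpy as np

def codebook(frames, k=4, iters=10, seed=0):
    """Tiny k-means over per-frame color vectors -> k codewords."""
    frames = np.asarray(frames, dtype=float)
    rng = np.random.default_rng(seed)
    centers = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(frames[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = frames[labels == j].mean(axis=0)
    return centers

def codebook_distance(a, b):
    """Symmetrized average nearest-codeword distance between codebooks."""
    d = np.linalg.norm(a[:, None] - b[None], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def single_link_clusters(books, threshold):
    """Union shots whose codebooks are closer than threshold."""
    parent = list(range(len(books)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(len(books)):
        for j in range(i + 1, len(books)):
            if codebook_distance(books[i], books[j]) < threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(books))]  # per-shot cluster id
```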

    Neighborhood coherence and edge based approaches to film scene extraction

    In order to enable high-level semantics-based video annotation and interpretation, we tackle the problem of automatically decomposing motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production for determining when a scene change occurs in film. We examine different rules and conventions followed as part of Film Grammar to guide and shape our algorithmic solution for determining a scene boundary. Two different techniques are proposed as new solutions in this paper. Our experimental results on 10 full-length movies show that our technique based on shot sequence coherence performs well, and noticeably better than the color-edge-based approach.
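
    No coherence formula is reproduced in this abstract; as a loose illustration (the window size, the histogram-intersection similarity, and the threshold are assumptions), shot-sequence coherence can be scored against a short window of preceding shots, with dips marking candidate scene boundaries:

```python
import numpy as np

def coherence(shot_hists, window=4):
    """shot_hists: list of L1-normalized color histograms, one per shot."""
    sims = []
    for i in range(1, len(shot_hists)):
        prev = shot_hists[max(0, i - window):i]
        # histogram intersection with the best-matching recent shot
        sims.append(max(float(np.minimum(shot_hists[i], p).sum()) for p in prev))
    return sims

def scene_boundaries(sims, threshold=0.3):
    """Shots whose coherence with the recent past dips below threshold."""
    return [i + 1 for i, s in enumerate(sims) if s < threshold]
```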

    A new audio-visual analysis approach and tools for parsing colonoscopy videos

    Colonoscopy is an important screening tool for colorectal cancer. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. We call videos captured from colonoscopic procedures colonoscopy videos. Because these videos possess unique characteristics, new types of semantic units and parsing techniques are required. In this paper, we introduce a new analysis approach that includes (a) a new definition of semantic unit, the scene (a segment of visual and audio data that corresponds to an endoscopic segment of the colon), and (b) a novel scene segmentation algorithm that uses audio and visual analysis to recognize scene boundaries. We design a prototype system to implement the proposed approach. This system also provides tools for video/image browsing that enable users to quickly locate and browse scenes of interest. Experiments on real colonoscopy videos show the effectiveness of our algorithms. The proposed techniques and software are useful (1) for post-procedure reviews, (2) for developing an effective content-based retrieval system for colonoscopy videos to facilitate endoscopic research and education, and (3) for developing a systematic approach to assessing endoscopists' procedural skills.
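
    The audio-visual boundary detection is only named here; a minimal fusion sketch, assuming per-frame novelty scores in [0, 1] from each modality (the weights, the threshold, and the 30-frame minimum scene length are invented for the example):

```python
# Hypothetical fusion sketch: two per-frame novelty scores, one visual
# and one audio, are blended and thresholded to place scene boundaries.
def fuse_boundaries(visual_score, audio_score, w_visual=0.6, threshold=0.7):
    """Per-frame scores in [0, 1] -> frame indices of scene boundaries."""
    boundaries = []
    for i, (v, a) in enumerate(zip(visual_score, audio_score)):
        combined = w_visual * v + (1 - w_visual) * a
        if combined > threshold and (not boundaries or i - boundaries[-1] > 30):
            boundaries.append(i)  # enforce a 30-frame minimum scene length
    return boundaries
```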

    Verfahren zur Inhaltsadaption von Darstellungselementen (Methods for the Content Adaptation of Presentation Elements)

    This report surveys known methods and technologies for the automatic adaptation of presentation elements for mobile devices, with a focus on methods for adapting images, videos, web pages, and audio files. The goal is to automatically derive suitable presentation formats based on the properties of the device and its interaction capabilities. Mobile phones, PDAs, tablet PCs, and notebook PCs are considered as possible devices. A good adaptation algorithm should support the computer-assisted reformatting of content, provided only once, for the various form factors, resolutions, screen sizes, interaction techniques (mouse, stylus, touch screen, etc.), and network bandwidths.
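
    As a toy illustration of the device-driven adaptation the report surveys (the device profile fields and all cut-off values below are invented for the example), device properties can be mapped to a target rendering format:

```python
# Toy sketch: pick a presentation format from device properties.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    screen_width_px: int
    bandwidth_kbps: int
    has_touch: bool

def adapt_image(profile: DeviceProfile, image_width_px: int) -> dict:
    """Derive target width, quality, and zoom UI for one source image."""
    target = min(image_width_px, profile.screen_width_px)
    quality = 85 if profile.bandwidth_kbps > 1000 else 60
    return {"width": target, "jpeg_quality": quality,
            "zoom_ui": "pinch" if profile.has_touch else "buttons"}

print(adapt_image(DeviceProfile(480, 384, True), 1920))
```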

    An Overview of Video Shot Clustering and Summarization Techniques for Mobile Applications

    The problem of content characterization of video programmes is of great interest because video appeals to large audiences and its efficient distribution over various networks should contribute to the widespread usage of multimedia services. In this paper we analyze several techniques proposed in the literature for content characterization of video programmes, including movies and sports, that could be helpful for mobile media consumption. In particular we focus our analysis on shot clustering methods and effective video summarization techniques since, in the current video analysis scenario, they facilitate access to the content and help in quickly understanding the associated semantics. First we consider shot clustering techniques based on low-level features, using visual, audio and motion information, possibly combined in a multi-modal fashion. Then we concentrate on summarization techniques such as static storyboards, dynamic video skimming and the extraction of sport highlights. The summarization methods discussed can be employed to develop tools of great use to mobile users, since these algorithms automatically shorten the original video while preserving the most important content. The effectiveness of each approach is analyzed, showing that it depends mainly on the kind of video programme it is applied to and on the type of summary or highlights sought.
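
    As one concrete flavour of the static-storyboard idea (a generic baseline, not a specific method from the survey), a storyboard can be built by picking, per shot cluster, the frame nearest the cluster's mean feature vector:

```python
import numpy as np

def storyboard(frame_features, cluster_ids):
    """frame_features: (n, d) array; cluster_ids: length-n labels."""
    keyframes = {}
    for c in set(cluster_ids):
        idx = np.flatnonzero(np.asarray(cluster_ids) == c)
        centroid = frame_features[idx].mean(axis=0)
        dists = np.linalg.norm(frame_features[idx] - centroid, axis=1)
        keyframes[c] = int(idx[dists.argmin()])
    return keyframes  # cluster id -> representative frame index
```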

    Audio-Visual VQ Shot Clustering for Video Programs

    Many post-production video documents such as movies, sitcoms and cartoons present well-structured story-lines organized into separate audio-visual scenes. Accurate grouping of shots into these logical video segments could lead to semantic indexing of scenes and events for interactive multimedia retrieval. In this paper we introduce a novel shot-based analysis approach which aims to cluster together shots with similar audio-visual content. We demonstrate that the use of codebooks of audio and visual codewords (generated by a vector quantization process) is an effective method to represent clusters containing shots with similar long-term consistency of chromatic composition and audio. The output clusters, obtained by a simple single-link clustering algorithm, allow the further application of the well-known scene transition graph framework for scene change detection and shot-pattern investigation. Finally, merging the audio and visual results leads to a hierarchical description of the whole video document, useful for multimedia retrieval and summarization purposes.
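
    The scene transition graph step referenced here can be caricatured compactly (a simplification of the cited framework, not this paper's implementation): once shots carry cluster labels, a scene boundary falls at any cut that no cluster ever recurs across.

```python
# Simplified scene transition graph cut: a boundary is declared between
# shots i and i+1 when no cluster appears on both sides of the cut,
# i.e. the story never transitions back across that edge.
def scene_boundaries(shot_clusters):
    """shot_clusters: per-shot cluster id, in temporal order."""
    boundaries = []
    for i in range(len(shot_clusters) - 1):
        before = set(shot_clusters[:i + 1])
        after = set(shot_clusters[i + 1:])
        if not (before & after):  # no cluster recurs across the cut
            boundaries.append(i + 1)
    return boundaries

print(scene_boundaries([0, 1, 0, 1, 2, 3, 2]))  # [4]
```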

    Scene extraction in motion pictures

    This paper addresses the challenge of bridging the semantic gap between the rich meaning users desire when they query to locate and browse media and the shallowness of the media descriptions that can be computed in today's content management systems. To facilitate high-level semantics-based content annotation and interpretation, we tackle the problem of automatically decomposing motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production to determine when a scene change occurs. We then investigate different rules and conventions followed as part of Film Grammar that would guide and shape an algorithmic solution for determining a scene. Two different techniques using intershot analysis are proposed as solutions in this paper. In addition, we present different refinement mechanisms, such as film-punctuation detection founded on Film Grammar, to further improve the results. These refinement techniques demonstrate significant improvements in overall performance. Furthermore, we analyze errors in the context of film-production techniques, which offers useful insights into the limitations of our method.
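
    Film punctuation itself is not defined in this abstract; one common punctuation device, the fade to black, can be located roughly as below (the luminance threshold and minimum run length are assumptions, not the paper's values):

```python
# Illustrative fade detector: report runs of frames whose mean
# luminance stays near black for at least min_run frames.
def find_fades(frame_luminance, dark=16, min_run=8):
    """frame_luminance: per-frame mean brightness in [0, 255]."""
    fades, run_start = [], None
    for i, y in enumerate(frame_luminance):
        if y < dark:
            run_start = i if run_start is None else run_start
        else:
            if run_start is not None and i - run_start >= min_run:
                fades.append((run_start, i - 1))
            run_start = None
    if run_start is not None and len(frame_luminance) - run_start >= min_run:
        fades.append((run_start, len(frame_luminance) - 1))
    return fades  # list of (first_frame, last_frame) fade intervals
```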

    Toward automatic extraction of expressive elements from motion pictures : tempo

    This paper addresses the challenge of bridging the semantic gap that exists between the simplicity of features that can currently be computed in automated content indexing systems and the richness of semantics in user queries posed for media search and retrieval. It proposes a unique computational approach to the extraction of expressive elements of motion pictures for deriving high-level semantics of the stories portrayed, thus enabling rich video annotation and interpretation. This approach, motivated and directed by the existing cinematic conventions known as film grammar, as a first step toward demonstrating its effectiveness, uses the attributes of motion and shot length to define and compute a novel measure of the tempo of a movie. Tempo flow plots are defined and derived for a number of full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signaled by their unique tempo. The results confirm tempo as a useful high-level semantic construct in its own right and a promising component of others such as the rhythm, tone or mood of a film. In addition to the development of this computable tempo measure, a study is conducted as to the usefulness of biasing it toward either of its constituents, namely motion or shot length. Finally, a refinement is made to the shot length normalizing mechanism, driven by the peculiar characteristics of shot length distribution exhibited by movies. Results of these additional studies, and possible applications and limitations, are discussed.
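
    The exact tempo definition is in the paper, not this abstract; a rough sketch in the same spirit (the weighting, z-score normalization, and smoothing window are assumptions) combines motion magnitude and negated shot length into a smoothed tempo flow whose sharp changes flag candidate story events:

```python
import numpy as np

def tempo_flow(motion, shot_length, w_motion=0.5, smooth=5):
    """Per-shot motion magnitude and shot length (seconds) -> tempo curve."""
    m = np.asarray(motion, dtype=float)
    s = np.asarray(shot_length, dtype=float)
    m = (m - m.mean()) / (m.std() + 1e-9)
    # short shots imply high tempo, so normalized shot length enters negated
    s = -(s - s.mean()) / (s.std() + 1e-9)
    t = w_motion * m + (1 - w_motion) * s
    return np.convolve(t, np.ones(smooth) / smooth, mode="same")

def tempo_events(t, jump=1.0):
    """Indices where the tempo flow changes sharply -> candidate events."""
    return [i for i in range(1, len(t)) if abs(t[i] - t[i - 1]) > jump]
```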

    Dialog detection in narrative video by shot and face analysis
