58 research outputs found

    The MoCA Workbench: Support for Creativity in Movie Content Analysis

    Full text link
    Semantic access to the content of a video is highly desirable for multimedia content retrieval. Automatic extraction of semantics requires content analysis algorithms. Our MoCA (Movie Content Analysis) project provides an interactive workbench supporting the researcher in the development of new movie content analysis algorithms. The workbench offers data management facilities for large amounts of video/audio data and derived parameters. It also provides an easy-to-use interface for freely combining basic operators into more sophisticated operators, and results from video track and audio track analysis can be combined. The MoCA Workbench shields the researcher from technical details and provides advanced visualization capabilities, allowing attention to focus on the development of new algorithms. The paper presents the design and implementation of the MoCA Workbench and reports practical experience.
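
    As a rough illustration of how basic operators might be combined into a more sophisticated one, the sketch below composes two simple per-frame measurements into a derived operator. The operator names and the composition interface are assumptions for illustration only, not the MoCA Workbench API.

        # Hypothetical sketch: basic operators as functions over a frame list,
        # combined into a derived operator (illustrative names, not MoCA code).
        from typing import Callable, List
        import numpy as np

        Operator = Callable[[List[np.ndarray]], np.ndarray]

        def mean_luminance(frames: List[np.ndarray]) -> np.ndarray:
            # One value per frame: average gray level.
            return np.array([f.mean() for f in frames])

        def frame_difference(frames: List[np.ndarray]) -> np.ndarray:
            # One value per frame transition: mean absolute pixel difference.
            return np.array([np.abs(b.astype(float) - a.astype(float)).mean()
                             for a, b in zip(frames, frames[1:])])

        def combine(*ops: Operator) -> Callable[[List[np.ndarray]], list]:
            # A derived operator runs several basic operators and returns
            # their parameter curves for joint inspection or visualization.
            def derived(frames: List[np.ndarray]) -> list:
                return [op(frames) for op in ops]
            return derived

        if __name__ == "__main__":
            frames = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(10)]
            curves = combine(mean_luminance, frame_difference)(frames)
            print([c.shape for c in curves])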

    Automatic Audio Content Analysis

    Full text link
    This paper describes the theoretical framework and applications of automatic audio content analysis. Research in multimedia content analysis has so far concentrated on the video domain. We demonstrate the strength of automatic audio content analysis and explain the algorithms we use, including analysis of amplitude, frequency and pitch, and simulations of human audio perception. These algorithms serve as tools for further audio content analysis. We use them in applications such as the segmentation of audio data streams into logical units for further processing, the analysis of music, and the recognition of sounds indicative of violence such as shots, explosions and cries.
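
    To make the low-level cues concrete, the following sketch computes windowed RMS amplitude and a crude dominant-frequency estimate via FFT. It is a minimal illustration under assumed parameters (window size, sample rate), not the paper's implementation of pitch analysis or perception modelling.

        # Illustrative sketch: windowed amplitude and frequency analysis.
        import numpy as np

        def rms_amplitude(signal: np.ndarray, win: int = 1024) -> np.ndarray:
            # Root-mean-square energy per non-overlapping window.
            n = len(signal) // win
            frames = signal[:n * win].reshape(n, win)
            return np.sqrt((frames ** 2).mean(axis=1))

        def dominant_frequency(signal: np.ndarray, sr: int, win: int = 1024) -> np.ndarray:
            # Frequency bin with the largest magnitude per window (a crude pitch proxy).
            n = len(signal) // win
            frames = signal[:n * win].reshape(n, win)
            spectra = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
            freqs = np.fft.rfftfreq(win, d=1.0 / sr)
            return freqs[spectra.argmax(axis=1)]

        if __name__ == "__main__":
            sr = 16000
            t = np.arange(sr) / sr
            tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone
            print(rms_amplitude(tone)[:3], dominant_frequency(tone, sr)[:3])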

    Automatic Movie Abstracting

    Full text link
    We present an algorithm for the automatic production of a video abstract of a feature film, similar to a movie trailer. It selects clips from the original movie based on the detection of special events such as dialogs, shots, explosions and text occurrences, and on general action indicators applied to scenes. These clips are then assembled into a video trailer using a model of editing. Additional clips, audio pieces, images and text, also retrieved from the original video for their content, are added to produce a multimedia abstract. The collection of multimedia objects is presented on an HTML page.
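
    The clip selection step can be pictured as a greedy choice of the highest-scoring detected events until a target trailer length is reached, followed by chronological assembly. The Event structure, the padding value and the scoring are assumptions for this sketch, not the paper's editing model.

        # Hedged sketch: greedy clip selection around detected events.
        from dataclasses import dataclass

        @dataclass
        class Event:
            kind: str      # e.g. "dialog", "gunshot", "explosion", "text"
            start: float   # seconds in the source movie
            end: float
            score: float   # assumed action/importance indicator

        def select_clips(events: list[Event], target_len: float, pad: float = 1.5) -> list[tuple[float, float]]:
            clips, total = [], 0.0
            for ev in sorted(events, key=lambda e: e.score, reverse=True):
                clip = (max(0.0, ev.start - pad), ev.end + pad)
                length = clip[1] - clip[0]
                if total + length > target_len:
                    continue
                clips.append(clip)
                total += length
            return sorted(clips)   # chronological order, a very simple editing model

        if __name__ == "__main__":
            events = [Event("explosion", 3600.0, 3603.0, 0.9),
                      Event("dialog", 120.0, 135.0, 0.4),
                      Event("text", 10.0, 12.0, 0.7)]
            print(select_clips(events, target_len=30.0))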

    Improving the Quality of Historical Films

    Get PDF
    Historical films are an important building block in the preservation of cultural heritage. Digitization allows them to be preserved for the future without the films being damaged by material fatigue of the reels or tapes. Many historical recordings have already been noticeably damaged by playback or storage. This report presents algorithms for detecting and correcting such defects in historical black-and-white films: the detection and removal of horizontal scratch lines, brightness and contrast correction both for strong brightness fluctuations and for over-darkened or over-brightened sequences, and the removal of camera shake within individual shots.
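
    One of the brightness and contrast problems mentioned above, frame-to-frame flicker, can be illustrated with a simple per-frame normalization of mean and standard deviation towards a reference. This is an assumed baseline for illustration, not the report's correction algorithm.

        # Illustrative flicker correction by per-frame gain/offset normalization.
        import numpy as np

        def deflicker(frames: list, ref_mean: float = 128.0, ref_std: float = 48.0) -> list:
            corrected = []
            for f in frames:
                g = f.astype(np.float32)
                std = float(g.std()) or 1.0
                g = (g - g.mean()) / std * ref_std + ref_mean   # linear gain/offset per frame
                corrected.append(np.clip(g, 0, 255).astype(np.uint8))
            return corrected

        if __name__ == "__main__":
            flickering = [np.random.randint(40 + 10 * i, 120 + 10 * i, (64, 64), dtype=np.uint8) for i in range(5)]
            print([int(f.mean()) for f in deflicker(flickering)])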

    A novel optical flow-based representation for temporal video segmentation

    Get PDF
    Temporal video segmentation is a field of multimedia research that enables us to temporally split video data into semantically coherent scenes. Detecting scene boundaries is one of the most widely used approaches to temporal video segmentation, so the representation of temporal information becomes important. We propose a new temporal video segment representation that formalizes video scenes as a sequence of temporal motion change information. The idea is that a change in the character of the optical flow indicates a motion change and a cut between consecutive scenes. The problem is thus reduced to an optical flow-based cut detection problem, from which the average motion vector concept is put forward. This concept is used to propose a pixel-based representation enriched with a novel motion-based approach. Temporal video segment points are classified as cuts and non-cuts according to the proposed video segment representation. Finally, the proposed method and representation are applied to benchmark data sets and the results are compared to other state-of-the-art methods.
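
    The average motion vector idea can be sketched as follows: compute dense optical flow between consecutive frames, average it into one vector per transition, and flag large jumps in the magnitude of that vector as cut candidates. The flow parameters and the thresholding rule below are assumptions, not the paper's exact formulation.

        # Sketch: average motion vectors from dense optical flow, then cut candidates.
        import cv2
        import numpy as np

        def average_motion_vectors(gray_frames: list) -> np.ndarray:
            # gray_frames: list of 8-bit single-channel frames.
            vectors = []
            for prev, nxt in zip(gray_frames, gray_frames[1:]):
                flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                vectors.append(flow.reshape(-1, 2).mean(axis=0))   # mean (dx, dy)
            return np.array(vectors)

        def classify_cuts(vectors: np.ndarray, k: float = 3.0) -> np.ndarray:
            mags = np.linalg.norm(vectors, axis=1)
            jumps = np.abs(np.diff(mags, prepend=mags[0]))
            threshold = jumps.mean() + k * jumps.std()   # assumed decision rule
            return jumps > threshold                     # True = cut candidate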

    Video Categorization Using Data Mining

    Get PDF
    Video categorization using data mining is a research area that aims to propose a method based on an Artificial Neural Network (ANN) for classifying video files into different categories according to their content. To test this method, the classification of video files is discussed. The proposed system categorizes videos into two classes: educational and non-educational. The classification is based on motion, computed using optical flow. Several experiments were conducted using an Artificial Neural Network (ANN) model. The research facilitates access to the required educational videos for learners, especially novice students. The objective of this research is to investigate how motion features can be useful in such a classification. We believe that other features, such as audio features and text features, can enhance accuracy, but this requires wider studies and more time. Using 3-fold cross-validation with the ANN model, the accuracy of classifying videos as educational or non-educational is 54%. This result may be improved by introducing the other factors mentioned above.
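
    A minimal sketch of such a pipeline, under assumptions: a motion feature vector per video (here just the mean and standard deviation of optical-flow magnitude, an illustrative choice) is fed to a small neural network and evaluated with 3-fold cross-validation on toy data. The feature set and network size are not the authors'.

        # Sketch: motion features + ANN classifier with 3-fold cross-validation.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def motion_features(flow_magnitudes: np.ndarray) -> np.ndarray:
            # flow_magnitudes: per-frame mean optical-flow magnitude for one video.
            return np.array([flow_magnitudes.mean(), flow_magnitudes.std()])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = np.vstack([motion_features(rng.gamma(2.0, 1.0 + c, size=200))
                           for c in (0, 1) for _ in range(30)])   # toy data only
            y = np.repeat([0, 1], 30)                             # 0 = educational, 1 = non-educational
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
            print(cross_val_score(clf, X, y, cv=3).mean())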

    VisualGREP: a systematic method to compare and retrieve video sequences

    Get PDF
    In this paper, we consider the problem of similarity between video sequences. Three basic questions are raised and (partially) answered. Firstly, at what temporal duration can video sequences be compared? The frame, shot, scene and video levels are identified. Secondly, given some image or video feature, what are the requirements on its distance measure and how can it be "easily" transformed into the visual similarity desired by the inquirer? Thirdly, how can video sequences be compared at different levels? A general approach based on either a set or a sequence representation with variable degrees of aggregation is proposed and applied recursively over the different levels of temporal resolution; it allows the inquirer to fully control the importance of temporal ordering and duration. The general approach is illustrated by introducing and discussing some of the many possible image and video features. Promising experimental results are presented.
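
    The contrast between a set-based and a sequence-based comparison at one level can be sketched as two distance functions over lists of feature vectors: one ignores order (a symmetric nearest-neighbour aggregation), the other aligns positions after resampling. Both the distances and the aggregation are illustrative assumptions, not the VisualGREP definitions.

        # Sketch: order-free (set) vs. order-preserving (sequence) comparison.
        import numpy as np

        def set_distance(a: np.ndarray, b: np.ndarray) -> float:
            # Symmetric mean of nearest-neighbour distances; order ignored.
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
            return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

        def sequence_distance(a: np.ndarray, b: np.ndarray) -> float:
            # Positional comparison after resampling both to a common length.
            n = min(len(a), len(b))
            ia = np.linspace(0, len(a) - 1, n).round().astype(int)
            ib = np.linspace(0, len(b) - 1, n).round().astype(int)
            return float(np.linalg.norm(a[ia] - b[ib], axis=1).mean())

        if __name__ == "__main__":
            shots_a = np.random.rand(8, 16)   # 8 shots, 16-dim feature each
            shots_b = np.random.rand(5, 16)
            print(set_distance(shots_a, shots_b), sequence_distance(shots_a, shots_b))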

    Automatic Generation of Video Summaries for Historical Films

    Full text link
    A video summary is a sequence of video clips extracted from a longer video. Much shorter than the original, the summary preserves its essential messages. In the project ECHO (European Chronicles On-line) a system was developed to store and manage large collections of historical films for the preservation of cultural heritage. At the University of Mannheim we have developed the video summarization component of the ECHO system. In this paper we discuss the particular challenges the historical film material poses, and how we have designed new video processing algorithms and modified existing ones to cope with noisy black-and-white films. We also report empirical results from the use of our summarization tool at the four major European national video archives.

    Feature Based Cut Detection with Automatic Threshold Selection

    Get PDF
    Much work in recent years has concentrated on creating accurate shot boundary detection algorithms. However, a truly accurate method of cut detection still eludes researchers. In this work we present a scheme based on stable feature tracking for inter-frame differencing. Furthermore, we present a method to stabilize the differences and automatically select a global threshold to achieve a high detection rate. We compare our scheme against other cut detection techniques on a variety of data sources that have been specifically selected because of the difficulties they present due to quick motion, highly edited sequences and computer-generated effects.
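
    The general idea can be sketched as follows: track corner features between consecutive frames, use the fraction of lost tracks as the inter-frame difference, and derive a global threshold from the statistics of that signal. The tracker parameters and the mean-plus-k-sigma rule below are assumptions, not the authors' method.

        # Sketch: feature-tracking differences with an automatically derived threshold.
        import cv2
        import numpy as np

        def lost_track_ratio(prev: np.ndarray, nxt: np.ndarray) -> float:
            # prev, nxt: 8-bit single-channel frames.
            pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
            if pts is None or len(pts) == 0:
                return 1.0
            _, status, _ = cv2.calcOpticalFlowPyrLK(prev, nxt, pts, None)
            return 1.0 - float(status.sum()) / len(pts)

        def detect_cuts(gray_frames: list) -> list:
            diffs = np.array([lost_track_ratio(a, b) for a, b in zip(gray_frames, gray_frames[1:])])
            threshold = diffs.mean() + 2.0 * diffs.std()   # assumed automatic rule
            return [i + 1 for i, d in enumerate(diffs) if d > threshold]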

    Computer-assisted text analysis methodology in the social sciences

    Full text link
    "This report presents an account of methods of research in computer-assisted text analysis in the social sciences. Rather than to provide a comprehensive enumeration of all computer-assisted text analysis investigations either directly or indirectly related to the social sciences using a quantitative and computer-assisted methodology as their text analytical tool, the aim of this report is to describe the current methodological standpoint of computer-assisted text analysis in the social sciences. This report provides, thus, a description and a discussion of the operations carried out in computer-assisted text analysis investigations. The report examines both past and well-established as well as some of the current approaches in the field and describes the techniques and the procedures involved. By this means, a first attempt is made toward cataloguing the kinds of supplementary information as well as computational support which are further required to expand the suitability and applicability of the method for the variety of text analysis goals." (author's abstract