
    Video object segmentation for interactive multimedia

    Ankara : Department of Electrical and Electronics Engineering and Institute of Engineering and Sciences, Bilkent University, 1998. Thesis (Master's) -- Bilkent University, 1998. Includes bibliographical references, leaves 67-74.

    Recently, trends in video processing research have shifted from video compression to video analysis, driven by the emerging MPEG-4 and MPEG-7 standards. These standards will enable users to interact with the objects in the audiovisual scene generated at the user's end; however, neither prescribes how the objects are to be obtained. Many methods have been proposed for the segmentation of video objects. One such approach is the "Analysis Model" (AM) of the European COST-211 project, a modular approach to the video object segmentation problem. Although the AM performs acceptably in some cases, in many others the results are not good enough to be considered semantic objects. In this thesis, a new tool is integrated into the AM and some of its modules are replaced with improved versions. One tool uses a block-based motion estimation technique to analyse the motion content within a scene, computes a motion activity parameter, and skips frames accordingly. Also introduced is a powerful motion estimation method that uses the maximum a posteriori probability (MAP) criterion and Gibbs energies to obtain more reliable motion vectors and to identify temporally unpredictable areas. To handle more complex motion in the scene, the 2-D affine motion model is added to the motion segmentation module, which previously employed only the translational model. The observed results indicate that the AM's performance is improved substantially: the objects in the scene and their boundaries are detected more accurately than in the previous results.

    Ekmekçi, Tolga -- M.S.
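    The abstract does not give code for the frame-skipping tool; the following is a minimal pixel-domain sketch of block-based motion estimation with a mean-magnitude motion activity parameter. The function names, block and search sizes, and the SAD matching criterion are illustrative assumptions, not the thesis's actual implementation:

    ```python
    import numpy as np

    def block_motion_vectors(prev, curr, block=8, search=4):
        """Exhaustive block matching: for each block of `curr`, find the
        displacement within +/-`search` pixels that minimises the sum of
        absolute differences (SAD) against `prev`."""
        h, w = curr.shape
        vectors = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                ref = curr[y:y + block, x:x + block].astype(np.int32)
                best, best_sad = (0, 0), None
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                            continue
                        cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
                vectors.append(best)
        return vectors

    def motion_activity(vectors):
        """Mean motion-vector magnitude as a scalar activity measure."""
        return float(np.mean([np.hypot(dy, dx) for dy, dx in vectors]))
    ```

    A frame-skipping rule in the spirit of the described tool would then drop the current frame whenever `motion_activity(...)` falls below some threshold, so static stretches of the sequence are analysed less often.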

    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD

    Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on their content and the rich semantics involved. Current content-based video indexing and retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval, to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed, based on prediction information extracted directly from compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. From the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining high precision and recall on the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
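    The temporal analysis described above operates on compressed-domain prediction information, which cannot be reproduced without a codec. As a rough pixel-domain illustration of the same idea, here is a histogram-difference sketch of shot-boundary detection with mid-shot key-frame selection; all function names, the bin count, and the threshold are assumptions for illustration, not the thesis's method:

    ```python
    import numpy as np

    def colour_histogram(frame, bins=16):
        """Normalised grey-level histogram used as a cheap frame descriptor."""
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        return hist / hist.sum()

    def detect_boundaries(frames, threshold=0.4):
        """Flag a shot boundary where the L1 distance between consecutive
        frame histograms exceeds `threshold`."""
        boundaries = []
        prev = colour_histogram(frames[0])
        for i, frame in enumerate(frames[1:], start=1):
            h = colour_histogram(frame)
            if np.abs(h - prev).sum() > threshold:
                boundaries.append(i)
            prev = h
        return boundaries

    def key_frames(frames, boundaries):
        """Pick the middle frame of each detected shot as its key frame."""
        cuts = [0] + boundaries + [len(frames)]
        return [(cuts[k] + cuts[k + 1] - 1) // 2 for k in range(len(cuts) - 1)]
    ```

    On a sequence of five dark frames followed by five bright frames, this flags a single boundary at the transition and returns one representative frame per shot; a compressed-domain variant would replace the histogram distance with statistics of the encoder's prediction information.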