
    Video object segmentation for future multimedia applications

    An efficient representation of two-dimensional visual objects is specified by an emerging audiovisual compression standard known as MPEG-4. It incorporates the advantages of segmentation-based video compression (whereby objects are encoded independently, facilitating content-based functionalities) as well as the advantages of more traditional block-based approaches (such as low delay and compression efficiency). What the standard does not specify, however, is how semantic objects are to be extracted from a scene, which is a video segmentation task. An accurate, robust and flexible solution to this task is essential to enable the future multimedia applications that MPEG-4 makes possible. Two categories of video segmentation approach can be identified: supervised and unsupervised. A representative set of unsupervised approaches is discussed. These approaches are found to be suitable for real-time MPEG-4 applications, but not for off-line applications that require very accurate segmentations of entire semantic objects, because an automatic segmentation process cannot solve the ill-posed problem of extracting semantic meaning from a scene. Supervised segmentation incorporates user interaction so that semantic objects in a scene can be defined. A representative set of supervised approaches with varying degrees of interaction is discussed. Three new approaches to the problem, each more sophisticated than the last, are presented by the author. The most sophisticated is an object-based approach in which an automatic segmentation and tracking algorithm segments a scene in terms of the semantic objects defined by the user. The approach relies on maximum likelihood estimation of the parameters of mixtures of multimodal multivariate probability distribution functions, and is an enhanced and modified version of an existing approach, yielding more sophisticated object modelling. The segmentation results obtained are comparable to those of existing approaches and in many cases better. It is concluded that the author's approach is well suited as a content extraction tool for future off-line MPEG-4 applications.
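    The mixture-modelling step described above lends itself to a compact illustration. The sketch below is an assumption-laden approximation, not the thesis's actual algorithm: it models each user-defined object as a Gaussian mixture over per-pixel colour-and-position features (an assumed feature choice) fitted by maximum likelihood with scikit-learn, and labels pixels by the highest-likelihood model.

        # Minimal sketch: one GMM per semantic object, maximum-likelihood labelling.
        # Feature choice (RGB + pixel position) is an illustrative assumption.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_object_models(frame, masks, n_components=3):
            """Fit one mixture model per object from user-supplied initial masks."""
            h, w, _ = frame.shape
            ys, xs = np.mgrid[0:h, 0:w]
            feats = np.dstack([frame.astype(float), ys, xs]).reshape(-1, 5)
            models = []
            for mask in masks:  # one boolean (h, w) mask per semantic object
                gmm = GaussianMixture(n_components=n_components, covariance_type="full")
                gmm.fit(feats[mask.ravel()])
                models.append(gmm)
            return feats, models

        def segment(feats, models, shape):
            """Label each pixel with the object whose mixture gives the highest likelihood."""
            scores = np.stack([m.score_samples(feats) for m in models])
            return scores.argmax(axis=0).reshape(shape)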

    Segmentation and tracking of video objects for a content-based video indexing context

    This paper examines the problem of segmentation and tracking of video objects for content-based information retrieval. Segmentation and tracking of video objects play an important role in the index creation and user request definition steps. The object is initially selected using a semi-automatic approach: a user-based selection roughly defines the object to be tracked. In this paper, we propose two different methods for obtaining an accurate contour definition from the user selection. The first is based on an active contour model which progressively refines the selection by fitting the natural edges of the object, while the second uses a binary partition tree with a …
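    As a rough illustration of the first refinement strategy, a snake can be initialized from the user's rough selection and attracted to nearby image edges. The sketch below uses scikit-image's active_contour; the circular initialization and parameter values are illustrative assumptions, not the paper's formulation.

        # Sketch: refine a rough user selection with an active contour ("snake").
        import numpy as np
        from skimage import data
        from skimage.color import rgb2gray
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        img = rgb2gray(data.astronaut())

        # Stand-in for a rough user selection: a circle around the object.
        t = np.linspace(0, 2 * np.pi, 400)
        init = np.stack([100 + 100 * np.sin(t),   # row coordinates
                         220 + 100 * np.cos(t)],  # column coordinates
                        axis=1)

        # The snake iteratively deforms toward strong edges of the smoothed image.
        snake = active_contour(gaussian(img, sigma=3), init,
                               alpha=0.015, beta=10, gamma=0.001)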

    Interaction between high-level and low-level image analysis for semantic video object extraction


    A video object generation tool allowing friendly user interaction

    In this paper we describe an interactive video object segmentation tool developed in the framework of the ACTS-AC098 MOMUSYS project. The Video Object Generator with User Environment (VOGUE) combines three different sets of automatic and semi-automatic tools (spatial segmentation, object tracking and temporal segmentation) with general-purpose tools for user interaction. The result is an integrated environment allowing the user-assisted segmentation of any sort of video sequence in a friendly and efficient manner.

    User-interface to a CCTV video search system

    The proliferation of CCTV surveillance systems creates the problem of how to effectively navigate and search the resulting video archives in a variety of security scenarios. We are concerned here with a situation where a searcher must locate all occurrences of a given person or object within a specified timeframe, with constraints on which cameras' footage is valid to search. Conventional approaches based on browsing time/camera combinations are inadequate. We advocate using automatically detected video objects as a basis for search, linking and browsing. In this paper we present a system under development based on users interacting with detected video objects. We outline the suite of technologies needed to achieve such a system and, for each, describe our progress in realizing it. We also present the system's user interface, designed with user needs and user tasks in mind.
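    The object-based search advocated above implies an index of detected object occurrences keyed by identity, camera and time. The sketch below is a hypothetical structure for such queries; field names and matching logic are assumptions, not the paper's design.

        # Hypothetical occurrence index for object-based CCTV search.
        from dataclasses import dataclass

        @dataclass
        class Occurrence:
            object_id: str   # identity of the matched person/object
            camera: str      # camera that captured the occurrence
            t: float         # timestamp in seconds

        def search(index, object_id, t_start, t_end, cameras):
            """All occurrences of one object in a timeframe, on permitted cameras."""
            return [o for o in index
                    if o.object_id == object_id
                    and t_start <= o.t <= t_end
                    and o.camera in cameras]

        index = [Occurrence("person-17", "cam-03", 120.5),
                 Occurrence("person-17", "cam-07", 410.0),
                 Occurrence("person-09", "cam-03", 125.0)]
        hits = search(index, "person-17", 100.0, 300.0, {"cam-03", "cam-04"})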

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, creating photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos. Video: https://www.youtube.com/watch?v=7Dg49wv2c_g. Presented at SIGGRAPH 2018.
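    The view- and pose-dependent texturing step can be illustrated with a simple weighting scheme: captured target frames are blended according to how closely their head pose matches the requested pose. The Gaussian kernel and rotation-geodesic distance below are assumptions for illustration; HeadOn's actual compositing is considerably more involved.

        # Illustrative pose-dependent blending weights (not HeadOn's method).
        import numpy as np

        def rotation_angle(R_a, R_b):
            """Geodesic distance between two 3x3 rotation matrices, in radians."""
            cos = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
            return np.arccos(np.clip(cos, -1.0, 1.0))

        def blend_weights(target_pose, frame_poses, sigma=0.2):
            """Normalized weights favouring frames whose pose matches the target."""
            d = np.array([rotation_angle(target_pose, R) for R in frame_poses])
            w = np.exp(-0.5 * (d / sigma) ** 2)
            return w / w.sum()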