17,890 research outputs found

    A video object generation tool allowing friendly user interaction

    In this paper we describe an interactive video object segmentation tool developed in the framework of the ACTS-AC098 MOMUSYS project. The Video Object Generator with User Environment (VOGUE) combines three different sets of automatic and semi-automatic tools (spatial segmentation, object tracking and temporal segmentation) with general-purpose tools for user interaction. The result is an integrated environment allowing the user-assisted segmentation of any sort of video sequence in a friendly and efficient manner.

    A Lvq-Based Temporal Tracking for Semi-Automatic Video Object Segmentation

    This paper presents a Learning Vector Quantization (LVQ)-based temporal tracking method for semi-automatic video object segmentation. A semantic video object is initialized with user assistance in a reference frame to give an initial classification of the video object and its background regions. The LVQ training approximates the video object and background classification and uses it for automatic segmentation of the video object in the following frames, thus performing temporal tracking. For the LVQ training input, we sample each pixel of a video frame as a 5-dimensional vector combining the 2-dimensional pixel position (X, Y) and the 3-dimensional HSV color values. This paper also demonstrates experiments using some MPEG-4 standard test video sequences to evaluate the accuracy of the proposed method.
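
    The pixel-classification step described above lends itself to a compact sketch. The following is a minimal, illustrative LVQ1 routine in Python (NumPy), not the authors' exact training schedule: the prototype count, learning rate and epoch count are assumptions, and the 5-dimensional features are the (X, Y) position plus HSV values mentioned in the abstract.

    import numpy as np

    def pixel_features(frame_hsv):
        # Stack (x, y, H, S, V) for every pixel of an HSV frame into an (N, 5) array.
        h, w = frame_hsv.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        return np.column_stack([xs.ravel(), ys.ravel(), frame_hsv.reshape(-1, 3)])

    def lvq_train(features, labels, protos_per_class=8, lr=0.05, epochs=5, seed=0):
        # LVQ1: move the best-matching prototype towards correctly labelled samples
        # and away from incorrectly labelled ones. labels: 0 = background, 1 = object.
        rng = np.random.default_rng(seed)
        protos, proto_labels = [], []
        for c in (0, 1):
            idx = rng.choice(np.flatnonzero(labels == c), protos_per_class, replace=False)
            protos.append(features[idx].astype(float))
            proto_labels.append(np.full(protos_per_class, c))
        protos, proto_labels = np.vstack(protos), np.concatenate(proto_labels)
        for _ in range(epochs):
            for i in rng.permutation(len(features)):
                x = features[i]
                j = np.argmin(np.linalg.norm(protos - x, axis=1))   # best-matching prototype
                step = lr if proto_labels[j] == labels[i] else -lr  # attract or repel
                protos[j] += step * (x - protos[j])
        return protos, proto_labels

    def lvq_classify(features, protos, proto_labels):
        # Each pixel takes the label (object/background) of its nearest prototype.
        d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=2)
        return proto_labels[np.argmin(d, axis=1)]

    In practice one would likely train on a subsample of the reference-frame pixels and classify subsequent frames in chunks to keep the distance computation memory-bounded.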

    Segmentation and tracking of video objects for a content-based video indexing context

    This paper examines the problem of segmentation and tracking of video objects for content-based information retrieval. Segmentation and tracking of video objects plays an important role in the index creation and user request definition steps. The object is initially selected using a semi-automatic approach. For this purpose, a user-based selection is required to roughly define the object to be tracked. In this paper, we propose two different methods to allow an accurate contour definition from the user selection. The first one is based on an active contour model which progressively refines the selection by fitting the natural edges of the object, while the second uses a binary partition tree.
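
    The abstract does not give the paper's exact energy formulation, but the contour-refinement idea behind the first method can be illustrated with a generic greedy snake in Python (NumPy). The weights, search window and use of a gradient-magnitude edge map below are assumptions made for this sketch, not the paper's parameters.

    import numpy as np

    def greedy_snake(edge_map, contour, alpha=1.0, beta=1.0, gamma=1.5, window=2, iterations=100):
        # Refine a rough, user-drawn contour towards strong image edges.
        # edge_map: 2-D gradient magnitude (higher = stronger edge).
        # contour:  (N, 2) array of (row, col) points roughly outlining the object.
        contour = contour.astype(float)
        h, w = edge_map.shape
        offsets = [(dy, dx) for dy in range(-window, window + 1)
                            for dx in range(-window, window + 1)]
        for _ in range(iterations):
            diffs = np.diff(contour, axis=0, append=contour[:1])
            mean_spacing = np.linalg.norm(diffs, axis=1).mean()
            moved = False
            for i in range(len(contour)):
                prev_pt, next_pt = contour[i - 1], contour[(i + 1) % len(contour)]
                best_pt, best_e = contour[i], np.inf
                for dy, dx in offsets:
                    cand = contour[i] + (dy, dx)
                    r, c = int(cand[0]), int(cand[1])
                    if not (0 <= r < h and 0 <= c < w):
                        continue
                    e_cont = (np.linalg.norm(cand - prev_pt) - mean_spacing) ** 2  # even point spacing
                    e_curv = np.linalg.norm(prev_pt - 2 * cand + next_pt) ** 2     # smoothness
                    e_edge = -edge_map[r, c]                                        # attraction to edges
                    e = alpha * e_cont + beta * e_curv + gamma * e_edge
                    if e < best_e:
                        best_e, best_pt = e, cand
                if not np.array_equal(best_pt, contour[i]):
                    contour[i], moved = best_pt, True
            if not moved:
                break
        return contour

    A typical use would feed the magnitude of a Sobel gradient as edge_map and points sampled along the user's rough selection as the initial contour.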

    Practical Uses of A Semi-automatic Video Object Extraction System

    Object-based technology is important for computer vision applications including gesture understanding, image recognition, augmented reality, etc. However, extracting the shape information of semantic objects from video sequences is a very difficult task, since this information is not explicitly provided within the video data. Therefore, an application for extracting semantic video objects is indispensable for many advanced applications. An algorithm for a semi-automatic video object extraction system has been developed. The performance measures of the video object extraction system, including evaluation against ground truth and an error metric, are shown, followed by some practical uses of our video object extraction system. The principle at the basis of the semi-automatic object extraction technique is the interaction of the user during some stages of the segmentation process, whereby the semantic information is provided directly by the user. After the user provides the initial segmentation of the semantic video objects, a tracking mechanism follows their temporal transformation in the subsequent frames, thus propagating the semantic information. Since the tracking tends to introduce boundary errors, the semantic information can be refreshed by the user at certain key frame locations in the video sequence. The tracking mechanism can also operate in the forward or backward direction of the video sequence. The performance analysis of the results is described using single and multiple key frames, the Mean Error and “Last_Error” measures, and forward and backward extraction. To achieve the best performance, results from forward and backward extraction can be merged.
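
    The exact definitions of the Mean Error and “Last_Error” measures and of the forward/backward merging rule are not spelled out in the abstract. The sketch below assumes a pixel-mismatch error against ground truth and a nearest-key-frame selection rule, purely for illustration (Python/NumPy).

    import numpy as np

    def frame_error(mask, ground_truth):
        # Fraction of pixels whose object/background label disagrees with the ground truth.
        return np.count_nonzero(mask.astype(bool) ^ ground_truth.astype(bool)) / mask.size

    def sequence_errors(masks, ground_truths):
        # Per-frame errors, their mean ("Mean Error" here) and the error of the last frame.
        errors = [frame_error(m, g) for m, g in zip(masks, ground_truths)]
        return errors, float(np.mean(errors)), errors[-1]

    def merge_forward_backward(forward_masks, backward_masks, key_start, key_end):
        # Keep, for each frame between two key frames, the mask propagated from the
        # nearer key frame; both lists cover frames key_start..key_end inclusive.
        merged = []
        for i, (fw, bw) in enumerate(zip(forward_masks, backward_masks)):
            frame = key_start + i
            merged.append(fw if (frame - key_start) <= (key_end - frame) else bw)
        return merged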

    Semi-automatic video object segmentation for multimedia applications

    A semi-automatic video object segmentation tool is presented for segmenting both still pictures and image sequences. The approach comprises both automatic segmentation algorithms and manual user interaction. The still image segmentation component comprises a conventional spatial segmentation algorithm (Recursive Shortest Spanning Tree (RSST)), a hierarchical segmentation representation method (Binary Partition Tree (BPT)), and user interaction. An initial segmentation partition of homogeneous regions is created using RSST. The BPT technique is then used to merge these regions and hierarchically represent the segmentation in a binary tree. The semantic objects are then manually built by selectively clicking on image regions. A video object-tracking component enables image sequence segmentation; this subsystem is based on motion estimation, spatial segmentation, object projection, region classification, and user interaction. The motion between the previous frame and the current frame is estimated, and the previous object is then projected onto the current partition. A region classification technique is used to determine which regions in the current partition belong to the projected object. User interaction is allowed for object re-initialisation when the segmentation results become inaccurate. The combination of all these components enables offline video sequence segmentation. The results presented on standard test sequences illustrate the potential use of this system for object-based coding and representation of multimedia.
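
    The projection and region-classification steps of the tracking subsystem can be sketched as follows. The dense motion field, the 50% coverage rule and the function names are assumptions made for this Python/NumPy illustration, not the system's actual interfaces.

    import numpy as np

    def project_object(prev_mask, flow):
        # Warp the previous frame's object mask forward with a dense motion field;
        # flow[..., 0] and flow[..., 1] hold per-pixel (dx, dy) displacements.
        h, w = prev_mask.shape
        projected = np.zeros_like(prev_mask)
        ys, xs = np.nonzero(prev_mask)
        nx = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
        ny = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
        projected[ny, nx] = 1
        return projected

    def classify_regions(partition, projected_mask, coverage_threshold=0.5):
        # A region of the current spatial partition is labelled "object" when most
        # of its pixels are covered by the projected previous object.
        object_mask = np.zeros_like(projected_mask)
        for label in np.unique(partition):
            region = partition == label
            if projected_mask[region].mean() >= coverage_threshold:
                object_mask[region] = 1
        return object_mask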

    Semi-automatic video object segmentation

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 2000. Thesis (Master's), Bilkent University, 2000. Includes bibliographical references (leaves 70-74). Content-based functionalities form the core of future multimedia applications. The new multimedia standard MPEG-4 provides a new form of interactivity with coded audio-visual data. The emerging standard MPEG-7 specifies a common description of various types of multimedia information to index the data for storage and retrieval. However, none of these standards specifies how to extract the content of the multimedia data. Video object segmentation addresses this task and tries to extract semantic objects from a scene. Two types of video object segmentation can be identified: unsupervised and supervised. In unsupervised methods the user is not involved in any step of the process. In supervised methods the user is requested to supply additional information to increase the quality of the segmentation. The proposed weakly supervised still-image segmentation asks the user to draw a scribble over what he defines as an object. These scribbles initiate the iterative method. At each iteration the most similar regions are merged until the desired number of regions is reached. The proposed segmentation method is inserted into the unsupervised COST211ter Analysis Model (AM) for video object segmentation. The AM is modified to handle the supervision. The new semi-automatic AM requires user interaction only for the first frame of the video; segmentation and object tracking are then done automatically. The results indicate that the new semi-automatic AM constitutes a good tool for video object segmentation. (Esen, Ersin, M.S.)
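
    The iterative merging described above can be sketched as a greedy region-merging loop seeded by the user's scribble. The mean-colour similarity measure, the data structures and the function names below are illustrative assumptions (Python/NumPy), not the thesis' COST211ter implementation.

    import numpy as np

    def merge_until(region_means, adjacency, target_regions):
        # Greedily merge the most similar pair of adjacent regions until only
        # `target_regions` remain.
        # region_means: dict region_id -> mean colour vector of the region
        # adjacency:    set of (a, b) tuples, a < b, marking adjacent regions
        # Returns a dict mapping every original region id to its final merged id.
        means = {r: np.asarray(m, dtype=float) for r, m in region_means.items()}
        sizes = {r: 1.0 for r in means}   # real pixel counts could be used instead of 1.0
        parent = {r: r for r in means}
        adjacency = set(adjacency)
        while len(means) > target_regions and adjacency:
            a, b = min(adjacency, key=lambda p: np.linalg.norm(means[p[0]] - means[p[1]]))
            total = sizes[a] + sizes[b]
            means[a] = (sizes[a] * means[a] + sizes[b] * means[b]) / total  # size-weighted mean
            sizes[a] = total
            del means[b], sizes[b]
            for r in parent:
                if parent[r] == b:
                    parent[r] = a
            rewired = set()
            for x, y in adjacency:
                if b in (x, y):                       # re-attach b's neighbours to a
                    other = y if x == b else x
                    if other != a:
                        rewired.add((min(a, other), max(a, other)))
                else:
                    rewired.add((x, y))
            adjacency = rewired
        return parent

    def object_regions(parent, scribbled_regions):
        # The object is the union of all original regions that ended up merged
        # with a region touched by the user's scribble.
        seed_ids = {parent[s] for s in scribbled_regions}
        return {r for r, p in parent.items() if p in seed_ids}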

    QIMERA: a software platform for video object segmentation and tracking

    In this paper we present an overview of an ongoing collaborative project in the field of video object segmentation and tracking. The objective of the project is to develop a flexible modular software architecture that can be used as a test-bed for segmentation algorithms. The background to the project is described, as is the first version of the software system itself. Some sample results for the first segmentation algorithm developed using the system are presented, and directions for future work are discussed.
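
    QIMERA's actual interfaces are not described in the abstract; the following is only an illustrative Python sketch of the kind of plug-in pattern such a test-bed could expose, with hypothetical class and method names.

    from abc import ABC, abstractmethod

    class SegmentationModule(ABC):
        """Interface each pluggable segmentation/tracking algorithm implements."""

        @abstractmethod
        def initialise(self, first_frame, user_mask):
            """Produce the first object mask, possibly using user interaction."""

        @abstractmethod
        def track(self, frame, previous_mask):
            """Propagate the object mask to the next frame."""

    class TestBed:
        """Runs any registered module over a frame sequence so algorithms can be
        swapped and compared without touching the surrounding system."""

        def __init__(self):
            self._modules = {}

        def register(self, name, module):
            self._modules[name] = module

        def run(self, name, frames, user_mask):
            module = self._modules[name]
            frames = iter(frames)
            mask = module.initialise(next(frames), user_mask)
            masks = [mask]
            for frame in frames:
                mask = module.track(frame, mask)
                masks.append(mask)
            return masks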