
    On the effect of motion segmentation techniques in description based adaptive video transmission

    Full text link
    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    J. C. San Miguel and J. M. Martínez, "On the effect of motion segmentation techniques in description based adaptive video transmission", in AVSS '07: Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, 2007, pp. 359-364.
    This paper presents the results of analysing the effect of different motion segmentation techniques in a system that transmits the information captured by a static surveillance camera in an adaptive way, based on the on-line generation of descriptions and their transmission at different levels of detail. The video sequences are analyzed to detect the regions of activity (motion analysis) and to differentiate them from the background, and the corresponding descriptions (mainly MPEG-7 moving regions) are generated together with the textures of the moving regions and the associated background image. Depending on the available bandwidth, different levels of transmission are specified, ranging from sending only the generated descriptions to transmitting all the associated images corresponding to the moving objects and the background. We study the effect of three motion segmentation algorithms on several aspects, such as segmentation accuracy, size of the generated descriptions, computational efficiency and quality of the reconstructed data.
    This work is partially supported by the Cátedra Infoglobal-UAM para Nuevas Tecnologías de video aplicadas a la seguridad. This work is also supported by the Ministerio de Ciencia y Tecnología of the Spanish Government under project TIN2004-07860 (MEDUSA) and by the Comunidad de Madrid under project P-TIC-0223-0505 (PROMULTIDIS).
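
    A minimal sketch of the kind of motion segmentation step described above (a running-average background model with thresholding); the function name, the learning rate alpha and the threshold are illustrative assumptions, not the specific algorithms compared in the paper:

        import numpy as np

        def segment_motion(frames, alpha=0.05, threshold=30):
            """Running-average background subtraction (illustrative only).

            frames: iterable of grayscale frames as 2-D uint8 arrays.
            Yields a boolean foreground mask per frame.
            """
            background = None
            for frame in frames:
                f = frame.astype(np.float32)
                if background is None:
                    background = f.copy()                        # bootstrap the model with the first frame
                mask = np.abs(f - background) > threshold        # pixels far from the model count as activity
                background = (1 - alpha) * background + alpha * f  # slowly adapt the background model
                yield mask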

    Texturing of Surface of 3D Human Head Model

    Get PDF
    The paper deals with an algorithm for texturing the surface of a 3D human head model. The proposed algorithm builds the texture progressively from several camera frames of the input video sequence. The texture values from the camera frames are mapped onto the surface of the 3D human head model using perspective projection, scan-line conversion and 3D motion estimation. To reduce the number of camera frames required, empty areas in the texture plane are filled using a simple interpolation method.
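
    As an illustration of the perspective-projection step mentioned above, the sketch below projects a 3D vertex of a head model into a camera frame and samples its colour for the texture; the intrinsic matrix K, rotation R and translation t are hypothetical placeholders, not values from the paper:

        import numpy as np

        def sample_texture(vertex, frame, K, R, t):
            """Project a 3-D model vertex into a camera frame and read its colour.

            vertex: (3,) model-space point, frame: (H, W, 3) image,
            K: (3, 3) camera intrinsics, R: (3, 3) rotation, t: (3,) translation.
            Returns the sampled colour or None if the point falls outside the frame.
            """
            cam = R @ vertex + t                          # model space -> camera space (3-D motion estimate)
            if cam[2] <= 0:                               # behind the camera: not visible
                return None
            proj = K @ cam
            u, v = proj[0] / proj[2], proj[1] / proj[2]   # perspective division
            x, y = int(round(u)), int(round(v))
            h, w = frame.shape[:2]
            if 0 <= x < w and 0 <= y < h:
                return frame[y, x]                        # nearest-neighbour sample for the texture
            return None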

    On-line adaptive video sequence transmission based on generation and transmission of descriptions

    Full text link
    Proceedings of the 26th Picture Coding Symposium, PCS 2007, Lisbon, Portugal, November 2007.
    This paper presents a system to transmit the information from a static surveillance camera in an adaptive way, from low to higher bit-rate, based on the on-line generation of descriptions. The proposed system is based on a server/client model: the server is placed in the surveillance area and the client is placed on the user side. The server analyzes the video sequence to detect the regions of activity (motion analysis), and the corresponding descriptions (mainly MPEG-7 moving regions) are generated together with the textures of the moving regions and the associated background image. Depending on the available bandwidth, different levels of transmission are specified, ranging from sending only the generated descriptions to transmitting all the associated images corresponding to the moving objects and the background.
    This work is partially supported by the Cátedra Infoglobal-UAM para Nuevas Tecnologías de video aplicadas a la seguridad. This work is also supported by the Ministerio de Ciencia y Tecnología of the Spanish Government under project TIN2004-07860 (MEDUSA) and by the Comunidad de Madrid under project P-TIC-0223-0505 (PROMULTIDIS).
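
    A minimal sketch of how such bandwidth-dependent transmission levels could be selected on the server side; the thresholds, level contents and function name are assumptions for illustration, not the levels defined in the paper:

        # Illustrative only: minimum rates (kbit/s) and payload contents are assumed.
        LEVELS = [
            (64,   ["descriptions"]),                                                    # lowest bit-rate: descriptions only
            (512,  ["descriptions", "moving_region_textures"]),                          # add textures of the moving regions
            (2048, ["descriptions", "moving_region_textures", "background_image"]),      # full payload
        ]

        def select_payload(available_kbps):
            """Pick the richest payload that fits the available bandwidth."""
            chosen = LEVELS[0][1]
            for min_rate, payload in LEVELS:
                if available_kbps >= min_rate:
                    chosen = payload
            return chosen

        print(select_payload(1000))   # -> ['descriptions', 'moving_region_textures']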

    Perceived quality of DIBR-based synthesized views

    Get PDF
    This paper considers the reliability of usual assessment methods when evaluating virtual synthesized views in the multi-view video context. Virtual views are generated by Depth Image Based Rendering (DIBR) algorithms. Because DIBR algorithms involve geometric transformations, new types of artifacts come up. The question regards the ability of commonly used methods to deal with such artifacts. This paper investigates how well usual metrics correlate with human judgment. The experiments consist of assessing seven different view synthesis algorithms by subjective and objective methods. Three different 3D video sequences are used in the tests. The resulting virtual synthesized sequences are assessed through objective metrics and subjective protocols. Results show that usual objective metrics can fail to assess synthesized views in accordance with human judgment.
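
    As a reminder of how one of the usual objective metrics discussed above is computed, the sketch below evaluates PSNR between a synthesized view and its reference; it is the generic formula, not the exact evaluation protocol of the paper:

        import numpy as np

        def psnr(reference, synthesized, peak=255.0):
            """Peak signal-to-noise ratio between a reference view and a synthesized view.

            Both inputs are arrays of identical shape (e.g. H x W x 3, uint8).
            """
            ref = reference.astype(np.float64)
            syn = synthesized.astype(np.float64)
            mse = np.mean((ref - syn) ** 2)
            if mse == 0:
                return float("inf")                       # identical images
            return 10.0 * np.log10(peak ** 2 / mse)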

    Representation and coding of 3D video data

    Get PDF
    Deliverable D4.1 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.1 of the project.

    Disparity-compensated view synthesis for s3D content correction

    Get PDF
    The production of stereoscopic 3D HD content is increasing considerably and experience in 2-view acquisition is growing. High-quality material must be delivered to the audience, but this is not always ensured, and correction of the stereo views may be required. This is done via disparity-compensated view synthesis. A robust method has been developed that deals both with the acquisition problems that introduce discomfort (e.g. hyperdivergence and hyperconvergence) and with those that may disrupt the correction itself (vertical disparity, color difference between the views, etc.). The method has three phases. A pre-processing phase corrects the stereo images and estimates features (e.g. the disparity range) over the sequence. The second (main) phase then proceeds to disparity estimation and view synthesis: dual disparity estimation based on robust block matching, discontinuity-preserving filtering, consistency checking and occlusion handling has been developed, and accurate view synthesis is carried out through disparity compensation. Disparity assessment has been introduced in order to detect and quantify errors, and a post-processing phase deals with these errors as a fallback mode. The paper focuses on disparity estimation and view synthesis for HD images. Quality assessment of the synthesized views on a large set of HD video data has proved the effectiveness of our method.
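
    A minimal sketch of the kind of block-matching disparity estimation mentioned above (sum of absolute differences over a horizontal search range on rectified views); the block size and search range are illustrative assumptions, and the robust matching, filtering and consistency checks of the actual method are omitted:

        import numpy as np

        def block_disparity(left, right, x, y, block=8, max_disp=64):
            """Estimate the horizontal disparity of one block of the left view.

            left, right: rectified grayscale views as 2-D float arrays.
            Returns the disparity (in pixels) minimizing the sum of absolute differences.
            """
            ref = left[y:y + block, x:x + block]
            best_d, best_cost = 0, np.inf
            for d in range(0, max_disp + 1):
                if x - d < 0:
                    break                                  # candidate block would leave the image
                cand = right[y:y + block, x - d:x - d + block]
                cost = np.sum(np.abs(ref - cand))          # SAD matching cost
                if cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d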

    MASCOT: metadata for advanced scalable video coding tools: final report

    Get PDF
    The goal of the MASCOT project was to develop new video coding schemes and tools that provide both increased coding efficiency and extended scalability features compared to the technology that was available at the beginning of the project. Towards that goal, the following tools were to be used: metadata-based coding tools; new spatiotemporal decompositions; and new prediction schemes. Although the initial goal was to develop one single codec architecture able to combine all the new coding tools that were foreseen when the project was formulated, it became clear that this would limit the selection of the new tools. The consortium therefore decided to develop two codec frameworks within the project, a standard hybrid DCT-based codec and a 3D wavelet-based codec, which together are able to accommodate all the tools developed during the course of the project.
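
    As an illustration of the spatiotemporal decomposition direction mentioned above (the 3D wavelet-based framework), the sketch below performs a one-level Haar decomposition along the temporal axis of a group of frames; it is a textbook transform, not the specific decomposition developed in the project:

        import numpy as np

        def temporal_haar(frames):
            """One-level Haar wavelet decomposition along the time axis.

            frames: array of shape (T, H, W) with T even.
            Returns (lowpass, highpass), each of shape (T // 2, H, W).
            """
            f = frames.astype(np.float64)
            even, odd = f[0::2], f[1::2]
            lowpass = (even + odd) / np.sqrt(2.0)    # temporal average subband
            highpass = (even - odd) / np.sqrt(2.0)   # temporal detail subband
            return lowpass, highpass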

    Deliverable D4.2 of the PERSEE project: Représentation et codage 3D - Rapport intermédiaire - Définitions des softs et architecture (3D representation and coding - intermediate report - software definitions and architecture)

    Get PDF
    Deliverable D4.2 of the ANR PERSEE project. This report was produced within the framework of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.2 of the project. Its title: Représentation et codage 3D - Rapport intermédiaire - Définitions des softs et architecture (3D representation and coding - intermediate report - software definitions and architecture).