
    A robust motion estimation and segmentation approach to represent moving images with layers

    The paper provides a robust representation of moving images based on layers. To that end, we have designed efficient motion estimation and segmentation techniques based on affine model fitting, suitable for the construction of layers. Layered representations, originally introduced by Wang and Adelson (see IEEE Transactions on Image Processing, vol. 3, no. 5, p. 625-38, 1994), are important in several applications. In particular, they are very appropriate for object tracking, object manipulation and content-based scalability, which are among the main functionalities of the future MPEG-4 standard. In addition, a variety of examples are provided that give deep insight into the performance bounds of the representation of moving images using layers. Peer reviewed. Postprint (published version).
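The affine model fitting mentioned above can be illustrated with a minimal sketch: a 6-parameter affine motion field u(x, y) = a1 + a2·x + a3·y, v(x, y) = a4 + a5·x + a6·y is fitted to sparse flow vectors by least squares. All names here are illustrative; this is not the authors' implementation.

```python
import numpy as np

def fit_affine_motion(points, flows):
    """Fit a 6-parameter affine motion model to sparse flow samples.

    points: (N, 2) pixel coordinates; flows: (N, 2) motion vectors.
    Returns the parameters (a1..a6) of
      u = a1 + a2*x + a3*y,  v = a4 + a5*x + a6*y.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.stack([np.ones_like(x), x, y], axis=1)  # shared design matrix
    # The two flow components are solved independently.
    (a1, a2, a3), *_ = np.linalg.lstsq(A, flows[:, 0], rcond=None)
    (a4, a5, a6), *_ = np.linalg.lstsq(A, flows[:, 1], rcond=None)
    return np.array([a1, a2, a3, a4, a5, a6])

# A pure translation by (2, -1) should be recovered exactly.
pts = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], float)
flow = np.tile([2.0, -1.0], (4, 1))
params = fit_affine_motion(pts, flow)
print(params)  # close to [2, 0, 0, -1, 0, 0]
```

In a layered representation, such a fit would be run per layer on the flow vectors assigned to it, with outlier rejection making the estimation robust.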

    Context-based coding of bilevel images enhanced by digital straight line analysis


    Depth map compression via 3D region-based representation

    In 3D video, view synthesis is used to create new virtual views between encoded camera views. Errors in the coding of the depth maps introduce geometry inconsistencies in the synthesized views. In this paper, a new 3D plane representation of the scene is presented which improves the performance of current standard video codecs in the view synthesis domain. Two image segmentation algorithms are proposed for generating a color and a depth segmentation. Using both partitions, depth maps are segmented into regions free of sharp discontinuities, without having to explicitly signal all depth edges. The resulting regions are represented using a planar model in the 3D world scene. This 3D representation allows an efficient encoding while preserving the 3D characteristics of the scene. The 3D planes also open up the possibility of coding multiview images with a unique representation. Postprint (author's final draft).
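The planar region model above can be sketched as follows: each segmented region's depth is approximated by a plane z = a·x + b·y + c fitted by least squares, so only three coefficients per region need to be coded. This is an illustrative sketch, not the paper's codec.

```python
import numpy as np

def fit_depth_plane(xs, ys, zs):
    """Least-squares fit of z = a*x + b*y + c to a region's depth samples."""
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return a, b, c

def plane_depth(a, b, c, xs, ys):
    """Reconstruct the region's depth from just 3 plane coefficients."""
    return a * xs + b * ys + c

# An exactly planar patch of depth is recovered without error.
xs = np.array([0.0, 1.0, 0.0, 1.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])
zs = 0.5 * xs - 0.25 * ys + 10.0
a, b, c = fit_depth_plane(xs, ys, zs)
print(a, b, c)  # close to 0.5, -0.25, 10.0
```

Replacing a dense depth map with per-region plane coefficients is what lets the representation stay consistent in the 3D scene while drastically reducing what the codec must signal.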

    Object-based video representations: shape compression and object segmentation

    Object-based video representations are considered to be useful for easing the process of multimedia content production and enhancing user interactivity in multimedia productions. Object-based video presents several new technical challenges, however. Firstly, as with conventional video representations, compression of the video data is a requirement. For object-based representations, it is necessary to compress the shape of each video object as it moves in time. This amounts to the compression of moving binary images. This is achieved by the use of a technique called context-based arithmetic encoding. The technique is applied to rectangular pixel blocks and as such is consistent with the standard tools of video compression. The block-based application also facilitates the exploitation of temporal redundancy in the sequence of binary shapes. For the first time, context-based arithmetic encoding is used in conjunction with motion compensation to provide inter-frame compression. The method, described in this thesis, has been thoroughly tested throughout the MPEG-4 core experiment process and, due to favourable results, has been adopted as part of the MPEG-4 video standard. The second challenge lies in the acquisition of the video objects. Under normal conditions, a video sequence is captured as a sequence of frames and there is no inherent information about what objects are in the sequence, not to mention information relating to the shape of each object. Some means for segmenting semantic objects from general video sequences is required. For this purpose, several image analysis tools may be of help and, in particular, it is believed that video object tracking algorithms will be important. A new tracking algorithm is developed based on piecewise polynomial motion representations and statistical estimation tools, e.g. the expectation-maximisation method and the minimum description length principle.
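The context-modelling step behind context-based arithmetic encoding of binary shapes can be sketched as follows: each pixel's probability model is selected by an index formed from already-coded neighbours. The 4-pixel causal template and adaptive counts below are deliberate simplifications for illustration (the actual MPEG-4 CAE uses a 10-pixel template feeding a binary arithmetic coder).

```python
def context_index(shape_map, x, y):
    """Pack 4 causal neighbours of (x, y) into a 4-bit context (0..15)."""
    def px(i, j):
        if 0 <= j < len(shape_map) and 0 <= i < len(shape_map[0]):
            return shape_map[j][i]
        return 0  # pixels outside the block count as background
    neighbours = [px(x - 1, y), px(x - 1, y - 1), px(x, y - 1), px(x + 1, y - 1)]
    ctx = 0
    for bit in neighbours:
        ctx = (ctx << 1) | bit
    return ctx

def estimate_probs(shape_map):
    """Adaptive per-context (zeros, ones) counts -> P(pixel = 1 | context)."""
    counts = [[1, 1] for _ in range(16)]  # Laplace-smoothed
    for y, row in enumerate(shape_map):
        for x, bit in enumerate(row):
            counts[context_index(shape_map, x, y)][bit] += 1
    return [ones / (zeros + ones) for zeros, ones in counts]

# A small binary shape block: a 2x2 object on a background.
shape = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
probs = estimate_probs(shape)
print(probs[0])  # P(1 | all-zero neighbourhood), skewed towards 0
```

The skewed per-context probabilities are exactly what an arithmetic coder exploits: pixels whose neighbourhood already predicts their value cost a fraction of a bit.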

    Object-based coding for plenoptic videos

    A new object-based coding system for a class of dynamic image-based representations called plenoptic videos (PVs) is proposed. PVs are simplified dynamic light fields, where the videos are taken at regularly spaced locations along line segments instead of a 2-D plane. In the proposed object-based approach, objects at different depth values are segmented to improve the rendering quality. By encoding PVs at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with an individual image-based rendering (IBR) object can be achieved. Besides supporting the coding of texture and binary shape maps for IBR objects with arbitrary shapes, the proposed system also supports the coding of grayscale alpha maps as well as depth maps (geometry information) to respectively facilitate the matting and rendering of the IBR objects. Both temporal and spatial redundancies among the streams in the PV are exploited to improve the coding performance, while avoiding excessive complexity in selective decoding of PVs to support fast rendering speed. Advanced spatial/temporal prediction methods such as global disparity-compensated prediction, as well as direct prediction and its extensions, are developed. A bit allocation and rate control scheme employing a new convex optimization-based approach is also introduced. Experimental results show that considerable improvements in coding performance are obtained for both synthetic and real scenes, while supporting the stated object-based functionalities. © 2006 IEEE.
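The global disparity-compensated prediction mentioned above can be sketched in miniature: a frame from a neighbouring camera is predicted by shifting the reference view by a single global disparity, and only the residual needs to be coded. This is an illustrative toy (1-D horizontal shift, exhaustive SAD search), not the paper's actual prediction scheme.

```python
import numpy as np

def predict_with_disparity(reference, disparity):
    """Predict a neighbouring view by shifting the reference right by `disparity` px."""
    pred = np.zeros_like(reference)
    if disparity > 0:
        pred[:, disparity:] = reference[:, :-disparity]
    else:
        pred[:] = reference
    return pred

def best_global_disparity(reference, target, max_d=8):
    """Exhaustive search for the global disparity minimising the SAD."""
    errors = [np.abs(target - predict_with_disparity(reference, d)).sum()
              for d in range(max_d + 1)]
    return int(np.argmin(errors))

ref = np.zeros((4, 16))
ref[:, 5] = 1.0          # a vertical edge in the reference view
tgt = np.zeros((4, 16))
tgt[:, 8] = 1.0          # the same edge, 3 px to the right in the target view
d = best_global_disparity(ref, tgt)
residual = tgt - predict_with_disparity(ref, d)
print(d, residual.any())  # disparity 3, zero residual
```

Since cameras along a line segment see near-identical content at a horizontal offset, one well-chosen global disparity removes most of the inter-view redundancy before any block-level prediction runs.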

    On object-based compression for a class of dynamic image-based representations

    An object-based compression scheme for a class of dynamic image-based representations called "plenoptic videos" (PVs) is studied in this paper. PVs are simplified dynamic light fields in which the videos are taken at regularly spaced locations along a line segment instead of a 2-D plane. To improve the rendering quality in scenes with large depth variations and to support functionalities at the object level for rendering, an object-based compression scheme is employed for the coding of PVs. Besides texture and shape information, the compression of geometry information in the form of depth maps is also supported. The proposed compression scheme exploits both the temporal and spatial redundancy among video object streams in the PV to achieve higher compression efficiency. Experimental results show that considerable improvements in coding performance are obtained for both synthetic and real scenes. Moreover, object-based functionalities such as rendering individual image-based objects are also illustrated. © 2005 IEEE.

    An object-based compression system for a class of dynamic image-based representations

    SPIE Conference on Visual Communications and Image Processing, Beijing, China, 12-15 July 2005. This paper proposes a new object-based compression system for a class of dynamic image-based representations called plenoptic videos (PVs). PVs are simplified dynamic light fields, where the videos are taken at regularly spaced locations along line segments instead of a 2-D plane. The proposed system employs an object-based approach, where objects at different depth values are segmented to improve the rendering quality, as in the pop-up light fields. Furthermore, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. Besides supporting the coding of the texture and binary shape maps for IBR objects with arbitrary shapes, the proposed system also supports the coding of gray-scale alpha maps as well as geometry information in the form of depth maps to respectively facilitate the matting and rendering of the IBR objects. To improve the coding performance, the proposed compression system exploits both the temporal and spatial redundancy among the video object streams in the PV by employing disparity-compensated prediction or spatial prediction in its texture, shape and depth coding processes. To demonstrate the principle and effectiveness of the proposed system, a multiple-video-camera system was built, and experimental results show that considerable improvements in coding performance are obtained for both synthetic and real scenes, while supporting the stated object-based functionalities.

    Depth-based Multi-View 3D Video Coding


    A multi-camera approach to image-based rendering and 3-D/Multiview display of ancient chinese artifacts


    Progressive contour coding in the wavelet domain

    This paper presents a new wavelet-based image contour coding technique, suitable for representing either shapes or generic contour maps. Starting from a contour map (e.g. a segmentation map or the output of an edge detector), a single one-dimensional signal is generated from the set of contour points. Coordinate jumps between contour extremities introduce signal discontinuities; when below a tolerance threshold, these can still be compactly coded in the wavelet domain, while discontinuities exceeding the threshold are coded as side information. This side information, and the amount of remaining discontinuity, are minimized by an optimized sequencing of the contour segments. The resulting 1D signal is decomposed and coded in the wavelet domain using a 1D extension of the SPIHT algorithm. The described technique can efficiently code any kind of 2D contour map, from one to many unconnected contour segments. It guarantees fully embedded progressive coding, state-of-the-art coding performance, good approximation capabilities for both open and closed contours, and graceful visual degradation at low bit-rates.
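The first step of the scheme, turning a contour into a 1D signal for wavelet coding, can be sketched as follows. The contour points are flattened into one complex signal x + iy, and a single Haar level stands in for the multi-level SPIHT-coded decomposition; both choices are illustrative simplifications, not the paper's exact construction.

```python
import numpy as np

def contour_to_signal(points):
    """Represent (x, y) contour points as one complex-valued 1D signal."""
    pts = np.asarray(points, float)
    return pts[:, 0] + 1j * pts[:, 1]

def haar_level(signal):
    """One level of the Haar transform: approximation and detail bands."""
    even, odd = signal[0::2], signal[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

# A smooth contour yields small detail coefficients -> compact coding.
pts = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3), (7, 3)]
approx, detail = haar_level(contour_to_signal(pts))
print(np.abs(detail).max(), np.abs(approx).max())
```

Because neighbouring contour points are close in the plane, the 1D signal is smooth and its energy concentrates in the approximation band, which is what makes embedded progressive coding with SPIHT effective.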