7 research outputs found

    Efficient multiview image compression using quadtree disparity estimation


    Virtual Viewpoint Replay for a Soccer Match by View Interpolation From Multiple Cameras


    Single-lens multi-ocular stereovision using prism

    Doctor of Philosophy (Ph.D.) thesis

    Novel approaches for generating video textures

    Video texture, a new type of medium, can produce a new video with a continuously varying stream of images from a recorded video. It is synthesized by reordering the input video frames so that the result can be played without any visual discontinuity. However, video texture still suffers from a few drawbacks. For instance, existing video texture techniques can only generate new videos by rearranging the order of frames in the original video: all the individual frames are the same as before, and the result suffers from "dead ends" when the current frame cannot find similar frames to transition to. In this thesis, we propose several new approaches for synthesizing video textures. These approaches adopt dimensionality reduction and regression techniques to generate video textures: not only are the frames in the resulting video textures new, but the "dead end" problem is also avoided. First, we extend the work of applying principal component analysis (PCA) and an autoregressive (AR) process to generate video textures by replacing PCA with five other dimensionality reduction techniques. In our experiments, these techniques improved the quality of the video textures compared with extracting frame signatures using PCA. The synthesized video textures may contain motions similar to the input video but are never repeated exactly; none of the synthesized frames has appeared before. We also propose a new approach for generating video textures using probabilistic principal component analysis (PPCA) and the Gaussian process dynamical model (GPDM), a nonparametric model for learning high-dimensional nonlinear dynamical data sets. We apply PPCA and GPDM to several movie clips to synthesize video textures that contain frames which never appeared before, with motions similar to those of the original videos. Furthermore, we propose two ways of generating real-time video textures by applying the incremental Isomap and the incremental Spatio-temporal Isomap (IST-Isomap). Both approaches produce good real-time video texture results. In particular, IST-Isomap, which we propose, is better suited to sparse video data (e.g. cartoon…
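
    As a rough illustration of the PCA + autoregressive pipeline this abstract describes, the sketch below projects frames onto a few principal components, fits a least-squares AR model to the resulting frame signatures, drives the model forward with Gaussian noise so the trajectory never repeats or dead-ends, and decodes the signatures back to pixels. It is a minimal reconstruction from the abstract alone, not the thesis's implementation; the function name, component count, AR order, and noise model are all illustrative assumptions.

        import numpy as np

        def synthesize_video_texture(frames, n_components=20, ar_order=2, n_out=200):
            """Sketch of a PCA + AR(p) video-texture generator (assumed, not the thesis's code).

            frames: (T, H, W) grayscale input video.
            Returns (n_out, H, W) newly synthesized frames.
            """
            T, H, W = frames.shape
            X = frames.reshape(T, -1).astype(np.float64)
            mean = X.mean(axis=0)
            Xc = X - mean

            # PCA via SVD: each frame becomes a low-dimensional "signature".
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            Z = Xc @ Vt[:n_components].T  # (T, d) frame signatures

            # Least-squares AR(p) fit: z_t ~ [z_{t-1} ... z_{t-p}] @ A.
            p = ar_order
            past = np.hstack([Z[p - k - 1 : T - k - 1] for k in range(p)])
            A, *_ = np.linalg.lstsq(past, Z[p:], rcond=None)

            # Drive the model forward with noise matched to the fit residuals,
            # so every generated signature (hence frame) is genuinely new.
            noise_std = (Z[p:] - past @ A).std(axis=0)
            hist, out = list(Z[:p]), []
            for _ in range(n_out):
                z = np.hstack(hist[::-1]) @ A + np.random.randn(n_components) * noise_std
                out.append(z)
                hist = hist[1:] + [z]

            # Decode the synthesized signatures back to pixel space.
            return (np.array(out) @ Vt[:n_components] + mean).reshape(n_out, H, W)

    Per the abstract, swapping the PCA step for another embedding (PPCA, Isomap, and so on) while keeping a regression model as the decoder is the axis along which the thesis's variants differ.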

    View Synthesis by Trinocular Edge Matching and Transfer

    This paper presents a novel automatic method for view synthesis (or image transfer) from a triplet of uncalibrated images, based on trinocular edge matching followed by transfer by interpolation, occlusion detection and correction, and finally rendering. The edge-based technique proposed here is of general practical relevance because it overcomes most of the problems encountered in other approaches, which either rely upon dense correspondence, work in projective space, or need explicit camera calibration. Applications range from immersive media and teleconferencing to image interpolation for fast rendering and compression.

    1 Introduction
    A number of researchers have explored ways of constructing static and temporally varying immersive scenes using real-world image data alone. Initial efforts include capturing a large number of viewpoints and using these as an environment map [6] to be applied as a texture on some imaging surface. In this paper we are interested in actually generating…
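
    The transfer step outlined here can be sketched in a few lines: matched edge points from two source views are linearly interpolated to position them in a virtual view, and colours from one source image are splatted at those positions, with a crude disparity-ordering rule standing in for the paper's occlusion detection and correction. This is an assumed illustration of linear view interpolation, not the authors' algorithm; every name below is hypothetical.

        import numpy as np

        def transfer_view(img_a, pts_a, pts_b, alpha):
            """Interpolate matched edge points into a virtual view (illustrative only).

            img_a:  (H, W, 3) source view A.
            pts_a:  (N, 2) integer (x, y) edge points in view A (assumed in bounds).
            pts_b:  (N, 2) matching (x, y) coordinates of the same edges in view B.
            alpha:  0 reproduces view A, 1 reproduces view B.
            """
            H, W, _ = img_a.shape
            out = np.zeros_like(img_a)
            nearest = np.full((H, W), -np.inf)  # larger disparity = nearer = wins

            pts_v = (1.0 - alpha) * pts_a + alpha * pts_b      # interpolated positions
            disparity = np.linalg.norm(pts_b - pts_a, axis=1)  # proxy for depth order

            for (xa, ya), (xv, yv), d in zip(pts_a.astype(int),
                                             np.rint(pts_v).astype(int),
                                             disparity):
                # Splat the colour from view A; on collisions keep the nearer
                # point, a stand-in for the paper's occlusion handling.
                if 0 <= xv < W and 0 <= yv < H and d > nearest[yv, xv]:
                    nearest[yv, xv] = d
                    out[yv, xv] = img_a[ya, xa]
            return out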