7 research outputs found
Efficient multiview image compression using quadtree disparity estimation
Novel approaches for generating video textures
Video texture, a new type of medium, can produce a new video with a continuously varying stream of images from a recorded video. It is synthesized by reordering the input video frames in a way that can be played back without any visual discontinuity. However, video textures still suffer from a few drawbacks. For instance, existing video texture techniques can only generate new videos by rearranging the order of frames in the original video. Therefore, all the individual frames are the same as before, and the result suffers from "dead ends" if the current frame cannot find similar frames to which to transition. In this thesis, we propose several new approaches for synthesizing video textures. These approaches adopt dimensionality reduction and regression techniques to generate video textures. Not only are the frames in the resulting video textures new, but the "dead end" problem is also avoided. First, we have extended the work of applying principal component analysis (PCA) and an autoregressive (AR) process to generate video textures by replacing PCA with five other dimensionality reduction techniques. Based on our experiments, using these dimensionality reduction techniques improved the quality of the video textures compared with extracting frame signatures using PCA. The synthesized video textures contain motions similar to the input video and are never repeated exactly; none of the synthesized frames has appeared before. We also propose a new approach for generating video textures using probabilistic principal component analysis (PPCA) and the Gaussian process dynamical model (GPDM). GPDM is a nonparametric model for learning high-dimensional nonlinear dynamical data sets. We apply PPCA and GPDM to several movie clips to synthesize video textures that contain frames that have never appeared before, with motions similar to the original videos.
Furthermore, we have proposed two ways of generating real-time video textures by applying incremental Isomap and incremental Spatio-temporal Isomap (IST-Isomap). Both approaches can produce good real-time video texture results. In particular, IST-Isomap, which we propose, is more suitable for sparse video data (e.g. cartoons).
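The PCA-plus-autoregressive pipeline summarised in the abstract above can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the thesis implementation: function names, the AR order, and the least-squares fitting choice are all ours.

```python
import numpy as np

def synthesize_texture(frames, n_components=8, ar_order=2, n_new=50):
    """Sketch of PCA + AR video-texture synthesis.

    frames: (T, H*W) array of flattened grayscale frames.
    Returns n_new newly generated frames (none copied from the input).
    """
    mean = frames.mean(axis=0)
    X = frames - mean
    # PCA via SVD: rows of Vt are the principal components.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    comps = Vt[:n_components]            # (k, H*W)
    coeffs = X @ comps.T                 # (T, k) frame signatures

    # Fit a vector AR(ar_order) model by least squares:
    # c_t ~ [c_{t-1}; c_{t-2}; ...; c_{t-p}] @ A
    p = ar_order
    Y = coeffs[p:]
    Z = np.hstack([coeffs[p - i - 1: len(coeffs) - i - 1] for i in range(p)])
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)

    # Generate a new trajectory in PCA space, driven by residual-scaled noise,
    # so every synthesized signature (and hence frame) is new.
    resid = Y - Z @ A
    sigma = resid.std(axis=0)
    hist = list(coeffs[-p:])             # oldest -> newest
    rng = np.random.default_rng(0)
    new_coeffs = []
    for _ in range(n_new):
        z = np.hstack(hist[::-1])        # newest first, matching Z's layout
        c = z @ A + rng.normal(0.0, sigma)
        new_coeffs.append(c)
        hist = hist[1:] + [c]

    # Map signatures back to pixel space.
    return np.array(new_coeffs) @ comps + mean
```

Because generation continues the learned dynamics rather than jumping between stored frames, there is no transition to "discover", which is how this family of methods sidesteps the dead-end problem the abstract describes.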
Segment-based stereo matching algorithm with rectification for single-lens bi-prism stereovision system
Ph.D. (Doctor of Philosophy)
View synthesis for kinetic depth X-ray imaging
This thesis reports the development and analysis of feature-based synthesis of transmission X-ray images. The synthetic imagery is formed by matching and morphing or warping line-scan format images produced by a novel multi-view X-ray machine. In this way, video-type sequences, which periodically alternate between synthetic and detector-based views, may be formed. The purpose of these sequences is to provide depth from motion, or the kinetic depth effect (KDE), in a visual display, while the role of the synthesis is to reduce the total number of detector arrays, associated collimators and X-ray flux per inspection. A specific challenge is to explore the bounds for producing synthetic imagery that can be seamlessly introduced into the resultant sequences. This work is distinct from the image collection and display technique, termed KDEX, previously undertaken by the Imaging Science Group at NTU. The ultimate aim of the research programme, in collaboration with the UK Home Office and the US Dept. of Homeland Security, is to enhance the detection and identification of threats in X-ray scans of luggage. A multi-view 'KDEX scanner' was employed to collect greyscale and colour-coded image sequences of 30 different bags; each sequence comprised 7 perspective views separated from one another by 10°. This imagery was organised and stored in a database to enable a coherent series of experiments to be conducted. Corresponding features in sequential pairs of images, at various angular separations, were identified by applying a scale-invariant feature transform (SIFT)
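The feature-matching step this abstract relies on typically pairs SIFT descriptors by nearest-neighbour distance with Lowe's ratio test. A minimal NumPy sketch of that matching stage, on descriptor arrays assumed already extracted (the function name and the 0.75 ratio are our own illustrative choices, not from the thesis):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    desc_a, desc_b: (N, 128) and (M, 128) SIFT-style descriptor arrays.
    Returns a list of (index_in_a, index_in_b) pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up;
        # ambiguous matches (cluttered X-ray content) are discarded.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The surviving correspondences are what a morphing or warping stage, as described above, would then interpolate to form the synthetic in-between views.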
View Synthesis by Trinocular Edge Matching and Transfer
This paper presents a novel automatic method for view synthesis (or image transfer) from a triplet of uncalibrated images based on trinocular edge matching followed by transfer by interpolation, occlusion detection and correction, and finally rendering. The edge-based technique proposed here is of general practical relevance because it overcomes most of the problems encountered in other approaches, which either rely upon dense correspondence, work in projective space, or need explicit camera calibration. Applications range from immersive media and teleconferencing to image interpolation for fast rendering and compression.

1 Introduction

A number of researchers have explored ways of constructing static and temporally varying immersive scenes using real-world image data alone. Initial efforts include capturing a large number of viewpoints and using these as an environment map [6] to be applied as a texture on some imaging surface. In this paper we are interested in actually generatin..
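The "transfer by interpolation" step mentioned above can be caricatured very simply: matched edge points from two real views are blended to place the corresponding point in a virtual view. The sketch below is only a toy linear stand-in under our own naming; the paper's actual trinocular transfer uses a third view and handles occlusion.

```python
import numpy as np

def interpolate_view(pts_left, pts_right, alpha):
    """Place matched edge points in a virtual view between two real views.

    pts_left, pts_right: (N, 2) arrays of corresponding point coordinates.
    alpha: position of the virtual view in [0, 1] (0 = left, 1 = right).
    """
    # Linear blend of matched positions; a toy stand-in for full transfer.
    return (1.0 - alpha) * pts_left + alpha * pts_right
```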