Transmission of 3D Scenes over Lossy Channels
This paper introduces a novel error correction scheme for the transmission of three-dimensional scenes over unreliable networks. We propose an Unequal Error Protection (UEP) scheme for the transmission of depth and texture information that distributes a fixed amount of redundancy among the various elements of the scene description in order to maximize the quality of the rendered views. This is achieved by exploiting a new model that estimates the impact of the various geometry and texture packets on the rendered views, taking into account their relevance in the coded bitstream and the viewpoint requested by the user. Experimental results show that the proposed scheme effectively enhances the quality of the rendered images in a typical depth-image-based rendering scenario as packets are progressively decoded or recovered by the receiver.
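The core idea of unequal error protection, distributing a fixed parity budget according to each packet's impact on rendered quality, can be sketched as a greedy allocation. The per-element `importance` values and the halving-per-parity-packet loss model below are illustrative assumptions, not the paper's actual view-impact model:

```python
def allocate_uep(importance, total_parity, loss_prob=0.1):
    """Greedily distribute a fixed budget of parity packets among scene
    elements (e.g., depth and texture units) to maximize expected rendered
    quality. importance[i] is a hypothetical stand-in for the distortion
    reduction obtained when element i is recovered."""
    k = len(importance)
    parity = [0] * k

    def marginal_gain(i):
        # Crude assumption: each extra parity packet halves the residual
        # probability that element i is lost after FEC decoding.
        before = loss_prob * 0.5 ** parity[i]
        after = loss_prob * 0.5 ** (parity[i] + 1)
        return importance[i] * (before - after)

    for _ in range(total_parity):
        # Spend each parity packet where it buys the largest expected gain.
        best = max(range(k), key=marginal_gain)
        parity[best] += 1
    return parity
```

With equal importances the budget spreads evenly, because each packet added to an element shrinks that element's next marginal gain; with skewed importances the budget concentrates on the high-impact elements, which is the behavior UEP is after.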
Extracting and Re-rendering Structured Auditory Scenes from Field Recordings
We present an approach to automatically extract and re-render a structured auditory scene from field recordings obtained with a small set of microphones freely positioned in the environment. From the recordings and the calibrated positions of the microphones, the 3D locations of various auditory events can be estimated together with their corresponding content. This structured description is independent of the reproduction setup. We propose solutions to classify foreground, well-localized sounds and more diffuse background ambience, and adapt our rendering strategy accordingly. Warping the original recordings during playback allows for simulating smooth changes in the listening point or in the positions of sources. Comparisons to reference binaural and B-format recordings show that our approach achieves good spatial rendering while remaining independent of the reproduction setup and offering extended authoring capabilities.
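Estimating the location of an auditory event from calibrated microphones typically starts from time differences of arrival between microphone pairs. A minimal sketch of that first step, assuming a simple cross-correlation delay estimate (the paper's localization pipeline is more elaborate):

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (in seconds) between two
    microphone signals via the peak of their cross-correlation. A positive
    value means the event reaches microphone A later than microphone B."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Index (len(sig_b) - 1) of the full correlation corresponds to lag 0.
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs
```

Given TDOAs for several microphone pairs and the calibrated microphone positions, the 3D source position can then be solved for by multilateration.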
Stereoscopic high dynamic range imaging
Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of the two eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, providing better contrast and more natural-looking scenes. The combination of the two technologies in order to gain the advantages of both has been, until now, mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline.
The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods can produce images perceptually indistinguishable from the ground truth.
Insights obtained while developing static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error.
Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at a size only slightly larger than that of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format.
The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representations of captured scenes.
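The layered, backward-compatible structure described above can be sketched as a base LDR layer that legacy decoders read directly, plus enhancement layers that an SHDR-aware decoder applies. The simple global tone curve and log-ratio layer below are illustrative assumptions, not the thesis's actual operators:

```python
import numpy as np

def encode_shdr(hdr_left, hdr_right):
    """Backward-compatible SHDR layering sketch. A legacy decoder reads
    only `base`; an SHDR decoder additionally applies the log-ratio layer
    (recovering HDR) and the inter-view residual (recovering the second view)."""
    base = hdr_left / (1.0 + hdr_left)                     # tone-mapped LDR base
    ratio = np.log(hdr_left + 1e-6) - np.log(base + 1e-6)  # HDR reconstruction layer
    view_res = hdr_right - hdr_left                        # inter-view layer
    return base, ratio, view_res

def decode_shdr(base, ratio, view_res):
    """Invert the layering: undo the tone curve via the ratio layer,
    then add the inter-view residual for the second eye."""
    left = np.exp(np.log(base + 1e-6) + ratio) - 1e-6
    return left, left + view_res
```

In a real codec each layer would be quantized and entropy-coded, so the reconstruction would be lossy; the sketch only shows the lossless structural round trip.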
Surface Reflectance Estimation and Natural Illumination Statistics
Humans recognize the optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination is highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
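The kind of image statistics referenced above can be illustrated with a tiny feature extractor: intensity moments plus the kurtosis of bandpass-like coefficients, whose heavy tails are one of the regularities reported in the natural image statistics literature. The feature set is an illustrative stand-in, not the paper's actual features:

```python
import numpy as np

def reflectance_features(img):
    """Compute simple statistics of the kind used to characterize surface
    appearance under real-world illumination: intensity moments and the
    kurtosis of horizontal finite differences (a crude bandpass filter)."""
    diffs = np.diff(img, axis=1).ravel()
    m, s = diffs.mean(), diffs.std()
    kurtosis = ((diffs - m) ** 4).mean() / (s ** 4 + 1e-12)
    return {
        "mean": float(img.mean()),
        "var": float(img.var()),
        "diff_kurtosis": float(kurtosis),  # ~3 for Gaussian, higher for natural images
    }
```

A classifier would then be trained to map such feature vectors to reflectance classes (metal, plastic, paper, ...), which is the learning step the abstract describes.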
Scalable Remote Rendering using Synthesized Image Quality Assessment
Depth-image-based rendering (DIBR) is widely used to support 3D interactive graphics on low-end mobile devices. Although it reduces the rendering cost on a mobile device, it essentially turns that cost into depth-image transmission cost, or bandwidth consumption, inducing a performance bottleneck in a remote rendering system. To address this problem, we design a scalable remote rendering framework based on synthesized image quality assessment. Specifically, we design an efficient synthesized image quality metric based on Just Noticeable Distortion (JND) that properly measures human-perceived geometric distortions in synthesized images. Based on this metric, we predict quality-aware reference viewpoints, with viewpoint intervals optimized by the JND-based metric. An adaptive transmission scheme is also developed to control depth-image transmission based on perceived quality and network bandwidth availability. Experimental results show that our approach effectively reduces transmission frequency and network bandwidth consumption while maintaining perceived quality on mobile devices. A prototype system is implemented to demonstrate the scalability of the proposed framework to multiple clients.
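The idea of a JND-optimized viewpoint interval can be sketched from the standard DIBR warping relation, disparity d = f·b/z: a quantized depth map with error Δz produces a disparity error of roughly f·b·Δz/z², and the interval is the largest baseline b keeping that error below the JND threshold. The formula and parameter names below are an illustrative derivation under this simplified model, not the paper's actual metric:

```python
def max_viewpoint_interval(jnd_pixels, focal_px, depth_near, depth_err):
    """Largest camera baseline (viewpoint interval) for which the
    worst-case disparity error from depth quantization stays below a
    JND threshold, under the warping relation d = f * b / z.

    jnd_pixels: perceptual threshold on disparity error (pixels)
    focal_px:   focal length in pixels
    depth_near: nearest scene depth (worst case for warping error)
    depth_err:  depth quantization error at that depth
    """
    # disparity error ≈ f * b * depth_err / depth_near**2; solve for b.
    return jnd_pixels * depth_near ** 2 / (focal_px * depth_err)
```

The adaptive transmission scheme would then request a new reference depth image only when the client's viewpoint moves beyond this interval and bandwidth permits, which is what lets the server skip transmissions the user cannot perceive.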
Variable Block Size Motion Compensation In The Redundant Wavelet Domain
Video is one of the most powerful forms of multimedia because of the extensive information it delivers. Video sequences are highly correlated both temporally and spatially, a fact which makes the compression of video possible. Modern video systems employ motion estimation and motion compensation (ME/MC) to de-correlate a video sequence temporally. ME/MC forms a prediction of the current frame using frames that have already been encoded. Consequently, one needs to transmit only the corresponding residual image instead of the original frame, along with a set of motion vectors that describe the scene motion as observed at the encoder. The redundant wavelet transform (RDWT) provides several advantages over the conventional discrete wavelet transform (DWT). The RDWT overcomes the shift-variance problem of the DWT. Moreover, the RDWT retains all the phase information of the wavelet coefficients and provides multiple prediction possibilities for ME/MC in the wavelet domain. The general idea of the variable size block motion compensation (VSBMC) technique is to partition a frame in such a way that regions with uniform translational motion are divided into larger blocks while regions containing complicated motion are divided into smaller blocks, leading to an adaptive distribution of motion vectors (MVs) across the frame. This research proposes new adaptive partitioning schemes and decision criteria in the RDWT domain that utilize the motion content of a frame more effectively in terms of various block sizes. It also proposes a selective subpixel accuracy algorithm for the motion vectors using a multiband approach; the selective subpixel accuracy reduces the computation required by the conventional subpixel algorithm while maintaining the same accuracy. In addition, overlapped block motion compensation (OBMC) is used to reduce blocking artifacts. Finally, the research extends the proposed VSBMC to 3D video sequences. The experimental results obtained here show that VSBMC in the RDWT domain can be a powerful tool for video compression.
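The variable-size partitioning idea, large blocks where motion is uniform and quadtree splits where it is complicated, can be sketched with a simple pixel-domain criterion. The zero-motion SAD test and the threshold scaling below are illustrative assumptions; the dissertation's decision criteria operate in the RDWT domain and are richer:

```python
import numpy as np

def vsbmc_partition(cur, ref, x, y, size, thresh, min_size=4):
    """Recursive variable-size block partitioning: keep a block whole when
    its SAD against the reference (here, at zero motion) is below a
    threshold; otherwise quadtree-split down to min_size.
    Returns a list of (x, y, size) leaf blocks."""
    sad = np.abs(cur[y:y + size, x:x + size] - ref[y:y + size, x:x + size]).sum()
    if sad <= thresh or size <= min_size:
        return [(x, y, size)]
    h = size // 2
    blocks = []
    for dy in (0, h):
        for dx in (0, h):
            # Tighten the threshold proportionally for the smaller sub-blocks.
            blocks += vsbmc_partition(cur, ref, x + dx, y + dy, h, thresh / 4, min_size)
    return blocks
```

Each leaf block would then get its own motion vector, so static regions cost one vector for a large block while complex motion spends vectors only where they help.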
CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs
Computer Graphics (CG) has achieved a high level of realism, producing strikingly vivid images. This realism, however, comes at the cost of long and often expensive manual modeling, and most often humans can still distinguish between CG images and real images. We present a novel method, simple and accessible to novice users, for making CG images look more realistic. Our system uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a novel mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system uses only image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our improved CG images appear more realistic than the originals.
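The color-transfer step between matched regions can be illustrated with classic per-channel statistics matching, shifting the CG region's mean and variance to those of the corresponding real region. This is a stand-in for the paper's local transfer, which operates on cosegmented regions and also handles tone and texture:

```python
import numpy as np

def transfer_color(cg, real):
    """Per-channel mean/std color transfer from a matched real-image
    region to a CG region. Both inputs are H x W x C arrays; the regions
    need not have the same spatial size, only the same channel count."""
    out = np.empty_like(cg, dtype=float)
    for c in range(cg.shape[-1]):
        src = cg[..., c].astype(float)
        tgt = real[..., c].astype(float)
        scale = tgt.std() / (src.std() + 1e-12)
        # Re-center and re-scale the CG channel to the real region's statistics.
        out[..., c] = (src - src.mean()) * scale + tgt.mean()
    return out
```

In practice such transfers are usually done in a decorrelated color space (e.g., Lab) rather than RGB, so that channel-wise matching does not introduce color casts.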
Models of Visual Appearance for Analyzing and Editing Images and Videos
The visual appearance of an image is a complex function of factors such as scene geometry, material reflectances and textures, illumination, and the properties of the camera used to capture the image. Understanding how these factors interact to produce an image is a fundamental problem in computer vision and graphics. This dissertation examines two aspects of this problem: models of visual appearance that allow us to recover scene properties from images and videos, and tools that allow users to manipulate visual appearance in images and videos in intuitive ways. In particular, we look at these problems in three different applications. First, we propose techniques for compositing images that differ significantly in their appearance. Our framework transfers appearance between images by manipulating the different levels of a multi-scale decomposition of the image. This allows users to create realistic composites with minimal interaction in a number of different scenarios. We also discuss techniques for compositing and replacing facial performances in videos. Second, we look at the problem of creating high-quality still images from low-quality video clips. Traditional multi-image enhancement techniques accomplish this by inverting the camera's imaging process. Our system incorporates feature weights into these image models to create results that have better resolution, noise, and blur characteristics, and that summarize the activity in the video. Finally, we analyze variations in scene appearance caused by changes in lighting. We develop a model for outdoor scene appearance that allows us to recover radiometric and geometric information about the scene from images. We apply this model to a variety of visual tasks, including color constancy, background subtraction, shadow detection, scene reconstruction, and camera geo-location. We also show that the appearance of a Lambertian scene can be modeled as a combination of distinct three-dimensional illumination subspaces, a result that leads to novel bounds on scene appearance and a robust uncalibrated photometric stereo method.
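The Lambertian model underlying that last result can be illustrated with the classic calibrated photometric stereo computation: each image stacks observations I = L·n, so the albedo-scaled normals fall out of a least-squares solve. The dissertation's method is uncalibrated and robust; this sketch only shows the basic Lambertian relation it builds on:

```python
import numpy as np

def lambertian_normals(images, lights):
    """Recover albedo-scaled surface normals for a Lambertian scene from
    k images under known directional lights via least squares.

    images: (k, num_pixels) stacked intensities
    lights: (k, 3) unit light directions
    Returns (normals, albedo) with normals of shape (3, num_pixels)."""
    # Solve lights @ n_scaled = images for n_scaled (albedo * normal).
    n_scaled, *_ = np.linalg.lstsq(lights, images, rcond=None)
    albedo = np.linalg.norm(n_scaled, axis=0)
    normals = n_scaled / (albedo + 1e-12)
    return normals, albedo
```

With three or more non-coplanar lights the system is overdetermined per pixel, which is what makes robust and uncalibrated variants (where `lights` itself is unknown) possible via factorization of the image matrix.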