Acquisition, compression and rendering of depth and texture for multi-view video
Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized cameras that capture the same scene from different viewpoints. This technique enables applications such as free-viewpoint video or 3D-TV. Free-viewpoint video applications provide the ability to interactively select and render a virtual viewpoint of the scene. A 3D experience, such as in 3D-TV, is obtained when the data representation and display enable the viewer to distinguish the relief of the scene, i.e., the depth within the scene. With 3D-TV, the depth of the scene can be perceived using a multi-view display that simultaneously renders several views of the same scene. To render these multiple views on a remote display, efficient transmission, and thus compression, of the multi-view video is necessary. However, a major problem when dealing with multi-view video is the intrinsically large amount of data to be compressed, decompressed and rendered. We aim at an efficient and flexible multi-view video system, and explore three different aspects. First, we develop an algorithm for acquiring a depth signal from a multi-view setup. Second, we present efficient 3D rendering algorithms for a multi-view signal. Third, we propose coding techniques for 3D multi-view signals, based on the use of an explicit depth signal. The thesis is accordingly divided into three parts. The first part (Chapter 3) addresses the problem of 3D multi-view video acquisition. Multi-view video acquisition refers to the task of estimating and recording a 3D geometric description of the scene. A 3D description of the scene can be represented by a so-called depth image, which can be estimated by triangulation of corresponding pixels in the multiple views.
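The triangulation step can be illustrated for the simplest rectified two-view case, where depth follows from disparity by similar triangles. A minimal sketch, with illustrative focal length and baseline values that are not taken from the thesis:

```python
# Minimal sketch of two-view depth triangulation for a rectified stereo
# pair: a pixel at column x_left in the left view matches column x_right
# in the right view, and the disparity d = x_left - x_right maps to depth
# via similar triangles. The f and b values below are illustrative only.

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Return depth (metres) for a rectified stereo correspondence."""
    disparity = x_left - x_right          # in pixels, > 0 for valid matches
    if disparity <= 0:
        raise ValueError("point at or beyond infinity")
    return focal_px * baseline_m / disparity

# A nearby point yields a large disparity, a distant point a small one.
near = depth_from_disparity(420, 380, focal_px=800, baseline_m=0.1)  # 2.0 m
far = depth_from_disparity(402, 400, focal_px=800, baseline_m=0.1)   # 40.0 m
```

Noisy correspondences perturb the disparity and hence the depth, which is why the thesis's multi-view extension enforces consistency across views.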
Initially, we focus on the problem of depth estimation using two views, and present the basic geometric model that enables the triangulation of corresponding pixels across the views. Next, we review two optimization strategies for determining corresponding pixels: a local and a one-dimensional optimization strategy. Then, to generalize from the two-view case, we introduce a simple geometric model for estimating the depth using multiple views simultaneously. Based on this geometric model, we propose a new multi-view depth-estimation technique, employing a one-dimensional optimization strategy that (1) reduces the noise level in the estimated depth images and (2) enforces consistent depth images across the views. The second part (Chapter 4) details the problem of multi-view image rendering, i.e., the process of generating synthetic images using multiple views. Two different rendering techniques are initially explored: a 3D image warping and a mesh-based rendering technique. Each of these methods has its limitations and suffers from either high computational complexity or low image rendering quality. As a consequence, we present two image-based rendering algorithms that improve the balance between these issues. First, we derive an alternative formulation of the relief texture algorithm, extended to the geometry of multiple views. The proposed technique features two advantages: it avoids rendering artifacts ("holes") in the synthetic image and it is suitable for execution on a standard Graphics Processing Unit (GPU). Second, we propose an inverse mapping rendering technique that allows a simple and accurate re-sampling of synthetic pixels. Experimental comparisons with 3D image warping show an improvement in rendering quality of 3.8 dB for the relief texture mapping and 3.0 dB for the inverse mapping rendering technique.
The third part concentrates on the compression of multi-view texture and depth video (Chapters 5–7). In Chapter 5, we extend the standard H.264/MPEG-4 AVC video compression algorithm to handle the compression of multi-view video. As opposed to the Multi-view Video Coding (MVC) standard, which encodes only the multi-view texture data, the proposed encoder performs the compression of both the texture and the depth multi-view sequences. The proposed extension is based on exploiting the correlation between the multiple camera views. To this end, two different approaches for predictive coding of views have been investigated: a block-based disparity-compensated prediction technique and a View Synthesis Prediction (VSP) scheme. Whereas VSP relies on an accurate depth image, the block-based disparity-compensated prediction scheme can be performed without any geometry information. Our encoder adaptively selects the most appropriate prediction scheme using a rate-distortion criterion for an optimal prediction-mode selection. We present experimental results for several texture and depth multi-view sequences, yielding a quality improvement of up to 0.6 dB for the texture and 3.2 dB for the depth, when compared to solely performing H.264/MPEG-4 AVC disparity-compensated prediction. Additionally, we discuss the trade-off between random access to a user-selected view and the coding efficiency. Experimental results illustrating and quantifying this trade-off are provided. In Chapter 6, we focus on the compression of a depth signal. We present a novel depth image coding algorithm that concentrates on the special characteristics of depth images: smooth regions delineated by sharp edges. The algorithm models these smooth regions using parameterized piecewise-linear functions and sharp edges by straight lines, so that it is more efficient than a conventional transform-based encoder.
To optimize the quality of the coding system for a given bit rate, a special global rate-distortion optimization balances the rate against the accuracy of the signal representation. For typical bit rates, i.e., between 0.01 and 0.25 bit/pixel, experiments have revealed that the coder outperforms a standard JPEG-2000 encoder by 0.6–3.0 dB. Preliminary results were published in the Proceedings of the 26th Symposium on Information Theory in the Benelux. In Chapter 7, we propose a novel joint depth-texture bit-allocation algorithm for the joint compression of texture and depth images. The described algorithm combines the depth and texture Rate-Distortion (R-D) curves to obtain a single R-D surface that allows the optimization of the joint bit allocation in relation to the obtained rendering quality. Experimental results show an estimated gain of 1 dB compared to a compression performed without joint bit-allocation optimization. Moreover, our joint R-D model can be readily integrated into a multi-view H.264/MPEG-4 AVC coder because it yields the optimal compression setting with a limited computational effort.
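Rate-distortion criteria like the ones used above for prediction-mode selection and bit allocation typically take the Lagrangian form J = D + λ·R. A minimal sketch with hypothetical mode names and cost values, not figures from the thesis:

```python
# Sketch of Lagrangian rate-distortion mode selection: each candidate
# prediction mode is scored by J = D + lambda * R, and the minimiser is
# kept. Mode names, distortions and rates below are illustrative only.

def select_mode(candidates, lam):
    """candidates: list of (name, distortion, rate_bits); returns best name."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

modes = [
    ("disparity-compensated", 12.0, 400),   # cheaper but less accurate
    ("view-synthesis",         8.0, 900),   # better prediction, higher rate
]

# With a small lambda, distortion dominates and view synthesis wins; with
# a large lambda, rate dominates and disparity compensation wins.
assert select_mode(modes, 0.001) == "view-synthesis"
assert select_mode(modes, 0.1) == "disparity-compensated"
```

The same trade-off parameter λ also governs a joint texture-depth allocation: sweeping λ traces out the R-D curve (or surface, in the joint case).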
Image editing and interaction tools for visual expression
Digital photography is becoming extremely common in our daily life. However, images are difficult to edit and interact with. From a user's perspective, it is important to interact freely with the images on a smartphone or iPad. In this thesis we develop several image editing and interaction systems with this idea in mind. We aim to create visual models with pre-computed internal structures such that interaction is readily supported. We demonstrate that such interactable models, driven by a user's hand, can render powerful visual expressiveness, and make static pixel arrays much more fun to play with.
The first system harnesses the editing power of vector graphics. We convert raster images into a vector representation using Loop's subdivision surfaces. An image is represented by a multi-resolution, feature-preserving sparse control mesh, with which image editing can be done at a semantic level. A user can easily put a smile on a face image, or adjust the level of scene abstractness through a simple slider. The second system allows one to insert an object from an image into a new scene. The key is to correct the shading on the object so that it is consistent with the scene. Unlike traditional approaches, we use a simple shape to capture gross shading effects and a set of shading detail images to account for visual complexities. The high-frequency nature of these detail images allows a moderate range of interactive composition effects without causing alarming visual artifacts. The third system works on video clips instead of a single image. We propose a fully automated algorithm to create
Parameterization-based Neural Network: Predicting Non-linear Stress-Strain Response of Composites
Composite materials like syntactic foams have complex internal microstructures that exhibit high stress concentrations due to material discontinuities arising from the hollow regions and thin walls of hollow particles, or microballoons, embedded in a continuous medium. Predicting the mechanical response of such heterogeneous materials from their microstructure, in the form of non-linear stress-strain curves, is a challenging problem, because various parameters, including the distribution and geometric properties of the microballoons, dictate their response to mechanical loading. To that end, this paper presents a novel Neural Network (NN) framework called Parameterization-based Neural Network (PBNN), in which a trained NN model relates the composite microstructure to the non-linear response. PBNN represents the stress-strain curve as a parameterized function to reduce the prediction size, and predicts the function parameters for different syntactic foam microstructures. We show that our approach can predict more accurate non-linear stress-strain responses and corresponding parameterized functions using smaller datasets than existing approaches. This is enabled by extracting high-level features from the geometry data and tuning the predicted response through an auxiliary term prediction. Although built in the context of the compressive response prediction of syntactic foam composites, our NN framework applies to predicting generic non-linear responses for heterogeneous materials with internal microstructures. Hence, our novel PBNN is anticipated to inspire more parameterization-related studies in different Machine Learning methods.
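The curve-parameterization idea can be sketched as follows; the saturating functional form and all numeric values here are illustrative assumptions, not the paper's actual parameterization:

```python
import math

# Sketch of the curve-parameterization idea behind PBNN-style prediction:
# instead of predicting a full stress-strain curve point by point, a model
# predicts a few parameters of a fitted function. Here we use an
# illustrative saturating form sigma(eps) = a * (1 - exp(-b * eps)); the
# paper's actual parameterization may differ.

def stress(eps, a, b):
    return a * (1.0 - math.exp(-b * eps))

def reconstruct_curve(a, b, n_points=50, eps_max=0.1):
    """Rebuild the full response from just two predicted parameters."""
    return [stress(eps_max * i / (n_points - 1), a, b) for i in range(n_points)]

# Two parameters stand in for a 50-point curve, shrinking the regression
# target the network must learn from 50 values to 2.
curve = reconstruct_curve(a=120.0, b=30.0)
assert len(curve) == 50 and curve[0] == 0.0
assert curve[-1] < 120.0  # stress saturates below the asymptote a
```

Reducing the output dimensionality in this way is what lets the approach train on smaller datasets than direct pointwise curve regression.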
Coherent multi-dimensional segmentation of multiview images using a variational framework and applications to image based rendering
Image Based Rendering (IBR) and in particular light field rendering has attracted a lot of
attention for interpolating new viewpoints from a set of multiview images. New images of
a scene are interpolated directly from nearby available ones, thus enabling a photorealistic
rendering. Sampling theory for light fields has shown that exact geometric information
in the scene is often unnecessary for rendering new views. Indeed, the function is approximately bandlimited, and new views can be rendered using classical interpolation
methods. However, IBR using undersampled light fields suffers from aliasing effects and
is particularly difficult when the scene has large depth variations and occlusions. In order
to deal with these cases, we study two approaches:
New sampling schemes have recently emerged that are able to perfectly reconstruct
certain classes of parametric signals that are not bandlimited but characterized by a finite
number of parameters. In this context, we derive novel sampling schemes for piecewise
sinusoidal and polynomial signals. In particular, we show that a piecewise sinusoidal signal
with arbitrarily high frequencies can be exactly recovered given certain conditions. These
results are applied to parametric multiview data that are not bandlimited.
We also focus on the problem of extracting regions (or layers) in multiview images
that can be individually rendered free of aliasing. The problem is posed in a multidimensional
variational framework using region competition. Extending previous
methods, layers are considered as multi-dimensional hypervolumes. The segmentation
is therefore done jointly over all the images, and coherence is imposed throughout the
data. However, instead of propagating active hypersurfaces, we derive a semi-parametric
methodology that takes into account the constraints imposed by the camera setup and the
occlusion ordering. The resulting framework is a global multi-dimensional region competition that is consistent in all the images and efficiently handles occlusions. We show the
validity of the approach with captured light fields. Other special effects such as augmented
reality and disocclusion of hidden objects are also demonstrated.
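The "classical interpolation" of new views referred to above can be sketched, at its simplest, as a blend of neighbouring captured views (ignoring disparity compensation); views and pixel values here are illustrative:

```python
# Sketch of basic light-field view interpolation: with dense enough
# sampling, a new view is approximated by a weighted blend of the two
# nearest captured views. Views are modeled as flat lists of pixel
# intensities; all values are illustrative.

def interpolate_view(view_a, view_b, t):
    """Blend two neighbouring views; t in [0, 1] is the new camera position."""
    return [(1.0 - t) * a + t * b for a, b in zip(view_a, view_b)]

left = [10.0, 20.0, 30.0]
right = [20.0, 40.0, 60.0]
mid = interpolate_view(left, right, 0.5)
assert mid == [15.0, 30.0, 45.0]
```

When the light field is undersampled, this naive blend produces the ghosting/aliasing the thesis addresses, which motivates segmenting the scene into layers that can each be interpolated alias-free.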
Discontinuity-Aware Base-Mesh Modeling of Depth for Scalable Multiview Image Synthesis and Compression
This thesis is concerned with the challenge of deriving disparity from sparsely communicated depth for performing disparity-compensated view synthesis for compression and rendering of multiview images. The modeling of depth is essential for deducing disparity at view locations where depth is not available and is also critical for visibility reasoning and occlusion handling.
This thesis first explores disparity derivation methods and disparity-compensated view synthesis approaches. These investigations reveal the merits of adopting a piecewise-continuous mesh description of depth for deriving disparity at target view locations, enabling disparity-compensated backward warping of texture. Visibility information can be inferred from the correspondence relationship between views that a mesh model provides, while the connectivity of the mesh model assists in resolving depth occlusion.
The recent JPEG 2000 Part-17 extension defines tools for scalable coding of discontinuous media using a breakpoint-dependent DWT, where breakpoints describe discontinuity boundary geometry. This thesis proposes a method to efficiently reconstruct depth coded with JPEG 2000 Part-17 as a piecewise-continuous mesh, where the discontinuities are driven by the encoded breakpoints. Results show that the proposed mesh can accurately represent decoded depth while its complexity scales with the decoded depth quality.
The piecewise-continuous mesh model anchored at a single viewpoint, or base-view, can be augmented to form a multi-layered structure in which the underlying layers carry depth information for regions that are occluded at the base-view. Such a consolidated mesh representation is termed a base-mesh model and can be projected to many viewpoints to deduce complete disparity fields between any pair of views that are inherently consistent. Experimental results demonstrate the superior performance of the base-mesh model in multi-view synthesis and compression compared to other state-of-the-art methods, including the JPEG Pleno light field codec. The proposed base-mesh model departs greatly from the conventional pixel-wise or block-wise depth models, and their forward depth mapping for deriving disparity, that are ingrained in existing multi-view processing systems.
When performing disparity-compensated view synthesis, there can be regions for which reference texture is unavailable, and inpainting is required. A new depth-guided texture inpainting algorithm is proposed to restore occluded texture in regions where depth information is either available or can be inferred using the base-mesh model.
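How a mesh description of depth yields a depth value (and hence a disparity) at an arbitrary pixel can be sketched with barycentric interpolation over a single triangle; the vertex positions and depths below are illustrative, not from the thesis:

```python
# Sketch of depth interpolation over one triangle of a depth mesh:
# a query pixel inside the triangle receives the barycentric blend of
# the per-vertex depths. Vertex coordinates and depths are illustrative.

def barycentric_depth(p, tri, depths):
    """Interpolate depth at point p inside triangle tri = [(x, y), ...]."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * depths[0] + w2 * depths[1] + w3 * depths[2]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
d = barycentric_depth((0.25, 0.25), tri, depths=[2.0, 4.0, 6.0])
assert abs(d - 3.5) < 1e-9  # barycentric weights (0.5, 0.25, 0.25)
```

A piecewise-continuous mesh interpolates like this within each triangle while allowing depth jumps across triangle edges that follow the coded discontinuity boundaries.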
A characteristic particle method for traffic flow simulations on highway networks
A characteristic particle method for the simulation of first-order
macroscopic traffic models on road networks is presented. The approach is based
on the method "particleclaw", which solves scalar one-dimensional hyperbolic
conservation laws exactly, except for a small error right around shocks. The
method is generalized to nonlinear network flows, where particle approximations
on the edges are suitably coupled together at the network nodes. It is
demonstrated in numerical examples that the resulting particle method can
approximate traffic jams accurately, while only devoting a few degrees of
freedom to each edge of the network.
Comment: 15 pages, 5 figures. Accepted to the proceedings of the Sixth International Workshop Meshfree Methods for PDE 201
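The first-order macroscopic model such particle methods solve can be sketched with the classical LWR conservation law and a Greenshields flux; the flux choice and parameter values are illustrative assumptions, not necessarily those of the paper:

```python
# Sketch of the first-order (LWR) traffic model behind such particle
# methods: density rho evolves under rho_t + f(rho)_x = 0 with a concave
# flux. The Greenshields flux f(rho) = v_max * rho * (1 - rho/rho_max) is
# a standard illustrative choice; characteristic particles carrying a
# density value rho travel at the characteristic speed f'(rho).

V_MAX = 30.0     # free-flow speed (m/s), illustrative
RHO_MAX = 0.12   # jam density (vehicles/m), illustrative

def flux(rho):
    return V_MAX * rho * (1.0 - rho / RHO_MAX)

def char_speed(rho):
    """f'(rho): the speed at which a particle carrying density rho moves."""
    return V_MAX * (1.0 - 2.0 * rho / RHO_MAX)

# In light traffic, characteristics move forward; in heavy traffic they
# move backward, which is how jams propagate upstream along an edge.
assert char_speed(0.01) > 0
assert char_speed(0.10) < 0
assert abs(flux(0.0)) < 1e-12 and abs(flux(RHO_MAX)) < 1e-12
```

Because the characteristics are straight lines between shocks, very few particles per edge suffice, matching the paper's observation that only a few degrees of freedom per network edge are needed.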
Body surface imaging
An embodiment of the invention may provide a portable, inexpensive two-view 3D stereo vision imaging system, which acquires a 3D surface model and the dimensions of an object. The system may comprise front-side and back-side stereo imagers, each of which has a projector and at least two digital cameras to image the object from different perspectives. An embodiment may include a method for reconstructing an image of a human body from the data of a two-view body scanner by obtaining a front-scan image data point set and a back-scan image data point set. A smooth body image may be obtained by processing the data point sets using the following steps: (1) data resampling; (2) initial mesh generation; (3) mesh simplification; and (4) mesh subdivision and optimization.
Board of Regents, University of Texas System