
    Model- and image-based scene representation.

    Lee Kam Sum. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 97-101). Abstracts in English and Chinese.

    Chapter 1 --- Introduction --- p.2
    Chapter 1.1 --- Video Representation using Panorama Mosaic and 3D Face Model --- p.2
    Chapter 1.2 --- Mosaic-based Video Representation --- p.3
    Chapter 1.3 --- 3D Human Face Modeling --- p.7
    Chapter 2 --- Background --- p.13
    Chapter 2.1 --- Video Representation using Mosaic Image --- p.13
    Chapter 2.1.1 --- Traditional Video Compression --- p.17
    Chapter 2.2 --- 3D Face Model Reconstruction via Multiple Views --- p.19
    Chapter 2.2.1 --- Shape from Silhouettes --- p.19
    Chapter 2.2.2 --- Head and Face Model Reconstruction --- p.22
    Chapter 2.2.3 --- Reconstruction using Generic Model --- p.24
    Chapter 3 --- System Overview --- p.27
    Chapter 3.1 --- Panoramic Video Coding Process --- p.27
    Chapter 3.2 --- 3D Face Model Reconstruction Process --- p.28
    Chapter 4 --- Panoramic Video Representation --- p.32
    Chapter 4.1 --- Mosaic Construction --- p.32
    Chapter 4.1.1 --- Cylindrical Panorama Mosaic --- p.32
    Chapter 4.1.2 --- Cylindrical Projection of Mosaic Image --- p.34
    Chapter 4.2 --- Foreground Segmentation and Registration --- p.37
    Chapter 4.2.1 --- Segmentation Using Panorama Mosaic --- p.37
    Chapter 4.2.2 --- Determination of Background by Local Processing --- p.38
    Chapter 4.2.3 --- Segmentation from Frame-Mosaic Comparison --- p.40
    Chapter 4.3 --- Compression of the Foreground Regions --- p.44
    Chapter 4.3.1 --- MPEG-1 Compression --- p.44
    Chapter 4.3.2 --- MPEG Coding Method: I/P/B Frames --- p.45
    Chapter 4.4 --- Video Stream Reconstruction --- p.48
    Chapter 5 --- Three-Dimensional Human Face Modeling --- p.52
    Chapter 5.1 --- Capturing Images for 3D Face Modeling --- p.53
    Chapter 5.2 --- Shape Estimation and Model Deformation --- p.55
    Chapter 5.2.1 --- Head Shape Estimation and Model Deformation --- p.55
    Chapter 5.2.2 --- Face Organs Shaping and Positioning --- p.58
    Chapter 5.2.3 --- Reconstruction with both Intrinsic and Extrinsic Parameters --- p.59
    Chapter 5.2.4 --- Reconstruction with only Intrinsic Parameters --- p.63
    Chapter 5.2.5 --- Essential Matrix --- p.65
    Chapter 5.2.6 --- Estimation of Essential Matrix --- p.66
    Chapter 5.2.7 --- Recovery of 3D Coordinates from Essential Matrix --- p.67
    Chapter 5.3 --- Integration of Head Shape and Face Organs --- p.70
    Chapter 5.4 --- Texture Mapping --- p.71
    Chapter 6 --- Experimental Results & Discussion --- p.74
    Chapter 6.1 --- Panoramic Video Representation --- p.74
    Chapter 6.1.1 --- Compression Improvement from Foreground Extraction --- p.76
    Chapter 6.1.2 --- Video Compression Performance --- p.78
    Chapter 6.1.3 --- Quality of Reconstructed Video Sequence --- p.80
    Chapter 6.2 --- 3D Face Model Reconstruction --- p.91
    Chapter 7 --- Conclusion and Future Direction --- p.94
    Bibliography --- p.10

    T-spline based unifying registration procedure for free-form surface workpieces in intelligent CMM

    With the development of modern manufacturing, free-form surfaces are widely used across many fields, and automatic detection of free-form surfaces is an important function of future intelligent coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new vision system is designed around the characteristics of CMMs. A unified model of the free-form surface based on T-splines is proposed, together with a method for discretizing the T-spline surface model. Under this discretization, the position and orientation of the workpiece are recognized by point cloud registration. A high-accuracy method is also proposed for evaluating the deviation between the measured point cloud and the T-spline surface model. The experimental results demonstrate that the proposed method has the potential to realize automatic detection of different free-form surfaces and to improve the intelligence of CMMs.
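    The core registration step, recovering the workpiece's position and orientation from a measured point cloud, can be sketched with the classic Kabsch/Procrustes solution for corresponding point sets. This is an illustrative simplification, not the paper's T-spline-based procedure, and all names here are hypothetical:

```python
import numpy as np

def kabsch_register(src, dst):
    """Rigid registration: find R, t minimizing ||(src @ R.T + t) - dst||
    for point sets already in correspondence (Kabsch algorithm)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# usage: recover a known pose from a synthetic "measured" cloud
rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
R, t = kabsch_register(pts, pts @ R_true.T + t_true)
assert np.allclose(R, R_true, atol=1e-8) and np.allclose(t, t_true, atol=1e-8)
```

    In practice the correspondences between the measured cloud and the discretized T-spline surface are unknown, so a scheme such as ICP alternates this closed-form step with nearest-point matching.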

    Visibility Constrained Generative Model for Depth-based 3D Facial Pose Tracking

    In this paper, we propose a generative framework that unifies depth-based 3D facial pose tracking and on-the-fly face model adaptation in unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Specifically, we introduce a statistical 3D morphable model that flexibly describes the distribution of points on the surface of the face model, with an efficient switchable online adaptation that gradually captures the identity of the tracked subject and rapidly constructs a suitable face model when the subject changes. Moreover, unlike prior art that employed ICP-based facial pose estimation, to improve robustness to occlusions we propose a ray visibility constraint that regularizes the pose based on the face model's visibility with respect to the input point cloud. Ablation studies and experimental results on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is effective and outperforms competing state-of-the-art depth-based methods.
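    The intuition behind a ray-visibility test can be sketched in a few lines: a model point should only contribute to the pose cost if the camera ray through its pixel does not observe a closer surface. The pinhole projection and every name below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def visibility_weights(model_pts, depth_map, fx, fy, cx, cy, tol=0.02):
    """Weight each 3D model point by whether it is visible in the
    observed depth map (1.0 = visible, 0.0 = occluded or off-image)."""
    X, Y, Z = model_pts.T
    wts = np.zeros(len(model_pts))
    # project with a pinhole model; keep points in front of the camera
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.round(fx * X / Z + cx).astype(int)
        v = np.round(fy * Y / Z + cy).astype(int)
    h, w = depth_map.shape
    inside = (Z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    obs = depth_map[v[inside], u[inside]]
    # visible if the sensor did not see a closer surface along the ray
    wts[inside] = (obs >= Z[inside] - tol).astype(float)
    return wts

# usage: a flat observed surface at depth 1.0
depth = np.ones((64, 64))
pts = np.array([[0.0, 0.0, 1.0],    # on the observed surface -> visible
                [0.1, 0.0, 2.0],    # behind the surface       -> occluded
                [0.0, 0.0, -1.0]])  # behind the camera        -> discarded
print(visibility_weights(pts, depth, fx=50, fy=50, cx=32, cy=32))
```

    In a tracker, these weights would multiply the per-point residuals so that occluded model points no longer pull the pose toward the occluder.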

    Mesh-based Autoencoders for Localized Deformation Component Analysis

    Spatially localized deformation components are very useful for shape analysis and synthesis in 3D geometry processing. Several methods have recently been developed with the aim of extracting intuitive and interpretable deformation components. However, these techniques suffer from fundamental limitations, especially for meshes with noise or large-scale deformations, and may fail to identify important deformation components. In this paper we propose a novel mesh-based autoencoder architecture that copes with meshes of irregular topology. We introduce sparse regularization into this framework, which, along with convolutional operations, helps localize deformations. Our framework extracts localized deformation components from mesh data sets with large-scale deformations and is robust to noise. It also provides a nonlinear approach to reconstructing meshes from the extracted basis, which is more effective than the current linear combination approach. Extensive experiments show that our method outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
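    The role of sparse regularization, an L1 penalty that drives most activations exactly to zero so each component stays localized, can be illustrated with a minimal ISTA (proximal gradient) sketch on a toy linear problem. This is not the paper's autoencoder; the basis, codes, and penalty weight below are all made up for illustration:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1: shrinks toward 0, zeroing
    any entry whose magnitude is below lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# toy problem: min_C 0.5*||C @ B - X||^2 + lam*||C||_1
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 10))          # 4 "deformation components"
C_true = np.array([[1.5, 0.0, 0.0, -2.0]])  # truly sparse activations
X = C_true @ B
lam = 0.01

C = np.zeros_like(C_true)
L = np.linalg.norm(B @ B.T, 2)            # Lipschitz constant of the gradient
for _ in range(200):
    grad = (C @ B - X) @ B.T
    C = soft_threshold(C - grad / L, lam / L)

# the L1 step recovers the sparsity pattern: inactive components are ~0
print(np.round(C, 3))
```

    In the paper's nonlinear setting the same shrinkage idea is applied as a regularizer during training rather than as a closed-form proximal step, but the effect, exact zeros that confine each component to a local region, is analogous.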

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. The method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following a 3D-to-2D data transformation. Second, an efficient thin-plate spline (TPS) protocol establishes the dense anatomical correspondence between facial images, guided by the predefined landmarks. We demonstrate that this method is robust and highly accurate, even across different ethnicities. The average face is calculated for individuals of Han Chinese and Uyghur origins. Fully automatic and computationally efficient, this method enables high-throughput analysis of human facial feature variation. Comment: 33 pages, 6 figures, 1 table.
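    The second step, a landmark-guided TPS warp, can be sketched with SciPy's `RBFInterpolator`, whose default kernel is the thin-plate spline. The landmark coordinates below are invented, and the sketch is 2D for brevity (the paper operates on 3D surfaces):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical landmarks on a template face and their counterparts
# on a target face (in the paper: 17 automatically annotated landmarks)
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.05, 0.0], [0.0, 0.1], [-0.05, 0.0],
                      [0.0, -0.1], [0.1, 0.1]])

# thin-plate-spline warp: interpolates the landmarks exactly and
# deforms all other points smoothly (minimal bending energy)
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# landmarks map exactly onto their targets...
assert np.allclose(warp(src), dst, atol=1e-6)

# ...and any dense mesh vertex is carried along by the same warp
print(warp(np.array([[0.25, 0.25]])))
```

    Applying the warp to every vertex of the template mesh and then snapping to the target surface is one common way such a TPS step yields the dense correspondence.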