    Point Cloud Normal Estimation with Graph-Convolutional Neural Networks

    Surface normal estimation is a basic task for many point cloud processing algorithms. However, it can be challenging to capture the local geometry of the data, especially in the presence of noise. Recently, deep learning approaches have shown promising results. Nevertheless, applying convolutional neural networks to point clouds is not straightforward, due to the irregular positioning of the points. In this paper, we propose a normal estimation method based on graph-convolutional neural networks to deal with this irregular point cloud domain. The graph-convolutional layers build hierarchies of localized features to solve the estimation problem. We show state-of-the-art performance and robust results even in the presence of noise.
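
    As a rough illustration of the idea (not the paper's actual architecture), here is a minimal PyTorch sketch of an edge-convolution layer that builds localized features over a k-nearest-neighbour graph and regresses per-point normals; the layer sizes, the value of k, and names such as NormalNet are assumptions for illustration only.

        import torch
        import torch.nn as nn

        def knn_graph(points, k):
            # points: (N, 3). Index the k nearest neighbours of every point.
            dists = torch.cdist(points, points)                     # (N, N)
            return dists.topk(k + 1, largest=False).indices[:, 1:]  # drop self

        class EdgeConv(nn.Module):
            # Aggregates localized features over each point's neighbourhood.
            def __init__(self, in_dim, out_dim):
                super().__init__()
                self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

            def forward(self, feats, idx):
                neigh = feats[idx]                        # (N, k, in_dim)
                center = feats.unsqueeze(1).expand_as(neigh)
                edge = torch.cat([center, neigh - center], dim=-1)
                return self.mlp(edge).max(dim=1).values  # pool over neighbours

        class NormalNet(nn.Module):
            # Two stacked graph-conv layers, then a per-point normal regressor.
            def __init__(self, k=16):
                super().__init__()
                self.k = k
                self.conv1 = EdgeConv(3, 64)
                self.conv2 = EdgeConv(64, 64)
                self.head = nn.Linear(64, 3)

            def forward(self, points):                    # points: (N, 3)
                idx = knn_graph(points, self.k)
                x = self.conv2(self.conv1(points, idx), idx)
                return nn.functional.normalize(self.head(x), dim=-1)

        # Usage: predict unit normals for a toy cloud of 500 points.
        normals = NormalNet()(torch.rand(500, 3))         # (500, 3)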

    Point cloud data compression

    The rapid growth in the popularity of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) experiences has resulted in an exponential surge of three-dimensional data. Point clouds have emerged as a commonly employed representation for capturing and visualizing three-dimensional data in these environments. Consequently, a substantial research effort has been dedicated to developing efficient compression algorithms for point cloud data. This Master's thesis investigates the current state-of-the-art lossless point cloud geometry compression techniques, explores some of them in more detail, and proposes improvements and extensions to enhance them, along with directions for future work on this topic.
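
    For a flavour of the octree-based family of lossless geometry coders that such work builds on, here is a minimal sketch; the recursive encode_octree helper, the quantization depth, and the plain byte stream are illustrative assumptions (a real codec would follow the occupancy bytes with entropy coding), not any specific standard.

        import numpy as np

        def encode_octree(points, origin, size, depth, stream):
            # points: integer coordinates inside the cube [origin, origin + size).
            if depth == 0 or len(points) == 0:
                return
            half = size // 2
            occupancy, children = 0, []
            for child in range(8):
                off = np.array([(child >> 2) & 1, (child >> 1) & 1, child & 1]) * half
                lo = origin + off
                mask = np.all((points >= lo) & (points < lo + half), axis=1)
                if mask.any():
                    occupancy |= 1 << child
                    children.append((points[mask], lo))
            stream.append(occupancy)        # one occupancy byte per occupied node
            for sub, lo in children:
                encode_octree(sub, lo, half, depth - 1, stream)

        # Usage: quantize to a 2^8 grid, then serialize occupancy bytes.
        # Decoding replays the traversal, recovering coordinates exactly
        # at the chosen 8-bit precision.
        pts = np.random.randint(0, 256, size=(1000, 3))
        stream = []
        encode_octree(pts, np.array([0, 0, 0]), 256, 8, stream)
        print(len(stream), "occupancy bytes before entropy coding")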

    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved before truly predictive image generation is achieved.
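
    As one concrete flavour of the data problem mentioned above, a common route in the BTF literature is per-texel PCA compression, sketched below under that assumption; the dimensions and the compress_btf/eval_btf helpers are illustrative, not the thesis's actual method.

        import numpy as np

        def compress_btf(btf, n_components):
            # btf: (n_texels, n_view_light_samples) matrix of reflectance values.
            mean = btf.mean(axis=0)
            centered = btf - mean
            # SVD yields the principal directions of view/light variation.
            u, s, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]            # (c, n_samples)
            coeffs = centered @ basis.T          # (n_texels, c) per-texel weights
            return mean, basis, coeffs

        def eval_btf(mean, basis, coeffs, texel, sample):
            # Reconstruct one reflectance value; this is roughly what a
            # real-time shader would do per fragment from a few texture fetches.
            return mean[sample] + coeffs[texel] @ basis[:, sample]

        btf = np.random.rand(1024, 81 * 81)      # toy 81x81 view/light sampling
        mean, basis, coeffs = compress_btf(btf, 16)
        print(eval_btf(mean, basis, coeffs, texel=0, sample=0))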

    Graph-based segmentation of range data with applications to 3D urban mapping

    This paper presents an efficient graph-based algorithm for the segmentation of planar regions out of 3D range maps of urban areas. Segmentation of planar surfaces in urban scenarios is challenging because the acquired data is typically sparsely sampled, incomplete, and noisy. The algorithm is motivated by Felzenszwalb's algorithm for 2D image segmentation [8], and is extended to deal with non-uniformly sampled 3D range data using an approximate nearest neighbor search. Inter-point distances are sorted in increasing order, and this list of distances is traversed, growing planar regions that satisfy constraints on both the local and global variation of distance and curvature. The algorithm runs in O(n log n) time and compares favorably with other region-growing mechanisms based on Expectation Maximization. Experiments carried out with real data acquired in an outdoor urban environment demonstrate that our approach is well suited to segmenting planar surfaces from noisy 3D range data. Two applications of the segmentation results are shown: a) deriving traversability maps, and b) calibrating a camera network.
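
    A simplified sketch of the strategy described above: build a k-nearest-neighbour graph, sort its edges by length, and greedily merge regions whose merged plane fit stays flat. The threshold tau and the value of k are illustrative assumptions, and recomputing the full plane fit on every merge is a simplification (an efficient implementation would update the fit statistics incrementally), so this is a sketch of the idea rather than the paper's implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def plane_residual(pts):
            # Smallest covariance eigenvalue = out-of-plane variance of the region.
            if len(pts) < 3:
                return 0.0
            c = pts - pts.mean(axis=0)
            return np.linalg.eigvalsh(c.T @ c / len(pts))[0]

        def segment_planar(points, k=10, tau=1e-4):
            dists, nbrs = cKDTree(points).query(points, k + 1)   # col 0 is self
            edges = sorted((dists[i, j], i, nbrs[i, j])
                           for i in range(len(points)) for j in range(1, k + 1))
            parent = list(range(len(points)))                    # union-find forest
            members = {i: [i] for i in range(len(points))}

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]                # path halving
                    x = parent[x]
                return x

            for _, a, b in edges:                                # shortest edges first
                ra, rb = find(a), find(b)
                if ra == rb:
                    continue
                merged = members[ra] + members[rb]
                if plane_residual(points[merged]) < tau:         # still planar: merge
                    parent[rb] = ra
                    members[ra] = merged
                    del members[rb]
            return [find(i) for i in range(len(points))]         # label per point

        # Usage: labels for a noisy, nearly flat toy cloud.
        pts = np.random.rand(200, 3) * [1, 1, 0.01]
        print(len(set(segment_planar(pts))), "regions")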

    Multilinear Wavelets: A Statistical Shape Space for Human Faces

    We present a statistical model for 3D human faces in varying expression, which decomposes the surface of the face using a wavelet transform and learns many localized, decorrelated multilinear models on the resulting coefficients. Using this model we are able to reconstruct faces from noisy and occluded 3D face scans, and from facial motion sequences. Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The localized and multi-scale nature of our model allows for the recovery of fine-scale detail while retaining robustness to severe noise and occlusion, and it is computationally efficient and scalable. We validate these properties experimentally on challenging data in the form of static scans and motion sequences. We show that in comparison to a global multilinear model, our model better preserves fine detail and is computationally faster, while in comparison to a localized PCA model, our model better handles variation in expression, is faster, and allows us to fix identity parameters for a given subject.
    Comment: 10 pages, 7 figures; accepted to ECCV 2014
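
    A minimal sketch of how one such localized multilinear model can be evaluated: a core tensor is contracted with an identity weight vector and an expression weight vector to yield one wavelet band's coefficients. The dimensions and the eval_multilinear helper are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def eval_multilinear(core, w_id, w_expr):
            # core: (n_coeffs, d_id, d_expr); the weights pick a point in shape space.
            return np.einsum('cie,i,e->c', core, w_id, w_expr)

        rng = np.random.default_rng(0)
        core = rng.standard_normal((12, 5, 4))   # 12 wavelet coeffs, 5 id, 4 expr modes
        w_id = rng.standard_normal(5)            # fixed per subject across a sequence
        w_expr = rng.standard_normal(4)          # varies frame to frame
        band = eval_multilinear(core, w_id, w_expr)
        # An inverse wavelet transform over all local bands would then give the mesh.
        print(band.shape)                        # (12,)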