
    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis


    Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity

    In this paper we present a scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps of point density (in our experiments three orders of magnitude). The backbone of our approach is a combination of octree data partitioning, local Delaunay tetrahedralization and graph cut optimization. Graph cut optimization is used twice, once to extract surface hypotheses from local Delaunay tetrahedralizations and once to merge overlapping surface hypotheses even when the local tetrahedralizations do not share the same topology. This formulation allows us to obtain a constant memory consumption per sub-problem while at the same time retaining the density-independent interpolation properties of the Delaunay-based optimization. On multiple public datasets, we demonstrate that our approach is highly competitive with the state-of-the-art in terms of accuracy, completeness and outlier resilience. Further, we demonstrate the multi-scale potential of our approach by processing a newly recorded dataset with 2 billion points and a point density variation of more than four orders of magnitude - requiring less than 9 GB of RAM per process. Comment: This paper was accepted to the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. The copyright was transferred to IEEE (ieee.org). The official version of the paper will be made available on IEEE Xplore (R) (ieeexplore.ieee.org). This version of the paper also contains the supplementary material, which will not appear on IEEE Xplore (R).
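    The constant per-sub-problem memory footprint comes from the divide-and-conquer layout described above: an octree caps the number of points handled at once, each cell is tetrahedralized locally, and the overlapping local surfaces are merged afterwards. The snippet below is a minimal Python illustration of that layout with assumed cell sizes and helper names; it is not the authors' implementation, and the graph-cut labelling step is only indicated by a stub.

```python
# Hypothetical sketch of the divide-and-conquer layout: an adaptive octree caps
# the number of points per cell, each cell is tetrahedralized locally, and
# overlapping local surfaces would be merged later. Names and thresholds are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.spatial import Delaunay

MAX_POINTS_PER_CELL = 100_000   # assumed cap giving constant memory per sub-problem
OVERLAP = 0.1                   # assumed relative overlap between neighbouring cells

def octree_cells(points, lo, hi, depth=0):
    """Recursively split the bounding box until each cell holds few enough points."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    if mask.sum() <= MAX_POINTS_PER_CELL or depth > 12:
        pad = OVERLAP * (hi - lo)             # overlap so neighbouring surfaces can be merged
        yield (lo - pad, hi + pad)
        return
    mid = 0.5 * (lo + hi)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                clo = np.where([dx, dy, dz], mid, lo)
                chi = np.where([dx, dy, dz], hi, mid)
                yield from octree_cells(points, clo, chi, depth + 1)

def local_surface_hypothesis(cell_points):
    """Local Delaunay tetrahedralization; a real system would label tetrahedra
    inside/outside with a graph cut and extract the boundary triangles."""
    if len(cell_points) < 5:
        return None
    return Delaunay(cell_points)

points = np.random.rand(50_000, 3)            # stand-in for a multi-scale MVS point cloud
for lo, hi in octree_cells(points, points.min(0), points.max(0)):
    inside = points[np.all((points >= lo) & (points <= hi), axis=1)]
    tetra = local_surface_hypothesis(inside)  # each sub-problem is bounded in size
```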

    One-step deposition of nano-to-micron-scalable, high-quality digital image correlation patterns for high-strain in-situ multi-microscopy testing

    Digital Image Correlation (DIC) is of vital importance in the field of experimental mechanics, yet producing suitable DIC patterns for demanding in-situ mechanical tests remains challenging, especially for ultra-fine patterns, despite the large number of patterning techniques in the literature. Therefore, we propose a simple, flexible, one-step technique (only requiring a conventional deposition machine) to obtain scalable, high-quality, robust DIC patterns, suitable for a range of microscopic techniques, by deposition of a low-melting-temperature solder alloy in so-called 'island growth' mode, without elevating the substrate temperature. Proof of principle is shown by (near-)room-temperature deposition of InSn patterns, yielding highly dense, homogeneous DIC patterns over large areas with a feature size that can be tuned from as small as 10 nm to 2 um and with control over the feature shape and density by changing the deposition parameters. Pattern optimization, in terms of feature size, density, and contrast, is demonstrated for imaging with atomic force microscopy, scanning electron microscopy (SEM), optical microscopy and profilometry. Moreover, the performance of the InSn DIC patterns and their robustness to large deformations is validated in two challenging case studies of in-situ micro-mechanical testing: (i) self-adaptive isogeometric digital height correlation of optical surface height profiles of a coarse, bimodal InSn pattern providing microscopic 3D deformation fields (illustrated for delamination of aluminum interconnects on a polyimide substrate) and (ii) DIC on SEM images of a much finer InSn pattern allowing quantification of high strains near fracture locations (illustrated for rupture of a Fe foil). As such, the high controllability, performance and scalability of the DIC patterns offer a promising step towards more routine DIC-based in-situ micro-mechanical testing. Comment: Accepted for publication in Strain

    Scalable virtual viewpoint image synthesis for multiple camera environments

    One of the main aims of emerging audio-visual (AV) applications is to provide interactive navigation within a captured event or scene. This paper presents a view synthesis algorithm that provides a scalable and flexible approach to virtual viewpoint synthesis in multiple camera environments. The multi-view synthesis (MVS) process consists of four different phases that are described in detail: surface identification, surface selection, surface boundary blending and surface reconstruction. MVS view synthesis identifies and selects only the best-quality surface areas from the set of available reference images, thereby reducing perceptual errors in virtual view reconstruction. The approach is camera-setup independent and scalable, as virtual views can be created from any 1 to N of the available video inputs. Thus, MVS provides interactive AV applications with a means to handle scenarios where camera inputs increase or decrease over time.
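    As a rough illustration of how the four phases chain together, the following Python sketch wires up placeholder versions of identification, selection, boundary blending and reconstruction; the function names, the toy two-region segmentation and the contrast-based quality score are assumptions, not the paper's algorithm.

```python
# Minimal sketch of the four MVS phases named above, with placeholder logic;
# names and the quality measure are illustrative assumptions.
import numpy as np

def identify_surfaces(reference_views):
    """Phase 1: split each reference image into candidate surface regions."""
    return [{"view": i, "region": r, "pixels": view[r]}
            for i, view in enumerate(reference_views)
            for r in (np.s_[:64, :], np.s_[64:, :])]      # toy 2-region segmentation

def select_surfaces(surfaces):
    """Phase 2: keep, per region, only the best-quality candidate
    (here: highest local contrast as a stand-in quality score)."""
    best = {}
    for s in surfaces:
        key = str(s["region"])
        if key not in best or s["pixels"].std() > best[key]["pixels"].std():
            best[key] = s
    return list(best.values())

def blend_boundaries(selected):
    """Phase 3: feather the seams between regions taken from different views."""
    return selected                                        # no-op placeholder

def reconstruct_view(selected, shape):
    """Phase 4: assemble the virtual viewpoint from the selected surfaces."""
    out = np.zeros(shape)
    for s in selected:
        out[s["region"]] = s["pixels"]
    return out

views = [np.random.rand(128, 128) for _ in range(3)]      # 1..N camera inputs
virtual = reconstruct_view(
    blend_boundaries(select_surfaces(identify_surfaces(views))), views[0].shape)
```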

    Enabling collaboration in virtual reality navigators

    In this paper we characterize a feature superset for Collaborative Virtual Reality Environments (CVRE), and derive a component framework to transform stand-alone VR navigators into full-fledged multithreaded collaborative environments. The contributions of our approach rely on a cost-effective and extensible technique for loading software components into separate POSIX threads for rendering, user interaction and network communications, and adding a top layer for managing session collaboration. The framework recasts a VR navigator under a distributed peer-to-peer topology for scene and object sharing, using callback hooks for broadcasting remote events and multicamera perspective sharing with avatar interaction. We validate the framework by applying it to our own ALICE VR Navigator. Experimental results show that our approach has good performance in the collaborative inspection of complex models.
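    A minimal sketch of the threading layout described above, assuming one thread per component (rendering, user interaction, networking) and a shared event queue standing in for the collaboration layer; all names are hypothetical and none of this is taken from the ALICE VR Navigator code.

```python
# Illustrative threading layout: one thread per component plus a queue through
# which local events would be broadcast to remote peers. Hypothetical names only.
import queue
import threading
import time

event_bus = queue.Queue()            # local events to broadcast to remote peers

def render_loop(stop):
    while not stop.is_set():
        time.sleep(1 / 60)            # stand-in for drawing one frame

def interaction_loop(stop):
    while not stop.is_set():
        event_bus.put({"type": "camera_moved"})    # stand-in for user input
        time.sleep(0.5)

def network_loop(stop):
    while not stop.is_set():
        try:
            event = event_bus.get(timeout=0.1)
            # a real peer-to-peer layer would serialize and send this to all peers
            print("broadcast", event)
        except queue.Empty:
            pass

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,), daemon=True)
           for f in (render_loop, interaction_loop, network_loop)]
for t in threads:
    t.start()
time.sleep(2)
stop.set()
```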

    Predicting the Next Best View for 3D Mesh Refinement

    3D reconstruction is a core task in many applications such as robot navigation or site inspections. Finding the best poses to capture part of the scene is one of the most challenging topics and goes under the name of Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning over a 3D voxelized space and by finding which pose minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach which focuses on the worst reconstructed region of the environment mesh. We define a photo-consistent index to evaluate the 3D mesh accuracy, and an energy function over the worst regions of the mesh which takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray to the surface and the visibility of the region. We test our approach on a well-known dataset and achieve state-of-the-art results. Comment: 13 pages, 5 figures, to be published in IAS-1
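    To make the shape of such an energy concrete, here is a hedged Python sketch that scores candidate poses for one poorly reconstructed region using parallax with respect to previous cameras, incidence angle and visibility; the weights and formulas are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch of ranking candidate poses for a poorly reconstructed mesh
# region by a parallax / incidence / visibility energy. Weights and formulas
# are assumptions, not the paper's definitions.
import numpy as np

def view_energy(region_center, region_normal, candidate_pos, previous_cams,
                visible, w_parallax=1.0, w_incidence=1.0):
    ray = region_center - candidate_pos
    ray = ray / np.linalg.norm(ray)
    # mutual parallax: reward baselines that triangulate the region well
    parallax = max(
        np.arccos(np.clip(np.dot(-ray, (cam - region_center) /
                                 np.linalg.norm(cam - region_center)), -1, 1))
        for cam in previous_cams)
    # incidence: prefer viewing rays close to the surface normal
    incidence = abs(np.dot(-ray, region_normal))
    return (w_parallax * parallax + w_incidence * incidence) if visible else 0.0

# rank candidate poses for one poorly reconstructed region
region_c = np.array([0.0, 0.0, 0.0])
region_n = np.array([0.0, 0.0, 1.0])
prev = [np.array([1.0, 0.0, 2.0])]
candidates = [np.array([0.0, 1.0, 2.0]), np.array([2.0, 2.0, 0.5])]
best = max(candidates, key=lambda p: view_energy(region_c, region_n, p, prev, visible=True))
```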