    SurfelMeshing: Online Surfel-Based Mesh Reconstruction

    We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera and poses provided externally (e.g., by a SLAM system). In contrast to most existing approaches, we do not fuse depth measurements in a volume but in a dense surfel cloud. We asynchronously (re)triangulate the smoothed surfels to reconstruct a surface mesh. This novel approach makes it possible to maintain a dense surface representation of the scene during SLAM that can quickly adapt to loop closures. This is achieved by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution. In particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin objects, since objects do not need to enclose a volume. We demonstrate our approach in a number of experiments, showing that it produces reconstructions that are competitive with the state of the art, and we discuss its advantages and limitations. The algorithm (excluding loop closure functionality) is available as open source at https://github.com/puzzlepaint/surfelmeshing . Comment: Version accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
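The core of surfel-based fusion is a confidence-weighted running average that pulls each surfel toward new depth measurements. As a rough illustration only (the function name and weighting scheme are assumptions, not the paper's implementation), one fusion step might look like:

```python
import numpy as np

def fuse_measurement(surfel_pos, surfel_weight, meas_pos, meas_weight=1.0):
    """Confidence-weighted running average: the surfel position moves
    toward the new depth measurement in proportion to the measurement's
    weight relative to the surfel's accumulated confidence."""
    total = surfel_weight + meas_weight
    fused = (surfel_pos * surfel_weight + meas_pos * meas_weight) / total
    return fused, total
```

Because the fused state is just a point with a confidence counter (rather than a voxel grid), deforming the surfel cloud after a loop closure and remeshing only the affected regions stays cheap.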

    OctNetFusion: Learning Depth Fusion from Data

    In this paper, we present a learning-based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning-based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real-world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning-based approach outperforms both vanilla TSDF fusion and TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results. Comment: 3DV 2017, https://github.com/griegler/octnetfusio
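The classic Curless-Levoy baseline that this work compares against is a per-voxel weighted running average of truncated signed distances. A minimal sketch of one voxel update, assuming a simple truncation band and unit measurement weights (function name and defaults are illustrative, not from the paper):

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, new_weight=1.0, trunc=1.0):
    """One Curless-Levoy voxel update: clip the new signed distance to the
    truncation band, then blend it into the stored value by a weighted
    running average and accumulate the weight."""
    new_tsdf = np.clip(new_tsdf, -trunc, trunc)
    fused = (tsdf * weight + new_tsdf * new_weight) / (weight + new_weight)
    return fused, weight + new_weight
```

The averaging explains both limitations noted above: noisy observations only cancel out after many frames, and voxels that are never observed (occluded regions) keep no meaningful value, which is what the learned 3D CNN addresses.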

    Continuous time-varying biasing approach for spectrally tunable infrared detectors

    In a recently demonstrated algorithmic spectral-tuning technique by Jang et al. [Opt. Express 19, 19454-19472 (2011)], the reconstruction of an object’s emissivity at an arbitrarily specified spectral window of interest in the long-wave infrared region was achieved. The technique relied upon forming a weighted superposition of a series of photocurrents from a quantum dots-in-a-well (DWELL) photodetector operated at discrete static biases that were applied serially. Here, the technique is generalized such that a continuously varying biasing voltage is employed over an extended acquisition time, in place of using a series of fixed biases over each sub-acquisition time, which entirely eliminates the need for the post-processing step comprising the weighted superposition of the discrete photocurrents. To enable this capability, an algorithm is developed for designing the time-varying bias for an arbitrary spectral-sensing window of interest. Since continuous-time biasing can be implemented within the readout circuit of a focal-plane array, this generalization would pave the way for the implementation of algorithmic spectral tuning in focal-plane arrays within each frame time, without the need for on-sensor multiplications and additions. The technique is validated by means of simulations in the context of spectrometry and object classification while using experimental data for the DWELL under realistic signal-to-noise ratios.
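The discrete-bias baseline being generalized here synthesizes a target spectral filter as a weighted superposition of bias-dependent responsivities: weights are chosen so the combined photocurrent approximates what an ideal filter would measure. A toy numerical sketch with made-up responsivity and spectrum data (the matrices, the least-squares weight design, and all values are assumptions for illustration, not the paper's data or algorithm):

```python
import numpy as np

# Hypothetical responsivity matrix: row k = detector responsivity vs.
# wavelength at static bias v_k (3 biases x 3 wavelength samples).
R = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.2, 1.0]])

# Desired spectral window: pass only the middle wavelength band.
target = np.array([0.0, 1.0, 0.0])

# Least-squares weights so that w @ R approximates the target filter.
w, *_ = np.linalg.lstsq(R.T, target, rcond=None)

phi = np.array([2.0, 1.0, 3.0])   # hypothetical incident spectrum
currents = R @ phi                # one photocurrent per static bias
estimate = w @ currents           # weighted superposition of photocurrents
```

The continuous-bias generalization in the abstract effectively folds these weights into the bias waveform itself, so the readout integrates the correctly weighted signal over the acquisition time and the explicit superposition step disappears.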