
    LDSO: Direct Sparse Odometry with Loop Closure

    In this paper we present an extension of Direct Sparse Odometry (DSO) to a monocular visual SLAM system with loop closure detection and pose-graph optimization (LDSO). As a direct technique, DSO can utilize any image pixel with sufficient intensity gradient, which makes it robust even in featureless areas. LDSO retains this robustness, while at the same time ensuring the repeatability of some of these points by favoring corner features in the tracking frontend. This repeatability makes it possible to reliably detect loop closure candidates with a conventional feature-based bag-of-words (BoW) approach. Loop closure candidates are verified geometrically, and Sim(3) relative pose constraints are estimated by jointly minimizing 2D and 3D geometric error terms. These constraints are fused with a co-visibility graph of relative poses extracted from DSO's sliding-window optimization. Our evaluation on publicly available datasets demonstrates that the modified point selection strategy retains the tracking accuracy and robustness, and that the integrated pose-graph optimization significantly reduces the accumulated rotation, translation, and scale drift, resulting in overall performance comparable to state-of-the-art feature-based systems, even without global bundle adjustment.
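
    The pose-graph step can be summarized by a standard Sim(3) objective (a generic formulation, not necessarily LDSO's exact parameterization): the keyframe poses S_i in Sim(3) are refined so that they agree with the relative constraints S_ij coming from the co-visibility graph and the verified loop closures,

        E(S_1, \dots, S_n) = \sum_{(i,j) \in \mathcal{E}} \left\| \log_{\mathrm{Sim}(3)}\!\left( S_{ij}\, S_j\, S_i^{-1} \right) \right\|^2_{\Sigma_{ij}},

    where log_Sim(3) maps the residual transformation to its 7-dimensional tangent-space vector and Sigma_ij weights each constraint.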

    Gaussian process single-index models as emulators for computer experiments

    A single-index model (SIM) provides parsimonious multi-dimensional nonlinear regression by combining a parametric (linear) projection with a univariate nonparametric (non-linear) regression model. We show that a particular Gaussian process (GP) formulation is simple to work with and ideal as an emulator for some types of computer experiment, as it can outperform the canonical separable GP regression model commonly used in this setting. Our contribution focuses on drastically simplifying, re-interpreting, and then generalizing a recently proposed fully Bayesian GP-SIM combination, and on illustrating its favorable performance on synthetic data and a real-data computer experiment. Two R packages, both released on CRAN, have been augmented to facilitate inference under our proposed model(s). Comment: 23 pages, 9 figures, 1 table.
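
    In generic notation, a GP single-index model of this kind takes the form

        y_i = g(\mathbf{w}^\top \mathbf{x}_i) + \varepsilon_i, \qquad g \sim \mathcal{GP}\!\left(0, k(\cdot, \cdot)\right), \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),

    so that the multi-dimensional inputs x_i enter only through the scalar projection w^T x_i, on which the unknown link function g carries a GP prior; the specific priors and covariance structure used in the paper may differ from this sketch.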

    A High-Performance Triple Patterning Layout Decomposer with Balanced Density

    Triple patterning lithography (TPL) has received increasing attention from industry as one of the leading candidates for the 14nm/11nm nodes. In this paper, we propose a high-performance layout decomposer for TPL. Density balancing is seamlessly integrated into all key steps of our TPL layout decomposition, including density-balanced semi-definite programming (SDP), density-based mapping, and density-balanced graph simplification. Our new TPL decomposer achieves high performance even compared to previous state-of-the-art layout decomposers that are not balanced-density aware, e.g., by Yu et al. (ICCAD'11), Fang et al. (DAC'12), and Kuang et al. (DAC'13). Furthermore, the balanced-density version of our decomposer provides more balanced density, which leads to less edge placement error (EPE), while the conflict and stitch counts remain very close to those of our non-balanced-density baseline.
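
    As a toy illustration of the density-balancing idea only (this is not the paper's SDP-based decomposer, and all names below are hypothetical), a coloring step can prefer the legal mask whose accumulated feature area is currently smallest:

        # Toy sketch: assign each layout feature to one of three masks (colors),
        # avoiding masks used by conflicting neighbors and breaking ties by the
        # mask with the smallest accumulated feature area (density balancing).
        def decompose(features, conflicts, areas, num_masks=3):
            mask_area = [0.0] * num_masks            # accumulated area per mask
            assignment = {}                          # feature -> mask index
            for f in features:                       # assumes a sensible ordering
                used = {assignment[n] for n in conflicts.get(f, []) if n in assignment}
                legal = [m for m in range(num_masks) if m not in used]
                if not legal:                        # unresolvable conflict here;
                    legal = list(range(num_masks))   # a real decomposer would stitch
                m = min(legal, key=lambda c: mask_area[c])
                assignment[f] = m
                mask_area[m] += areas[f]
            return assignment, mask_area

        # Example usage with a tiny conflict graph:
        feats = ["a", "b", "c", "d"]
        confl = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
        area = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 0.5}
        print(decompose(feats, confl, area))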

    Projection predictive model selection for Gaussian processes

    We propose a new method for simplification of Gaussian process (GP) models by projecting the information contained in the full encompassing model and selecting a reduced number of variables based on their predictive relevance. Our results on synthetic and real-world datasets show that the proposed method improves the assessment of variable relevance compared to automatic relevance determination (ARD) via the length-scale parameters. We expect the method to be useful for improving the explainability of the models, reducing future measurement costs, and reducing the computation time for making new predictions. Comment: A few minor changes in text.
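
    In the usual projection predictive formulation (written here generically; notational details may differ from the paper), the submodel parameters are chosen to make the submodel's predictions as close as possible to those of the full reference model,

        \theta_\perp = \arg\min_{\theta}\; \frac{1}{n} \sum_{i=1}^{n} \mathrm{KL}\!\left( p(\tilde{y}_i \mid \mathcal{D}, M_{\mathrm{full}}) \,\middle\|\, q(\tilde{y}_i \mid \theta, M_{\mathrm{sub}}) \right),

    and variables are ranked by how little predictive performance is lost when the full model is projected onto submodels that exclude them.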

    SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion

    Active depth cameras suffer from several limitations, which cause incomplete and noisy depth maps and may consequently affect the performance of RGB-D odometry. To address this issue, this paper presents a visual odometry method based on point and line features that leverages both measurements from a depth sensor and depth estimates from camera motion. Depth estimates are generated continuously by a probabilistic depth estimation framework for both types of features to compensate for the lack of depth measurements and inaccurate feature depth associations. The framework explicitly models the uncertainty of triangulating depth from both point and line observations in order to validate and obtain precise estimates. Furthermore, depth measurements are exploited by propagating them through a depth map registration module and by using a frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D reprojection errors independently. Results on RGB-D sequences captured in large indoor and outdoor scenes, where depth sensor limitations are critical, show that the combination of depth measurements and estimates through our approach is able to overcome the absence and inaccuracy of depth measurements. Comment: IROS 2017.
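
    A generic way to write such a mixed frame-to-frame objective (illustrative notation, not necessarily the paper's exact formulation) combines image-space and 3D residuals for the estimated rigid motion T:

        E(T) = \sum_{i} \rho\!\left( \left\| \mathbf{u}_i - \pi(T \mathbf{X}_i) \right\|^2_{\Sigma_i} \right) + \sum_{j} \rho\!\left( \left\| \mathbf{X}'_j - T \mathbf{X}_j \right\|^2_{\Lambda_j} \right),

    where the first sum is a 3D-to-2D reprojection error of transformed 3D features X_i against their 2D observations u_i (with camera projection pi), the second sum is a 2D-to-3D error of transformed features against back-projected observations X'_j, rho is a robust loss, and the covariances Sigma_i, Lambda_j carry the per-feature depth uncertainty.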

    ADAM: a general method for using various data types in asteroid reconstruction

    We introduce ADAM, the All-Data Asteroid Modelling algorithm. ADAM is simple and universal since it handles all disk-resolved data types (adaptive optics or other images, interferometry, and range-Doppler radar data) in a uniform manner via the 2D Fourier transform, enabling fast convergence in model optimization. The resolved data can be combined with disk-integrated data (photometry). In the reconstruction process, the only difference between data types is a few lines of code defining the particular generalized projection from 3D onto a 2D image plane. Occultation timings can be included as sparse silhouettes, and thermal infrared data are efficiently handled with an approximate algorithm that is sufficient in practice due to the dominance of the high-contrast (boundary) pixels over the low-contrast (interior) ones. This is of particular importance for raw ALMA data, which can be handled directly by ADAM without having to construct the standard image. We study the reliability of the inversion by using the independent shape supports of function series and control-point surfaces. When other data are lacking, one can carry out fast nonconvex lightcurve-only inversion, but any shape models resulting from it should only be taken as illustrative global-scale ones. Comment: 11 pages, submitted to A&A.
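
    The uniform Fourier-domain treatment can be sketched as follows (a hypothetical illustration of the idea, not ADAM's actual code or data model; the names and the chi-square form are assumptions):

        # Compare a projected model image to disk-resolved data in the Fourier
        # domain, which is how ADAM treats all such data types uniformly.
        import numpy as np

        def model_fourier(model_image):
            # 2D FFT of the model's generalized projection (plane-of-sky image)
            return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(model_image)))

        def fourier_chi2(model_image, data_vis, uv_index, sigma):
            # Sample the model transform at the data's (u, v) grid cells and
            # accumulate a weighted squared difference against the observed
            # complex samples (e.g. interferometric visibilities).
            F = model_fourier(model_image)
            model_vis = F[uv_index[:, 0], uv_index[:, 1]]
            return np.sum(np.abs(model_vis - data_vis) ** 2 / sigma ** 2)

        # Example usage with synthetic data:
        img = np.zeros((128, 128)); img[48:80, 40:90] = 1.0    # toy silhouette
        idx = np.array([[64, 64], [60, 70], [70, 58]])         # sampled (u, v) cells
        obs = model_fourier(img)[idx[:, 0], idx[:, 1]]         # pretend observations
        print(fourier_chi2(img, obs, idx, sigma=1.0))          # ~0 by construction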