26,733 research outputs found

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
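    The abstract's cost function can be sketched in generic notation (symbols here are illustrative, not the paper's exact formulation): a sum of landmark reprojection residuals over the keyframe window plus inertial residuals between consecutive states, each weighted by its information matrix:

    ```latex
    J(\mathbf{x}) \;=\;
    \sum_{k}\;\sum_{j\in\mathcal{J}(k)}
    \mathbf{e}_{r}^{j,k\,\top}\,\mathbf{W}_{r}^{j,k}\,\mathbf{e}_{r}^{j,k}
    \;+\;
    \sum_{k}
    \mathbf{e}_{s}^{k\,\top}\,\mathbf{W}_{s}^{k}\,\mathbf{e}_{s}^{k}
    ```

    Here $\mathbf{e}_{r}^{j,k}$ is the reprojection error of landmark $j$ observed in keyframe $k$, $\mathbf{e}_{s}^{k}$ is the (linearized) inertial error term linking state $k$ to its successor, and the $\mathbf{W}$ matrices encode measurement uncertainty. Marginalizing states that drop out of the bounded window is what keeps the sum's size, and hence the per-frame cost, constant.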

    Kpc-scale Properties of Emission-line Galaxies

    We perform a detailed study of the resolved properties of emission-line galaxies at kpc scales to investigate how small-scale and global properties of galaxies are related. 119 galaxies with high-resolution Keck/DEIMOS spectra are selected to cover a wide range in morphologies over the redshift range 0.2<z<1.3. Using the HST/ACS and HST/WFC3 imaging data taken as a part of the CANDELS project, for each galaxy we perform SED fitting per resolution element, producing resolved rest-frame U-V color, stellar mass, star formation rate, age and extinction maps. We develop a technique to identify blue and red "regions" within individual galaxies, using their rest-frame color maps. As expected, for any given galaxy, the red regions are found to have higher stellar mass surface densities and older ages compared to the blue regions. Furthermore, we quantify the spatial distribution of red and blue regions with respect to both redshift and stellar mass, finding that the stronger concentration of red regions toward the centers of galaxies is not a significant function of either redshift or stellar mass. We find that the "main sequence" of star forming galaxies exists among both red and blue regions inside galaxies, with the median of blue regions forming a tighter relation with a slope of 1.1+/-0.1 and a scatter of ~0.2 dex compared to red regions with a slope of 1.3+/-0.1 and a scatter of ~0.6 dex. The blue regions show higher specific Star Formation Rates (sSFR) than their red counterparts, with the sSFR decreasing since z~1, driven primarily by the stellar mass surface densities rather than the SFRs at a given resolution element.
    Comment: 17 pages, 17 figures, Submitted to the Ap
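    Slopes and scatters like those quoted above (e.g. 1.1+/-0.1 with ~0.2 dex scatter) typically come from a straight-line fit in log–log space, log SFR = a·log M + b, with the scatter measured as the dispersion of the residuals. A minimal sketch on synthetic data (the numbers below are assumptions for illustration, not the paper's measurements):

    ```python
    import numpy as np

    # Synthetic "blue region" sample: slope 1.1, 0.2 dex intrinsic scatter.
    rng = np.random.default_rng(0)
    log_mass = rng.uniform(6.0, 9.0, 500)                       # log stellar mass surface density
    log_sfr = 1.1 * log_mass - 10.0 + rng.normal(0.0, 0.2, 500)  # main-sequence relation + scatter

    # Ordinary least-squares fit of the main-sequence relation.
    slope, intercept = np.polyfit(log_mass, log_sfr, 1)

    # Scatter = standard deviation of residuals about the fit, in dex.
    scatter = np.std(log_sfr - (slope * log_mass + intercept))
    ```

    With enough resolution elements per bin, the recovered slope and residual dispersion converge to the generating values, which is why the quoted uncertainties shrink for the tighter (blue) relation.
    
    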

    Sparsity Invariant CNNs

    In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data, even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth-annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings and will be made available upon publication.
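    The core idea of a sparsity-aware convolution can be sketched as a normalized convolution: only observed pixels contribute to each response, the result is renormalized by the summed kernel weights over observed pixels, and the validity mask is propagated by max-pooling. The NumPy loop below is a minimal single-channel sketch of that idea (naive loops, "valid" padding), not the paper's implementation:

    ```python
    import numpy as np

    def sparse_conv2d(x, mask, w, eps=1e-8):
        """Normalized ("sparsity-aware") convolution sketch.

        x    : (H, W) input with unobserved entries (value irrelevant where mask==0)
        mask : (H, W) binary validity mask, 1 = observed
        w    : (kh, kw) convolution kernel
        Only observed pixels contribute; the response is renormalized by
        the summed kernel weight over observed pixels under the window.
        """
        kh, kw = w.shape
        H, W = x.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        new_mask = np.zeros_like(out)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                xp = x[i:i + kh, j:j + kw]
                mp = mask[i:i + kh, j:j + kw]
                norm = (w * mp).sum()                       # weight mass over observed pixels
                out[i, j] = (w * mp * xp).sum() / (norm + eps)
                new_mask[i, j] = mp.max()                   # mask propagated by max-pooling
        return out, new_mask
    ```

    The renormalization is what makes the response independent of how many inputs happened to be observed: a constant input with holes still yields (approximately) that constant wherever at least one observed pixel falls under the kernel.
    
    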

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
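    The "de facto standard formulation" mentioned here is maximum-a-posteriori estimation over a factor graph. In generic notation (a sketch, not tied to any specific system), the robot trajectory and map variables X are estimated from measurements Z by minimizing a sum of weighted measurement residuals:

    ```latex
    \mathbf{X}^{\ast}
    \;=\; \arg\max_{\mathbf{X}} \, p(\mathbf{X}\mid \mathbf{Z})
    \;=\; \arg\min_{\mathbf{X}} \sum_{k}
    \big\lVert h_{k}(\mathbf{X}_{k}) - \mathbf{z}_{k} \big\rVert^{2}_{\boldsymbol{\Omega}_{k}}
    ```

    Each factor $k$ relates a small subset of variables $\mathbf{X}_{k}$ to a measurement $\mathbf{z}_{k}$ through a measurement model $h_{k}$, with $\boldsymbol{\Omega}_{k}$ the measurement information matrix (assuming Gaussian noise). The sparsity of this graph, with each measurement touching only a few variables, is what makes large-scale nonlinear optimization tractable.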

    Motion adaptation and attention: A critical review and meta-analysis

    The motion aftereffect (MAE) provides a behavioural probe into the mechanisms underlying motion perception, and has been used to study the effects of attention on motion processing. Visual attention can enhance detection and discrimination of selected visual signals. However, the relationship between attention and motion processing remains contentious: not all studies find that attention increases MAEs. Our meta-analysis reveals several factors that explain superficially discrepant findings. Across studies (37 independent samples, 76 effects) motion adaptation was significantly and substantially enhanced by attention (Cohen's d = 1.12, p < .0001). The effect more than doubled when adapting to translating (vs. expanding or rotating) motion. Other factors affecting the attention-MAE relationship included stimulus size, eccentricity and speed. By considering these behavioural analyses alongside neurophysiological work, we conclude that feature-based (rather than spatial or object-based) attention is the biggest driver of sensory adaptation. Comparisons between naïve and non-naïve observers, different response paradigms, and assessment of 'file-drawer effects' indicate that neither response bias nor publication bias is likely to have significantly inflated the estimated effect of attention.
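    For readers unfamiliar with the effect-size metric quoted above: Cohen's d expresses a mean difference in units of the pooled standard deviation, so d = 1.12 means the attended condition exceeded the unattended one by more than one standard deviation. A minimal sketch of the standard two-group (independent samples) formula, which may differ from the exact estimator used in this meta-analysis:

    ```python
    import math

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        """Cohen's d for two independent groups: the difference in means
        divided by the pooled standard deviation."""
        pooled_sd = math.sqrt(
            ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
        )
        return (mean1 - mean2) / pooled_sd
    ```

    By rough convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts the reported d = 1.12 firmly in "large" territory.
    
    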