
    Reliable fusion of ToF and stereo depth driven by confidence measures

    In this paper we propose a framework for the fusion of depth data produced by a Time-of-Flight (ToF) camera and a stereo vision system. Initially, depth data acquired by the ToF camera are upsampled by an ad-hoc algorithm based on image segmentation and bilateral filtering. In parallel, a dense disparity map is obtained using the Semi-Global Matching stereo algorithm. Reliable confidence measures are extracted for both the ToF and stereo depth data. In particular, the ToF confidence also accounts for the mixed-pixel effect, and the stereo confidence accounts for the relationship between the pointwise matching costs and the cost obtained by the semi-global optimization. Finally, the two depth maps are synergistically fused by enforcing the local consistency of depth data, accounting for the confidence of the two data sources at each location. Experimental results clearly show that the proposed method produces accurate high-resolution depth maps and outperforms the compared fusion algorithms.
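    As a rough illustration of the confidence-driven fusion idea (not the paper's local-consistency formulation, which also propagates information between neighboring pixels), the sketch below blends two co-registered depth maps pixel-wise according to their confidence maps. All names and the invalid-pixel fallback are assumptions made for illustration.

        import numpy as np

        def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, eps=1e-6):
            """Per-pixel confidence-weighted blend of two co-registered depth maps.
            d_tof, d_stereo: HxW depth maps; c_tof, c_stereo: confidences in [0, 1]."""
            w = c_tof + c_stereo
            fused = (c_tof * d_tof + c_stereo * d_stereo) / np.maximum(w, eps)
            fused[w < eps] = 0.0  # no reliable source at this pixel: mark invalid
            return fused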

    A dynamic two-dimensional (D2D) weight-based map-matching algorithm

    Existing map-matching (MM) algorithms primarily localize positioning fixes along the centerline of a road and have largely ignored road width as an input. Consequently, vehicle lane-level localization, which is essential for stringent Intelligent Transport System (ITS) applications, seems difficult to accomplish, especially with positioning data from low-cost GPS sensors. This paper aims to address this limitation by developing a new dynamic two-dimensional (D2D) weight-based MM algorithm incorporating dynamic weight coefficients and road width. To enable vehicle lane-level localization, a road segment is virtually expressed as a matrix of homogeneous grids with reference to a road centerline. These grids are then used to map-match positioning fixes, as opposed to matching on a road centerline as carried out in traditional MM algorithms. In the developed algorithm, vehicle location identification on a road segment is based on a total weight score, which is a function of four different weights: (i) proximity, (ii) kinematic, (iii) turn-intent prediction, and (iv) connectivity. Different parameters representing network complexity and positioning quality are used to assign the relative importance of the different weight scores by employing an adaptive regression method. To demonstrate the transferability of the developed algorithm, it was tested using 5,830 GPS positioning points collected in Nottingham, UK, and 7,414 GPS positioning points collected in Mumbai and Pune, India. The developed algorithm, using stand-alone GPS position fixes, identifies the correct links 96.1% (for the Nottingham data) and 98.4% (for the Mumbai-Pune data) of the time. In terms of correct lane identification, the algorithm was found to provide accurate matching for 84% (Nottingham) and 79% (Mumbai-Pune) of the fixes obtained by stand-alone GPS. Using the same methodology adopted in this study, the accuracy of lane identification could be further enhanced if localization data from additional sensors (e.g., a gyroscope) were utilized. The ITS industry and vehicle manufacturers can implement this D2D map-matching algorithm for liability-critical and in-vehicle information systems and services such as advanced driver assistance systems (ADAS).
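    A minimal sketch of the scoring step described above, assuming the four weights have already been computed per candidate grid cell; the coefficient values, which the paper derives by adaptive regression from network complexity and positioning quality, are placeholders here.

        from dataclasses import dataclass

        @dataclass
        class GridCell:
            proximity: float     # closeness of the GPS fix to the cell
            kinematic: float     # agreement with vehicle speed/heading
            turn_intent: float   # turn-intent prediction score
            connectivity: float  # topological link with the previous match

        def total_weight(cell, coeffs):
            # Total weight score: a coefficient-weighted sum of the four weights.
            return (coeffs["proximity"] * cell.proximity
                    + coeffs["kinematic"] * cell.kinematic
                    + coeffs["turn_intent"] * cell.turn_intent
                    + coeffs["connectivity"] * cell.connectivity)

        def match_fix(candidate_cells, coeffs):
            # Map-match the fix to the cell with the highest total weight score.
            return max(candidate_cells, key=lambda c: total_weight(c, coeffs))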

    Socially Aware Motion Planning with Deep Reinforcement Learning

    For robotic vehicles to navigate safely and efficiently in pedestrian-rich environments, it is important to model subtle human behaviors and navigation rules (e.g., passing on the right). However, while instinctive to humans, socially compliant navigation is still difficult to quantify due to the stochasticity in people's behaviors. Existing works mostly focus on using feature-matching techniques to describe and imitate human paths, but these often do not generalize well, since the feature values can vary from person to person, and even from run to run. This work notes that while it is challenging to directly specify the details of what to do (precise mechanisms of human navigation), it is straightforward to specify what not to do (violations of social norms). Specifically, using deep reinforcement learning, this work develops a time-efficient navigation policy that respects common social norms. The proposed method is shown to enable fully autonomous navigation of a robotic vehicle moving at human walking speed in an environment with many pedestrians.
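    The "specify what not to do" idea amounts to reward shaping that penalizes norm violations. A hedged sketch follows, with placeholder magnitudes and predicates rather than the paper's exact geometric norm-violation zones.

        def social_reward(reached_goal: bool,
                          min_pedestrian_dist: float,
                          violated_norm: bool) -> float:
            """Illustrative RL reward: reward goal arrival, penalize near-collisions,
            and apply a small penalty for social-norm violations (e.g. passing on
            the left instead of the right). Values are placeholders."""
            r = 1.0 if reached_goal else 0.0
            if min_pedestrian_dist < 0.2:  # uncomfortably close to a pedestrian
                r -= 0.25
            if violated_norm:              # entered a norm-violating zone
                r -= 0.1
            return r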

    Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations

    Size, weight, and power constrained platforms impose constraints on computational resources that introduce unique challenges in implementing localization algorithms. We present a framework to perform fast localization on such platforms, enabled by the compressive capabilities of Gaussian Mixture Model representations of point cloud data. Given raw structural data from a depth sensor and pitch and roll estimates from an on-board attitude reference system, a multi-hypothesis particle filter localizes the vehicle by exploiting the likelihood of the data originating from the mixture model. We demonstrate analysis of this likelihood in the vicinity of the ground-truth pose, detail its utilization in a particle filter-based vehicle localization strategy, and present results of real-time implementations on a desktop system and an off-the-shelf embedded platform that outperform the localization results of a state-of-the-art algorithm run in the same environment.
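    The core measurement update can be sketched as follows: each particle's weight is the likelihood of the depth points, transformed by that particle's pose hypothesis, under the Gaussian mixture map. Function and variable names are illustrative, not the authors' implementation.

        import numpy as np

        def gmm_log_likelihood(points, means, covs, weights):
            """Log-likelihood of (N,3) points under a K-component 3-D Gaussian
            mixture with means (K,3), covariances (K,3,3), and weights (K,)."""
            log_p = np.empty((points.shape[0], len(weights)))
            for k in range(len(weights)):
                d = points - means[k]
                maha = np.einsum("ni,ij,nj->n", d, np.linalg.inv(covs[k]), d)
                log_norm = -0.5 * (3 * np.log(2 * np.pi) + np.log(np.linalg.det(covs[k])))
                log_p[:, k] = np.log(weights[k]) + log_norm - 0.5 * maha
            m = log_p.max(axis=1)  # log-sum-exp over components, summed over points
            return float(np.sum(m + np.log(np.exp(log_p - m[:, None]).sum(axis=1))))

        def particle_log_weight(R, t, scan, means, covs, weights):
            # Transform the scan into the map frame with the pose hypothesis (R, t).
            return gmm_log_likelihood(scan @ R.T + t, means, covs, weights)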

    PURIFY: a new approach to radio-interferometric imaging

    In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a minimization problem for image reconstruction. This approach was shown, in theory and through simulations in a simple discrete visibility setting, to have the potential to significantly outperform CLEAN and its evolutions. In this work, we leverage the versatility of convex optimization in solving minimization problems to both handle realistic continuous visibilities and offer a highly parallelizable structure, paving the way to significant acceleration of the reconstruction and to scalability to high-dimensional data. The new algorithmic structure relies on the simultaneous-direction method of multipliers (SDMM) and contrasts with the current major-minor cycle structure of CLEAN and its evolutions, which in particular cannot handle the state-of-the-art minimization problems under consideration, where neither the regularization term nor the data term is a differentiable function. We release a beta version of an SDMM-based imaging software package written in C and dubbed PURIFY (http://basp-group.github.io/purify/) that handles various sparsity priors, including our recent average sparsity approach SARA. We evaluate the performance of different priors through simulations in the continuous visibility setting, confirming the superiority of SARA.
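    In skeleton form, SDMM minimizes a sum of (possibly non-differentiable) terms f_i(L_i x) using only their proximity operators. The sketch below assumes each L_i is a tight frame (L_i^T L_i = I) so the x-update reduces to an average, a simplification of the paper's setting; names and the fixed iteration count are illustrative.

        import numpy as np

        def sdmm(x0, ops, proxes, n_iter=100):
            """ops: list of (L, Lt) forward/adjoint operator pairs (as functions);
            proxes[i]: proximity operator of f_i (e.g. soft-thresholding for an
            l1 prior, projection onto an l2 ball for the data-fidelity term)."""
            x = x0.copy()
            z = [L(x) for L, _ in ops]
            s = [np.zeros_like(zi) for zi in z]
            for _ in range(n_iter):
                # x-update: the exact solve collapses to an average when Lt(L(x)) = x
                x = sum(Lt(z[i] - s[i]) for i, (_, Lt) in enumerate(ops)) / len(ops)
                for i, (L, _) in enumerate(ops):
                    Lx = L(x)
                    z[i] = proxes[i](Lx + s[i])  # proximal step on f_i
                    s[i] += Lx - z[i]            # Lagrange multiplier update
            return x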

    Stereo and ToF Data Fusion by Learning from Synthetic Data

    Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper we present a novel framework for data fusion in which the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused by enforcing the local consistency of depth data, taking into account the estimated confidence information. The deep network is trained on a synthetic dataset, and we show how the classifier is able to generalize to different data, obtaining reliable estimations not only on synthetic data but also on real-world scenes. Experimental results show that the proposed approach increases the accuracy of the depth estimation on both synthetic and real data and is able to outperform state-of-the-art methods.
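    A toy version of the confidence-estimation stage, assuming the network consumes the two depth maps stacked as channels and emits two per-pixel confidence maps; the architecture, channel counts, and the pointwise blend (the paper additionally enforces local consistency) are all illustrative.

        import torch
        import torch.nn as nn

        class ConfidenceNet(nn.Module):
            """Tiny stand-in for the paper's jointly estimated confidence CNN."""
            def __init__(self, in_ch=2):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 2, 3, padding=1), nn.Sigmoid(),  # two maps in [0,1]
                )

            def forward(self, x):
                return self.net(x)

        def fuse(d_tof, d_stereo, model):
            # d_tof, d_stereo: (B,H,W) depth maps; returns a (B,1,H,W) fused map.
            c = model(torch.stack([d_tof, d_stereo], dim=1))
            w = c.sum(dim=1, keepdim=True).clamp_min(1e-6)
            return (c[:, :1] * d_tof.unsqueeze(1) + c[:, 1:] * d_stereo.unsqueeze(1)) / w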