37 research outputs found

    Probably Unknown: Deep Inverse Sensor Modelling In Radar

    Full text link
    Radar presents a promising alternative to lidar and vision in autonomous vehicle applications, as it can detect objects at long range under a variety of weather conditions. However, distinguishing occupied from free space in raw radar power returns is challenging due to complex interactions between sensor noise and occlusion. To counter this, we propose to learn an Inverse Sensor Model (ISM) that converts a raw radar scan to a grid map of occupancy probabilities using a deep neural network. Our network is self-supervised using partial occupancy labels generated by lidar, allowing a robot to learn about world occupancy from past experience without human supervision. We evaluate our approach on five hours of data recorded in a dynamic urban environment. By accounting for the scene context of each grid cell, our model successfully segments the world into occupied and free space, outperforming standard CFAR filtering approaches. Additionally, by incorporating heteroscedastic uncertainty into our model formulation, we are able to quantify the variance in the uncertainty throughout the sensor observation. Through this mechanism we successfully identify regions of space that are likely to be occluded. Comment: 6 full pages, 1 page of references.
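The heteroscedastic-uncertainty idea can be sketched with the generic learned-noise loss used in heteroscedastic regression. This is an illustrative sketch, not the paper's exact (classification-based) formulation; the function name and all values are assumptions.

```python
import math

def heteroscedastic_loss(mu, log_var, y):
    """Per-cell loss with a learned noise level (illustrative sketch).

    The model predicts both a value `mu` and a log-variance `log_var`.
    Cells the model deems uncertain (large `log_var`) get a
    down-weighted residual, plus a penalty term that stops the model
    from simply declaring every cell uncertain.
    """
    precision = math.exp(-log_var)
    return 0.5 * precision * (y - mu) ** 2 + 0.5 * log_var

# A confident, accurate prediction costs nothing:
print(heteroscedastic_loss(1.0, 0.0, 1.0))  # 0.0

# For a large residual, admitting uncertainty lowers the loss,
# which is how occluded regions can end up flagged as uncertain:
print(heteroscedastic_loss(0.0, 0.0, 3.0))  # 4.5
print(heteroscedastic_loss(0.0, 2.0, 3.0))  # ~1.609
```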

    Collaborative information sensemaking for multi-robot search and rescue

    Get PDF
    In this paper, we consider novel information sensemaking methods for search and rescue operations that combine principles of information fusion and collective intelligence into scalable solutions. We elaborate on several approaches that originated in different areas of information integration, sensor data management, and multi-robot urban search and rescue missions.

    Real-Time Power-Efficient Integration of Multi-Sensor Occupancy Grid on Many-Core

    Get PDF
    Safe Autonomous Vehicles (AVs) will emerge only when comprehensive perception systems are successfully integrated into vehicles. Advanced perception algorithms, which estimate the position and speed of every obstacle in the environment via data fusion from multiple sensors, have been developed for AV prototypes. The computational requirements of such applications have prevented their integration into AVs on current low-power embedded hardware. However, emerging many-core architectures offer an opportunity to meet automotive market constraints and efficiently support advanced perception applications. This paper explores the integration of the occupancy grid multi-sensor fusion algorithm on low-power many-core architectures. The parallel properties of this function are exploited to achieve real-time performance at low power consumption. The proposed implementation achieves an execution time of 6.26 ms, 6× faster than typical sensor output rates and 9× faster than previous embedded prototypes.
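The parallelism the paper exploits comes from the fact that standard occupancy grid fusion is independent per cell. A minimal log-odds fusion sketch (function names and probabilities are assumptions, not from the paper):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(l):
    return 1.0 / (1.0 + math.exp(-l))

def fuse_cell(probs, prior=0.5):
    """Fuse per-sensor occupancy probabilities for one cell in
    log-odds form; the prior is subtracted once per sensor so it
    is not counted multiple times."""
    l = sum(logit(p) - logit(prior) for p in probs) + logit(prior)
    return sigmoid(l)

# Each cell is independent of its neighbours, so a whole grid can
# be fused cell-by-cell in parallel -- the property a many-core
# implementation exploits.
grid_a = [0.8, 0.5, 0.2]   # sensor A's occupancy estimates
grid_b = [0.8, 0.9, 0.2]   # sensor B's occupancy estimates
fused = [fuse_cell(pair) for pair in zip(grid_a, grid_b)]
```

Two agreeing 0.8 readings fuse to roughly 0.94, while the uninformative 0.5 reading leaves the other sensor's estimate unchanged.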

    Combining Occupancy Grids with a Polygonal Obstacle World Model for Autonomous Flights

    Get PDF
    This chapter presents a mapping process that can be applied to autonomous systems for obstacle avoidance and trajectory planning. It improves on commonly applied obstacle mapping techniques, such as occupancy grids: it tackles problems encountered in large outdoor scenarios and produces a compressed map that can be sent over low-bandwidth networks. The approach is real-time capable and works in full 3-D environments. Its efficiency is demonstrated under real operational conditions on an unmanned aerial vehicle using stereo vision for distance measurement.

    Experimental Comparison of Sonar-Based Occupancy Grid Map Building Methods (Eksperimentalna usporedba metoda izgradnje mrežastih karata prostora korištenjem ultrazvučnih senzora)

    Get PDF
    For the successful use of mobile robots in human working areas, several navigation problems have to be solved. One of these is the creation and updating of a model, or map, of the mobile robot's working environment. This article describes the most commonly used types of occupancy grid maps based on sonar range readings: (i) the Bayesian map, (ii) the Dempster-Shafer map, (iii) the fuzzy map, (iv) the Borenstein map, (v) the MURIEL map, and (vi) the TBF map. Besides describing the maps, we compare their memory consumption and computation time. Simulation validation is done using the AMORsim mobile robot simulator for Matlab, and experimental validation is done using a Pioneer 3DX mobile robot. The obtained results are presented and compared with respect to the resulting map quality.
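Of the map types compared, the Dempster-Shafer variant differs most from the Bayesian one: each cell carries belief masses rather than a single probability, combined with Dempster's rule. A generic sketch of that rule over the frame {occupied, free} (not the article's implementation; all masses below are made up):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'O', 'F'}, with
    'T' standing for the whole frame Theta (unknown).
    Intersections: O&O=O, F&F=F, X&T=X, O&F = conflict."""
    def meet(a, b):
        if a == b:
            return a
        if a == 'T':
            return b
        if b == 'T':
            return a
        return None  # conflicting evidence (O vs F)

    combined = {'O': 0.0, 'F': 0.0, 'T': 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += pa * pb
            else:
                combined[c] += pa * pb
    # Dempster's rule: normalise out the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {'O': 0.6, 'F': 0.1, 'T': 0.3}   # one sonar reading
m2 = {'O': 0.5, 'F': 0.2, 'T': 0.3}   # a second reading
m = dempster_combine(m1, m2)          # O ~0.759, F ~0.133, T ~0.108
```

Two mostly agreeing readings reinforce the 'occupied' mass while shrinking the 'unknown' mass, which is the qualitative behaviour such maps rely on.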

    Occupancy grids from stereo and optical flow data

    Get PDF
    In this paper, we propose a real-time method to detect obstacles using theoretical models of the ground plane, first in a 3D point cloud given by a stereo camera, and then in an optical flow field given by one of the stereo pair's cameras. The idea of our method is to combine two partial occupancy grids from both sensor modalities within an occupancy grid framework. The two methods do not have the same range, precision, and resolution: for example, the stereo method is precise for close objects but cannot see further than 7 m (with our lenses), while the optical flow method can see considerably further but has lower accuracy. Experiments carried out on the CyCab mobile robot and on a tractor demonstrate that we can combine the advantages of both algorithms to build local occupancy grids from incomplete data (optical flow from a monocular camera cannot give depth information without time integration).
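Combining two partial grids of this kind can be sketched with an independent opinion pool, where cells outside a sensor's range are left unknown. A minimal sketch under that assumption (the function name, `None` convention, and probabilities are illustrative, not from the paper):

```python
def combine(p1, p2):
    """Combine two occupancy estimates for the same cell.

    Cells outside a sensor's range are marked None; the other
    sensor's estimate then passes through unchanged, which is how
    two partial grids can cover each other's blind spots.
    """
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    num = p1 * p2
    return num / (num + (1.0 - p1) * (1.0 - p2))

stereo = [0.9, 0.7, None, None]   # precise, but short range
flow   = [0.8, None, 0.6, None]   # longer range, less accurate
fused  = [combine(a, b) for a, b in zip(stereo, flow)]
# -> [~0.973, 0.7, 0.6, None]
```

Where both sensors see a cell their evidence reinforces; where only one does, its estimate survives; where neither does, the cell stays unknown.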

    Mapping multiple gas/odor sources in an uncontrolled indoor environment using a Bayesian occupancy grid mapping based method

    Get PDF
    The definitive version was published in Robotics and Autonomous Systems 59 (2011): 988–1000, doi:10.1016/j.robot.2011.06.007. In this paper we address the problem of autonomously localizing multiple gas/odor sources in an indoor environment without a strong airflow. To do this, a robot iteratively builds an occupancy grid map in which each discrete cell holds the probability that it contains a source. Our approach is based on a recent adaptation [15] of traditional Bayesian occupancy grid mapping to chemical source localization problems, and in the considered scenario it is less sensitive to the choice of the algorithm parameters. We present experimental results with a robot in an uncontrolled indoor corridor in the presence of different ejecting sources, showing that the method can build reliable maps quickly (5.5 minutes in a 6 m x 2.1 m area) and in real time.
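The core of such a map is a per-cell binary Bayes update of the source-presence probability from repeated detection events. A minimal sketch, assuming illustrative detection rates (the function name and all rates below are made up, not the paper's values):

```python
def bayes_update(prior, detected, p_det_source=0.7, p_det_nosource=0.2):
    """One Bayesian update of the probability that a cell contains
    a gas source, given a binary detection event.

    p_det_source   : chance of a detection when a source is present
    p_det_nosource : chance of a (false) detection when it is not
    """
    if detected:
        l_s, l_ns = p_det_source, p_det_nosource
    else:
        l_s, l_ns = 1.0 - p_det_source, 1.0 - p_det_nosource
    num = l_s * prior
    return num / (num + l_ns * (1.0 - prior))

# Repeated sensing of one cell drives its probability up or down;
# a single false negative only dents, not erases, the belief.
p = 0.5
for z in [True, True, False, True]:
    p = bayes_update(p, z)
```

After these four observations the cell's probability ends up above 0.9, despite the one missed detection.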