7 research outputs found

    Crossmodal Attentive Skill Learner

    Full text link
    This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated with the recently introduced Asynchronous Advantage Option-Critic (A2OC) architecture [Harb et al., 2017] to enable hierarchical reinforcement learning across multiple sensory inputs. We provide concrete examples where the approach not only improves performance in a single task, but also accelerates transfer to new tasks. We demonstrate that the attention mechanism anticipates and identifies useful latent features, while filtering irrelevant sensor modalities during execution. We modify the Arcade Learning Environment [Bellemare et al., 2013] to support audio queries, and conduct evaluations of crossmodal learning in the Atari 2600 game Amidar. Finally, building on the recent work of Babaeizadeh et al. [2017], we open-source a fast hybrid CPU-GPU implementation of CASL.
    Comment: International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2018, NIPS 2017 Deep Reinforcement Learning Symposium
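
    As a rough illustration of the idea (a hedged sketch, not the open-sourced CASL implementation), the Python snippet below shows one way attention weights over two sensor modalities could be computed and used to fuse their feature encodings; the function names, weight vectors, and scoring scheme are assumptions for exposition only.

        import numpy as np

        def softmax(s):
            e = np.exp(s - s.max())
            return e / e.sum()

        def crossmodal_attend(video_feat, audio_feat, w_video, w_audio):
            """Score each modality encoding, then return the attention
            weights and the attention-weighted fused feature (hypothetical)."""
            feats = np.stack([video_feat, audio_feat])    # (2, d)
            scores = np.array([w_video @ video_feat,      # one scalar score per modality
                               w_audio @ audio_feat])
            alpha = softmax(scores)                       # modality attention weights
            fused = (alpha[:, None] * feats).sum(axis=0)  # weighted mix of features
            return alpha, fused

        # Toy usage: the weights indicate which modality currently dominates.
        rng = np.random.default_rng(0)
        video, audio = rng.normal(size=16), rng.normal(size=16)
        w_v, w_a = rng.normal(size=16), rng.normal(size=16)
        alpha, fused = crossmodal_attend(video, audio, w_v, w_a)
        print("attention over [video, audio]:", alpha)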

    GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests

    Get PDF
    Autonomous navigation of unmanned vehicles in forests is a challenging task. In such environments, because of the tree canopies, information from Global Navigation Satellite Systems (GNSS) can be degraded or even unavailable. Also, because of the large number of obstacles, building a detailed map of the environment in advance is not practical. In this paper, we solve the complete navigation problem of an aerial robot in a sparse forest, where there is enough space for flight and GNSS signals can be sporadically detected. For localization, we propose a state estimator that merges information from GNSS, Attitude and Heading Reference Systems (AHRS), and odometry based on Light Detection and Ranging (LiDAR) sensors. In our LiDAR-based odometry solution, tree trunks are used in a feature-based scan-matching algorithm to estimate the relative movement of the vehicle. Our method employs a robust adaptive fusion algorithm based on the unscented Kalman filter. For motion control, we adopt a strategy that integrates a vector field, used to impose the main direction of movement for the robot, with an optimal probabilistic planner, which is responsible for obstacle avoidance. Experiments with a quadrotor equipped with a planar LiDAR in an actual forest environment demonstrate the effectiveness of our approach.
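
    A minimal sketch of the fusion idea (assumptions throughout: the authors' robust adaptive algorithm is replaced by a plain UKF from the filterpy library, with a constant-velocity state [px, py, vx, vy], invented noise levels, and a synthetic measurement stream): GNSS fixes and LiDAR-odometry positions feed the same filter with source-dependent measurement covariance.

        import numpy as np
        from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

        def fx(x, dt):
            """Constant-velocity motion model over state [px, py, vx, vy]."""
            F = np.array([[1, 0, dt, 0],
                          [0, 1, 0, dt],
                          [0, 0, 1, 0],
                          [0, 0, 0, 1.0]])
            return F @ x

        def hx(x):
            """Both sources measure planar position only."""
            return x[:2]

        points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
        ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=0.1, fx=fx, hx=hx,
                                    points=points)
        ukf.x = np.zeros(4)
        ukf.Q = np.eye(4) * 0.01

        R_GNSS = np.eye(2) * 4.0    # absolute but noisy under the canopy
        R_LIDAR = np.eye(2) * 0.05  # locally precise, drifts over time

        # Synthetic stream: mostly LiDAR odometry, with sporadic GNSS fixes.
        rng = np.random.default_rng(1)
        for k in range(100):
            truth = np.array([0.1 * k, 0.05 * k])
            source = "gnss" if k % 10 == 0 else "lidar"
            z = truth + rng.normal(scale=2.0 if source == "gnss" else 0.2, size=2)
            ukf.predict()
            ukf.update(z, R=R_GNSS if source == "gnss" else R_LIDAR)

        print("final position estimate:", ukf.x[:2])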

    Vision-Aided Navigation for Autonomous Vehicles Using Tracked Feature Points

    Get PDF
    This thesis discusses the evaluation, implementation, and testing of several navigation and feature extraction algorithms using an inertial measurement unit (IMU) and an image capture device (camera) mounted on a ground robot and a quadrotor UAV. The vision-aided navigation algorithms are applied to data collected from sensors on an unmanned ground vehicle and a quadrotor, and the results are validated by comparison with GPS data. The thesis investigates sensor fusion techniques for integrating measured IMU data with information extracted by image processing algorithms in order to provide accurate vehicle state estimation. This image-based information takes the form of features, such as corners, that are tracked over multiple image frames. An extended Kalman filter (EKF) is implemented to fuse the vision and IMU data. The main goal of the work is to provide navigation for mobile robots in GPS-denied settings such as indoor environments, cluttered urban environments, or space environments such as asteroids, other planets, or the moon. The experimental results show that combining pose information extracted from IMU readings with pose information extracted by a vision-based algorithm solves both the drift problem of using the IMU alone and the scale ambiguity of using a monocular vision-based algorithm alone.
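
    A stripped-down planar sketch of that fusion loop (illustrative assumptions only: the thesis estimates a full 3D state from tracked features, while here the state is reduced to [px, py, yaw], the vision module to a direct pose fix, and all noise values are invented):

        import numpy as np

        Q = np.diag([0.02, 0.02, 0.01])  # IMU process noise (assumed)
        R = np.diag([0.05, 0.05, 0.03])  # vision measurement noise (assumed)

        def ekf_predict(x, P, v, w, dt):
            """Propagate the pose with IMU-derived speed v and yaw rate w."""
            px, py, yaw = x
            x_new = np.array([px + v * np.cos(yaw) * dt,
                              py + v * np.sin(yaw) * dt,
                              yaw + w * dt])
            F = np.array([[1, 0, -v * np.sin(yaw) * dt],  # motion-model Jacobian
                          [0, 1,  v * np.cos(yaw) * dt],
                          [0, 0, 1]])
            return x_new, F @ P @ F.T + Q

        def ekf_update(x, P, z):
            """Correct with a vision pose fix; H = I since pose is measured directly."""
            K = P @ np.linalg.inv(P + R)  # Kalman gain (H = I)
            return x + K @ (z - x), (np.eye(3) - K) @ P

        x, P = np.zeros(3), np.eye(3) * 0.1
        for _ in range(50):  # IMU-rate prediction steps
            x, P = ekf_predict(x, P, v=1.0, w=0.05, dt=0.1)
        x, P = ekf_update(x, P, np.array([4.9, 0.4, 0.24]))  # camera-rate fix
        print("pose after fusion:", x)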

    Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments

    No full text

    HETEROGENEOUS MULTI-SENSOR FUSION FOR 2D AND 3D POSE ESTIMATION

    Get PDF
    Sensor fusion is a process in which data from different sensors are combined to obtain an output that cannot be acquired from any individual sensor. This dissertation first considers a 2D, image-level, real-world problem from the rail industry and proposes a novel solution using sensor fusion; it then proceeds to the more complicated 3D problem of multi-sensor fusion for UAV pose estimation. One of the most important safety-related tasks in the rail industry is the early detection of defective rolling stock components. Railway wheels and wheel bearings are two components prone to damage due to their interactions with the brakes and railway track, which makes them a high priority when the rail industry investigates improvements to current detection processes. The main contribution of this dissertation in this area is the development of a computer vision method for automatically detecting defective wheels that can potentially replace the current manual inspection procedure. The algorithm fuses images taken by wayside thermal and vision cameras and uses the outcome for wheel defect detection. As a byproduct, the process also includes a method for detecting hot bearings from the same images. We evaluate our algorithm using simulated and real image data from the UPRR in North America, and we show that sensor fusion techniques improve the accuracy of malfunction detection.
    After the 2D application, the more complicated 3D application is addressed. Precise, robust, and consistent localization is an important subject in many areas of science such as vision-based control, path planning, and SLAM. Each of the different sensors employed to estimate pose has its strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages. In this dissertation, a new approach to 3D pose estimation for a UAV in an unknown GPS-denied environment is presented. The proposed algorithm fuses the data from an IMU, a camera, and a 2D LiDAR to achieve accurate localization. Among the employed sensors, LiDAR has not received proper attention in the past, mostly because a 2D LiDAR can only provide pose estimation in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this research that enables us to employ a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from an IMU and a camera. To the best of our knowledge, a 2D LiDAR has never been employed for 3D localization without a prior map, and we show in this dissertation that our method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and validated by simulation and real-world experiments.
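
    As a toy illustration of the 2D fusion step only (not the dissertation's detector: the alignment homography, threshold, and synthetic frames below are invented for exposition), one could register the thermal image to the visible one with OpenCV and flag hot regions as candidate defects.

        import cv2
        import numpy as np

        # A homography aligning the thermal camera to the visible camera is
        # assumed known from calibration; identity stands in for it here.
        H = np.eye(3)

        # Synthetic frames: a flat visible image and a thermal image with one hot spot.
        visible = np.full((240, 320), 128, dtype=np.uint8)
        thermal = np.zeros((240, 320), dtype=np.uint8)
        cv2.circle(thermal, (160, 120), 15, 255, -1)  # simulated hot wheel/bearing

        # 1) Register the thermal image in the visible camera's frame.
        aligned = cv2.warpPerspective(thermal, H, (visible.shape[1], visible.shape[0]))

        # 2) Threshold hot regions in the registered thermal image.
        _, hot = cv2.threshold(aligned, 200, 255, cv2.THRESH_BINARY)

        # 3) Flag candidate defects on the visible image.
        contours, _ = cv2.findContours(hot, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(visible, (x, y), (x + w, y + h), 255, 2)

        print(f"{len(contours)} hot region(s) flagged for inspection")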

    Collaborative Robotics with Autonomous Navigation for Conversion Processes in Continuous Operations

    Get PDF
    Chapter 6.2 confidential. -- Complete thesis: 190 p. -- Redacted thesis: 165 p. This project comprises two different prototypes. On the one hand, the development of an AMR prototype that uses autonomous navigation is presented. On the other hand, research on a piece of equipment developed within Mercedes-Benz is presented. AMRs are powerful mobile platforms that use indoor autonomous navigation to move through any known area. For this reason, the Vitoria-Gasteiz School of Engineering has begun designing such a platform in order to work on localization algorithms. This mobile robot will use industrial components, which will pose certain obstacles to the development of its intelligence. A robot can be attached to an AMR; accordingly, within Mercedes-Benz, a collaborative robot has been mounted on an AMR using commercial components. This development will be useful for improving the efficiency of workstations; to that end, the robot must move, position itself, and carry out quality work in continuous operations.