
    Lidar-based scene understanding for autonomous driving using deep learning

    With over 1.35 million traffic-related fatalities worldwide, autonomous driving was foreseen at the beginning of this century as a feasible solution to improve safety on our roads. Moreover, it is poised to disrupt our transportation paradigm, reducing congestion, pollution, and costs while increasing the accessibility, efficiency, and reliability of transportation for both people and goods. Although some advances have gradually been transferred into commercial vehicles in the form of Advanced Driving Assistance Systems (ADAS) such as adaptive cruise control, blind spot detection, or automatic parking, the technology is far from mature. A full understanding of the scene is needed so that vehicles are aware of their surroundings, knowing the existing elements of the scene as well as their motion, intentions, and interactions. In this PhD dissertation, we explore new approaches for understanding driving scenes from 3D LiDAR point clouds using deep learning methods. To this end, in Part I we analyze the scene from a static perspective, using independent frames to detect neighboring vehicles. Next, in Part II we develop new ways of understanding the dynamics of the scene. Finally, in Part III we apply all the developed methods to higher-level challenges such as segmenting moving obstacles while obtaining their rigid motion vector over the ground. More specifically, in Chapter 2 we develop a 3D vehicle detection pipeline based on a multi-branch deep learning architecture and propose a front view (FR-V) and a bird's-eye view (BE-V) as 2D representations of the 3D point cloud to serve as input for training our models. Later on, in Chapter 3 we apply and further test this method in two real use cases: pre-filtering moving obstacles while creating maps, so as to better localize ourselves on subsequent days, and vehicle tracking. From the dynamic perspective, in Chapter 4 we learn from the 3D point cloud a novel dynamic feature that resembles optical flow from RGB images. For that, we develop a new approach that leverages RGB optical flow as pseudo ground truth for training while requiring only 3D LiDAR data at inference time. Additionally, in Chapter 5 we explore the benefits of combining classification and regression learning problems to address the optical flow estimation task in a joint coarse-and-fine manner. Lastly, in Chapter 6 we gather the previous methods and demonstrate that these independent tasks can guide the learning of more challenging problems, such as segmentation and motion estimation of moving vehicles from our own moving perspective.
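    A minimal sketch can illustrate the kind of 2D projection Chapter 2 builds on: rasterizing a 3D point cloud into a bird's-eye-view (BE-V) grid. The window, resolution, and channel choices below are illustrative assumptions, not the dissertation's actual encoding.

        import numpy as np

        def birds_eye_view(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0), res=0.1):
            """Rasterize an (N, 4) LiDAR cloud [x, y, z, intensity] into a 2D
            bird's-eye-view grid with max-height and intensity channels."""
            x, y, z, intensity = points.T
            # Keep only points inside the chosen metric window (assumed ranges).
            keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
            x, y, z, intensity = x[keep], y[keep], z[keep], intensity[keep]
            # Discretize metric coordinates into pixel indices.
            cols = ((x - x_range[0]) / res).astype(int)
            rows = ((y - y_range[0]) / res).astype(int)
            h = int(round((y_range[1] - y_range[0]) / res))
            w = int(round((x_range[1] - x_range[0]) / res))
            bev = np.full((h, w, 2), -np.inf, dtype=np.float32)
            for r, c, zi, ii in zip(rows, cols, z, intensity):
                if zi > bev[r, c, 0]:          # keep the highest point per cell
                    bev[r, c, 0] = zi
                    bev[r, c, 1] = ii
            bev[np.isinf(bev)] = 0.0           # empty cells get zeros
            return bev

    A front view (FR-V) follows the same pattern, only with an angular rather than top-down discretization of the cloud.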

    Detection and Tracking of Moving Objects for Autonomous Driving Based on Fusion of a Static Obstacle Map and GMFT

    Master's thesis, Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, College of Engineering, August 2019. Advisor: Kyongsu Yi (이경수). Based on the high accuracy of the LiDAR sensor, detection and tracking of moving objects (DATMO) has advanced as an important branch of perception for autonomous vehicles. However, with roads crowded by various kinds of vehicles and complicated by geographical features, it is necessary to reduce clustering failure cases and decrease the computational burden. To overcome these difficulties, this thesis proposes a novel approach that integrates DATMO with a mapping algorithm. Since DATMO and mapping are specialized for estimating moving objects and the static map, respectively, the two algorithms can improve their estimates by using each other's output. The whole perception algorithm is reconstructed as a feedback-loop structure that includes the DATMO and mapping algorithms. Moreover, the mapping algorithm and DATMO are revised into a Bayesian rule-based Static Obstacle Map (SOM) and Geometric Model-Free Tracking (GMFT), each using the other's output as a measurement in its filtering process. The proposed approach is evaluated on a driving dataset collected by vehicles equipped with RTK DGPS, RT-Range, and 2D LiDAR. Several typical clustering failure cases observed in existing DATMO approaches are reduced, and code operation time over the whole perception process is decreased. In particular, the estimated states of moving vehicles, including position, velocity, and yaw angle, show smaller errors against references measured by RT-Range. Contents: Chapter 1, Introduction; Chapter 2, Interaction of Mapping and DATMO; Chapter 3, Mapping – Static Obstacle Map (3.1 Prediction of SOM; 3.2 Measurement update of SOM); Chapter 4, DATMO – Geometric Model-Free Tracking (4.1 Prediction of target state; 4.2 Track management; 4.3 Measurement update of target state); Chapter 5, Experimental Results (5.1 Vehicles and sensors configuration; 5.2 Detection rate of moving object; 5.3 State estimation accuracy of moving object; 5.4 Code operation time); Chapter 6, Conclusion and Future Work (6.1 Conclusion; 6.2 Future works); Bibliography; Abstract in Korean.
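    The Bayesian rule-based SOM can be pictured as a log-odds occupancy update per map cell; the following sketch shows the generic form, with inverse sensor model values that are illustrative assumptions rather than the thesis's tuned parameters.

        import numpy as np

        L_OCC = np.log(0.7 / 0.3)     # log-odds increment for a LiDAR hit (assumed)
        L_FREE = np.log(0.3 / 0.7)    # decrement for a cell a beam passed through
        L_MIN, L_MAX = -5.0, 5.0      # clamp so stale evidence can still be revised

        def update_cell(logodds, hit):
            """Fuse one observation into a map cell's log-odds occupancy."""
            logodds += L_OCC if hit else L_FREE
            return float(np.clip(logodds, L_MIN, L_MAX))

        def occupancy(logodds):
            """Recover P(occupied) from the log-odds representation."""
            return 1.0 / (1.0 + np.exp(-logodds))

    In the feedback-loop structure described above, cells currently explained by GMFT's moving-object tracks would be excluded from this update, so the map accumulates evidence only for static obstacles.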

    Autonomous Campus Mobility Platform

    This Major Qualifying Project (MQP) centers on the development of a robotic vehicle for improving mobility. The main objective was to create an autonomous vehicle capable of carrying a person or cargo back and forth between Higgins Laboratory on the Worcester Polytechnic Institute (WPI) main campus and the Robotics Laboratory located at 85 Prescott Street, approximately 0.6 miles away. An autonomous robot was uniquely designed as a personal mobility platform that navigates its environment using an onboard navigation and sensing system.

    Social robot navigation tasks: combining machine learning techniques and social force model

    © 2021 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Social robot navigation in public spaces, buildings, or private houses is a difficult problem that is not well solved due to environmental constraints (buildings, static objects, etc.), pedestrians, and other mobile vehicles. Moreover, robots have to move in a human-aware manner; that is, robots have to navigate in such a way that people feel safe and comfortable. In this work, we present two navigation tasks, social robot navigation and robot accompaniment, which combine machine learning techniques with the Social Force Model (SFM), allowing human-aware social navigation. The robots in both approaches use data from different sensors to capture environmental knowledge as well as information on pedestrian motion. The two navigation tasks make use of the SFM, a general framework in which human motion behaviors can be expressed through a set of functions depending on the pedestrians' relative and absolute positions and velocities. Additionally, in both social navigation tasks, the robot's motion behavior is learned using machine learning techniques: in the first case using supervised deep learning and, in the second case, using Reinforcement Learning (RL). The machine learning techniques are combined with the SFM to create navigation models that behave in a social manner when the robot is navigating in an environment with pedestrians or accompanying a person. The systems were validated with a large set of simulations and real-life experiments with a new humanoid robot named IVO and with an aerial robot. The experiments show that the combination of SFM and machine learning can solve human-aware robot navigation in complex dynamic environments. This research was supported by the grant MDM-2016-0656 funded by MCIN/AEI/10.13039/501100011033, the grant ROCOTRANSP PID2019-106702RB-C21 funded by MCIN/AEI/10.13039/501100011033, and the grant CANOPIES H2020-ICT-2020-2-101016906 funded by the European Union. Peer reviewed. Postprint (published version).
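    The analytic SFM component can be sketched independently of the learned parts: a goal-attraction force plus exponential repulsions from nearby pedestrians. The gains and desired speed below are illustrative assumptions; in the paper, the robot's motion behavior is learned with deep learning or RL on top of this kind of model.

        import numpy as np

        A, B = 2.0, 1.0          # repulsion strength and range (assumed gains)
        V_DESIRED = 1.0          # desired speed toward the goal, m/s (assumed)
        TAU = 0.5                # relaxation time, s (assumed)

        def social_force(robot_pos, robot_vel, goal, pedestrians):
            """Goal attraction plus exponential repulsion from each pedestrian."""
            to_goal = goal - robot_pos
            desired_vel = V_DESIRED * to_goal / np.linalg.norm(to_goal)
            force = (desired_vel - robot_vel) / TAU       # steer toward desired velocity
            for p in pedestrians:                         # repulsion from each person
                diff = robot_pos - p
                d = max(np.linalg.norm(diff), 1e-6)       # avoid division by zero
                force += A * np.exp(-d / B) * diff / d    # decays with distance
            return force

        # e.g. social_force(np.array([0., 0.]), np.zeros(2),
        #                   np.array([5., 0.]), [np.array([2., 0.5])])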

    Milestones in autonomous driving and intelligent vehicles: survey of surveys

    Interest in autonomous driving (AD) and intelligent vehicles (IVs) is growing at a rapid pace due to their convenience, safety, and economic benefits. Although a number of surveys have reviewed research achievements in this field, they are still limited to specific tasks and lack a systematic summary and directions for future research. Here we propose a Survey of Surveys (SoS) spanning the full range of AD and IV technologies that reviews the history, summarizes the milestones, and provides perspectives, ethics, and future research directions. To our knowledge, this article is the first SoS with milestones in AD and IVs, and it constitutes our complete research work together with two other technical surveys. We anticipate that this article will bring novel and diverse insights to researchers and abecedarians, and serve as a bridge between past and future.

    Robust Multi-sensor Data Fusion for Practical Unmanned Surface Vehicles (USVs) Navigation

    The development of practical Unmanned Surface Vehicles (USVs) is attracting increasing attention, driven by their assorted military and commercial application potential. However, addressing the uncertainties present in the practical navigational sensor measurements of a USV in the maritime environment remains the main challenge of this development. This research aims to develop a multi-sensor data fusion system that autonomously provides a USV with reliable navigational information on its own position and heading, and that detects dynamic target ships in the surrounding environment, in a holistic fashion. A multi-sensor data fusion algorithm based on the Unscented Kalman Filter (UKF) has been developed to generate more accurate estimates of the USV's navigational data under practical environmental disturbances. A novel covariance matching adaptive estimation algorithm is proposed to deal with the unknown and varying sensor noise encountered in practice and thereby improve system robustness. Measures have been designed to determine system reliability numerically, to recover the USV trajectory during short-term sensor signal loss, and to autonomously detect and discard permanently malfunctioning sensors, thereby enabling tolerance of sensor faults. The performance of the algorithms has been assessed through theoretical simulations as well as with experimental data collected from a real-world USV project conducted in collaboration with Plymouth University. To increase the degree of autonomy of USVs in perceiving their surrounding environment, target detection and prediction algorithms using an Automatic Identification System (AIS) in conjunction with a marine radar have been proposed to provide full detection of multiple dynamic targets over a wider coverage range, remedying the narrow detection range and sensor uncertainties of the AIS. The detection algorithms have been validated in simulations of practical environments with water current effects. The performance of the developed multi-sensor data fusion system in providing reliable navigational data and perceiving the surrounding environment for USV navigation has been comprehensively demonstrated.
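    The covariance matching idea can be sketched as innovation-based re-estimation of the measurement noise R inside a Kalman-type filter: if recent innovations are larger than the filter predicts, R is inflated accordingly. The window length and diagonal floor below are illustrative assumptions, not the thesis's formulation.

        import numpy as np
        from collections import deque

        class AdaptiveR:
            """Innovation-based covariance matching: re-estimate the measurement
            noise R from a sliding window of filter innovations (assumed zero-mean)."""

            def __init__(self, dim, window=30):
                self.innovations = deque(maxlen=window)
                self.R = np.eye(dim)

            def update(self, innovation, HPHt):
                # R_hat ~= mean(v v^T) - H P H^T, from the standard innovation identity;
                # HPHt is the filter's predicted measurement covariance term.
                self.innovations.append(np.asarray(innovation))
                V = np.stack(self.innovations)
                C = V.T @ V / len(V)                  # sample innovation covariance
                R_hat = C - HPHt
                R_hat = 0.5 * (R_hat + R_hat.T)       # enforce symmetry
                # Floor the diagonal so R stays usable (a crude safeguard for this sketch).
                np.fill_diagonal(R_hat, np.maximum(np.diag(R_hat), 1e-4))
                self.R = R_hat
                return self.R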

    NASA Automated Rendezvous and Capture Review. A compilation of the abstracts

    This document presents a compilation of abstracts of papers solicited for presentation at the NASA Automated Rendezvous and Capture Review held in Williamsburg, VA, on November 19-21, 1991. Due to limitations on time and other considerations, not all abstracts could be presented during the review. The organizing committee determined, however, that all abstracts merited availability to all participants and represented data and information reflecting the state of the art of this technology, which should be captured in one document for future use and reference. The organizing committee appreciates the interest shown in the review and the response by the authors in submitting these abstracts.

    George C. Marshall Space Flight Center Research and Technology Report 2014

    Many of NASA's missions would not be possible were it not for the investments made in research advancements and technology development efforts. The technologies developed at Marshall Space Flight Center contribute to NASA's strategic array of missions through technology development and accomplishments. The scientists, researchers, and technologists of Marshall Space Flight Center who are working on these enabling technology efforts are facilitating NASA's ability to fulfill its ambitious goals of innovation, exploration, and discovery.

    Vision-Based Control of Unmanned Aerial Vehicles for Automated Structural Monitoring and Geo-Structural Analysis of Civil Infrastructure Systems

    The emergence of wireless sensors capable of sensing, embedded computing, and wireless communication has provided an affordable means of monitoring large-scale civil infrastructure systems with ease. To date, the majority of existing monitoring systems, including those based on wireless sensors, are stationary, with measurement nodes installed without any intention of later relocation. Many monitoring applications involving structural and geotechnical systems require a high density of sensors to provide sufficient spatial resolution for their assessment of system performance. While wireless sensors have made high-density monitoring systems possible, an alternative approach is to empower the mobility of the sensors themselves, transforming wireless sensor networks (WSNs) into mobile sensor networks (MSNs). Doing so brings many benefits, including reducing the total number of sensors needed while introducing the ability to learn from the data obtained to improve the placement of installed sensors. One approach to achieving MSNs is to integrate unmanned aerial vehicles (UAVs) into the monitoring application. UAV-based MSNs have the potential to transform current monitoring practices by improving the speed and quality of the data collected while reducing overall system costs. The efforts of this study are chiefly focused on using autonomous UAVs to deploy, operate, and reconfigure MSNs in a fully autonomous manner for field monitoring of civil infrastructure systems. This study aims to overcome two main challenges of UAV-enabled wireless monitoring: the need for high-precision localization methods for outdoor UAV navigation, and facilitating modes of direct interaction between UAVs and their built or natural environments. A vision-aided UAV positioning algorithm is first introduced to augment traditional inertial sensing techniques and enhance the ability of UAVs to accurately localize themselves in a civil infrastructure system for placement of wireless sensors. Multi-resolution fiducial markers indicating sensor placement locations are applied to the surface of a structure, serving as navigation guides and precision landing targets for a UAV carrying a wireless sensor. Visual-inertial fusion is implemented via a discrete-time Kalman filter to further increase the robustness of the relative position estimation algorithm, resulting in localization accuracies of 10 cm or smaller. The precision landing of UAVs, which enables changes in the MSN topology, is validated on a simple beam, with the UAV-based MSN collecting ambient response data for extraction of the structure's global mode shapes. The work also explores the integration of a magnetic gripper with a UAV to drop defined weights from an elevation, providing a high-energy seismic source for MSNs engaged in seismic monitoring applications. Leveraging tailored visual detection and precise position control techniques for UAVs, the work illustrates the ability of UAVs to deploy wireless geophones and to introduce an impulsive seismic source, in a repeated and autonomous fashion, for in situ shear wave velocity profiling using the spectral analysis of surface waves (SASW) method. The dispersion curve of the shear wave profile of the geotechnical system obtained with the autonomous UAV-based MSN architecture is shown to be nearly equal to that obtained with a traditional wired, manually operated SASW data collection system.
    The developments and proof-of-concept systems advanced in this study extend the body of knowledge of robot-deployed MSNs, with the hope of extending the capabilities of monitoring systems while eliminating the need for human intervention in their design and use. PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169980/1/zhh_1.pd
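    The discrete-time Kalman filter used for visual-inertial fusion can be sketched per axis with a constant-velocity model, where fiducial-marker detections supply intermittent position fixes. The time step and noise levels below are illustrative assumptions, not the study's calibrated values.

        import numpy as np

        DT = 0.02                                 # inertial-rate time step, s (assumed)
        F = np.array([[1.0, DT], [0.0, 1.0]])     # constant-velocity motion model
        H = np.array([[1.0, 0.0]])                # markers measure position only
        Q = np.diag([1e-4, 1e-3])                 # process noise (assumed)
        R = np.array([[1e-2]])                    # marker fix noise, (0.1 m)^2 (assumed)

        def kf_step(x, P, z=None):
            """One predict step; update only when a marker fix z is available."""
            x, P = F @ x, F @ P @ F.T + Q          # predict with the motion model
            if z is not None:                      # a fiducial marker was detected
                y = z - H @ x                      # innovation
                S = H @ P @ H.T + R                # innovation covariance
                K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
                x = x + K @ y
                P = (np.eye(2) - K @ H) @ P
            return x, P

    Predictions run at inertial rate, while the update branch fires only on frames where a marker is detected, which is what keeps the relative position estimate robust to intermittent visual fixes.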

    Proceedings of the International Micro Air Vehicles Conference and Flight Competition 2017 (IMAV 2017)

    The IMAV 2017 conference was held at ISAE-SUPAERO, Toulouse, France, from September 18 to 21, 2017. More than 250 participants from 30 different countries presented their latest research activities in the field of drones. 38 papers were presented during the conference, covering topics such as Aerodynamics, Aeroacoustics, Propulsion, Autopilots, Sensors, Communication systems, Mission planning techniques, Artificial Intelligence, and Human-machine cooperation as applied to drones.