240 research outputs found
3D Vision-based Perception and Modelling techniques for Intelligent Ground Vehicles
This work proposes an innovative real-time stereo vision system for intelligent/autonomous ground vehicles that provides a full and reliable 3D reconstruction of the terrain and obstacles. The terrain is modelled with rational B-spline surfaces obtained by iteratively re-weighted least-squares fitting and equalization. The cloud of 3D points, generated by processing the Disparity Space Image (DSI), is sampled into a 2.5D grid map; the grid points are then iteratively fitted with rational B-spline surfaces whose control-point patterns and degrees depend on traversability considerations. The resulting surface also yields a segmentation of the initial 3D points into terrain inliers and outliers. As a final contribution, a new obstacle detection approach is presented, combined with the terrain estimation system, to model stationary and moving objects in the most challenging scenarios
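The iteratively re-weighted fitting step can be illustrated with a much simpler surrogate: a hedged sketch that fits a plane (standing in for the rational B-spline surface) to a 3D point cloud by iteratively re-weighted least squares, down-weighting large residuals and segmenting terrain inliers from outliers in the spirit of the pipeline described above. All names and thresholds are illustrative, not the thesis implementation.

```python
def solve3(A, b):
    # Solve a 3x3 linear system by Gaussian elimination with partial pivoting.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def irls_plane(points, iters=10, k=0.5):
    # points: list of (x, y, z). Fits z = a*x + b*y + c by iteratively
    # re-weighted least squares and returns the plane plus an inlier mask.
    w = [1.0] * len(points)
    a = b = c = 0.0
    for _ in range(iters):
        # Weighted normal equations for the three plane parameters.
        AtA = [[0.0] * 3 for _ in range(3)]
        Atb = [0.0] * 3
        for (x, y, z), wi in zip(points, w):
            row = (x, y, 1.0)
            for i in range(3):
                for j in range(3):
                    AtA[i][j] += wi * row[i] * row[j]
                Atb[i] += wi * row[i] * z
        a, b, c = solve3(AtA, Atb)
        # Re-weight: Huber-style down-weighting of large residuals.
        w = [min(1.0, k / max(abs(z - (a * x + b * y + c)), 1e-9))
             for (x, y, z) in points]
    inlier = [abs(z - (a * x + b * y + c)) < k for (x, y, z) in points]
    return (a, b, c), inlier
```

On flat terrain with one spurious obstacle point, the fit converges to the ground plane and flags the obstacle as an outlier, which is exactly the terrain inlier/outlier segmentation role the surface plays in the abstract.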
Collapsible Cubes: Removing Overhangs from 3D Point Clouds to Build Local Navigable Elevation Maps
Elevation maps offer a compact 2.5-dimensional model of the terrain surface for navigation in field mobile robotics. However, building these maps from raw 3D point clouds containing overhangs, such as tree canopy or tunnels, can produce useless results. This paper proposes a simple processing of a ground-based point cloud that identifies and removes overhang points that do not constitute an obstacle for navigation, while keeping vertical structures such as walls or tree trunks. The procedure uses efficient data structures to collapse unsupported 3D cubes down to the ground. The method has been successfully applied to 3D laser scans taken from a mobile robot in outdoor environments to build local elevation maps for navigation. Computation times show an improvement over a previous point-based solution to this problem.
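A minimal sketch of the collapsing idea, under the simplifying assumptions of axis-aligned cubes and the lowest occupied cube of each column acting as ground support (names illustrative, not the paper's implementation): cubes contiguous with the ground survive, cubes above a vertical gap are overhangs and are dropped.

```python
from collections import defaultdict

def remove_overhangs(points, cell=0.5):
    # Bin points into vertical columns of cubes; keep only the cubes that
    # form a contiguous stack from the lowest occupied cube upward. Canopy
    # and tunnel roofs sit above a gap and are removed; walls and trunks
    # form contiguous stacks and survive.
    cols = defaultdict(set)
    for x, y, z in points:
        cols[(int(x // cell), int(y // cell))].add(int(z // cell))
    keep_cubes = {}
    for key, levels in cols.items():
        k = min(levels)
        kept = set()
        while k in levels:          # contiguous stack from the ground up
            kept.add(k)
            k += 1
        keep_cubes[key] = kept
    kept_points = [(x, y, z) for x, y, z in points
                   if int(z // cell) in keep_cubes[(int(x // cell), int(y // cell))]]
    # Local elevation map: top of the supported stack per column.
    elevation = {key: (max(kept) + 1) * cell for key, kept in keep_cubes.items()}
    return kept_points, elevation
```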
Watch Your Step! Terrain Traversability for Robot Control
Watch your step! Or perhaps, watch your wheels. Whatever the robot is, if it puts its feet, tracks, or wheels in the wrong place, it might get hurt; and as robots are quickly moving from structured and completely known environments towards uncertain and unknown terrain, surface assessment becomes an essential requirement. As a result, future mobile robots cannot neglect evaluating the terrain's structure according to their driving capabilities. With the objective of filling this gap, this study focuses on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. Giving an overview of the theory related to this topic, the investigation covers not only hardware, such as visual sensors or laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. During the discussion, a wide number of examples and methodologies are presented according to different tools and sensors, including the description of a recent method of terrain assessment using normal vector analysis. Indeed, normal vectors have demonstrated great potential in the field of terrain irregularity assessment in both on‐road and off‐road environments
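The normal-vector criterion mentioned at the end can be sketched as follows (a hedged illustration, not the survey's specific method): the local surface normal comes from a cross product over a terrain patch, and the patch counts as traversable when the normal deviates from the vertical by less than the robot's slope limit.

```python
import math

def normal(p, q, r):
    # Unit normal of the plane through three 3D points via cross product.
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    m = math.sqrt(sum(c * c for c in n))
    return [c / m for c in n]

def traversable(p, q, r, max_slope_deg=25.0):
    # A patch is traversable if its normal deviates from the vertical
    # (z axis) by no more than the robot's slope limit (assumed 25 degrees).
    nz = abs(normal(p, q, r)[2])
    return math.degrees(math.acos(min(1.0, nz))) <= max_slope_deg
```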
Unifying terrain awareness for the visually impaired through real-time semantic segmentation.
Navigational assistance aims to help visually-impaired people move through the environment safely and independently. This topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates the qualified accuracy of the approach over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework
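The unification argument can be sketched: one pass over a semantic label map yields both a traversability signal and a hazard signal that would otherwise require separate detectors. The class names and the row-as-proximity heuristic below are illustrative assumptions, not the paper's network or classes.

```python
# Illustrative class sets (assumed, not the paper's label taxonomy).
TRAVERSABLE = {"sidewalk", "terrain"}
HAZARD = {"water", "stairs", "vehicle", "pedestrian"}

def terrain_awareness(label_map):
    # label_map: 2D list of class-name strings (rows = image rows).
    # One pass produces both unified outputs the abstract describes.
    walkable = [[lab in TRAVERSABLE for lab in row] for row in label_map]
    # Hazard proximity heuristic: lower image rows are closer to the camera,
    # so the largest row index containing a hazard is the most urgent one.
    nearest = max((i for i, row in enumerate(label_map)
                   if any(lab in HAZARD for lab in row)), default=None)
    return walkable, nearest
```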
Semantic 3D Grid Maps for Autonomous Driving
Maps play a key role in the rapidly developing area of autonomous driving. We survey the literature for different map representations and find that, while the world is three-dimensional, it is common to rely on 2D map representations in order to meet real-time constraints. We believe that high levels of situation awareness require a 3D representation as well as the inclusion of semantic information. We demonstrate that our recently presented hierarchical 3D grid mapping framework UFOMap meets the real-time constraints. Furthermore, we show how it can be used to efficiently support more complex functions such as calculating the occluded parts of space and accumulating the output from a semantic segmentation network.
Comment: Submitted, accepted and presented at the 25th IEEE International Conference on Intelligent Transportation Systems (IEEE ITSC 2022)
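The benefit of a hierarchical 3D grid can be sketched with a toy structure (this is not the UFOMap API, only an illustration of the idea): a node at depth d summarizes a block of 2^d leaf voxels per axis, so coarse occupancy queries can skip entire subvolumes in O(1).

```python
class HierGrid:
    # Toy hierarchical occupancy grid: leaves are voxels at depth 0; a node
    # at depth d covers a (2^d)^3 block of leaf voxels, indexed by bit shift.
    def __init__(self, depth=4):
        self.depth = depth
        self.occupied = set()           # (depth, x, y, z) indices per level

    def insert(self, x, y, z):
        # Mark a leaf voxel occupied and propagate up the hierarchy.
        for d in range(self.depth + 1):
            self.occupied.add((d, x >> d, y >> d, z >> d))

    def any_occupied(self, d, x, y, z):
        # O(1) query: is anything occupied inside this depth-d block?
        return (d, x, y, z) in self.occupied
```

A negative answer at a coarse depth rules out the whole subvolume at once, which is what makes operations like occlusion computation cheap on hierarchical maps.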
Hierarchical Off-Road Path Planning and Its Validation Using a Scaled Autonomous Car
In the last few years, while a lot of research effort has been spent on autonomous vehicle navigation, primarily focused on on-road vehicles, off-road path planning still presents new challenges. Path planning for an autonomous ground vehicle over a large horizon in an unstructured environment, even when high-resolution a-priori information is available, remains very much an open problem due to the computations involved. Localization and control of an autonomous vehicle, and how the control algorithms interact with the path planner, is a complex task. The first part of this research details the development of a path decision support tool for off-road applications, implementing a novel hierarchical path planning framework and verifying it in a simulation environment. To mimic real-world issues such as communication delay, sensor noise and modeling error, it was important to validate the framework in a real environment. The second part of the research discusses the development of a scaled autonomous car as part of a real experimental environment, which offers a compromise in cost and implementation complexity compared with a full-scale car. The third part of the research explains the development of a vehicle-in-loop (VIL) environment, with demo examples to illustrate the utility of such a platform. Our proposed path planning algorithm mitigates the high computational cost of finding the optimal path over a large-scale, high-resolution map. A global path planner runs on a centralized server and uses Dynamic Programming (DP) with coarse information to create an optimal cost grid. A local path planner based on Model Predictive Control (MPC), running on board, uses the cost map along with high-resolution information (available via various sensors as well as V2V communication) to generate the locally optimal path. This approach ensures the MPC follows a globally optimal path while remaining locally optimal.
A central server efficiently creates and updates route-critical information available via vehicle-to-infrastructure (V2I) communication, and uses it to update the prescribed global cost grid. For localization of the scaled car, a three-axis inertial measurement unit (IMU), wheel encoders, a global positioning system (GPS) unit and a mono-camera are mounted. IMU drift is one of the major issues we addressed in this research, alongside developing a low-level controller that helped implement the MPC in a constrained computational environment. Using a camera and a tire-edge detection algorithm, we developed an online steering-angle measurement package as well as a steering-angle estimation algorithm to be used when computational resources are low. We wanted to study the impact of connectivity on a fleet of vehicles running on off-road terrain. Running all vehicles for real is costly and time-consuming, and some scenarios are difficult to recreate in reality and require simulation. We therefore developed a vehicle-in-loop (VIL) platform using a VIL simulator, a central server and the real scaled car to combine the advantages of both real and simulated environments. As a demo example illustrating the utility of the VIL platform, we simulated an animal-crossing scenario and analyzed how our obstacle avoidance algorithm performs under different conditions. In the future this will help us analyze the impact of connectivity on platoons moving in off-road terrain. For the vehicle-in-loop environment, we used the JavaScript Object Notation (JSON) data format for information exchange over the User Datagram Protocol (UDP) to implement Vehicle-to-Vehicle (V2V) communication, and a MySQL server for Vehicle-to-Infrastructure (V2I) communication
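The two planning layers can be sketched as a toy version (hedged: Dijkstra stands in for the server-side DP cost-grid construction, and a greedy one-step descent stands in for the on-board MPC; the grid and costs are invented):

```python
import heapq

def cost_to_go(grid, goal):
    # Global layer: Dijkstra on a coarse 2D cost grid gives an optimal
    # cost-to-go from every cell to the goal.
    R, C = len(grid), len(grid[0])
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < R and 0 <= nc < C:
                nd = d + grid[nr][nc]      # cost of entering the cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def local_step(dist, cell):
    # Local layer (greedy stand-in for the MPC): move to the neighbour
    # with the lowest cost-to-go, so local decisions follow the global grid.
    r, c = cell
    nbrs = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return min((n for n in nbrs if n in dist), key=dist.get)
```

Descending the cost-to-go field makes the local planner's steps consistent with the global optimum, which is the coordination property the hierarchical framework relies on.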
Enhancing 3D Autonomous Navigation Through Obstacle Fields: Homogeneous Localisation and Mapping, with Obstacle-Aware Trajectory Optimisation
Small flying robots have numerous potential applications, from quadrotors for search and rescue, infrastructure inspection and package delivery to free-flying satellites for assistance activities inside a space station. To enable these applications, a key challenge is autonomous navigation in 3D, near obstacles, on a power-, mass- and computation-constrained platform. This challenge requires a robot to perform localisation, mapping, dynamics-aware trajectory planning and control. The current state-of-the-art uses separate algorithms for each component. Here, the aim is for a more homogeneous approach in the search for improved efficiencies and capabilities. First, an algorithm is described that performs Simultaneous Localisation And Mapping (SLAM) with a physical 3D map representation that can also be used to represent obstacles for trajectory planning: Non-Uniform Rational B-Spline (NURBS) surfaces. Termed NURBSLAM, this algorithm is shown to combine the typically separate tasks of localisation and obstacle mapping. Second, a trajectory optimisation algorithm is presented that produces dynamically-optimal trajectories with direct consideration of obstacles, providing a middle ground between path planners and trajectory smoothers. Called the Admissible Subspace TRajectory Optimiser (ASTRO), the algorithm can produce trajectories that are easier to track than the state-of-the-art for flight near obstacles, as shown in flight tests with quadrotors. For quadrotors to track trajectories, a critical component is the differential flatness transformation that links position and attitude controllers. Existing singularities in this transformation are analysed, solutions are proposed and then demonstrated in flight tests. Finally, a combined system of NURBSLAM and ASTRO is brought together and tested against the state-of-the-art in a novel simulation environment to prove the concept that a single 3D representation can be used for localisation, mapping, and planning
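The curve family shared by the NURBS map representation and smooth trajectories can be illustrated with a uniform cubic B-spline segment (a hedged sketch only; true NURBS additionally carry control-point weights and general knot vectors):

```python
def cubic_bspline(ctrl, t):
    # Evaluate one uniform cubic B-spline segment at parameter t in [0, 1].
    # ctrl: four consecutive control points, each a tuple of coordinates.
    b = [(1 - t) ** 3 / 6.0,
         (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
         (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
         t ** 3 / 6.0]
    return tuple(sum(b[i] * p[d] for i, p in enumerate(ctrl))
                 for d in range(len(ctrl[0])))
```

Chaining segments over a sliding window of control points gives the C2-continuous curves that make such representations attractive for both surface modelling and dynamically smooth trajectories.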
Traffic Scene Perception for Automated Driving with Top-View Grid Maps
An automated vehicle must make safe, sensible and fast decisions based on its environment.
This requires an accurate and computationally efficient model of the traffic environment.
This environment model should fuse and filter measurements from different sensors and provide them to subsequent subsystems as compact but expressive information.
This work addresses modelling the traffic scene on the basis of top-view grid maps.
Compared with other environment models, they enable an early fusion of range measurements from different sources at low computational cost, as well as an explicit modelling of free space.
After presenting a method for ground-surface estimation, which forms the basis of the top-view modelling, methods for occupancy and elevation mapping on grid maps from multiple noisy, partly contradictory or missing range measurements are discussed.
On the resulting sensor-independent representation, models for detecting traffic participants and for estimating scene flow, odometry and tracking features are then investigated.
Experiments on publicly available datasets and a real vehicle show that top-view grid maps can be estimated from on-board LiDAR sensors and that safety-critical environment information such as observability and traversability can be reliably derived from them.
Finally, traffic participants are determined as oriented bounding boxes with semantic classes, velocities and tracking features from a joint model for object detection and flow estimation based on the top-view grid maps
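The occupancy update over noisy, partly contradictory measurements can be sketched with the standard additive log-odds rule used in grid mapping (an illustrative sketch, not the thesis implementation; the hit/miss probabilities are assumed values):

```python
import math

def logodds(p):
    # Log-odds of a probability.
    return math.log(p / (1 - p))

def fuse(cell_logodds, measurements, p_hit=0.7, p_miss=0.4):
    # Each hit/miss observation updates the cell's log-odds additively,
    # which is the Bayesian update under an independence assumption; a few
    # contradictory readings shift the estimate without flipping it.
    for hit in measurements:
        cell_logodds += logodds(p_hit if hit else p_miss)
    return cell_logodds

def prob(l):
    # Back from log-odds to occupancy probability.
    return 1.0 / (1.0 + math.exp(-l))
```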
Teaching a Robot to Drive - A Skill Learning Inspired Approach
Robots can make our lives easier by taking over tasks that are unpleasant or even dangerous for us. To use them efficiently, they should be autonomous, adaptive and easy to instruct. Traditional 'white-box' approaches in robotics are based on the engineer's understanding of the underlying physical structure of the given problem. From this understanding, the engineer can find a possible solution and implement it in the system. This approach is very powerful but nevertheless limited. Its most important drawback is that systems built this way depend on predefined knowledge, so every new behaviour requires the same expensive development cycle. In contrast, humans and some other animals are not restricted to their innate behaviours but can acquire numerous further skills during their lifetime. Moreover, they do not seem to require detailed knowledge of the (physical) workings of a given task to do so. These properties are also desirable for artificial systems. In this dissertation we therefore investigate the hypothesis that principles of human skill learning can lead to alternative methods for adaptive system control. We examine this hypothesis on the task of autonomous driving, which is a classic problem of system control and offers the potential for manifold applications. The concrete task is learning a basic, anticipatory driving behaviour from a human teacher. After highlighting relevant aspects of human skill learning and introducing the concepts of 'internal models' and 'chunking', we describe their application to the given task. We realise chunking by means of a database in which examples of human driving behaviour are stored and linked to descriptions of the visually perceived road trajectory. This is first implemented in a laboratory environment using a robot and later, in the course of the European DRIVSCO project, transferred to a real car. We also investigate the learning of visual 'forward models', which belong to the internal models, and their effect on the robot's control performance. The main result of this interdisciplinary and application-oriented work is a system that can generate appropriate action plans in response to the visually perceived road trajectory without requiring metric information. The predicted actions in the laboratory environment are steering and speed; for the real car, steering and acceleration, although the system's predictive capacity for the latter is limited. That is, the robot learns autonomous driving from a human teacher, and the car learns to predict human driving behaviour. The latter was successfully demonstrated during the project review by an international team of experts. The outcome of this work is relevant for applications in robot control, particularly in the area of intelligent driver assistance systems
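The chunking database described above can be sketched as nearest-neighbour retrieval (structure and values are illustrative assumptions, not the dissertation's representation): stored examples pair a visual road descriptor with the observed human action, and at runtime the closest stored descriptor selects the action plan, with no metric reconstruction of the road required.

```python
def nearest_action(database, descriptor):
    # database: list of (road_descriptor, action) examples recorded from a
    # human teacher; descriptor: the currently perceived road descriptor.
    # Returns the action linked to the most similar stored descriptor.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    desc, action = min(database, key=lambda ex: dist(ex[0], descriptor))
    return action
```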
- …