Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
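As a concrete point of reference for the families of methods the survey categorizes, the simplest physics-based predictor is a constant-velocity model. The sketch below is purely illustrative (all names and parameters are invented, not taken from the survey): it extrapolates an observed 2D track of a dynamic agent.

```python
import numpy as np

def constant_velocity_predict(track, horizon, dt=0.4):
    """Extrapolate a 2D position track under a constant-velocity assumption.

    track: (T, 2) array of observed positions, oldest first.
    Returns a (horizon, 2) array of predicted future positions.
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt          # last observed step
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return track[-1] + steps * velocity * dt

# A pedestrian walking in a straight line at 1 m/s along x:
observed = [[0.0, 0.0], [0.4, 0.0], [0.8, 0.0]]
future = constant_velocity_predict(observed, horizon=3)
```

Learning-based predictors surveyed in the paper replace this hand-crafted motion model with models fitted to data, but constant velocity remains a common evaluation baseline.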
Behavioural strategy for indoor mobile robot navigation in dynamic environments
PhD Thesis
Development of behavioural strategies for indoor mobile navigation has become a challenging
and practical issue in a cluttered indoor environment, such as a hospital or factory, where
there are many static and moving objects, including humans and other robots, all of which
are trying to complete their own specific tasks; some objects may be moving in a similar direction
to the robot, whereas others may be moving in the opposite direction. The key requirement
for any mobile robot is to avoid colliding with any object which may prevent it from reaching
its goal, or as a consequence bring harm to any individual within its workspace. This challenge
is further complicated by unobserved objects suddenly appearing in the robot's path,
particularly when the robot crosses a corridor or an open doorway. Therefore the mobile
robot must be able to anticipate such scenarios and manoeuvre quickly to avoid collisions.
In this project, a hybrid control architecture has been designed to navigate within dynamic
environments. The control system includes three levels namely: deliberative, intermediate
and reactive, which work together to achieve short, fast and safe navigation. The deliberative
level creates a short and safe path from the current position of the mobile robot to its goal
using the wavefront algorithm, estimates the current location of the mobile robot, and extracts
the region from which unobserved objects may appear. The intermediate level links the
deliberative and reactive levels and includes several behaviours for implementing
the global path in such a way as to avoid any collision.
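The wavefront algorithm used at the deliberative level can be sketched as a breadth-first expansion of distance values from the goal over an occupancy grid; the path is then recovered by following strictly decreasing values from the robot's position. The following is a minimal illustration, not the thesis's implementation:

```python
from collections import deque

def wavefront(grid, goal):
    """Breadth-first wavefront expansion from the goal over a 4-connected
    occupancy grid (0 = free, 1 = obstacle). Returns a distance map."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def extract_path(dist, start):
    """Follow decreasing distance values from the start down to the goal."""
    path = [start]
    while dist[path[-1][0]][path[-1][1]] != 0:
        r, c = path[-1]
        neighbours = [(r + dr, c + dc) for dr, dc in
                      ((1, 0), (-1, 0), (0, 1), (0, -1))]
        path.append(min((n for n in neighbours
                         if 0 <= n[0] < len(dist) and 0 <= n[1] < len(dist[0])
                         and dist[n[0]][n[1]] is not None),
                        key=lambda n: dist[n[0]][n[1]]))
    return path

# A 3x3 room with one obstacle in the centre.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
dist = wavefront(grid, goal=(2, 2))
path = extract_path(dist, start=(0, 0))
```

Because the expansion is breadth-first, the recovered path is one of the shortest grid paths, which matches the "short and safe path" role the deliberative level plays above.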
In avoiding dynamic obstacles, the controller has to identify and extract obstacles from the
sensor data, estimate their speeds, and then regulate its own speed and direction to minimize the
collision risk and maximize the speed towards the goal. The velocity obstacle (VO) approach is
considered a simple method for avoiding dynamic obstacles, whilst the collision
cone principle is used to detect the collision situation between two circular-shaped objects.
However, the VO approach has two challenges when applied in indoor environments. The
first challenge is extraction of collision cones of non-circular objects from sensor data, in
which applying circle-fitting methods generally produces large and inaccurate collision cones,
especially for line-shaped obstacles such as walls. The second challenge is that the mobile
robot sometimes cannot move towards its goal because all of its velocities towards the goal are located
within collision cones. In this project, a method has been demonstrated to extract the collision
cones of circular and non-circular objects using a laser sensor, where the obstacle size
and the collision time are considered to weight the robot's velocities. In addition, the principle
of the virtual obstacle was proposed to minimize the collision risk with unobserved moving
obstacles. The simulation and experiments using the proposed control system on a Pioneer
mobile robot showed that the mobile robot can successfully avoid static and dynamic obstacles.
Furthermore, the mobile robot was able to reach its target within an indoor environment
without causing any collision or missing the target.
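For reference, the collision cone principle used above for two circular objects reduces to a simple geometric test: the robot is on a collision course if its velocity relative to the obstacle points inside the cone of tangent lines to the obstacle circle enlarged by the robot's radius. A minimal sketch with invented values, not the thesis's implementation:

```python
import math

def in_collision_cone(p_robot, v_robot, p_obs, v_obs, r_robot, r_obs):
    """Collision-cone test for two circular objects."""
    rx, ry = p_obs[0] - p_robot[0], p_obs[1] - p_robot[1]   # relative position
    vx, vy = v_robot[0] - v_obs[0], v_robot[1] - v_obs[1]   # relative velocity
    dist = math.hypot(rx, ry)
    combined = r_robot + r_obs
    if dist <= combined:
        return True                     # already overlapping
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return False                    # no relative motion, no collision
    # Angle between the relative velocity and the line of sight to the obstacle.
    cos_angle = (rx * vx + ry * vy) / (dist * speed)
    cos_angle = max(-1.0, min(1.0, cos_angle))
    angle = math.acos(cos_angle)
    half_cone = math.asin(combined / dist)   # half-angle of the tangent cone
    return angle < half_cone

# Head-on approach towards a static obstacle 5 m ahead.
head_on = in_collision_cone((0, 0), (1, 0), (5, 0), (0, 0), 0.5, 0.5)
# Velocity aimed well clear of the obstacle.
clear = in_collision_cone((0, 0), (0, 1), (5, 0), (0, 0), 0.5, 0.5)
```

This circular-object test is exactly what becomes inaccurate for line-shaped obstacles: fitting a single enclosing circle to a wall inflates `combined` and hence the cone, which motivates the thesis's laser-based cone extraction.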
Cooperative Material Handling by Human and Robotic Agents: Module Development and System Synthesis
In this paper we present the results of a collaborative effort to design and implement a system for cooperative material handling by a small team of human and robotic agents in an unstructured indoor environment. Our approach makes fundamental use of human agents' expertise for aspects of task planning, task monitoring, and error recovery. Our system is neither fully autonomous nor fully teleoperated. It is designed to make effective use of human abilities within the present state of the art of autonomous systems. It is designed to allow for and promote cooperative interaction between distributed agents with various capabilities and resources. Our robotic agents refer to systems which are each equipped with at least one sensing modality and which possess some capability for self-orientation and/or mobility. Our robotic agents are not required to be homogeneous with respect to either capabilities or function. Our research stresses both paradigms and testbed experimentation. Theoretical issues include the requisite coordination principles and techniques which are fundamental to the basic functioning of such a cooperative multi-agent system. We have constructed a testbed facility for experimenting with distributed multi-agent architectures. The required modular components of this testbed are currently operational and have been tested individually. Our current research focuses on the integration of agents in a scenario for cooperative material handling.
Predictive Maneuver Planning and Control of an Autonomous Vehicle in Multi-Vehicle Traffic with Observation Uncertainty
Autonomous vehicle technology is a promising development for improving the safety, efficiency and environmental impact of on-road transportation systems. However, the task of guiding an autonomous vehicle by rapidly and systematically accommodating the plethora of changing constraints, e.g., avoiding multiple stationary and moving obstacles, obeying traffic rules and signals, and handling uncertain state observations due to sensor imperfections, remains a major challenge. This dissertation attempts to address this challenge by designing a robust and efficient predictive motion planning framework that can generate the appropriate vehicle maneuvers (selecting and tracking specific lanes, and related speed references) as well as the constituent motion trajectories while considering the differential vehicle kinematics of the controlled vehicle and other constraints of operating in public traffic. The main framework combines a finite state machine (FSM)-based maneuver decision module with a model predictive control (MPC)-based trajectory planner. Based on the prediction of the traffic environment, reference speeds are assigned to each lane in accordance with the detection of objects during the measurement update. The lane selection decisions themselves are then incorporated within the MPC optimization. The on-line maneuver/motion planning effort for autonomous vehicles in public traffic is a non-convex problem due to the multiple collision avoidance constraints with overlapping areas, lane boundaries, and nonlinear vehicle-road dynamics constraints. This dissertation proposes and derives remedies for these challenges within the planning framework to improve the feasibility and optimality of the solution. Specifically, it introduces vehicle grouping notions and derives conservative and smooth algebraic models to describe the overlapped space of several individual infeasible spaces and help prevent the optimization from falling into undesired local minima.
Furthermore, in certain situations, a forced objective selection strategy is needed and adopted to help the optimization jump out of local minima. The dissertation also considers the stochastic uncertainties prevalent in dynamic and complex traffic and incorporates them within the predictive planning and control framework. To this end, Bayesian filters are implemented to estimate the uncertainties in object motions and then propagate them into the prediction horizon. Then, a pair-wise probabilistic collision condition is defined for objects with non-negligible geometric shapes/sizes, and computationally efficient, conservative forms are derived to analytically approximate the involved multi-variate integrals. The probabilistic collision evaluation is then applied within a vehicle grouping algorithm to cluster the object vehicles with closeness in positions and speeds, and eventually within the stochastic predictive maneuver planner framework to tighten the chance constraints given a deterministic confidence margin. It is argued that these steps make the planning problem tractable for real-time implementation on autonomously controlled vehicles.
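The pair-wise probabilistic collision condition described above can be illustrated with a Monte Carlo stand-in for the dissertation's analytic conservative approximations: since the difference of two independent Gaussian vehicle positions is again Gaussian, one can sample that difference and count how often the inter-vehicle distance falls below a safety threshold. All numbers below are invented for illustration.

```python
import numpy as np

def collision_probability(mu_a, cov_a, mu_b, cov_b, safe_dist,
                          n=100_000, seed=0):
    """Monte Carlo estimate of P(||x_a - x_b|| < safe_dist) for two
    vehicles with Gaussian-distributed positions."""
    rng = np.random.default_rng(seed)
    # The difference of two independent Gaussians is Gaussian:
    mu_d = np.asarray(mu_a, dtype=float) - np.asarray(mu_b, dtype=float)
    cov_d = np.asarray(cov_a, dtype=float) + np.asarray(cov_b, dtype=float)
    samples = rng.multivariate_normal(mu_d, cov_d, size=n)
    return float(np.mean(np.linalg.norm(samples, axis=1) < safe_dist))

# Two vehicles 10 m apart, each with 0.5 m position std: negligible risk.
p_far = collision_probability([0, 0], 0.25 * np.eye(2),
                              [10, 0], 0.25 * np.eye(2), safe_dist=3.0)
# Same uncertainty, only 2 m apart: high risk.
p_near = collision_probability([0, 0], 0.25 * np.eye(2),
                               [2, 0], 0.25 * np.eye(2), safe_dist=3.0)
```

An analytic conservative bound, as derived in the dissertation, replaces the sampling with a closed-form over-approximation so the chance constraint can be evaluated inside the real-time optimization.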
Visual recognition of bridges by using stereo cameras on trains
Recognition of patterns and objects in mobile systems continues to be a focus of intensive research, with many applications being enhanced by integrating environment-related information. This paper presents a practical technique for detecting and recognizing bridges from a train using a stereo camera which provides depth and grayscale images. The algorithm has been applied to a train system, where object detection combined with a given map of an area is used to improve localization. The approach is based on the detection of primitive features, including edges and corners, in the depth image. The pairwise spatial relations between the features are then modeled by a graph, so that classification and detection can be performed within a probabilistic Markov Random Field framework. The algorithm has been tested on the real-life datasets of the Rail Collision Avoidance System (RCAS) project. The presented results demonstrate the applicability of the framework for detection of objects by exploiting geometrical appearance constraints.
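The Markov Random Field step above can be illustrated in miniature: each detected feature carries a unary cost per label, each graph edge a pairwise compatibility cost, and inference selects the labelling of minimum total energy. The toy sketch below uses brute-force enumeration (feasible only for small feature graphs) and invented costs; it is not the paper's inference method.

```python
import itertools

def mrf_energy(labels, unary, pairwise, edges):
    """Energy of a labelling in a pairwise MRF: per-node unary costs
    plus per-edge compatibility costs."""
    energy = sum(unary[i][labels[i]] for i in range(len(labels)))
    energy += sum(pairwise[labels[i]][labels[j]] for i, j in edges)
    return energy

def map_labelling(unary, pairwise, edges, n_labels):
    """Brute-force MAP inference: enumerate all labellings."""
    n = len(unary)
    return min(itertools.product(range(n_labels), repeat=n),
               key=lambda lab: mrf_energy(lab, unary, pairwise, edges))

# Three features, two labels (0 = background, 1 = bridge part).
unary = [[0.2, 1.0], [1.0, 0.1], [0.9, 0.2]]   # per-node label costs
pairwise = [[0.0, 0.5], [0.5, 0.0]]            # neighbours should agree
edges = [(0, 1), (1, 2)]                       # chain over the feature graph
best = map_labelling(unary, pairwise, edges, n_labels=2)
```

The pairwise term is what lets weak individual feature evidence be overridden by agreement with confident neighbours, which is the point of modelling spatial relations as a graph.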
Trajectory generation for lane-change maneuver of autonomous vehicles
The lane-change maneuver is one of the most thoroughly investigated automatic driving operations; it can be used by an autonomous self-driving vehicle as a primitive for performing more complex operations like merging, entering/exiting highways or overtaking another vehicle. This thesis focuses on two coherent problems that are associated with trajectory generation for lane-change maneuvers of autonomous vehicles in a highway scenario: (i) effective velocity estimation of neighboring vehicles under different road scenarios involving linear and curvilinear motion of the vehicles, and (ii) trajectory generation based on the estimated velocities of neighboring vehicles for safe operation of self-driving cars during lane-change maneuvers. We first propose a two-stage, interacting-multiple-model-based estimator to perform multi-target tracking of neighboring vehicles in a lane-changing scenario. The first stage deals with adaptive-window-based turn-rate estimation for tracking maneuvering target vehicles using a Kalman filter. In the second stage, variable-structure models with the updated estimated turn rate are utilized to perform data association followed by velocity estimation. Based on the estimated velocities of neighboring vehicles, piecewise Bezier-curve-based methods that minimize the safety/collision risk involved and maximize ride comfort have been developed for the generation of desired trajectories for lane-change maneuvers. The proposed velocity-estimation and trajectory-generation algorithms have been validated experimentally using Pioneer 3-DX mobile robots in a simulated lane-change environment as well as by computer simulations.
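A piecewise Bezier trajectory of the kind described can be illustrated with a single cubic segment: placing the inner control points along the current and target lane directions keeps the vehicle's heading tangent to both lanes at the endpoints. A minimal sketch with invented geometry (3.5 m lane offset over 30 m of travel), not the thesis's trajectory generator:

```python
import numpy as np

def bezier(control_points, n=50):
    """Evaluate a cubic Bezier curve at n parameter values via the
    Bernstein basis."""
    p = np.asarray(control_points, dtype=float)   # shape (4, 2)
    t = np.linspace(0.0, 1.0, n).reshape(-1, 1)
    return ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
            + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])

# Lane change from y = 0 to y = 3.5 m over 30 m of travel; the inner
# control points lie along each lane so the heading is tangent to both.
path = bezier([(0.0, 0.0), (10.0, 0.0), (20.0, 3.5), (30.0, 3.5)])
```

The free parameters (inner control point placement, segment count) are exactly what a planner like the one above tunes against the estimated neighbor velocities to trade off collision risk and ride comfort.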
Lidar-based scene understanding for autonomous driving using deep learning
With over 1.35 million fatalities related to traffic accidents worldwide, autonomous driving was foreseen at the beginning of this century as a feasible solution to improve safety on our roads. It is also expected to disrupt our transportation paradigm, reducing congestion, pollution, and costs, while increasing the accessibility, efficiency, and reliability of transportation for both people and goods. Although some advances have gradually been transferred into commercial vehicles in the form of Advanced Driving Assistance Systems (ADAS) such as adaptive cruise control, blind spot detection or automatic parking, the technology is far from mature. A full understanding of the scene is needed so that vehicles are aware of their surroundings, knowing the existing elements of the scene as well as their motion, intentions and interactions.
In this PhD dissertation, we explore new approaches for understanding driving scenes from 3D LiDAR point clouds by using Deep Learning methods. To this end, in Part I we analyze the scene from a static perspective using independent frames to detect the neighboring vehicles. Next, in Part II we develop new ways for understanding the dynamics of the scene. Finally, in Part III we apply all the developed methods to accomplish higher level challenges such as segmenting moving obstacles while obtaining their rigid motion vector over the ground.
More specifically, in Chapter 2 we develop a 3D vehicle detection pipeline based on a multi-branch deep-learning architecture and propose a Front (FR-V) and a Bird’s Eye view (BE-V) as 2D representations of the 3D point cloud to serve as input for training our models. Later on, in Chapter 3 we apply and further test this method on two real uses-cases, for pre-filtering moving
obstacles while creating maps to better localize ourselves on subsequent days, as well as for vehicle tracking. From the dynamic perspective, in Chapter 4 we learn from the 3D point cloud a novel dynamic feature that resembles optical flow from RGB images. For that, we develop a new approach to leverage RGB optical flow as pseudo ground truth for training purposes while allowing the use of only 3D LiDAR data at inference time. Additionally, in Chapter 5 we explore the benefits of combining classification and regression learning problems to address the optical flow estimation task in a joint coarse-and-fine manner. Lastly, in Chapter 6 we gather the previous methods and demonstrate that with these independent tasks we can guide the learning of more challenging problems such as segmentation and motion estimation of moving vehicles from our own moving perspective.
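The Bird's Eye view (BE-V) representation mentioned in Chapter 2 can be illustrated as a simple projection of the LiDAR point cloud onto a ground-plane grid. The thesis's actual encoding may differ (height and intensity channels are also common), so all grid parameters below are illustrative:

```python
import numpy as np

def birds_eye_view(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                   cell=0.25):
    """Project a LiDAR point cloud (N, 3) onto a bird's-eye-view
    occupancy image suitable as 2D input to a detection network."""
    pts = np.asarray(points, dtype=float)
    # Keep points inside the region of interest around the ego vehicle.
    mask = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1])
            & (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1]))
    pts = pts[mask]
    rows = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    h = round((x_range[1] - x_range[0]) / cell)
    w = round((y_range[1] - y_range[0]) / cell)
    image = np.zeros((h, w), dtype=np.float32)
    image[rows, cols] = 1.0   # binary occupancy per cell
    return image

# A single return 10 m ahead and 2 m to the left of the sensor.
bev = birds_eye_view([[10.0, 2.0, -1.5]])
```

Discarding the height axis this way is what makes standard 2D convolutional architectures applicable to the point cloud, at the cost of collapsing vertically stacked structure into one cell.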
Perception architecture exploration for automotive cyber-physical systems
2022 Spring. Includes bibliographical references.
In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents as a result of human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors which makes selecting an enhanced object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy and low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor or in the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections will be high in real time and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road.
Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework is capable of exploring the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS, requiring solutions to multiple complex problems related not only to the selection and placement of sensors but also to object detection and sensor fusion. Experimental results with the Audi-TT and BMW Minicooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
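The field-of-view reasoning behind a sensor-placement search like VESPA's can be illustrated with a toy coverage objective: model each sensor as a 2D wedge (position, heading, angular field of view, range) and score a configuration by the fraction of sample target points seen by at least one sensor. All geometry and sensor values below are invented for illustration, not taken from VESPA or PASTA.

```python
import math

def in_fov(sensor, target):
    """True if a 2D target point lies inside a sensor's field of view.
    sensor = (x, y, heading_rad, fov_rad, max_range)."""
    x, y, heading, fov, max_range = sensor
    dx, dy = target[0] - x, target[1] - y
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx) - heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return abs(bearing) <= fov / 2

def coverage(sensors, targets):
    """Fraction of sample points seen by at least one sensor --
    a simple objective a placement search could maximise."""
    seen = sum(any(in_fov(s, t) for s in sensors) for t in targets)
    return seen / len(targets)

# A forward camera and a rear radar on an ego vehicle at the origin.
sensors = [(0.0, 0.0, 0.0, math.radians(60), 50.0),       # front camera
           (0.0, 0.0, math.pi, math.radians(120), 30.0)]  # rear radar
targets = [(20.0, 0.0), (-10.0, 0.0), (0.0, 20.0)]
cov = coverage(sensors, targets)   # the lateral target is unseen
```

A real exploration framework would trade such a coverage score against overlap (false-positive filtering cost), detector latency, and per-sensor cost, which is the co-optimization the abstract describes.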