Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
Extensive, precise, and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multi-lane clothoid model. To allow the incorporation of additional information sources, input data are processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, against a ground-truth map. The results show that the ego lane and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increased detection range for the ego lane and continuous perception of neighboring lanes are achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
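The multi-lane clothoid model can be illustrated with the common third-order polynomial approximation of a clothoid, in which curvature varies linearly with arc length. A minimal sketch, with illustrative parameter names (y0, heading, c0, c1) rather than the paper's notation:

```python
import numpy as np

def lane_lateral_offset(x, y0, heading, c0, c1):
    """Lateral offset of a lane at longitudinal distance x, using the
    common third-order (clothoid) approximation, where curvature varies
    linearly with arc length: c(x) = c0 + c1 * x."""
    return y0 + heading * x + 0.5 * c0 * x**2 + (1.0 / 6.0) * c1 * x**3

# Straight lane centred on the ego vehicle: offset stays zero.
xs = np.linspace(0.0, 120.0, 5)
straight = lane_lateral_offset(xs, y0=0.0, heading=0.0, c0=0.0, c1=0.0)

# Gently curving lane (initial curvature c0 = 1e-3 1/m):
# offset grows roughly quadratically with distance.
curved = lane_lateral_offset(xs, y0=0.0, heading=0.0, c0=1e-3, c1=0.0)
```

One lane per set of parameters; a multi-lane model of this kind typically shares heading and curvature across lanes and varies only the lateral offset y0.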
GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests
Autonomous navigation of unmanned vehicles in forests is a challenging task. In such environments, due to the canopies of the trees, information from Global Navigation Satellite Systems (GNSS) can be degraded or even unavailable. Also, because of the large number of obstacles, a prior detailed map of the environment is not practical. In this paper, we solve the complete navigation problem of an aerial robot in a sparse forest, where there is enough space for flight and GNSS signals can be sporadically detected. For localization, we propose a state estimator that merges information from GNSS, Attitude and Heading Reference Systems (AHRS), and odometry based on Light Detection and Ranging (LiDAR) sensors. In our LiDAR-based odometry solution, the trunks of the trees are used in a feature-based scan matching algorithm to estimate the relative movement of the vehicle. Our method employs a robust adaptive fusion algorithm based on the unscented Kalman filter. For motion control, we adopt a strategy that integrates a vector field, used to impose the main direction of movement for the robot, with an optimal probabilistic planner, which is responsible for obstacle avoidance. Experiments with a quadrotor equipped with a planar LiDAR in an actual forest environment are used to illustrate the effectiveness of our approach.
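The vector-field-plus-planner strategy can be loosely illustrated by blending a global direction field with repulsive terms around detected trunks. The function, gains, and distances below are hypothetical stand-ins for illustration only, not the paper's optimal probabilistic planner:

```python
import numpy as np

def guidance_direction(pos, trunks, main_dir, influence=3.0, gain=2.0):
    """Blend a global vector field (the main flight direction) with
    simple repulsive terms around tree trunks within an influence
    radius; returns a unit direction vector. Illustrative only."""
    v = np.asarray(main_dir, dtype=float)
    for t in trunks:
        d = np.asarray(pos, dtype=float) - np.asarray(t, dtype=float)
        r = np.linalg.norm(d)
        if 1e-6 < r < influence:
            # Repulsion grows as the trunk gets closer, fades at the
            # influence radius (a classic potential-field shape).
            v = v + gain * (1.0 / r - 1.0 / influence) * d / r
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Robot flying along +x with a trunk slightly to its left:
heading = guidance_direction(pos=(0.0, 0.0), trunks=[(2.0, 0.5)],
                             main_dir=(1.0, 0.0))
```

The repulsive term bends the commanded direction away from the trunk while the global field keeps the robot moving through the forest.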
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? Is SLAM solved?
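The de-facto standard formulation this survey presents is maximum-a-posteriori estimation over a factor graph, which for Gaussian noise reduces to (nonlinear) least squares. A toy 1-D sketch with linear odometry and loop-closure factors; real systems use nonlinear solvers such as g2o, GTSAM, or Ceres:

```python
import numpy as np

# Toy 1-D pose graph: poses x0..x3, with a prior anchoring x0 at 0.
# Odometry factors measure x_{i+1} - x_i = 1 (slightly drifted), and a
# loop-closure factor measures x3 - x0 = 2.7. Each row of A is one
# factor's linear measurement model; z holds the measurements.
A = np.array([
    [ 1.0,  0.0,  0.0, 0.0],   # prior: x0 = 0
    [-1.0,  1.0,  0.0, 0.0],   # odometry: x1 - x0
    [ 0.0, -1.0,  1.0, 0.0],   # odometry: x2 - x1
    [ 0.0,  0.0, -1.0, 1.0],   # odometry: x3 - x2
    [-1.0,  0.0,  0.0, 1.0],   # loop closure: x3 - x0
])
z = np.array([0.0, 1.0, 1.0, 1.0, 2.7])

# With Gaussian noise, the MAP estimate minimises ||A x - z||^2,
# i.e. ordinary linear least squares:
x_map, *_ = np.linalg.lstsq(A, z, rcond=None)
```

The solver spreads the disagreement between the odometry chain (total 3.0) and the loop closure (2.7) evenly over the poses, which is exactly the error-distribution behaviour that makes the factor-graph formulation the standard one.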
Control and communication systems for automated vehicles cooperation and coordination
Doctoral degree with International Mention. Technological advances in Intelligent Transportation Systems (ITS) have been improving exponentially over the last century. The objective is to provide intelligent and innovative services for the different modes of transportation, towards better, safer, more coordinated, and smarter transport networks. The focus of ITS is divided into two main categories: the first is to improve existing components of the transport networks, while the second is to develop intelligent vehicles that facilitate the transportation process. Various research efforts have tackled different aspects of automated vehicles. Accordingly, this thesis addresses the cooperation and coordination of multiple automated vehicles. First, the 3DCoAutoSim driving simulator was developed
in Unity game engine and connected to Robot Operating System (ROS) framework and
Simulation of Urban Mobility (SUMO). 3DCoAutoSim is an abbreviation for "3D Simulator
for Cooperative Advanced Driver Assistance Systems (ADAS) and Automated Vehicles
Simulator". 3DCoAutoSim was tested under different circumstances and conditions; afterwards, it was validated by carrying out several controlled experiments and comparing the results against their real-world counterparts. The obtained results showed the efficiency of the simulator in handling different situations, emulating real-world vehicles. Next came the development of the iCab platforms, an abbreviation for "Intelligent Campus
Automobile". The platforms are two electric golf-carts that were modified mechanically, electronically
and electrically towards the goal of automated driving. Each iCab was equipped
with several on-board embedded computers, perception sensors and auxiliary devices, in
order to execute the necessary actions for self-driving. Moreover, the platforms are capable
of several Vehicle-to-Everything (V2X) communication schemes, applying three layers of
control, utilizing cooperation architecture for platooning, executing localization systems,
mapping systems, perception systems, and finally several planning systems. Hundreds of
experiments were carried out to validate each system on the iCab platform. The results
proved the functionality of the platform to self-drive from one point to another with minimal
human intervention.

Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática (Official Doctoral Programme in Electrical, Electronic, and Automation Engineering). President: Francisco Javier Otamendi Fernández de la Puebla. Secretary: Hanno Hildmann. Committee member: Pietro Cerr
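Platooning of the kind described can be illustrated with a minimal constant-spacing follower law; the gains, the desired gap, and the function name below are hypothetical, and this is not the thesis's three-layer control architecture:

```python
def follower_accel(gap, rel_speed, desired_gap=8.0, kp=0.5, kd=0.8):
    """Simple constant-spacing platooning law (illustrative): command
    an acceleration proportional to the gap error and to the relative
    speed between the follower and its leader."""
    return kp * (gap - desired_gap) + kd * rel_speed

# Follower too close (5 m gap) and closing in (1 m/s faster than the
# leader): the law commands braking.
a = follower_accel(gap=5.0, rel_speed=-1.0)
# → 0.5 * (-3.0) + 0.8 * (-1.0) = -2.3 m/s^2

# At the desired gap with matched speeds, no correction is needed.
a_steady = follower_accel(gap=8.0, rel_speed=0.0)
```

In practice such a spacing law sits in the lowest control layer, with V2X communication supplying the leader's state to each follower.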
Fail-Aware LIDAR-Based Odometry for Autonomous Vehicles
Autonomous driving systems are set to become a reality in transport systems, and so maximum user acceptance is being sought. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations occur, which raises problems related to driver state, distraction, fatigue, and other factors that prevent safe control. Therefore, this work presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware features that allow other systems to perform a safe stop manoeuvre without driver mediation. All
odometry systems have drift error, making it difficult to use them for
localisation tasks over extended periods. For this reason, the paper presents
an accurate LiDAR odometry system with a fail-aware indicator. This indicator
estimates a time window in which the system manages the localisation tasks
appropriately. The odometry error is minimised by applying a dynamic 6-DoF
model and fusing measurements based on the Iterative Closest Point (ICP) algorithm, environment feature extraction, and Singular Value Decomposition (SVD) methods.
The obtained results are promising for two reasons: First, in the KITTI
odometry data set, the ranking achieved by the proposed method is twelfth,
considering only LiDAR-based methods, where its translation and rotation errors
are 1.00% and 0.0041 deg/m, respectively. Second, the encouraging results of
the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry
system. The results show that, to achieve an accurate odometry system, complex models and measurement-fusion techniques must be used to improve its behaviour. Furthermore, if an odometry system is to provide redundant localisation features, it must integrate a fail-aware indicator so that it can be used safely.
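The ICP-plus-SVD combination mentioned here rests on a classic building block: the closed-form SVD (Kabsch/Umeyama) solution for the rigid transform between corresponding point sets, which ICP applies at each iteration. A minimal sketch with 2-D toy data; the function name and data are illustrative, not the paper's implementation:

```python
import numpy as np

def align_svd(P, Q):
    """One ICP alignment step: given corresponding point sets P and Q
    (N x d arrays), find the rotation R and translation t minimising
    sum ||R p_i + t - q_i||^2, via the Kabsch/Umeyama SVD solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 30-degree rotation plus a translation.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 2))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([1.0, -2.0])
R_est, t_est = align_svd(P, Q)
```

Full ICP alternates this closed-form step with re-estimating point correspondences (e.g. nearest neighbours) until the transform converges.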
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted by the type of camera optics or the underlying motion manifold observed. We envision robots being able to learn and perform
these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.

Comment: Conference paper; submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures, 2 tables
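An MDN head of the kind described outputs the parameters of a Gaussian mixture over ego-motion, and evaluating that predictive density is straightforward. A minimal 1-D sketch with hypothetical mixture parameters; this is not the paper's C-VAE architecture:

```python
import numpy as np

def mdn_density(y, weights, means, sigmas):
    """Evaluate the 1-D Gaussian mixture that an MDN head outputs:
    p(y) = sum_k pi_k * N(y; mu_k, sigma_k^2)."""
    weights = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    comps = (np.exp(-0.5 * ((y - means) / sigmas) ** 2)
             / (sigmas * np.sqrt(2.0 * np.pi)))
    return float(np.sum(weights * comps))

# Hypothetical two-mode prediction for forward velocity (m/s): the
# network is fairly sure of ~1 m/s but keeps a broad second mode.
p = mdn_density(1.0, weights=[0.7, 0.3], means=[1.0, 3.0], sigmas=[0.2, 0.5])
```

In an MDN the weights, means, and sigmas come from network outputs (with a softmax over the weights and a positivity transform on the sigmas); representing the output as a density, rather than a point estimate, is what enables the introspective reasoning the abstract describes.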