Intelligent Robotic Perception Systems
Robotic perception underpins many applications in robotics in which sensory data and artificial intelligence/machine learning (AI/ML) techniques are involved. Examples of such applications include object detection, environment representation, scene understanding, human/pedestrian detection, activity recognition, semantic place classification, and object modeling. Robotic perception, in the scope of this chapter, encompasses the ML algorithms and techniques that enable robots to learn from sensory data and, based on the learned models, to react and make decisions accordingly. Recent developments in machine learning, namely deep-learning approaches, are evident, and consequently robotic perception systems are evolving in a way that makes new applications and tasks possible. Recent advances in human-robot interaction, complex robotic tasks, intelligent reasoning, and decision-making are, to some extent, the result of the remarkable evolution and success of ML algorithms. This chapter covers recent and emerging topics and use cases related to intelligent perception systems in robotics.
Lidar-based scene understanding for autonomous driving using deep learning
With over 1.35 million traffic-related fatalities worldwide, autonomous driving was foreseen at the beginning of this century as a feasible solution to improve safety on our roads. It is also expected to disrupt our transportation paradigm, reducing congestion, pollution, and costs while increasing the accessibility, efficiency, and reliability of transportation for both people and goods. Although some advances have gradually been transferred into commercial vehicles as Advanced Driving Assistance Systems (ADAS), such as adaptive cruise control, blind-spot detection, and automatic parking, the technology is far from mature. A full understanding of the scene is needed so that vehicles can be aware of their surroundings, knowing the existing elements of the scene as well as their motion, intentions, and interactions.
In this PhD dissertation, we explore new approaches for understanding driving scenes from 3D LiDAR point clouds by using Deep Learning methods. To this end, in Part I we analyze the scene from a static perspective using independent frames to detect the neighboring vehicles. Next, in Part II we develop new ways for understanding the dynamics of the scene. Finally, in Part III we apply all the developed methods to accomplish higher level challenges such as segmenting moving obstacles while obtaining their rigid motion vector over the ground.
More specifically, in Chapter 2 we develop a 3D vehicle detection pipeline based on a multi-branch deep-learning architecture and propose a front view (FR-V) and a bird's-eye view (BE-V) as 2D representations of the 3D point cloud to serve as input for training our models. Later on, in Chapter 3 we apply and further test this method on two real use cases: pre-filtering moving obstacles while creating maps to better localize ourselves on subsequent days, and vehicle tracking. From the dynamic perspective, in Chapter 4 we learn from the 3D point cloud a novel dynamic feature that resembles optical flow from RGB images. For that, we develop a new approach that leverages RGB optical flow as pseudo ground truth for training purposes while allowing the use of only 3D LiDAR data at inference time. Additionally, in Chapter 5 we explore the benefits of combining classification and regression learning problems to face the optical flow estimation task in a joint coarse-and-fine manner. Lastly, in Chapter 6 we gather the previous methods and demonstrate that with these independent tasks we can guide the learning of more challenging problems, such as segmentation and motion estimation of moving vehicles from our own moving perspective.
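The FR-V and BE-V inputs are 2D rasterizations of the 3D point cloud, which lets standard image CNNs consume LiDAR data directly. As a rough illustration of the bird's-eye-view encoding, here is a minimal sketch that projects an (N, 3) LiDAR cloud onto a height/density grid; the grid extents, cell size, and channel choices are assumptions, not the dissertation's exact encoding:

```python
import numpy as np

def bev_projection(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0),
                   cell=0.1):
    """Rasterize an (N, 3) point cloud into a 2-channel BE-V image:
    channel 0 = max height per cell, channel 1 = log point density."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((2, nx, ny), dtype=np.float32)

    # Keep only the points that fall inside the grid.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)

    # Max height per cell (cells start at 0, so below-ground heights are
    # clipped in this sketch) and point count per cell.
    np.maximum.at(bev[0], (ix, iy), pts[:, 2])
    np.add.at(bev[1], (ix, iy), 1.0)
    bev[1] = np.log1p(bev[1])  # compress the dynamic range of the density
    return bev
```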
Human-Centered Navigation and Person-Following with Omnidirectional Robot for Indoor Assistance and Monitoring
Robot assistants and service robots are rapidly spreading as cutting-edge automation solutions to support people in their everyday life in workplaces, health centers, and domestic environments. Moreover, the COVID-19 pandemic drastically increased the need for service technology to support medical personnel in critical conditions in hospitals and domestic scenarios. The first requirement for an assistive robot is to navigate and follow the user in dynamic environments with complete autonomy. However, these advanced multitask behaviors require flexible mobility of the platform to accurately avoid obstacles in cluttered spaces while tracking the user. This paper presents a novel human-centered navigation system that successfully combines a real-time visual perception system with the mobility advantages provided by an omnidirectional robotic platform to precisely adjust the robot orientation and monitor a person while navigating. Our extensive experimentation, conducted in a representative indoor scenario, demonstrates that our solution offers efficient and safe motion planning for person-following and, more generally, for human-centered navigation tasks.
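The mobility advantage invoked here is that an omnidirectional base decouples heading from translation, so the robot can keep facing the person while moving along a planned path. Below is a hedged toy sketch of such a velocity command; the gains, frame conventions, and interface are assumptions, not the paper's actual controller:

```python
import math

def follow_cmd(person_xy, waypoint_xy, k_ang=1.5, k_lin=0.8):
    """person_xy and waypoint_xy are in the robot frame (x forward, y left).
    Returns (vx, vy, wz): planar velocities plus yaw rate."""
    # Rotate to keep the perception system pointed at the person...
    wz = k_ang * math.atan2(person_xy[1], person_xy[0])
    # ...while independently translating toward the next waypoint of the
    # (obstacle-free) path; a differential-drive base could not do both.
    vx = k_lin * waypoint_xy[0]
    vy = k_lin * waypoint_xy[1]
    return vx, vy, wz
```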
PALMAR: Towards Adaptive Multi-inhabitant Activity Recognition in Point-Cloud Technology
With the advancement of deep neural networks and computer-vision-based Human Activity Recognition (HAR), point-cloud data (PCD) technologies (LiDAR, mmWave) have attracted considerable interest due to their privacy-preserving nature. Given the high promise of accurate PCD technologies, we develop PALMAR, a multi-inhabitant activity recognition system that employs efficient signal processing and novel machine learning techniques to track individual persons, towards an adaptive multi-inhabitant tracking and HAR system. More specifically, we propose (i) a voxelized feature representation-based real-time PCD fine-tuning method; (ii) efficient clustering (DBSCAN and BIRCH), Adaptive Order Hidden Markov Model-based multi-person tracking, and crossover-ambiguity reduction techniques; and (iii) a novel adaptive deep-learning-based domain adaptation technique to improve the accuracy of HAR in the presence of data scarcity and diversity (device, location, and population diversity). We experimentally evaluate our framework and systems using (i) real-time PCD collected by three devices (3D LiDAR and 79 GHz mmWave) from 6 participants, (ii) one publicly available 3D LiDAR activity dataset (28 participants), and (iii) an embedded hardware prototype system, which provided promising HAR performance in the multi-inhabitant scenario (96%), with a 63% improvement in multi-person tracking over the state-of-the-art framework, without significant loss of system performance on the edge computing device.
Comment: Accepted in IEEE International Conference on Computer Communications 202
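As a rough illustration of the clustering stage named above, the following sketch voxelizes one point-cloud frame and groups it into person candidates with DBSCAN. The parameters and the scikit-learn implementation are illustrative assumptions; PALMAR additionally layers BIRCH, an adaptive-order HMM, and crossover-ambiguity reduction on top of this step:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def person_candidates(points, voxel=0.05, eps=0.3, min_samples=10):
    """points: (N, 3) frame after background removal. Returns centroids of
    clusters that plausibly correspond to individual persons."""
    # Voxelize: snap points to a grid and deduplicate, which denoises the
    # frame and fixes the spatial resolution for downstream features.
    vox = np.unique(np.round(points / voxel).astype(int), axis=0) * voxel
    # Density-based clustering; each cluster is one person candidate.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(vox)
    centroids = [vox[labels == k].mean(axis=0)
                 for k in set(labels) if k != -1]  # -1 is DBSCAN noise
    return centroids
```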
Semantic situation awareness of ellipse shapes via deep learning for multirotor aerial robots with a 2D LIDAR
In this work, we present a semantic situation awareness system for multirotor aerial robots equipped with a 2D LIDAR sensor, focusing on understanding the environment, provided that a drift-free, precise localization of the robot is available (e.g., given by GNSS/INS or a motion capture system). Our algorithm generates in real time a semantic map of the objects of the environment as a list of ellipses represented by their radii, pose, and velocity, all in world coordinates. Two different Convolutional Neural Network (CNN) architectures are proposed and trained, using an artificially generated dataset and a custom loss function, to detect ellipses in a segmented (i.e., single-object) LIDAR measurement. In cascade, a specifically designed indirect EKF estimates the ellipse-based semantic map in world coordinates, as well as the ellipses' velocities. We have quantitatively and qualitatively evaluated the performance of our proposed situation awareness system. Two sets of software-in-the-loop simulations using CoppeliaSim, with one and multiple static and moving cylindrical objects, are used to evaluate the accuracy and performance of our algorithm. In addition, we have demonstrated the robustness of our proposed algorithm in real environments through laboratory experiments with static non-cylindrical objects (i.e., a barrel) and moving persons.
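To illustrate what one artificially generated training sample might look like, the sketch below synthesizes a segmented 2D LIDAR measurement of an ellipse at a known pose; the parametric sampling, visibility test, and noise model are assumptions, not the authors' actual dataset generator:

```python
import numpy as np

def synthetic_ellipse_scan(a, b, cx, cy, yaw, n_pts=128, noise=0.01):
    """Return noisy 2D points on the sensor-facing side of an ellipse with
    radii (a, b), center (cx, cy), and orientation yaw; the 2D LIDAR is
    assumed to sit at the origin."""
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    local = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
    # Outward boundary normals (unnormalized) in the ellipse's own frame.
    normals = np.stack([np.cos(t) / a, np.sin(t) / b], axis=1)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    world = local @ R.T + np.array([cx, cy])
    normals = normals @ R.T
    # A LIDAR at the origin only sees the near side of a convex object:
    # keep boundary points whose outward normal faces back toward the sensor.
    visible = np.einsum('ij,ij->i', normals, world) < 0.0
    pts = world[visible]
    return pts + np.random.normal(scale=noise, size=pts.shape)
```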
From Perception to Navigation in Environments with Persons: An Indoor Evaluation of the State of the Art
Research in the field of social robotics is allowing service robots to operate in environments with people. With the aim of realizing the vision of humans and robots coexisting in the same environment, several solutions have been proposed to (1) perceive persons and objects in the immediate environment; (2) predict the movements of humans; and (3) plan navigation according to socially accepted rules. In this work, we discuss the different aspects related to social navigation in the context of our experience in an indoor environment. We describe state-of-the-art approaches and experiment with existing methods to analyze their performance in practice. From this study, we gather first-hand insights into the limitations of current solutions and identify possible research directions to address the open challenges. In particular, this paper focuses on topics related to perception at the hardware and application levels, including 2D and 3D sensors, geometric and especially semantic mapping, the prediction of people's trajectories (physics-, pattern-, and planning-based), and social navigation (reactive and predictive) in indoor environments.
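Of the trajectory-prediction families mentioned above, the physics-based one is the simplest to make concrete. Here is a minimal constant-velocity extrapolation sketch; the sampling rate and horizon are arbitrary choices, and pattern- or planning-based predictors would replace this extrapolation with learned motion patterns or goal-directed planning:

```python
import numpy as np

def predict_cv(track_xy, dt=0.1, horizon=30):
    """track_xy: (T, 2) array of a person's past positions sampled every dt
    seconds. Returns (horizon, 2) predicted future positions."""
    vel = (track_xy[-1] - track_xy[-2]) / dt       # finite-difference velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt
    return track_xy[-1] + steps * vel              # straight-line rollout
```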
Dataset of Panoramic Images for People Tracking in Service Robotics
In this thesis, we provide a framework for constructing a guide robot for use in hospitals. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when directing an individual to their desired location in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate our robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video together with their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we aim to contribute to the ongoing effort to enhance the precision and dependability of these tracking systems, which is essential for creating efficient guide robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing the staff time spent guiding patients through the facility.
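The geometric core of such auto-labeling is projecting a person's known position in the robot frame into panoramic image coordinates. Below is a hedged sketch under an equirectangular camera model; the model, frame alignment, and image size are assumptions, not the thesis's actual calibration:

```python
import math

def robot_to_panorama(x, y, z, width=2048, height=1024):
    """Project a point (x, y, z) in a camera-centered robot frame
    (x forward, y left, z up assumed) onto an equirectangular image."""
    yaw = math.atan2(y, x)                     # azimuth in [-pi, pi]
    pitch = math.atan2(z, math.hypot(x, y))    # elevation in [-pi/2, pi/2]
    u = (0.5 - yaw / (2.0 * math.pi)) * width  # yaw = 0 maps to image center
    v = (0.5 - pitch / math.pi) * height       # pitch = +pi/2 maps to the top
    return u, v
```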