Mobile Robot Navigation in Unknown Environments Using Range Measurements
The ability of a robot to navigate itself in the environment is a crucial step towards its autonomy. Navigation as a subtask of the development of autonomous robots is the subject of this thesis, focusing on the development of a method for simultaneous localization and mapping (SLAM) of mobile robots in six degrees of freedom (DOF). As a part of this research, a platform for 3D range data acquisition based on a continuously inclined laser rangefinder was developed. This platform is presented, evaluating its measurements and also presenting the robotic equipment on which the platform can be fitted. The localization and mapping task is equivalent to the registration of multiple 3D images into a common frame of reference. For this purpose, a method based on the Iterative Closest Point (ICP) algorithm was developed. First, the originally implemented SLAM method is presented, focusing on runtime performance and the registration quality issues introduced by the implemented algorithms. In order to accelerate and improve the quality of the time-demanding 6DOF image registration, an extended method was developed. The major extension is the introduction of a factorized registration that extracts 2D representations of vertical objects, called leveled maps, from the 3D point sets, ensuring these representations are invariant in three degrees of freedom. The extracted representations are registered in 3DOF using the ICP algorithm, allowing pre-alignment of the 3D data for the subsequent robust 6DOF ICP-based registration. The extended method is presented, showing all important modifications to the original method. The developed registration method was evaluated using real 3D data acquired in different indoor environments, examining the benefits of the factorization and other extensions as well as the performance of the original ICP-based method. The factorization gives promising results compared to a single-phase 6DOF registration in vertically structured environments.
Also, the disadvantages of the method are discussed, proposing possible solutions. Finally, the future prospects of the research are presented.
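As an illustration of the registration core described above, a minimal point-to-point ICP in plain numpy is sketched below. It uses brute-force nearest neighbours and is only a toy stand-in for the thesis's kd-tree-accelerated 6DOF implementation; the leveled-map factorization and the pre-alignment step are omitted.

```python
# Minimal point-to-point ICP sketch (illustrative only, not the thesis's
# implementation). Aligns point cloud `src` onto `dst`.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-9):
    """Iteratively match nearest points and solve; returns (R, t), dst ~ src @ R.T + t."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbours (a kd-tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(cur)), nn]).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    # recover the accumulated transform from the original to the aligned cloud
    return best_rigid_transform(src, cur)
```

In the factorized scheme described above, a 3DOF variant of such a loop would first align the extracted leveled maps before the full 6DOF registration is run on the 3D data.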
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Scan matching by cross-correlation and differential evolution
Scan matching is an important task, solved in the context of many higher-level problems including pose estimation, indoor localization, and simultaneous localization and mapping (SLAM). Methods that are accurate and adaptive and at the same time computationally efficient are required to enable location-based services in autonomous mobile devices. Such devices usually have a wide range of high-resolution sensors but only limited processing power and a constrained energy supply. This work introduces a novel high-level scan matching strategy that combines two advanced algorithms recently used in this field: cross-correlation and differential evolution. The cross-correlation between two laser range scans is used as an efficient measure of scan alignment, and the differential evolution algorithm is used to search for the parameters of a transformation that aligns the scans. The proposed method was experimentally validated and showed a good ability to match laser range scans taken shortly after each other and an excellent ability to match laser range scans taken with longer time intervals between them.
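The combination described in the abstract can be sketched roughly as follows: a candidate transform (dx, dy, dtheta) is scored by cross-correlating occupancy grids of the two scans, and differential evolution searches the transform space. The grid resolution, Gaussian blurring, bounds, and DE settings below are illustrative assumptions, not the authors' parameters.

```python
# Sketch: occupancy-grid cross-correlation as a scan alignment measure,
# searched by differential evolution. All parameters are invented.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import differential_evolution

def to_grid(points, res=0.1, size=64):
    """Rasterize 2D points into a square occupancy grid centred on the origin."""
    g = np.zeros((size, size))
    idx = np.floor(points / res).astype(int) + size // 2
    ok = ((idx >= 0) & (idx < size)).all(axis=1)
    g[idx[ok, 0], idx[ok, 1]] = 1.0
    return g

def transform(points, dx, dy, th):
    c, s = np.cos(th), np.sin(th)
    return points @ np.array([[c, -s], [s, c]]).T + np.array([dx, dy])

def match(ref, scan):
    """Search (dx, dy, dtheta) aligning scan to ref by grid cross-correlation."""
    ref_grid = gaussian_filter(to_grid(ref), sigma=2.0)  # blur widens the basin
    def cost(p):
        return -float((ref_grid * to_grid(transform(scan, *p))).sum())
    bounds = [(-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5)]
    result = differential_evolution(cost, bounds, seed=3, maxiter=300)
    return result.x
```

Because correlation needs no point-to-point correspondences and DE needs no initial guess, this kind of pairing is attractive when odometry is unreliable, at the cost of more fitness evaluations.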
Fast Damage Recovery in Robotics with the T-Resilience Algorithm
Damage recovery is critical for autonomous robots that need to operate for a
long time without assistance. Most current methods are complex and costly
because they require anticipating each potential damage in order to have a
contingency plan ready. As an alternative, we introduce T-Resilience, a new
algorithm that allows robots to quickly and autonomously
discover compensatory behaviors in unanticipated situations. This algorithm
equips the robot with a self-model and discovers new behaviors by learning to
avoid those that perform differently in the self-model and in reality. Our
algorithm thus does not identify the damaged parts but it implicitly searches
for efficient behaviors that do not use them. We evaluate the T-Resilience
algorithm on a hexapod robot that needs to adapt to leg removal, broken legs
and motor failures; we compare it to stochastic local search, policy gradient
and the self-modeling algorithm proposed by Bongard et al. The behavior of the
robot is assessed on-board thanks to an RGB-D sensor and a SLAM algorithm. Using
only 25 tests on the robot and an overall running time of 20 minutes,
T-Resilience consistently leads to substantially better results than the other
approaches.
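A toy, one-dimensional sketch of the core idea might look like this: trust the self-model less around behaviors whose real test disagreed with it, and spend a small budget of real tests on the most promising transferable behaviors. The behavior space, the performance models, and the "damage" below are invented purely for illustration; the real algorithm evolves a large behavior population on a hexapod's self-model.

```python
# Toy T-Resilience-style loop: invented 1-D behaviour space and performance
# models, not the paper's hexapod setup.
import numpy as np

behaviours = np.linspace(0.0, 1.0, 200)       # 1-D "behaviour space"

def sim_perf(x):                              # self-model prediction
    return 1.2 * np.exp(-((x - 0.8) / 0.1) ** 2) \
         + 1.0 * np.exp(-((x - 0.3) / 0.1) ** 2)

def real_perf(x):                             # reality: behaviours with x > 0.6
    return sim_perf(x) - 2.0 * (x > 0.6)      # rely on a "damaged leg"

tested_x, tested_gap = [], []                 # real tests and sim-vs-real gaps

def transferability(x):
    # trust the self-model less near behaviours whose real test disagreed
    # with it (nearest-neighbour-style local estimate, radius 0.1)
    t = np.ones_like(x)
    for xi, gi in zip(tested_x, tested_gap):
        near = np.abs(x - xi) < 0.1
        t[near] = np.minimum(t[near], np.exp(-abs(gi)))
    return t

untested = np.ones(len(behaviours), dtype=bool)
for _ in range(8):                            # small budget of real tests
    score = sim_perf(behaviours) * transferability(behaviours)
    i = int(np.where(untested, score, -np.inf).argmax())
    untested[i] = False
    x = behaviours[i]
    tested_x.append(x)
    tested_gap.append(float(sim_perf(x) - real_perf(x)))

best = max(tested_x, key=real_perf)
```

After one real test exposes the damaged region, the search implicitly avoids it and settles on the best behavior that does not use the damaged part, mirroring the paper's claim of recovery without explicit damage identification.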
Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping
International Mention in the doctoral degree.
Mobile robots are increasingly deployed in indoor and outdoor environments, moving from teleoperated applications to autonomous ones such as exploration and navigation. To move through a particular location, a robot needs to gather information about the scenario using sensors, and what it can observe depends on the sensor data type. Cameras mostly provide two-dimensional information, with colors and pixels representing an image. Range sensors measure distances from the robot to obstacles. Depth cameras combine both technologies to produce three-dimensional information. Light Detection and Ranging (LiDAR) sensors also measure distance, but extend their coverage to planes and three dimensions with high precision. Mobile robots use these sensors to scan the scenario while moving. If the robot already has a map, it matches sensed features against features on the map to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and they remain an important part of daily life: maps are used to navigate from one place to another, to localize something within some boundaries, or to document essential features. So an intuitive way of building an autonomous mobile robot is to represent the environment with maps of geometric information.
On the other hand, if the robot does not have a previous map, it must build one while moving around. To do so, it combines the range sensor information with odometry. However, sensors have their own flaws in precision, calibration, and accuracy. Furthermore, moving a robot has physical constraints, and faults such as wheel drift or mechanical miscalibration may occur randomly, making the odometry fail and causing misalignment during map building. A novel technique was presented in the mid-90s to solve this problem and overcome sensor uncertainty while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm.
Its goal is to build a map while correcting the robot's position based on the information of two or more consecutive scans matched together, that is, by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant for innovation, modification, and adaptation due to advances in new sensors and the complexity of the scenarios in emerging mobile robotics applications. The scan matching algorithm aims to find a pose vector representing the transformation, or movement, between two robot observations by optimizing an equation that scores how good a transformation is. Typically this optimization has been solved with classical algorithms, such as Newton's method or gradient and second-derivative formulations, but these require an initial guess that points the algorithm in the right direction, usually obtained from odometers or inertial sensors. It is not always possible to have or trust this information, as some scenarios are complex and sensors fail. To address this problem, this research presents the use of evolutionary optimization algorithms: meta-heuristics based on iterative evolution that mimic natural optimization processes and need no prior information to search a bounded range of solutions against a fitness function. The main goal of this dissertation is to study, develop, and prove the benefits of evolutionary optimization algorithms for simultaneous localization and mapping of mobile robots in six-degree-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, proposes a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance in indoor,
outdoor, and underground mapping applications.
Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Chair: Gerardo Fernández López. Secretary: María Dolores Blanco Rojas. Panel member: David Álvarez Sánche
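The evolutionary scan matching this thesis describes can be caricatured with a hand-rolled differential evolution (rand/1/bin) over 6DOF pose vectors, needing no initial guess. Mean nearest-neighbour distance stands in for the thesis's mixed fitness function, and all parameters below are assumptions for illustration.

```python
# Sketch: evolutionary 6DOF scan matching. Invented parameters and fitness;
# not the dissertation's algorithms.
import numpy as np

def se3(p):
    """Pose vector p = (x, y, z, roll, pitch, yaw) -> rotation matrix, translation."""
    cr, sr = np.cos(p[3]), np.sin(p[3])
    cp, sp = np.cos(p[4]), np.sin(p[4])
    cy, sy = np.cos(p[5]), np.sin(p[5])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx, p[:3]

def fitness(p, scan, ref):
    """Mean nearest-neighbour distance after moving scan by pose p (lower is better)."""
    R, t = se3(p)
    moved = scan @ R.T + t
    d2 = ((moved[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1)).mean())

def de_match(scan, ref, bounds, gens=150, pop=30, F=0.7, CR=0.9, seed=0):
    """Differential evolution (rand/1/bin) over 6DOF poses; no initial guess needed."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (pop, 6))
    f = np.array([fitness(x, scan, ref) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(6) < CR, a + F * (b - c), X[i])
            trial = np.clip(trial, lo, hi)
            ft = fitness(trial, scan, ref)
            if ft < f[i]:                      # greedy one-to-one selection
                X[i], f[i] = trial, ft
    return X[f.argmin()], float(f.min())
```

In a full SLAM pipeline, the per-pair poses found this way would then be refined jointly by loop closure and error relaxation, as the abstract notes.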
Evolving a Behavioral Repertoire for a Walking Robot
Numerous algorithms have been proposed to allow legged robots to learn to
walk. However, the vast majority of these algorithms are devised to learn to
walk in a straight line, which is not sufficient to accomplish any real-world
mission. Here we introduce the Transferability-based Behavioral Repertoire
Evolution algorithm (TBR-Evolution), a novel evolutionary algorithm that
simultaneously discovers several hundreds of simple walking controllers, one
for each possible direction. By taking advantage of solutions that are usually
discarded by evolutionary processes, TBR-Evolution is substantially faster than
independently evolving each controller. Our technique relies on two methods:
(1) novelty search with local competition, which searches for both
high-performing and diverse solutions, and (2) the transferability approach,
which combines simulations and real tests to evolve controllers for a physical
robot. We evaluate this new technique on a hexapod robot. Results show that
with only a few dozen short experiments performed on the robot, the algorithm
learns a repertoire of controllers that allows the robot to reach every point
in its reachable space. Overall, TBR-Evolution opens a new kind of learning
algorithm that simultaneously optimizes all the achievable behaviors of a
robot.
Comment: 33 pages; Evolutionary Computation Journal 201
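The novelty-search-with-local-competition ingredient of TBR-Evolution can be sketched in a toy setting: controllers whose behavior descriptor (here, the reached endpoint) is novel enter an archive; otherwise they compete locally with their nearest archived neighbour. The two-parameter "controller", its endpoint model, and the quality measure below are invented, and the transferability step and the physical hexapod are omitted.

```python
# Toy novelty search with local competition; invented controller and
# behaviour models, illustrative only.
import numpy as np

rng = np.random.default_rng(7)

def endpoint(p):
    # pretend controller p = (turn, power) makes the robot reach this point
    return np.array([np.cos(p[0]), np.sin(p[0])]) * p[1]

def quality(p):
    return -abs(p[1] - 1.0)          # pretend efficient gaits use power near 1

archive_p, archive_b = [], []        # controllers and behaviour descriptors

def novelty(b, k=5):
    if not archive_b:
        return np.inf
    d = np.linalg.norm(np.asarray(archive_b) - b, axis=1)
    return np.sort(d)[:k].mean()     # mean distance to k nearest behaviours

for _ in range(2000):
    if archive_p and rng.random() < 0.9:       # mutate an archived controller
        p = archive_p[rng.integers(len(archive_p))] + rng.normal(0, 0.2, 2)
    else:                                      # or sample a random one
        p = rng.uniform([-np.pi, 0.0], [np.pi, 1.5])
    b = endpoint(p)
    if novelty(b) > 0.15:                      # novel behaviour: archive it
        archive_p.append(p)
        archive_b.append(b)
    else:                                      # local competition: replace the
        i = int(np.linalg.norm(np.asarray(archive_b) - b, axis=1).argmin())
        if quality(p) > quality(archive_p[i]):  # nearest neighbour if beaten
            archive_p[i], archive_b[i] = p, b
```

Solutions that a single-objective search would discard (low-performing but novel behaviors) are kept here, which is how one run yields a repertoire covering many directions rather than a single controller.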