Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first-responders in such situations. However, first-person
view teleoperation is sub-optimal in difficult terrains, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
third-person perspective to ground robots. While our approach is based on local
visual servoing, it further leverages the global localization of several ground
robots to seamlessly transfer between these ground robots in GPS-denied
environments. Thus, one MAV can support multiple ground robots on
demand. Furthermore, our system enables different visual detection regimes,
enhanced operability, and return-home functionality. We evaluate our system in
real-world SaR scenarios.

Comment: Accepted for publication in the 2018 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR).
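As a rough illustration of the behaviour summarized above, the following sketch shows a local visual-servoing loop in which the MAV keeps a tracked ground robot centred in its camera image and at a fixed apparent size, plus a trivial handover rule based on pending support requests. All gains, names and interfaces here are illustrative assumptions, not the authors' implementation.

import numpy as np

# Proportional gains for the servo loop (assumed values).
K_XY = 0.8   # gain on normalized pixel error
K_Z = 0.5    # gain on apparent-size error (distance proxy)

def servo_command(marker_px, marker_area, image_size=(640, 480),
                  target_area=5000.0):
    """Return a (vx, vy, vz) velocity command from one marker detection.

    marker_px   -- (u, v) pixel position of the ground robot's marker
    marker_area -- apparent marker area in pixels^2 (proxy for distance)
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Pixel error, normalized so the gains are resolution-independent.
    ex = (marker_px[0] - cx) / cx
    ey = (marker_px[1] - cy) / cy
    # Size error: positive when the MAV is too far from the robot.
    ez = (target_area - marker_area) / target_area
    return np.array([K_XY * ex, K_XY * ey, K_Z * ez])

def next_target(requests):
    """Pick the next ground robot to support (here: first pending request).

    Between visual locks, the MAV would fly toward the globally-localized
    pose of this robot until its marker is re-acquired.
    """
    return requests[0] if requests else None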
Runtime resource management for vision-based applications in mobile robots
Computer-vision (CV) applications are an important part of mobile robot automation, analyzing the raw data perceived by vision sensors and providing a rich amount of information on the surrounding environment. Designing a high-speed and energy-efficient CV application for a resource-constrained mobile robot, while maintaining a targeted level of computational accuracy, is a challenging task, because such applications demand substantial resources, e.g. computing capacity and battery energy, to run seamlessly in real time. Moreover, there is always a trade-off between accuracy, performance and energy consumption, as these factors dynamically affect each other at runtime.

In this thesis, we investigate novel runtime resource management approaches to improve the performance and energy efficiency of vision-based applications in mobile robots. Due to the dynamic correlation between different management objectives, such as energy consumption and execution time, both environmental and computational observations need to be dynamically updated, and the actuators are manipulated at runtime based on these observations. The algorithmic and computational parameters of a CV application (output accuracy and CPU voltage/frequency) are adjusted by measuring the key factors associated with the intensity of computations and the strain on the CPUs (environmental complexity and instantaneous power). Furthermore, we show how the mechanical characteristics of the robot, i.e. its speed of movement, can affect its computational behaviour. Based on this investigation, we add the speed of the robot as an actuator to our resource management algorithm, alongside the computational knobs (output accuracy and CPU voltage/frequency).

To evaluate the proposed approach, we perform several experiments on an unmanned ground vehicle equipped with an embedded computer board, using RGB and event cameras as the vision sensors for the CV applications. The results show that the presented management strategy improves the performance and accuracy of vision-based applications while significantly reducing energy consumption compared with state-of-the-art solutions. Moreover, we demonstrate that simultaneously considering both the computational and mechanical aspects of CV applications running on mobile robots significantly reduces energy consumption compared with similar methods that consider these two aspects separately, oblivious to each other's outcome.
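As a rough illustration of the closed-loop management strategy summarized above, the sketch below couples the three actuators (CV output accuracy, CPU frequency, robot speed) to two runtime observations (scene complexity and instantaneous power). The thresholds, the three-level configuration table and the observe/apply interfaces are assumptions made for illustration, not the controller developed in the thesis.

import time

ACCURACY_LEVELS = [0.6, 0.8, 1.0]   # algorithmic knob: CV output accuracy (assumed scale)
CPU_FREQS_MHZ = [600, 1000, 1500]   # computational knob: DVFS states (assumed)
SPEEDS_MPS = [0.3, 0.6, 1.0]        # mechanical knob: robot speed (assumed)

def manage(observe, apply, power_budget_w=8.0, period_s=0.5):
    """Closed-loop manager trading accuracy, frequency and speed at runtime.

    observe -- callable returning (scene complexity in [0,1], power in watts)
    apply   -- callable pushing the chosen configuration to the robot
    """
    level = 1                            # start in the middle configuration
    while True:
        complexity, power = observe()
        if power > power_budget_w or complexity > 0.7:
            level = max(0, level - 1)    # hard scene or power overshoot: back off
        elif power < 0.6 * power_budget_w and complexity < 0.3:
            level = min(2, level + 1)    # headroom available: raise quality and speed
        apply(accuracy=ACCURACY_LEVELS[level],
              cpu_mhz=CPU_FREQS_MHZ[level],
              speed=SPEEDS_MPS[level])
        time.sleep(period_s)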
Review and classification of vision-based localisation techniques in unknown environments
This study presents a review of the state of the art and a novel classification of current vision-based localisation techniques in unknown environments. Thanks to the progress made in computer vision, vision-based systems can now be considered promising navigation means that complement traditional navigation sensors such as global navigation satellite systems (GNSSs) and inertial navigation systems. This study reviews techniques that employ a camera as a localisation sensor, provides a classification of these techniques, and introduces schemes that exploit video information within a multi-sensor system. A general model is needed to better compare existing techniques, in order to decide which approach is appropriate and where the axes of innovation lie. In addition, existing classifications only consider vision as a standalone tool and do not treat video as one sensor among others. The focus is on scenarios where no a priori knowledge of the environment is provided. These scenarios are the most challenging, since the system has to cope with objects as they appear in the scene without any prior information about their expected position.
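As a concrete, simplified example of treating video as one sensor among others, the toy filter below fuses visual-odometry increments (prediction) with occasional GNSS fixes (update) in a scalar Kalman filter. The noise values are assumed, and the scheme is deliberately far simpler than the hybridization architectures classified in the study.

Q_VO = 0.05    # variance added per visual-odometry step (assumed)
R_GNSS = 4.0   # GNSS measurement variance in m^2 (assumed)

def predict(x, p, vo_delta):
    """Propagate the 1-D position estimate with a visual-odometry increment."""
    return x + vo_delta, p + Q_VO

def update(x, p, gnss_pos):
    """Correct the estimate with a GNSS fix (standard scalar Kalman update)."""
    k = p / (p + R_GNSS)              # Kalman gain
    return x + k * (gnss_pos - x), (1.0 - k) * p

# Example: drift accumulates through ten VO steps, then a GNSS fix pulls
# the estimate back toward the measured position.
x, p = 0.0, 1.0
for _ in range(10):
    x, p = predict(x, p, vo_delta=1.0)
x, p = update(x, p, gnss_pos=9.2)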
Sensors, SLAM and Long-term Autonomy: A Review
Simultaneous Localization and Mapping, commonly known as SLAM, has been an active research area in the field of Robotics over the past three decades. To solve the SLAM problem, every robot is equipped with either a single sensor or a combination of similar or different sensors. This paper attempts to review, discuss, evaluate and compare these sensors. Keeping an eye on the future, it also assesses the characteristics of these sensors against factors critical to the long-term autonomy challenge.
Creation and maintenance of visual incremental maps and hierarchical localization.
Over the last few years, the presence of mobile robotics has considerably
increased in a wide variety of environments. It is common to find robots that
carry out repetitive, specific applications; they can also be used to work in
dangerous environments and to perform precise tasks. These robots can be
found in a variety of social settings, such as industrial, household,
educational and health scenarios. For that reason, they require specific and
continuous research and improvement. In particular, autonomous mobile
robots require very precise technology to perform tasks without human
assistance.
To perform tasks autonomously, robots must be able to navigate in an
unknown environment. For that reason, autonomous mobile robots must be
able to address the mapping and localization tasks: they must create a model of
the environment and estimate their own position and orientation.
This PhD thesis proposes and analyses different methods to carry out the
map-creation and localization tasks in indoor environments. To address these
tasks, only visual information is used: specifically, omnidirectional images with a
360° field of view. Throughout the chapters of this document, solutions for
autonomous navigation tasks are proposed; they rely on transformations of
the images captured by a vision system mounted on the robot.
Firstly, the thesis focuses on the study of global appearance descriptors for
the localization task. Global appearance descriptors are algorithms that
transform an entire image into a single vector. A thorough
comparative study is performed: in the experiments, different global appearance
descriptors are applied to omnidirectional images and the results are
compared. The main goal is to obtain an optimized algorithm to estimate the
robot's position and orientation in real indoor environments. The experiments
take place under real conditions, so visual changes in the scenes can occur,
such as camera defects, movements of furniture or people, and changes in the
lighting conditions. The computational cost is also studied: the robot has
to localize itself accurately, but it also has to be fast enough.
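As a minimal illustration of the global-appearance idea studied here, the sketch below reduces a whole grayscale panorama to a single normalized vector (a simple block-mean descriptor, standing in for descriptors such as HOG or gist actually compared in the thesis) and localizes a query image by nearest-neighbour search over the stored map descriptors.

import numpy as np

def global_descriptor(image, grid=(4, 16)):
    """Describe a grayscale panorama by the mean intensity of each grid cell."""
    h, w = image.shape
    gh, gw = grid
    # Crop so the image divides evenly, then average over each cell.
    cells = image[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    vec = cells.ravel().astype(np.float64)
    return vec / (np.linalg.norm(vec) + 1e-12)   # unit norm for comparison

def localize(query_desc, map_descs):
    """Return the index of the most similar stored image (cosine similarity).

    map_descs -- array of shape (n_images, descriptor_size), unit-norm rows
    """
    return int(np.argmax(map_descs @ query_desc))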
Additionally, a second application, whose goal is to carry out incremental
mapping in indoor environments, is presented. This application uses the best
global appearance descriptors from the localization study, but this time they
are employed to solve the mapping problem using an
incremental clustering technique. The application groups images that
are visually similar; every group of images, or cluster, is expected to identify
a zone of the environment. The shape and size of the clusters can vary while the
robot is visiting the different rooms. Nowadays, different algorithms can be
used to obtain the clusters, but these solutions usually only work properly
'offline', starting from the whole set of data to cluster. The main idea of this
study is to build the map incrementally while the robot explores the new
environment. Carrying out the mapping incrementally, while the robot is still
visiting the area, is very interesting: a map separated into nodes with
similarity relationships between them can subsequently be used for hierarchical
localization tasks and to recognize environments already present in the
model.
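A minimal sketch of such an incremental clustering scheme follows: each new image descriptor joins the most similar existing node if the similarity exceeds a threshold, and otherwise opens a new node, so the map grows while the robot explores. The threshold and the running-mean centroid update are illustrative assumptions; the thesis studies more elaborate criteria for growing and reshaping the nodes.

import numpy as np

SIM_THRESHOLD = 0.9   # cosine similarity needed to join a node (assumed)

class IncrementalMap:
    def __init__(self):
        self.centroids = []   # one running-mean descriptor per node
        self.counts = []      # number of images absorbed by each node

    def add(self, desc):
        """Insert one unit-norm descriptor; return the index of its node."""
        if self.centroids:
            sims = [float(c @ desc) / (np.linalg.norm(c) + 1e-12)
                    for c in self.centroids]
            best = int(np.argmax(sims))
            if sims[best] >= SIM_THRESHOLD:
                # Update the node's centroid as a running mean.
                n = self.counts[best]
                self.centroids[best] = (self.centroids[best] * n + desc) / (n + 1)
                self.counts[best] += 1
                return best
        # No node is similar enough: the image opens a new zone of the map.
        self.centroids.append(desc.copy())
        self.counts.append(1)
        return len(self.centroids) - 1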
Finally, this PhD thesis includes an analysis of deep learning techniques for
localization tasks. In particular, Siamese networks have been studied. Siamese
networks are based on classic convolutional networks, but they permit
evaluating two images simultaneously: they output a similarity value between
the input images, and that information can be used for localization.
Throughout this work the technique is presented, the possible architectures
are analysed, and the experimental results are shown and compared. Using
Siamese networks, localization under real operating conditions and
environments is solved, with a focus on improving robustness against
illumination changes in the scene. In the experiments, the room retrieval
problem, hierarchical localization and absolute localization have been
solved.
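For illustration, a minimal Siamese architecture in the spirit described above might look as follows (a PyTorch sketch; the branch layers, input sizes and the absolute-difference similarity head are assumptions, not the exact networks evaluated in the thesis).

import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Two images pass through one shared branch; the head scores the pair."""

    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(          # weights shared by both inputs
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(128),
        )
        self.head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, img_a, img_b):
        ea, eb = self.branch(img_a), self.branch(img_b)
        return self.head(torch.abs(ea - eb))  # similarity score in (0, 1)

# Usage: after training, a score near 1 suggests both panoramas come from
# the same place, supporting room retrieval and hierarchical localization.
net = SiameseNet()
a = torch.rand(1, 1, 64, 256)   # e.g. a grayscale panoramic image
b = torch.rand(1, 1, 64, 256)
score = net(a, b)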
Marvin: an Innovative Omni-Directional Robotic Assistant for Domestic Environments
Population ageing and recent pandemics have shown that elderly people can
become isolated in their houses, generating the need for a reliable assistive
figure. Robotic assistants are the new frontier of innovation for domestic
welfare, and elderly monitoring is one of the services a robot can provide for
collective well-being. Despite these emerging needs, in the current landscape of
robotic assistants there is no platform which successfully combines reliable
mobility in cluttered domestic spaces with lightweight and offline Artificial
Intelligence (AI) solutions for perception and interaction. In this work, we
present Marvin, a novel assistive robotic platform we developed with a modular
layer-based architecture, merging a flexible mechanical design with
cutting-edge AI for perception and vocal control. We focus the design of Marvin
on three target service functions: monitoring of elderly and reduced-mobility
subjects, remote presence and connectivity, and night assistance. Compared to
previous works, we propose a tiny omnidirectional platform, which enables agile
mobility and effective obstacle avoidance. Moreover, we design a controllable
positioning device, which easily allows the user to access the interface for
connectivity and extends the visual range of the camera sensor. Furthermore, we
carefully consider the privacy issues arising from private data collection on
cloud services, a critical aspect of commercial AI-based assistants. To this
end, we demonstrate how lightweight deep learning solutions for visual
perception and vocal command can be adopted, running completely offline on the
embedded hardware of the robot.

Comment: 20 pages, 9 figures, 3 tables