2,227 research outputs found
Monocular navigation for long-term autonomy
We present a reliable and robust monocular navigation system for an autonomous vehicle.
The proposed method is computationally efficient, requires only off-the-shelf equipment, and does not need any additional infrastructure such as radio beacons or GPS.
Unlike traditional localization algorithms, which use advanced mathematical methods to estimate the full vehicle position, our method takes a more practical approach.
In our case, an image-feature-based monocular vision technique determines only the heading of the vehicle while the vehicle's odometry is used to estimate the distance traveled.
We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded.
The experiments demonstrate that the method can cope with variable illumination, lighting deficiency and both short- and long-term environment changes.
This makes the method especially suitable for deployment in scenarios which require long-term autonomous operation.
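The principle is simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration (the feature coordinates and the pixel-to-radian gain are assumptions, not the authors' code) of how the horizontal displacement of matched image features can be turned into a heading correction while odometry handles the travelled distance:

```python
import numpy as np

def heading_correction(mapped_x, current_x, pixels_to_rad=0.001):
    """Steering correction from the horizontal shift of matched features.

    mapped_x / current_x: x-coordinates (in pixels) of image features
    recorded during mapping and of their matches in the current view.
    A robust statistic (the median here) suppresses occasional mismatches.
    """
    if len(mapped_x) == 0:
        return 0.0  # landmark deficiency: steer by odometry alone
    shift = np.median(np.asarray(mapped_x) - np.asarray(current_x))
    return float(shift) * pixels_to_rad  # pixel offset -> angular command

# Example: features drifted ~15 px to the left -> steer to re-centre them.
omega = heading_correction([320.0, 410.0, 150.0], [305.0, 396.0, 134.0])
```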
Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The
proposed concept is based on a mathematical model, which indicates that in
teach-and-repeat navigation scenarios, mobile robots do not need to perform
explicit localisation. Instead, a mobile robot that repeats a
previously taught path can simply `replay' the learned velocities, while using
its camera information only to correct its heading relative to the intended
path. To support our claim, we establish a position error model of a robot,
which traverses a taught path by only correcting its heading. Then, we outline
a mathematical proof which shows that this position error does not diverge over
time. Based on the insights from the model, we present a simple monocular
teach-and-repeat navigation method. The method is computationally efficient, it
does not require camera calibration, and it can learn and autonomously traverse
arbitrarily-shaped paths. In a series of experiments, we demonstrate that the
method can reliably guide mobile robots in realistic indoor and outdoor
conditions, and can cope with imperfect odometry, landmark deficiency,
illumination variations and naturally-occurring environment changes.
Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav.
Comment: The paper will be presented at IROS 2018 in Madrid.
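To make the "replay plus heading correction" loop concrete, here is an illustrative control loop under an assumed interface; none of these names are taken from the paper or the stroll_bearnav code:

```python
def repeat_path(taught_path, robot, heading_correction):
    """Replay a taught path without explicit localisation.

    taught_path: records of (end_distance, forward_v, angular_v,
    mapped_features) captured during teaching; `robot` is a placeholder
    interface exposing odometry, feature detection, and velocity control.
    """
    for segment in taught_path:
        # Odometry decides *where along the path* the robot is ...
        while robot.travelled_distance() < segment.end_distance:
            features = robot.detect_features()
            # ... while vision only trims the heading toward the path.
            omega = heading_correction(segment.mapped_features, features)
            # Learned velocities are replayed verbatim; per the theorem,
            # heading-only correction keeps the position error bounded.
            robot.set_velocity(segment.forward_v, segment.angular_v + omega)
```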
Navigation and Control of Automated Guided Vehicle using Fuzzy Inference System and Neural Network Technique
Automatic motion planning and navigation is the primary task of an Automated Guided Vehicle (AGV) or mobile robot. All such navigation systems consist of a data collection system, a decision-making system, and a hardware control system. Artificial-intelligence-based decision-making systems have become increasingly successful as they can handle large, complex calculations and perform well in unpredictable and imprecise environments.
This research focuses on developing Fuzzy Logic and Neural Network based implementations for the navigation of an AGV, using the heading angle and obstacle distances as inputs to generate the velocity and steering angle as outputs. Gaussian, triangular, and trapezoidal membership functions for the Fuzzy Inference System, as well as a feed-forward back-propagation neural network, were developed, modelled, and simulated in MATLAB. The research presents an evaluation of the four resulting decision-making systems and a comparative study of their performance.
The hardware control for an AGV should be robust and precise. For practical implementation, a prototype that functions via DC servo motors and a gear system was constructed and installed on a commercial vehicle.
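To illustrate the fuzzy decision-making idea in code (a toy stand-in for the thesis's MATLAB models; every membership parameter and rule below is invented), consider a small Mamdani-style controller with triangular memberships:

```python
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    eps = 1e-9
    return max(min((x - a) / (b - a + eps), (c - x) / (c - b + eps)), 0.0)

def fuzzy_steering(heading_err_deg, obstacle_dist_m):
    """Toy controller: heading error and obstacle distance in,
    steering angle (degrees) out. Illustrative values only."""
    # Fuzzify the two inputs.
    err_left  = tri(heading_err_deg, -90, -45, 0)
    err_zero  = tri(heading_err_deg, -20, 0, 20)
    err_right = tri(heading_err_deg, 0, 45, 90)
    near = tri(obstacle_dist_m, 0.0, 0.0, 1.0)
    far  = tri(obstacle_dist_m, 0.5, 3.0, 6.0)
    # Rule base: (firing strength, consequent steering angle).
    rules = [
        (min(err_left, far),  -30.0),  # target is to the left, path clear
        (min(err_zero, far),    0.0),  # on course, path clear
        (min(err_right, far),  30.0),  # target is to the right, path clear
        (near,                 45.0),  # obstacle close: hard avoidance turn
    ]
    # Defuzzify with a weighted average of the rule consequents.
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total else 0.0

print(fuzzy_steering(-30.0, 2.5))  # -> -30.0, dominated by "steer left"
```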
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
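For readers new to the area: the de-facto standard formulation the survey refers to is maximum-a-posteriori estimation over a factor graph, which under the usual Gaussian-noise assumption reduces to a nonlinear least-squares problem:

```latex
\mathbf{X}^{\star}
  = \operatorname*{arg\,max}_{\mathbf{X}} \; p(\mathbf{X} \mid \mathbf{Z})
  = \operatorname*{arg\,min}_{\mathbf{X}} \sum_{k}
      \bigl\lVert h_k(\mathbf{X}_k) - \mathbf{z}_k \bigr\rVert^{2}_{\boldsymbol{\Omega}_k}
```

Here X stacks the robot trajectory and the map, each measurement z_k is predicted by a model h_k of the variable subset X_k it involves, and Omega_k is the measurement information matrix weighting the residual.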
A generative traversability model for monocular robot self-guidance
The research work disclosed in this publication is partially funded by the Strategic Educational Pathways Scholarship (Malta). The scholarship is part-financed by the European Union - European Social Fund (ESF) under the Operational Programme II - Cohesion Policy 2007-2013, "Empowering People for More Jobs and a Better Quality of Life".
In order for robots to be integrated into human active spaces and perform useful tasks, they must be capable of discriminating between traversable surfaces and obstacle regions in their surrounding environment. In this work, a principled semi-supervised (EM) framework is presented for the detection of traversable image regions for use on a low-cost monocular mobile robot. We propose a novel generative model for the occurrence of traversability cues, which are a measure of dissimilarity between safe-window and image superpixel features. Our classification results on both indoor and outdoor image sequences demonstrate its generality and adaptability to multiple environments through the online learning of an exponential mixture model. We show that this appearance-based vision framework is robust and can quickly and accurately estimate the probabilistic traversability of an image using no temporal information. Moreover, the reduction in safe-window size as compared to the state of the art enables a self-guided monocular robot to roam in closer proximity to obstacles.
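As a rough sketch of the modelling idea (an exponential mixture over dissimilarity cues fitted with EM), the following self-contained example fits such a mixture; the cue values, component count, and initialisation are assumptions, not the paper's configuration:

```python
import numpy as np

def em_exponential_mixture(cues, k=2, iters=50, seed=0):
    """Fit a k-component exponential mixture to traversability cues
    (dissimilarities between safe-window and superpixel features).
    Components with low dissimilarity then model traversable regions.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(cues, dtype=float)
    lam = rng.uniform(0.5, 2.0, size=k)   # rate parameter per component
    pi = np.full(k, 1.0 / k)              # mixing weights
    for _ in range(iters):
        # E-step: posterior responsibility of each component per cue.
        dens = pi * lam * np.exp(-np.outer(x, lam))          # (n, k)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for exponential components.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        lam = nk / (resp * x[:, None]).sum(axis=0)
    return pi, lam

# Smoke test on synthetic cues drawn from two exponential populations.
rng = np.random.default_rng(1)
cues = np.concatenate([rng.exponential(0.2, 300), rng.exponential(2.0, 300)])
print(em_exponential_mixture(cues))
```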
Scene understanding for autonomous robots operating in indoor environments
The idea of having robots among us is not new. Great efforts are continually made to
replicate human intelligence, with the vision of having robots performing different activities,
including hazardous, repetitive, and tedious tasks. Research has demonstrated that robots are
good at many tasks that are hard for us, mainly in terms of precision, efficiency, and speed.
However, there are some tasks that humans do without much effort that are challenging for
robots. Robots in domestic environments, in particular, are far from satisfactorily performing some tasks, mainly because these environments are unstructured, cluttered, and subject to a variety of environmental conditions.
This thesis addresses the problem of scene understanding in the context of autonomous
robots operating in everyday human environments. Furthermore, this thesis is developed
under the HEROITEA research project that aims to develop a robot system to help
elderly people in domestic environments as an assistant. Our main objective is to develop
different methods that allow robots to acquire more information from the environment to
progressively build knowledge that allows them to improve their performance on high-level robotic tasks. Scene understanding is a broad research topic and is considered a complex task due to the multiple sub-tasks involved. In that context, in this thesis,
we focus on three sub-tasks: object detection, scene recognition, and semantic segmentation
of the environment.
Firstly, we implement methods to recognize objects in real indoor environments. We apply machine learning techniques that incorporate uncertainty, as well as more modern techniques based on deep learning. Beyond detecting objects, it is essential to comprehend the scene where they occur. For this reason, we propose an approach
for scene recognition that considers the influence of the detected objects in the prediction
process. We demonstrate that the exiting objects and their relationships can improve the
inference about the scene class. We also consider that a scene recognition model can
benefit from the advantages of other models. We propose a multi-classifier model for scene recognition based on weighted voting schemes (see the sketch after this abstract). The experiments carried out in real-world
indoor environments demonstrate that the adequate combination of independent classifiers
allows obtaining a more robust and precise model for scene recognition.
Moreover, to increase the understanding of a robot about its surroundings, we propose
a new division of the environment based on regions to build a useful representation of
the environment. Object and scene information is integrated in a probabilistic fashion,
generating a semantic map of the environment containing meaningful regions within each
room. The proposed system has been assessed on simulated and real-world domestic
scenarios, demonstrating its ability to generate consistent environment representations.
Lastly, full knowledge of the environment can enhance more complex robotic tasks; that is
why, in this thesis, we study how complete knowledge of the environment influences the robot's performance in high-level tasks. To do so, we select an essential task: searching for objects. This mundane task can be considered a precondition for performing many complex robotic tasks such as fetching and carrying, manipulation, and attending to user requirements, among others. The execution of these activities by service robots needs full knowledge of
the environment to perform each task efficiently. In this thesis, we propose two searching
strategies that consider prior information, semantic representation of the environment, and
the relationships between known objects and the type of scene. All our developments are
evaluated in simulated and real-world environments, integrated with other systems, and
operating in real platforms, demonstrating their feasibility to implement in real scenarios, and
in some cases outperforming other approaches. We also demonstrate how our representation
of the environment can boost the performance of more complex robotic tasks compared to
more standard environmental representations.
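The weighted-voting combination mentioned above is easy to state; here is a hypothetical sketch in which the classifier outputs and reliability weights are invented for illustration:

```python
import numpy as np

def weighted_vote(class_probs, weights):
    """Combine independent scene classifiers by weighted voting.

    class_probs: one probability vector per classifier, all over the
    same scene classes; weights: per-classifier reliabilities, e.g.
    validation accuracies. Returns the winning class and fused scores.
    """
    probs = np.asarray(class_probs, dtype=float)  # (n_classifiers, n_classes)
    w = np.asarray(weights, dtype=float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return int(np.argmax(fused)), fused

# Three classifiers voting over [kitchen, bedroom, bathroom].
label, scores = weighted_vote(
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.5, 0.4, 0.1]],
    weights=[0.9, 0.6, 0.75],
)
print(label, scores)  # class 0 (kitchen) wins under these weights
```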
Cyber-Agricultural Systems for Crop Breeding and Sustainable Production
The cyber-agricultural system (CAS) represents an overarching framework of agriculture that leverages recent advances in ubiquitous sensing, artificial intelligence, smart actuators, and scalable cyberinfrastructure (CI) in both breeding and production agriculture. We discuss the recent progress and perspectives of the three fundamental components of CAS (sensing, modeling, and actuation) and the emerging concept of agricultural digital twins (DTs). We also discuss how scalable CI is becoming a key enabler of smart agriculture. In this review, we shed light on the significance of CAS in revolutionizing crop breeding and production by enhancing efficiency, productivity, sustainability, and resilience to a changing climate. Finally, we identify underexplored and promising future directions for CAS research and development.
A survey on active simultaneous localization and mapping: state of the art and new frontiers
Active simultaneous localization and mapping (SLAM) is the problem of planning and controlling the motion of a robot to build the most accurate and complete model of the surrounding environment. Since the first foundational work in active perception appeared more than three decades ago, this field has received increasing attention across different scientific communities. This has brought about many different approaches and formulations, and makes a review of the current trends necessary and extremely valuable for both new and experienced researchers. In this article, we survey the state of the art in active SLAM and take an in-depth look at the open challenges that still require attention to meet the needs of modern applications. After providing a historical perspective, we present a unified problem formulation and review the well-established modular solution scheme, which decouples the problem into three stages that identify, select, and execute potential navigation actions. We then analyze alternative approaches, including belief-space planning and deep reinforcement learning techniques, and review related work on multirobot coordination. This article concludes with a discussion of new research directions, addressing reproducible research, active spatial perception, and practical applications, among other topics.
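The modular solution scheme the survey reviews can be summarised as a three-stage loop. The sketch below uses placeholder callables (nothing here names a real library or the survey's notation):

```python
def active_slam_step(belief, candidate_goals, plan, utility, execute):
    """One iteration of the modular active-SLAM scheme: identify
    candidate actions, select the most useful one, and execute it.
    `belief` is the current SLAM estimate with its uncertainty."""
    # 1. Identify: turn each candidate goal into a feasible action/path.
    actions = [plan(belief, goal) for goal in candidate_goals]
    # 2. Select: score actions, typically trading expected uncertainty
    #    reduction (e.g. an information-theoretic gain) against cost.
    best = max(actions, key=lambda action: utility(belief, action))
    # 3. Execute: carry the action out while SLAM updates the belief.
    return execute(belief, best)
```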
- …