202 research outputs found
Visual Localization and Mapping in Dynamic and Changing Environments
The real-world deployment of fully autonomous mobile robots depends on a
robust SLAM (Simultaneous Localization and Mapping) system, capable of handling
dynamic environments, where objects are moving in front of the robot, and
changing environments, where objects are moved or replaced after the robot has
already mapped the scene. This paper presents Changing-SLAM, a method for
robust Visual SLAM in both dynamic and changing environments. This is achieved
by using a Bayesian filter combined with a long-term data association
algorithm. It also employs an efficient dynamic-keypoint filtering algorithm
based on object detection that correctly identifies features inside a bounding
box that are not themselves dynamic, preventing a depletion of features that
could cause tracking loss. Furthermore, a new RGB-D dataset, called the
PUC-USP dataset, was developed specially for the evaluation of changing
environments at the object level. Six sequences were created using a mobile
robot, an RGB-D camera and a motion capture system. The sequences were designed
to capture different scenarios that could lead to a tracking failure or a map
corruption. To the best of our knowledge, Changing-SLAM is the first Visual
SLAM system that is robust to both dynamic and changing environments, not
assuming a given camera pose or a known map, while also being able to operate
in real time. The proposed method was evaluated using benchmark datasets and
compared with other state-of-the-art methods, proving to be highly accurate.
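The dynamic-keypoint filtering idea can be sketched as follows. The depth-margin heuristic, box format, and all values are illustrative assumptions rather than the paper's actual algorithm; the point is only that features falling inside a detected object's bounding box need not all be discarded.

```python
from statistics import median

def in_box(kp, box):
    """True if keypoint (x, y, depth) falls inside box (x0, y0, x1, y1)."""
    x, y, _ = kp
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_keypoints(keypoints, boxes, depth_margin=0.5):
    """Drop keypoints that likely lie on a detected (potentially dynamic)
    object, but keep static background seen through its bounding box."""
    kept = []
    for kp in keypoints:
        containing = [b for b in boxes if in_box(kp, b)]
        if not containing:
            kept.append(kp)  # outside every detection: assume static
            continue
        depths = [p[2] for p in keypoints if in_box(p, containing[0])]
        # Points well behind the object's median depth are treated as
        # background (an assumed heuristic, not the paper's actual test).
        if kp[2] > median(depths) + depth_margin:
            kept.append(kp)
    return kept

pts = [(10, 10, 3.0), (60, 60, 1.0), (70, 70, 1.1), (80, 80, 4.0)]
boxes = [(50, 50, 100, 100)]  # one detected object, say a person
print(filter_keypoints(pts, boxes))  # keeps the outside and background points
```

Keeping the deep background points inside the box is what avoids the feature depletion the abstract warns about.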
Towards a robust SLAM framework for resilient AUV navigation
Autonomous Underwater Vehicles (AUVs) are playing an increasing part in modern
navies, to the point that the control of oceans will soon be decided by their strategic
use. In face of more complex missions occurring in potentially hostile environments,
the resilience of such systems becomes critical. In this study, we investigate the
following scenario: how can a lone AUV recover from a temporary breakdown
that has created a gap in its measurements, while remaining beneath the surface to
avoid detection? It is assumed that the AUV is equipped with an active sonar and
is operating in an uncharted area. The vehicle has to rely on itself by recovering
its location using a Simultaneous Localization and Mapping (SLAM) algorithm.
While SLAM is widely investigated and developed in the case of aerial and terrestrial
robotics, the nature of the poorly structured underwater environment dramatically
challenges its effectiveness. To address this complex problem, the usual
sidescan sonar data association techniques are investigated as a global
registration problem while applying robust graph SLAM modelling. In particular, ways to
improve the global detection of features from sonar mosaic region patches that react
well to the MICR similarity measure are discussed. The main contribution of this
study is centered on a novel data processing framework that is able to generate
different graph topologies using robust SLAM techniques. One of its advantages is to
facilitate the testing of different modelling hypotheses to tackle the data gap following
the temporary breakdown and make the most of the limited available information.
Several research perspectives related to this framework are discussed, notably
the possibility of further extending the proposed framework to heterogeneous
datasets and the opportunity to accelerate the recovery process by inferring
information about the breakdown using machine learning.
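The gap-handling idea can be illustrated on a toy one-dimensional pose graph: the odometry edge spanning the breakdown is kept but made nearly uninformative, and a single sonar-derived loop closure redistributes the accumulated drift into that weak edge. All weights and measurements below are invented for illustration; the actual framework builds far richer graph topologies.

```python
import numpy as np

# Poses x0..x4 along a 1-D transect; x0 is anchored near 0.
edges = [
    (0, 1, 1.0, 100.0),   # (i, j, measured x_j - x_i, information weight)
    (1, 2, 1.0, 100.0),
    (2, 3, 1.0, 0.01),    # breakdown gap: dead-reckoned, almost uninformative
    (3, 4, 1.0, 100.0),
    (0, 4, 4.5, 50.0),    # sonar "loop closure" re-anchoring the track
]

n = 5
H = np.zeros((n, n))      # information matrix
b = np.zeros(n)           # information vector
for i, j, d, w in edges:
    # Each edge contributes w * (x_j - x_i - d)^2 to the total cost.
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
    b[i] -= w * d; b[j] += w * d
H[0, 0] += 1e6            # gauge prior: pin x0 at the origin

x = np.linalg.solve(H, b)
print(np.round(x, 3))     # the weak gap edge absorbs the loop-closure slack
```

The 0.5 m discrepancy between dead reckoning and the loop closure lands almost entirely in the low-information gap leg, which is exactly the behaviour one wants when making the most of limited post-breakdown information.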
Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control, meeting key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM to autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4–5 cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching tightly fused with an inertial system.
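The map-based localization step in such a road test rests on scan matching, which at its core is rigid point-set alignment. Below is a minimal sketch using the closed-form SVD (Kabsch) solution with correspondences assumed known; real ICP-style matchers must also estimate correspondences and iterate. All data are synthetic.

```python
import numpy as np

def align(scan, map_pts):
    """Estimate the rigid transform (R, t) mapping scan points onto
    corresponding map points, via the Kabsch/SVD step used inside ICP."""
    sc, mc = scan.mean(axis=0), map_pts.mean(axis=0)
    H = (scan - sc).T @ (map_pts - mc)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return R, t

# Simulate a live scan of four mapped corners seen from an offset pose.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.2])
map_pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
scan = (map_pts - t_true) @ R_true            # scan = R^T (map - t), row form

R, t = align(scan, map_pts)                   # recovers R_true, t_true
```

The recovered transform is the vehicle's pose correction relative to the pre-built map; in a real system this estimate would then be fused with the inertial solution.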
Robust state estimation methods for robotics applications
State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to perform other tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by fusing data obtained from multiple sensors in a probabilistic framework. These include inertial data from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge in sensor-based state estimation is the presence of noisy, erroneous, or even missing informative data. Common examples of such situations include wrong feature matches between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), and multipath errors for GNSS (satellite signals bouncing off tall structures such as buildings before reaching receivers). This work studies existing and new ways in which standard estimation algorithms such as the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal, outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters for wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
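The robust cost functions this line of work studies can be illustrated with the Huber loss solved by iteratively reweighted least squares, here on a toy one-dimensional estimation problem with a single gross outlier; the data and threshold are illustrative, not taken from the thesis.

```python
def huber_weight(r, k=1.0):
    """IRLS weight for the Huber loss: quadratic inside |r| <= k,
    linear (down-weighted) outside."""
    return 1.0 if abs(r) <= k else k / abs(r)

def robust_mean(measurements, iters=20, k=1.0):
    """Estimate a scalar state from redundant measurements, one of which
    may be a gross outlier (e.g. a false match or multipath fix)."""
    est = sum(measurements) / len(measurements)   # ordinary LS start
    for _ in range(iters):
        w = [huber_weight(m - est, k) for m in measurements]
        est = sum(wi * m for wi, m in zip(w, measurements)) / sum(w)
    return est

data = [1.0, 1.1, 0.9, 1.05, 10.0]   # one gross outlier
print(round(robust_mean(data), 2))   # ≈ 1.26, versus a plain mean of 2.81
```

The same reweighting idea generalizes to vector residuals in Kalman updates and to robust noise models on factor-graph edges, which is the setting the thesis works in.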
Dataset of Panoramic Images for People Tracking in Service Robotics
In this thesis, we provide a framework for constructing a guide robot for use in hospitals. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when guiding an individual to their desired location in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate the robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video along with their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and to guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we aim to contribute to ongoing work to enhance the precision and dependability of these tracking systems, which is essential for creating effective guide robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing staff time spent guiding patients through the facility.
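Auto-labeling of this kind typically projects a known 3-D position (here, from motion capture, expressed in the robot's frame) into panoramic image coordinates. A minimal sketch for an equirectangular panorama follows; the image size and frame conventions are illustrative assumptions, not the thesis's actual pipeline.

```python
import math

def project_equirect(x, y, z, width=1920, height=960):
    """Project a 3-D point in the camera frame (x forward, y left, z up)
    to equirectangular pixel coordinates (u, v)."""
    az = math.atan2(y, x)                  # azimuth in [-pi, pi]
    el = math.atan2(z, math.hypot(x, y))   # elevation in [-pi/2, pi/2]
    u = (az + math.pi) / (2 * math.pi) * width
    v = (math.pi / 2 - el) / math.pi * height
    return u, v

# A tracked person 2 m ahead, slightly to the left, head at 1.6 m:
u, v = project_equirect(2.0, 0.3, 1.6)
```

Pairing each such projected box center with the motion-capture ground truth is what lets the dataset score a tracker's pixel-space and metric accuracy at once.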
Development of an adaptive navigation system for indoor mobile handling and manipulation platforms
Navigation is a fundamental technology enabling the autonomous behavior of mobile robots. It is a main prerequisite for mobile robots to fulfill high-level tasks such as handling and manipulation, and is often identified as one of the key challenges in mobile robotics. Mapping and localization, as the basis for navigation, have been intensively researched over the last few decades. However, challenges remain for online operation in large-scale environments and for running on low-cost, energy-efficient embedded systems. In this work, new developments and applications of Light Detection And Ranging (LiDAR) based Simultaneous Localization And Mapping (SLAM) algorithms are presented. A key component of LiDAR-based SLAM, the scan matching algorithm, is explored. Different scan matching algorithms are systematically evaluated with different LiDARs in indoor home-like environments for the first time, and the influence of LiDAR properties on scan matching is quantitatively analyzed. Improvements to both Bayes filter based and graph optimization based SLAM are presented. Bayes filter based SLAM mainly uses the current sensor information to find the best estimate. A new, efficient implementation of Rao-Blackwellized Particle Filter based SLAM is presented, built on a pre-computed lookup table and parallelized particle updating; it runs efficiently on recent multi-core embedded systems that fulfill low-cost and energy-efficiency requirements. In contrast to Bayes filter based methods, graph optimization based SLAM utilizes all sensor information and minimizes the total error in the system. A new real-time graph building model and a robust integrated Graph SLAM solution are presented. The improvements include the definition of unique direction norms for points and lines extracted from scans, an efficient loop closure detection algorithm, and a parallel and adaptive implementation.
The developed algorithm outperforms state-of-the-art algorithms in processing time and robustness, especially in large-scale environments, while using embedded systems instead of high-end computing devices. The results of this work can be used to improve the navigation systems of indoor autonomous robots in settings such as domestic environments and intra-logistics.
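The pre-computed lookup table mentioned above can be sketched as a likelihood field: scan endpoints are scored per particle with a table lookup instead of a per-particle nearest-neighbour search. The toy map, grid, and scoring below are illustrative assumptions, not the thesis's implementation.

```python
import math

def build_lookup(wall=(5, 5), size=10):
    """Pre-compute a likelihood field over a small occupancy grid:
    high score near the single mapped wall cell, falling off with distance."""
    table = {}
    for i in range(size):
        for j in range(size):
            d = math.hypot(i - wall[0], j - wall[1])
            table[(i, j)] = math.exp(-0.5 * d * d)
    return table

lookup = build_lookup()

def weight(particle, scan_endpoints):
    """Score a particle pose by looking up each scan endpoint, expressed
    in the particle's frame, in the pre-computed table."""
    w = 1.0
    px, py = particle
    for ex, ey in scan_endpoints:
        cell = (round(px + ex), round(py + ey))
        w *= lookup.get(cell, 1e-6)
    return w

scan = [(1.0, 1.0)]                      # endpoint 1 m ahead-right of robot
particles = [(4, 4), (2, 2), (6, 7)]     # candidate poses
weights = [weight(p, scan) for p in particles]
best = particles[weights.index(max(weights))]
```

Because the table is shared and read-only, the per-particle weighting loop is exactly the part that parallelizes well on multi-core embedded systems.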
Articulated human tracking and behavioural analysis in video sequences
Recently, there has been a dramatic growth of interest in the observation and tracking
of human subjects through video sequences. Arguably, the principal impetus has come
from the perceived demand for technological surveillance; however, applications in
entertainment, intelligent domiciles and medicine are also increasing. This thesis
examines human articulated tracking and the classification of human movement, first
separately and then as a sequential process.
First, this thesis considers the development and training of a 3D model of human body
structure and dynamics. To process video sequences, an observation model is also designed
with a multi-component likelihood based on edge, silhouette and colour. This is defined on
the articulated limbs, and visible from a single or multiple cameras, each of which may be
calibrated from that sequence. Second, for behavioural analysis, we develop a methodology
in which actions and activities are described by semantic labels generated from a Movement
Cluster Model (MCM). Third, a Hierarchical Partitioned Particle Filter (HPPF) was
developed for human tracking that allows multi-level parameter search consistent with the
body structure. This tracker relies on the articulated motion prediction provided by the
MCM at pose or limb level. Fourth, tracking and movement analysis are integrated to
generate a probabilistic activity description with action labels.
The implemented algorithms for tracking and behavioural analysis are tested extensively
and independently against ground truth on human tracking and surveillance
datasets. Dynamic models are shown to predict and generate synthetic motion, while
MCM recovers both periodic and non-periodic activities, defined either on the whole body
or at the limb level. Tracking results are comparable with the state of the art;
however, the integrated behaviour analysis adds to the value of the approach.
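A multi-component observation likelihood of the kind described above can be sketched as fusing per-cue scores into a single particle weight. The cue scores and weights here are illustrative placeholders, not the thesis's actual likelihood terms.

```python
import math

def fuse_likelihood(edge_score, silhouette_score, colour_score,
                    weights=(1.0, 1.0, 1.0)):
    """Combine edge, silhouette and colour cue scores (each in (0, 1])
    into one observation likelihood for a candidate pose."""
    # Fusing in log space keeps the product of many cue terms
    # numerically stable when evaluated over many particles.
    log_l = (weights[0] * math.log(edge_score) +
             weights[1] * math.log(silhouette_score) +
             weights[2] * math.log(colour_score))
    return math.exp(log_l)

# A pose that matches edges and colour well but overlaps the silhouette poorly:
w = fuse_likelihood(0.9, 0.3, 0.8)
```

With unit weights the fusion reduces to a plain product of cue likelihoods; unequal weights let one cue dominate when, say, colour is unreliable under changing illumination.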
Inertially-Controlled Two-dimensional Phased Arrays by Exploiting Artificial Neural Networks and Ultra-Low-Power AI-based Microcontrollers
The use of Artificial Intelligence (AI) in electronics and electromagnetics is opening many attractive research opportunities related to the smart control of phased arrays. This is particularly challenging in high-mobility contexts, such as drones, 5G, and automotive, where response time is crucial. In this paper, a novel method combining AI with mathematical models and firmware for orientation estimation is proposed. The goal is to control two-dimensional phased arrays using an Inertial Measurement Unit (IMU) by exploiting a feed-forward neural network. The neural network takes the IMU-based beam direction as input and returns the related phase shift matrix. To make the method computationally efficient, the network structure is carefully chosen. Specific, discretized cross-section regions of the array factor (AF) main lobe are considered to compute the phase shift matrices, which are used in turn to train the neural network. This approach achieves a balance between the number of phase-shifting processes and spatial resolution. Without loss of generality, the proposed method has been tested and verified on 4×4 and 6×6 arrays of 2.4 GHz antennas. The obtained results demonstrate that the reconfigurability time, ease of use, and scalability are suitable for a wide range of high-mobility applications.
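The mapping such a network learns has a classical closed form: the progressive phase shift that steers a planar array's main lobe to a direction (theta, phi). The sketch below assumes half-wavelength element spacing and standard array-factor conventions for illustration; it is not the paper's trained model.

```python
import math

def phase_matrix(size, theta_deg, phi_deg, d_over_lambda=0.5):
    """Phase shift (radians) for each element (m, n) of a size x size
    planar array so the main lobe points at (theta, phi)."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    k_d = 2 * math.pi * d_over_lambda          # wavenumber times spacing
    u = math.sin(theta) * math.cos(phi)        # direction cosines
    v = math.sin(theta) * math.sin(phi)
    # Element (m, n) is delayed so that all contributions add in phase
    # toward the steering direction.
    return [[-k_d * (m * u + n * v) for n in range(size)]
            for m in range(size)]

mat = phase_matrix(4, 30, 0)   # steer a 4x4 array 30 degrees off broadside
```

Sampling this formula over the discretized beam directions described in the abstract yields exactly the kind of (direction, phase matrix) pairs a feed-forward network can be trained on.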
Symmetry-Adapted Machine Learning for Information Security
Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on predicting future events by learning from past events and historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without the intervention of human authorities. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In our Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can support effective methods for handling the dynamic nature of security attacks through the extraction and analysis of data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image watermarking, color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and crypto-analysis.
Rotorcraft flight-propulsion control integration: An eclectic design concept
The NASA Ames and Lewis Research Centers, in conjunction with the Army Research and Technology Laboratories, have initiated and partially completed a joint research program focused on improving the performance, maneuverability, and operating characteristics of rotorcraft by integrating the flight and propulsion controls. The background of the program, its supporting programs, its goals and objectives, and an approach to accomplish them are discussed. Results of the modern control governor design of the General Electric T700 engine and the Rotorcraft Integrated Flight-Propulsion Control Study, which were key elements of the program, are also presented