Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
An Audio-visual Solution to Sound Source Localization and Tracking with Applications to HRI
Robot audition is an emerging and growing branch of the robotics community and is necessary for natural Human-Robot Interaction (HRI). In this paper, we propose a framework that integrates advances from Simultaneous Localization And Mapping (SLAM), bearing-only target tracking, and robot audition techniques into a unified system for sound source identification, localization, and tracking. Indoors, acoustic observations are often highly noisy, corrupted by reverberation, robot ego-motion, and background noise, and possibly discontinuous in nature. Therefore, in everyday interaction scenarios, the system must accommodate outliers, perform robust data association, and appropriately manage the landmarks, i.e. sound sources. We solve the robot self-localization and environment representation problems using an RGB-D SLAM algorithm, and sound source localization and tracking using recursive Bayesian estimation in the form of an extended Kalman filter with unknown data associations and an unknown number of landmarks. The experimental results show that the proposed system performs well in a medium-sized, cluttered indoor environment.
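The bearing-only EKF update described above can be sketched as follows. This is a minimal 2D illustration, not the paper's implementation: the single-landmark state, the measurement noise value, and the fixed data association are all simplifying assumptions.

```python
import numpy as np

def ekf_bearing_update(mu, P, z, robot_xy, R=np.array([[0.05]])):
    """One EKF update of a 2D sound-source position from a single bearing
    measurement z (radians) taken at robot position robot_xy.
    mu: (2,) landmark mean, P: (2,2) covariance.
    The noise value R is an illustrative assumption."""
    dx, dy = mu - robot_xy
    q = dx**2 + dy**2
    z_hat = np.arctan2(dy, dx)                # predicted bearing to the source
    H = np.array([[-dy / q, dx / q]])         # Jacobian of the bearing wrt landmark
    # Wrap the innovation to (-pi, pi] to avoid angle-discontinuity artifacts
    y = np.array([np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))])
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    mu_new = mu + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return mu_new, P_new
```

A full system would run one such update per associated bearing and add or prune landmarks as sources appear and vanish.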
RT-SLAM: A Generic and Real-Time Visual SLAM Implementation
This article presents a new open-source C++ implementation to solve the SLAM
problem, focused on genericity, versatility, and high execution speed.
It is based on an original object-oriented architecture that allows the
combination of numerous sensor and landmark types, and the integration of
various approaches proposed in the literature. The system's capabilities are
illustrated by the presentation of an inertial/vision SLAM approach, for which
several improvements over existing methods have been introduced and which copes
with very high-dynamic motions. Results with a hand-held camera are presented.
Comment: 10 pages
Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations
Size, weight, and power constrained platforms impose constraints on
computational resources that introduce unique challenges in implementing
localization algorithms. We present a framework to perform fast localization on
such platforms enabled by the compressive capabilities of Gaussian Mixture
Model representations of point cloud data. Given raw structural data from a
depth sensor and pitch and roll estimates from an on-board attitude reference
system, a multi-hypothesis particle filter localizes the vehicle by exploiting
the likelihood of the data originating from the mixture model. We demonstrate
analysis of this likelihood in the vicinity of the ground truth pose and detail
its utilization in a particle filter-based vehicle localization strategy, and
later present results of real-time implementations on a desktop system and an
off-the-shelf embedded platform that outperform localization results from
running a state-of-the-art algorithm in the same environment.
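The likelihood evaluation at the heart of the described particle filter can be sketched as follows. This is a simplified 2D illustration under assumed mixture parameters; the paper works with 3D point clouds and attitude-compensated poses.

```python
import numpy as np

def gmm_loglik(points, means, covs, weights):
    """Log-likelihood of sensor points under a Gaussian-mixture map.
    points: (N,2); means: (K,2); covs: (K,2,2); weights: (K,)."""
    comp = np.zeros((points.shape[0], len(weights)))
    for k, (m, C, w) in enumerate(zip(means, covs, weights)):
        d = points - m
        maha = np.einsum('ni,ij,nj->n', d, np.linalg.inv(C), d)
        norm = w / (2 * np.pi * np.sqrt(np.linalg.det(C)))
        comp[:, k] = norm * np.exp(-0.5 * maha)
    return np.log(comp.sum(axis=1) + 1e-12).sum()

def weight_particles(particles, scan, means, covs, weights):
    """Score each particle pose (x, y, theta) by transforming the scan
    into the map frame and evaluating the mixture likelihood."""
    scores = []
    for x, y, th in particles:
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        world = scan @ R.T + np.array([x, y])
        scores.append(gmm_loglik(world, means, covs, weights))
    w = np.exp(np.array(scores) - max(scores))  # normalize in log space
    return w / w.sum()
```

The compression comes from evaluating a few hundred Gaussians per particle instead of matching raw points against a full map, which is what makes the approach tractable on embedded hardware.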
On the use of autonomous unmanned vehicles in response to hazardous atmospheric release incidents
Recent events have induced a surge of interest in the methods of response to releases of hazardous materials or gases into the atmosphere. In the last decade there has been particular interest in mapping and quantifying emissions for regulatory purposes, emergency response, and environmental monitoring. Examples include: responding to events such as gas leaks, nuclear accidents or chemical, biological or radiological (CBR) accidents or attacks, and even exploring sources of methane emissions on the planet Mars. This thesis presents a review of the potential responses to hazardous releases, which includes source localisation, boundary tracking, mapping and source term estimation. [Continues.]
Autonomous navigation for guide following in crowded indoor environments
The requirements for assisted living are rapidly changing as the number of elderly
patients over the age of 60 continues to increase. This rise places a high level of stress on
nurse practitioners, who must care for more patients than they can manage. As this trend is
expected to continue, new technology will be required to help care for patients. Mobile
robots present an opportunity to help alleviate the stress on nurse practitioners by
monitoring and performing remedial tasks for elderly patients. In order to produce
mobile robots with the ability to perform these tasks, however, many challenges must be
overcome.
The hospital environment requires a high level of safety to prevent patient injury. Any
facility that uses mobile robots, therefore, must be able to ensure that no harm will come
to patients whilst in a care environment. This requires the robot to build a high level of
understanding of the environment and the people in close proximity to the robot.
Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders.
3D time-of-flight sensors have recently been introduced and provide dense 3D point
clouds of the environment at real-time frame rates. This provides mobile robots with
previously unavailable dense information in real-time. In this thesis, I investigate the use
of time-of-flight cameras for mobile robot navigation in crowded environments. A
unified framework to allow the robot to follow a guide through an indoor environment
safely and efficiently is presented. Each component of the framework is analyzed in
detail, with real-world scenarios illustrating its practical use.
Time-of-flight cameras are relatively new sensors and, therefore, have inherent problems
that must be overcome to receive consistent and accurate data. I propose a novel and
practical probabilistic framework to overcome many of these inherent problems. The
framework fuses multiple depth maps with color information, forming a
reliable and consistent view of the world. In order for the robot to interact with the
environment, contextual information is required. To this end, I propose a region-growing
segmentation algorithm to group points based on surface characteristics, namely surface
normal and surface curvature. The segmentation process creates a distinct set of surfaces;
however, only a limited amount of contextual information is available to allow for
interaction. Therefore, a novel classifier is proposed using spherical harmonics to
differentiate people from all other objects.
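The region-growing step described above can be sketched as follows. This is a simplified version using only the normal-agreement test; the thesis also uses curvature, and the neighbor graph, angle threshold, and data layout here are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(points, normals, neighbors, angle_thresh=np.deg2rad(10)):
    """Group points whose surface normals agree within angle_thresh.
    normals: (N,3) unit normals; neighbors: list of adjacency lists
    (precomputed, e.g. from a k-d tree radius search).
    Returns an (N,) array of segment labels."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                       # already absorbed into a segment
        queue = deque([seed])
        labels[seed] = current
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                # Grow across the edge only if the normals are nearly parallel
                if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) > np.cos(angle_thresh):
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels
```

Each resulting segment is a smooth surface patch, which downstream stages (such as the spherical-harmonics person classifier) can then describe and label.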
The added ability to identify people allows the robot to find potential candidates to
follow. However, for safe navigation, the robot must continuously track all visible
objects to obtain positional and velocity information. A multi-object tracking system is
investigated to track visible objects reliably using multiple cues: shape and color. The
tracking system allows the robot to react to the dynamic nature of people by building an
estimate of the motion flow. This flow provides the robot with the necessary information
to determine where and at what speeds it is safe to drive. In addition, a novel search
strategy is proposed to allow the robot to recover a guide who has left the field-of-view.
To achieve this, a search map is constructed with areas of the environment ranked
according to how likely they are to reveal the guide’s true location. Then, the robot can
approach the most likely search area to recover the guide. Finally, all components
presented are joined to follow a guide through an indoor environment. The results
achieved demonstrate the efficacy of the proposed components.
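The search-map ranking for recovering a lost guide can be sketched as follows. The scoring function here is an illustrative assumption, not the thesis's exact formulation: it simply favors areas the guide could plausibly have reached at walking speed in the time since they left the field of view.

```python
import numpy as np

def rank_search_areas(areas, last_seen, guide_speed, elapsed):
    """Rank candidate search areas by likelihood of containing the guide.
    areas: (M,2) area centers; last_seen: (2,) last observed position;
    guide_speed: assumed speed (m/s); elapsed: seconds out of view.
    Returns (order, scores): area indices from most to least promising."""
    dists = np.linalg.norm(areas - last_seen, axis=1)
    reach = guide_speed * elapsed            # expected travel radius
    # Gaussian score around the expected radius: too near or too far scores low
    scores = np.exp(-0.5 * ((dists - reach) / max(reach, 1e-6)) ** 2)
    order = np.argsort(-scores)
    return order, scores
```

The robot would then drive to the top-ranked area, re-run person detection there, and fall back to the next area if the guide is not found.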