Multisensor-based human detection and tracking for mobile service robots
One of the fundamental issues for service robots is human-robot interaction. To perform such tasks and provide the desired services, these robots need to detect and track the people around them. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be highly discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms.
Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments
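The sequential fusion step described above can be sketched as follows. This is a minimal illustration using a linear Kalman filter on a 2-D position state rather than the paper's Unscented Kalman Filter, and the measurement values and noise levels are invented for the example.

```python
import numpy as np

def kalman_update(x, P, z, R, H):
    """Single Kalman measurement update."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# State: 2-D person position; both sensors observe position directly.
H = np.eye(2)
x = np.array([0.0, 0.0])          # prior position estimate
P = np.eye(2)                     # prior covariance

# Sequentially fold in the legs detection, then the face detection.
legs_meas = np.array([1.0, 0.2])  # hypothetical laser leg detection
face_meas = np.array([0.9, 0.3])  # hypothetical camera face detection
x, P = kalman_update(x, P, legs_meas, np.eye(2) * 0.1, H)
x, P = kalman_update(x, P, face_meas, np.eye(2) * 0.3, H)
```

Processing each sensor's measurement in its own update, rather than stacking both into one vector, is what makes the fusion "sequential": the face update starts from the posterior left by the leg update.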
Real-time Spatial Detection and Tracking of Resources in a Construction Environment
Construction accidents involving heavy equipment, and poor decision making rooted in incomplete knowledge of the site environment, can both lead to work interruptions and costly delays. Supporting the construction environment with three-dimensional (3D) models generated in real time can help prevent accidents, and can also support management by modeling infrastructure assets in 3D. Such models can be integrated into the path planning of construction equipment operations for obstacle avoidance, or into a 4D model that simulates construction processes. Detecting and guiding resources, such as personnel, machines and materials, in and to the right place on time requires methods and technologies that supply information in real time. This paper presents research in real-time 3D laser scanning and modeling using scanning technology with a high range-frame update rate. Existing and emerging sensors and techniques in three-dimensional modeling are explained. The presented research successfully developed computational models and algorithms for the real-time detection, tracking, and three-dimensional modeling of static and dynamic construction resources, such as workforce, machines, equipment, and materials, based on a 3D video range camera. In particular, the proposed algorithm for rapidly modeling three-dimensional scenes is explained. Laboratory and outdoor field experiments that were conducted to validate the algorithm's performance, and their results, are discussed
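A common building block for rapidly modeling a 3-D scene from a range camera is voxelizing each frame's point cloud into an occupancy set; comparing occupancy across frames then separates static structure from dynamic resources. The sketch below illustrates that idea only; the grid resolution and the two toy frames are assumptions, not the paper's algorithm.

```python
import numpy as np

def voxelize(points, voxel_size=0.1):
    """Map an (N, 3) point cloud to the set of occupied voxel indices."""
    idx = np.floor(points / voxel_size).astype(int)
    # Unique voxel cells give a compact occupancy model of the frame.
    return {tuple(v) for v in idx}

# Two hypothetical frames: a voxel occupied in both frames is likely
# static structure; one occupied in only a single frame suggests a
# dynamic resource (worker, machine) moving through the scene.
frame_a = np.array([[0.05, 0.05, 0.0], [1.02, 0.5, 0.0]])
frame_b = np.array([[0.07, 0.04, 0.0], [2.5, 0.5, 0.0]])
static = voxelize(frame_a) & voxelize(frame_b)
dynamic = voxelize(frame_a) ^ voxelize(frame_b)
```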
A bank of unscented Kalman filters for multimodal human perception with mobile service robots
A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints.
In this paper, we propose a novel and efficient solution for the simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothing and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot.
Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solutions can improve the robot's perception and recognition of humans, providing a useful contribution toward future applications of service robotics
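The multi-hypothesis idea behind the filter bank can be illustrated with a simplified weight update over identity hypotheses. The identities, single height cue, and Gaussian models below are invented for the sketch; the paper's system runs a full UKF per hypothesis over several cues, including an explicit "unknown person" hypothesis, which is approximated here by a broad, uninformative model.

```python
import numpy as np

def gaussian_likelihood(z, mean, var):
    """Likelihood of observation z under a 1-D Gaussian model."""
    return np.exp(-0.5 * (z - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# One hypothesis per candidate identity (heights in metres are made up),
# plus an "unknown" hypothesis with a broad, uninformative model.
hypotheses = {"alice": (1.65, 0.01), "bob": (1.85, 0.01), "unknown": (1.75, 0.25)}
weights = {name: 1.0 / len(hypotheses) for name in hypotheses}

observed_height = 1.66  # hypothetical height measurement from the robot
for name, (mean, var) in hypotheses.items():
    weights[name] *= gaussian_likelihood(observed_height, mean, var)
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}

best = max(weights, key=weights.get)
```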
leave a trace - A People Tracking System Meets Anomaly Detection
Video surveillance has always had a negative connotation, partly because of the loss of privacy and because it does not automatically increase public safety. If it were able to detect atypical (i.e. dangerous) situations in real time, autonomously and anonymously, this could change. A prerequisite for this is the reliable automatic detection of possibly dangerous situations from video data. This is classically done by object extraction and tracking. From the derived trajectories, we then want to determine dangerous situations by detecting atypical trajectories. However, due to ethical considerations it is better to develop such a system on data in which no people are threatened or harmed, and in which they know that such a tracking system is installed. Another important point is that these situations do not occur very often in real, public CCTV areas and may be captured properly even less often. In the artistic project leave a trace, the tracked objects, people in the atrium of an institutional building, become actors and thus part of the installation. Real-time visualisation allows interaction by these actors, which in turn creates many atypical interaction situations on which we can develop our situation detection. The data set has evolved over three years and is therefore huge. In this article we describe the tracking system and several approaches for the detection of atypical trajectories
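One simple baseline for flagging atypical trajectories, sketched below, scores a track by its distance to the nearest "typical" track and flags it when that distance exceeds a threshold. The distance measure, threshold, and toy atrium paths are all assumptions for illustration; the article discusses several approaches, not necessarily this one.

```python
import numpy as np

def trajectory_distance(a, b):
    """Mean pointwise distance between two equal-length trajectories."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def is_atypical(track, typical_tracks, threshold=1.0):
    """Flag a track far from every known typical track."""
    return min(trajectory_distance(track, t) for t in typical_tracks) > threshold

# Hypothetical straight "typical" paths across the atrium, and one
# loitering track that stays in the middle of the space.
t = np.linspace(0, 10, 20)
typical = [np.stack([t, np.zeros_like(t)], axis=1),
           np.stack([np.zeros_like(t), t], axis=1)]
loiter = np.stack([np.full_like(t, 5.0), np.full_like(t, 5.0)], axis=1)
```

Nearest-neighbor scoring of whole trajectories is a common starting point; clustering the typical set first, or comparing sub-trajectories, are natural refinements.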
SLAM in Dynamic Environments: A Deep Learning Approach for Moving Object Tracking Using ML-RANSAC Algorithm
The important problem of Simultaneous Localization and Mapping (SLAM) in dynamic environments is less studied than the counterpart problem in static settings. In this paper, we present a solution to the feature-based SLAM problem in dynamic environments. We propose an algorithm that integrates SLAM with multi-target tracking (SLAMMTT) using a robust feature-tracking algorithm for dynamic environments. A novel implementation of the RANdom SAmple Consensus (RANSAC) method, referred to as multilevel-RANSAC (ML-RANSAC), is applied for multi-target tracking (MTT) within the Extended Kalman Filter (EKF) framework. We also apply machine learning to detect features in the input data and to distinguish moving from stationary objects. The data streams from LIDAR and vision sensors are fused in real time to detect objects and depth information. A practical experiment is designed to verify the performance of the algorithm in a dynamic environment. A unique feature of this algorithm is its ability to maintain the tracking of features even when the observations are intermittent, a situation in which many reported algorithms fail. Experimental validation indicates that the algorithm produces consistent estimates in a fast and robust manner, suggesting its feasibility for real-time applications
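ML-RANSAC builds on the classic RANSAC loop. A minimal single-level RANSAC line fit (not the paper's multilevel variant) is sketched below; the wall-plus-outliers data set is invented for illustration, standing in for static scene features contaminated by points from moving objects.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.2, rng=None):
    """Fit y = m*x + c by RANSAC, returning the model with most inliers."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                       # vertical sample pair; skip
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = int(np.sum(residuals < inlier_tol))
        if inliers > best_inliers:
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Static scene points on a wall (y = 2x + 1) plus moving-object outliers.
xs = np.linspace(0, 5, 30)
wall = np.stack([xs, 2 * xs + 1], axis=1)
outliers = np.array([[1.0, 9.0], [2.0, -4.0], [3.5, 12.0]])
(m, c), n_in = ransac_line(np.vstack([wall, outliers]))
```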
Target tracking using laser range finder with occlusion
Master's thesis in Mechanical Engineering (Mestrado em Engenharia Mecânica). ABSTRACT: In this work, a technique for the detection and tracking of multiple moving targets in situations of strong occlusion using a laser rangefinder is presented. The process starts with the application of temporal filters to the raw data in order to remove sensor noise, followed by a multi-phase segmentation with the goal of overcoming occlusions. The resulting segments represent objects in the environment. For each segment a representative point is defined; this point is calculated to represent the object's position in the world while remaining relatively invariant to rotation and shape changes of the object. To perform the tracking, a list of objects to follow is maintained; all visible objects are associated with objects from this list using search techniques based on the predicted motion of the objects. An elliptical search zone is defined for each object; it is within this zone that the association is performed. The motion prediction is based on two motion models, one with constant velocity and the other with constant acceleration, and on the application of Kalman filters. The algorithm was tested in diverse real conditions and was shown to be robust and effective in tracking people, even in situations of extensive occlusion
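The prediction-plus-elliptical-gate association can be sketched as below. The sketch uses only the constant-velocity model (the thesis also uses constant acceleration) and realizes the elliptical zone as a Mahalanobis-distance gate; the covariance, gate size, and measurement values are assumptions for the example.

```python
import numpy as np

def predict_cv(pos, vel, dt):
    """Constant-velocity prediction of the next position."""
    return pos + vel * dt

def in_elliptical_gate(measurement, predicted, S, gate=9.21):
    """Accept a measurement whose squared Mahalanobis distance to the
    prediction lies inside the ellipse defined by covariance S
    (9.21 is the 99% chi-square threshold for 2 degrees of freedom)."""
    d = measurement - predicted
    return float(d @ np.linalg.inv(S) @ d) < gate

pos, vel, dt = np.array([0.0, 0.0]), np.array([1.0, 0.5]), 0.1
pred = predict_cv(pos, vel, dt)
S = np.diag([0.04, 0.04])            # assumed innovation covariance
near = np.array([0.15, 0.02])        # plausible continuation of the track
far = np.array([2.0, 2.0])           # segment belonging to another object
accept_near = in_elliptical_gate(near, pred, S)
accept_far = in_elliptical_gate(far, pred, S)
```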
Data Fusion of Laser Range Finder and Video Camera
In this project, a technique for fusing data from multiple sensors is developed in order to detect, track and classify targets against a static background. The proposed method utilizes a single video camera and a laser range finder to determine the range of generally specified targets or objects and to classify those particular targets. The module aims to detect objects or obstacles and provide the distance from the module to the target in a real-time application using live video. Data fusion of the measurements collected from the laser range finder and the video camera is performed in MATLAB. Background subtraction is used in this project to perform object detection
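The background-subtraction step can be sketched as a simple frame-difference threshold (shown here in Python rather than the project's MATLAB); the frame sizes, intensity values, and threshold are invented. Real systems add ongoing background-model updates and morphological cleanup of the mask.

```python
import numpy as np

def detect_foreground(frame, background, threshold=25):
    """Mask of pixels that differ from the background model beyond threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

# Hypothetical 8-bit grayscale frames: a flat background and one bright blob.
background = np.full((4, 4), 50, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # target enters the scene
mask = detect_foreground(frame, background)
```

The resulting mask marks candidate object pixels; their image coordinates can then be matched against the laser range finder's bearings to attach a distance to each detection.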
Spatial context-aware person-following for a domestic robot
Domestic robots are a focus of research as service providers in households and even as robotic companions that share the living space with humans. A major capability required of mobile domestic robots is the joint exploration of space. One challenge in this task is how to let robots move through space in reasonable, socially acceptable ways, so that their motion supports interaction and communication as part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior that considers these social aspects and applied it together with a multi-modal person-tracking method to switch between three basic following approaches, namely direction-following, path-following and parallel-following. These are derived from observations of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. A combination of the elementary behaviors is performed in real time with our mobile robot in different environments. First experimental results are provided to demonstrate the practicability of the proposed approach
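The switching logic between the three elementary behaviors can be sketched as a rule over the two context cues the abstract names, free space and the human's relative position. The thresholds and cue representation below are invented for illustration; the paper derives its actual switching rules from observed human-human following schemes.

```python
def select_following_behavior(free_space_width, human_bearing_deg):
    """Pick one of the three elementary following behaviors from
    spatial context (corridor width in metres) and the bearing of
    the person relative to the robot's heading (degrees)."""
    if free_space_width < 1.5:
        # Narrow passage: stay on the person's path.
        return "path-following"
    if abs(human_bearing_deg) < 20:
        # Person roughly ahead in open space: head toward them directly.
        return "direction-following"
    # Open space with the person beside the robot: walk alongside.
    return "parallel-following"
```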