12 research outputs found
Deep Detection of People and their Mobility Aids for a Hospital Robot
Robots operating in populated environments encounter many different types of
people, some of whom might have an advanced need for cautious interaction,
because of physical impairments or their advanced age. Robots therefore need to
recognize such advanced demands to provide appropriate assistance, guidance or
other forms of support. In this paper, we propose a depth-based perception
pipeline that estimates the position and velocity of people in the environment
and categorizes them according to the mobility aids they use: pedestrian,
person in wheelchair, person in a wheelchair with a person pushing them, person
with crutches and person using a walker. We present a fast region proposal
method that feeds a Region-based Convolutional Network (Fast R-CNN). With this,
we speed up the object detection process by a factor of seven compared to a
dense sliding window approach. We furthermore propose a probabilistic position,
velocity and class estimator to smooth the CNN's detections and account for
occlusions and misclassifications. In addition, we introduce a new hospital
dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm
that our pipeline successfully keeps track of people and their mobility aids,
even in challenging situations with multiple people from different categories
and frequent occlusions. Videos of our experiments and the dataset are
available at http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
Comment: 7 pages, ECMR 2017, dataset and videos:
http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
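The probabilistic position, velocity and class estimator described in the abstract can be sketched as a constant-velocity Kalman filter combined with a discrete Bayesian belief over the mobility-aid classes. This is an illustrative reconstruction under common assumptions, not the authors' implementation; all parameter values here are hypothetical:

```python
import numpy as np

class PersonTrack:
    """Constant-velocity Kalman filter plus a discrete belief over
    mobility-aid classes (illustrative sketch, not the paper's code)."""

    CLASSES = ["pedestrian", "wheelchair", "push_wheelchair", "crutches", "walker"]

    def __init__(self, x, y, dt=0.1):
        self.state = np.array([x, y, 0.0, 0.0])              # [x, y, vx, vy]
        self.P = np.eye(4)                                    # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                      # motion model
        self.H = np.eye(2, 4)                                 # observe position only
        self.Q = 0.01 * np.eye(4)                             # process noise (assumed)
        self.R = 0.10 * np.eye(2)                             # measurement noise (assumed)
        self.class_belief = np.full(len(self.CLASSES), 1.0 / len(self.CLASSES))

    def predict(self):
        # Propagate state and covariance; handles frames with occluded detections
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, class_likelihood):
        # Kalman position update from a depth-based detection
        y = np.asarray(z) - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        # Bayesian class update smooths per-frame CNN misclassifications
        self.class_belief *= np.asarray(class_likelihood)
        self.class_belief /= self.class_belief.sum()
```

Repeated updates with noisy but consistently biased per-frame class scores let the belief converge to the correct category even when individual CNN detections are wrong.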
LoRa-Based System for Tracking Runners in Cross Country Races
[EN] In recent years, there has been an important trend in the organization of cross country races and popular races in which hundreds of people usually participate. In these events, runners often subject their bodies to extreme conditions that can lead to various types of indisposition, and they can also suffer falls. Currently, the electronic systems used in this type of race only record whether a runner has passed through a checkpoint. However, it is necessary to implement systems that allow the organization to monitor the runners and know their status at all times. For this reason, this paper proposes the design of a low-cost system for monitoring and controlling runners in this type of event. The system is formed by a network architecture in infrastructure mode based on Low-Power Wide-Area Network (LPWAN) technology. Each runner will carry an electronic device that reports their position and the vital signs to be monitored. It will also incorporate an S.O.S. button that allows sending a warning to the organization in order to help the person. All these data will be sent through the network to a database that allows the organization and the public attending the race to check where each runner is and the history of their vital signs. This paper presents the proposed design of our system, together with the practical experiments we have carried out with the devices, which led to this design.
This work has been partially supported by the Ministerio de Ciencia, Innovación y Universidades through the Ayudas para la adquisición de equipamiento científico-técnico, Subprograma estatal de infraestructuras de investigación y equipamiento científico-técnico (Plan Estatal I+D+i 2017-2020) (project EQC2018-004988-P).
Sendra, S.; Romero-Díaz, P.; García-Navas, JL.; Lloret, J. (2019). LoRa-Based System for Tracking Runners in Cross Country Races. MDPI. 1-6. https://doi.org/10.3390/ecsa-6-066291
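A compact uplink payload is central to any LoRa design, since frames are small and airtime is limited. The sketch below shows one plausible binary encoding of the data the abstract mentions (position, vital signs, S.O.S. flag); the field layout and names are illustrative assumptions, not the authors' actual frame format:

```python
import struct

# Illustrative LoRa uplink payload for one runner (hypothetical layout):
# runner id (uint16), latitude/longitude (float32 each), heart rate in bpm
# (uint8), SpO2 % (uint8), SOS flag (uint8) -> 13 bytes, big-endian, no padding.
FMT = ">HffBBB"

def encode(runner_id, lat, lon, bpm, spo2, sos):
    """Pack one runner's status into a 13-byte frame."""
    return struct.pack(FMT, runner_id, lat, lon, bpm, spo2, 1 if sos else 0)

def decode(payload):
    """Unpack a frame back into a dictionary for the tracking database."""
    rid, lat, lon, bpm, spo2, sos = struct.unpack(FMT, payload)
    return {"id": rid, "lat": lat, "lon": lon, "bpm": bpm,
            "spo2": spo2, "sos": bool(sos)}
```

At 13 bytes, the frame stays well under typical LoRaWAN payload limits even at the slowest data rates, leaving headroom for additional sensor fields.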
Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters
With the rise of end-to-end learning through deep learning, person detectors
and re-identification (ReID) models have recently become very strong.
Multi-camera multi-target (MCMT) tracking has not fully gone through this
transformation yet. We intend to take another step in this direction by
presenting a theoretically principled way of integrating ReID with tracking
formulated as an optimal Bayes filter. This conveniently side-steps the need
for data-association and opens up a direct path from full images to the core of
the tracker. While the results are still sub-par, we believe that this new,
tight integration opens many interesting research opportunities and leads the
way towards full end-to-end tracking from raw pixels.
Comment: First two authors have equal contribution. This is initial work in a new direction, not a benchmark-beating method. v2 only adds acknowledgements and fixes a typo in e-mail
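The key idea of side-stepping data association can be illustrated with a discrete (histogram) Bayes filter over map cells, where a per-cell ReID appearance similarity serves directly as the measurement likelihood instead of a hard detection-to-track assignment. This is a one-dimensional toy sketch of the principle, not the paper's formulation:

```python
import numpy as np

def bayes_filter_step(belief, motion_kernel, reid_likelihood):
    """One predict/update cycle of a discrete Bayes filter over map cells.

    The per-cell ReID similarity acts directly as the measurement
    likelihood, so no explicit data-association step is needed
    (illustrative sketch only)."""
    # Predict: diffuse the belief with the motion model
    predicted = np.convolve(belief, motion_kernel, mode="same")
    # Update: weight each cell by its appearance similarity, then normalize
    posterior = predicted * reid_likelihood
    return posterior / posterior.sum()
```

Because every cell is weighted by its appearance score, ambiguous or missed detections simply yield a flatter likelihood rather than a wrong hard assignment.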
A multi-modal person perception framework for socially interactive mobile service robots
In order to meet the increasing demands of mobile service robot applications, a dedicated perception module is an essential requirement for interaction with users in real-world scenarios. In particular, multi-sensor fusion and human re-identification are recognized as active research fronts. In this paper we contribute to the topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that, besides the position, can incorporate face and color-based appearance features in order to re-identify persons when tracking is interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification and attribute estimation. This allows a stable estimate of the mobile robot's user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvements when face and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
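A probability-based data association of the kind described above can be sketched as a weighted combination of a position gate with face and color appearance similarities. The weights, distance scale, and feature names below are illustrative assumptions, not values from the paper:

```python
import math

def association_probability(track, detection, w_pos=0.5, w_face=0.3, w_color=0.2):
    """Score how likely a detection belongs to a track by combining a
    Gaussian position gate with face/color appearance similarity
    (weights and feature names are hypothetical)."""
    d = math.dist(track["pos"], detection["pos"])
    p_pos = math.exp(-0.5 * (d / 0.5) ** 2)       # position gate, 0.5 m scale (assumed)
    p_face = detection.get("face_sim", 0.5)        # 0.5 = uninformative when absent
    p_color = detection.get("color_sim", 0.5)
    return w_pos * p_pos + w_face * p_face + w_color * p_color
```

When a track is lost behind an occlusion, the position gate becomes uninformative and the appearance terms dominate, which is what enables re-identification once the person reappears elsewhere.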
Confidence-Aware Pedestrian Tracking Using a Stereo Camera
Pedestrian tracking is a significant problem in autonomous driving. The majority of studies carry out tracking in the image domain, which is not sufficient for many realistic applications like path planning, collision avoidance, and autonomous navigation. In this study, we address pedestrian tracking using stereo images and tracking-by-detection. Our framework comes in three primary phases: (1) people are detected in image space by the Mask R-CNN detector and their positions in 3D space are computed using stereo information; (2) corresponding detections are assigned to each other across consecutive frames based on visual characteristics and 3D geometry; and (3) the current positions of pedestrians are corrected using their previous states with an extended Kalman filter. We use our tracking-to-confirm-detection method, in which detections are treated differently depending on their confidence metrics, in order to obtain a high recall value while keeping the number of false positives low. Whereas existing methods consider all target trajectories to have equal accuracy, we estimate a confidence value for each trajectory at every epoch. Thus, depending on their confidence values, the targets can contribute differently to the whole tracking system. The performance of our approach is evaluated on the KITTI benchmark dataset. It shows promising results comparable to those of other state-of-the-art methods.
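Phase (1) relies on the standard pinhole stereo relation Z = f·B/d to lift image detections into 3D. A minimal sketch of that back-projection follows; the intrinsic values used in the example are hypothetical, not calibration data from the paper:

```python
def stereo_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with known disparity into camera-frame 3D
    coordinates via the pinhole stereo model (illustrative sketch).

    fx, fy: focal lengths in pixels; cx, cy: principal point;
    baseline: distance between the stereo cameras in meters."""
    Z = fx * baseline / disparity      # depth from disparity
    X = (u - cx) * Z / fx              # lateral offset
    Y = (v - cy) * Z / fy              # vertical offset
    return X, Y, Z
```

The resulting (X, Y, Z) position is what the extended Kalman filter in phase (3) then smooths across frames; note that depth uncertainty grows quadratically with Z, which is one motivation for tracking per-trajectory confidence.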
Multi-sensor multi-person tracking on a mobile robot platform
Service robots need to be aware of persons in their vicinity in order to interact with them. People tracking enables the robot to perceive persons by fusing the information of several sensors. Most robots rely on laser range scanners and RGB cameras for this task. The thesis focuses on the detection and tracking of heads. This allows the robot to establish eye contact, which makes interactions feel more natural.
Developing a fast and reliable pose-invariant head detector is challenging. The head detector proposed in this thesis works well on frontal heads, but is not fully pose-invariant. The thesis therefore explores adaptive tracking to keep track of heads that do not face the robot. Finally, the head detector and adaptive tracker are combined within a new people tracking framework, and experiments show its effectiveness compared to a state-of-the-art system.