Deep Detection of People and their Mobility Aids for a Hospital Robot
Robots operating in populated environments encounter many different types of
people, some of whom might have an advanced need for cautious interaction,
because of physical impairments or their advanced age. Robots therefore need to
recognize such advanced demands to provide appropriate assistance, guidance or
other forms of support. In this paper, we propose a depth-based perception
pipeline that estimates the position and velocity of people in the environment
and categorizes them according to the mobility aids they use: pedestrian,
person in a wheelchair, person in a wheelchair with a person pushing them, person
with crutches and person using a walker. We present a fast region proposal
method that feeds a Region-based Convolutional Network (Fast R-CNN). With this,
we speed up the object detection process by a factor of seven compared to a
dense sliding window approach. We furthermore propose a probabilistic position,
velocity and class estimator to smooth the CNN's detections and account for
occlusions and misclassifications. In addition, we introduce a new hospital
dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm
that our pipeline successfully keeps track of people and their mobility aids,
even in challenging situations with multiple people from different categories
and frequent occlusions. Videos of our experiments and the dataset are
available at http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
Comment: 7 pages, ECMR 2017, dataset and videos:
http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
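The abstract does not spell out the form of the probabilistic position, velocity and class estimator, but a common realization of such a tracker is a constant-velocity Kalman filter that coasts through occlusions on its prediction. A minimal sketch (all noise parameters, time step and the walking scenario are illustrative assumptions, not the paper's values):

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=0.1, r=0.1):
    """One predict/update cycle of a constant-velocity Kalman filter.

    State x = [px, py, vx, vy]; measurement z = detected [px, py].
    q (process noise scale) and r (measurement noise std) are hypothetical.
    """
    F = np.array([[1., 0., dt, 0.],
                  [0., 1., 0., dt],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])
    Q = q * np.eye(4)
    R = r ** 2 * np.eye(2)

    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: fuse the detection; z=None models an occluded frame,
    # in which case the tracker keeps coasting on its prediction.
    if z is not None:
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P

# A person walking at 1 m/s along x, observed every dt = 0.1 s,
# with every 5th detection dropped to mimic an occlusion.
x, P = np.zeros(4), np.eye(4)
for t in range(1, 51):
    z = np.array([0.1 * t, 0.0]) if t % 5 else None
    x, P = kalman_step(x, P, z)
print(round(float(x[2]), 2))  # estimated vx, close to the true 1.0 m/s
```

The class estimate would be smoothed analogously, e.g. with a discrete Bayes filter over the five mobility-aid categories; only the position/velocity part is sketched here.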
Combining LiDAR Space Clustering and Convolutional Neural Networks for Pedestrian Detection
Pedestrian detection is an important component for safety of autonomous
vehicles, as well as for traffic and street surveillance. There are extensive
benchmarks on this topic and it has been shown to be a challenging problem when
applied on real use-case scenarios. In purely image-based pedestrian detection
approaches, the state-of-the-art results have been achieved with convolutional
neural networks (CNN) and surprisingly few detection frameworks have been built
upon multi-cue approaches. In this work, we develop a new pedestrian detector
for autonomous vehicles that exploits LiDAR data, in addition to visual
information. In the proposed approach, LiDAR data is utilized to generate
region proposals by processing the three-dimensional point cloud that it
provides. These candidate regions are then further processed by a
state-of-the-art CNN classifier that we have fine-tuned for pedestrian
detection. We have extensively evaluated the proposed detection process on the
KITTI dataset. The experimental results show that the proposed LiDAR space
clustering approach provides a very efficient way of generating region
proposals leading to higher recall rates and fewer misses for pedestrian
detection. This indicates that LiDAR data can provide auxiliary information for
CNN-based approaches.
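The abstract does not specify the clustering algorithm; a minimal Euclidean-clustering sketch conveys the idea of turning a 3D point cloud into candidate regions for the CNN. The eps/min_pts thresholds and the toy two-blob scene are assumptions:

```python
import numpy as np

def euclidean_clusters(points, eps=0.6, min_pts=5):
    """Group points connected by chains of mutual distance < eps.

    A simple BFS stand-in for LiDAR space clustering; the paper's
    actual algorithm and thresholds are not given in the abstract.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) < eps]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            members.extend(near)
        if len(members) >= min_pts:          # discard sparse noise
            clusters.append(points[members])
    return clusters

def to_proposal(cluster):
    """Axis-aligned 3D box around a cluster; projecting it into the
    camera image would yield the region proposal fed to the CNN."""
    return cluster.min(axis=0), cluster.max(axis=0)

rng = np.random.default_rng(0)
person = rng.normal([2.0, 0.0, 1.0], 0.15, size=(50, 3))  # compact blob
pole = rng.normal([8.0, 4.0, 1.0], 0.15, size=(50, 3))    # distant blob
clusters = euclidean_clusters(np.vstack([person, pole]))
print(len(clusters))  # two well-separated structures -> two proposals
```

Because only a handful of cluster boxes reach the classifier instead of thousands of sliding windows, this is where the recall/efficiency gain reported on KITTI comes from.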
Real-time visual perception: detection and localisation of static and moving objects from a moving stereo rig
We present a novel method for scene reconstruction and moving object detection and tracking, using extensive point tracking (typically more than 4000 points per frame) over time. The current neighbourhood is reconstructed as a 3D point cloud, which enables additional features (ground detection, path planning, obstacle detection). The reconstruction framework takes moving objects into account, and tracking over time allows for trajectory and speed estimation.
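For a rectified stereo rig, the depth of each tracked point follows directly from its disparity, which is the basis of any such point-cloud reconstruction. A minimal sketch (the focal length and baseline values are illustrative, not the paper's rig):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel offset of the point between the left
    and right views; larger disparity means the point is closer.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 700 px focal length, 0.5 m baseline.
z = stereo_depth(disparity_px=35.0, focal_px=700.0, baseline_m=0.5)
print(z)  # 10.0 (metres)
```

Applying this per tracked feature, frame after frame, yields the evolving 3D point cloud from which ground, obstacles and moving objects are then segmented.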
Spatiotemporal Stacked Sequential Learning for Pedestrian Detection
Pedestrian classifiers decide which image windows contain a pedestrian. In
practice, such classifiers provide a relatively high response at neighbor
windows overlapping a pedestrian, while the responses around potential false
positives are expected to be lower. An analogous reasoning applies for image
sequences. If there is a pedestrian located within a frame, the same pedestrian
is expected to appear close to the same location in neighbor frames. Therefore,
such a location has chances of receiving high classification scores during
several frames, while false positives are expected to be more spurious. In this
paper we propose to exploit such correlations for improving the accuracy of
base pedestrian classifiers. In particular, we propose to use two-stage
classifiers which not only rely on the image descriptors required by the base
classifiers but also on the response of such base classifiers in a given
spatiotemporal neighborhood. More specifically, we train pedestrian classifiers
using a stacked sequential learning (SSL) paradigm. We use a new pedestrian
dataset we have acquired from a car to evaluate our proposal at different frame
rates. We also test on a well known dataset: Caltech. The obtained results show
that our SSL proposal boosts detection accuracy significantly with a minimal
impact on the computational cost. Interestingly, SSL improves accuracy the most
in the most dangerous situations, i.e. when a pedestrian is close to the camera.
Comment: 8 pages, 5 figures, 1 table
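The core of the SSL idea is that the second-stage classifier sees not only the window's descriptor but also the base classifier's responses in a spatiotemporal neighborhood. A minimal feature-stacking sketch (the cubic neighborhood shape, zero padding and array layout are assumptions, not the paper's exact design):

```python
import numpy as np

def ssl_features(desc, scores, t, u, v, radius=1):
    """Augment a window's descriptor with base-classifier responses
    from its spatiotemporal neighborhood (stacked sequential learning).

    scores[t, u, v] holds the base classifier's response for the window
    at frame t and spatial cell (u, v); positions outside the score
    volume are padded with 0.0.
    """
    T, U, V = scores.shape
    neigh = []
    for dt in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            for dv in range(-radius, radius + 1):
                tt, uu, vv = t + dt, u + du, v + dv
                if 0 <= tt < T and 0 <= uu < U and 0 <= vv < V:
                    neigh.append(scores[tt, uu, vv])
                else:
                    neigh.append(0.0)
    return np.concatenate([desc, np.array(neigh)])

desc = np.zeros(8)                                # stand-in descriptor
scores = np.random.default_rng(1).random((5, 4, 4))
feat = ssl_features(desc, scores, t=2, u=1, v=1)
print(feat.shape)  # 8 descriptor dims + 27 neighborhood responses
```

The stacked classifier trained on these augmented features can then suppress spurious single-frame false positives while reinforcing detections that persist across neighboring frames.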
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetics
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of the cold weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed up techniques, and the recent state of the art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an in-depth
analysis of their challenges as well as technical improvements in recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible
publication
Taking a Deeper Look at Pedestrians
In this paper we study the use of convolutional neural networks (convnets)
for the task of pedestrian detection. Despite their recent diverse successes,
convnets historically underperform compared to other pedestrian detectors. We
deliberately omit explicitly modelling the problem into the network (e.g. parts
or occlusion modelling) and show that we can reach competitive performance
without bells and whistles. In a wide range of experiments we analyse small and
big convnets, their architectural choices, parameters, and the influence of
different training data, including pre-training on surrogate tasks.
We present the best convnet detectors on the Caltech and KITTI dataset. On
Caltech our convnets reach top performance both for the Caltech1x and
Caltech10x training setup. Using additional data at training time our strongest
convnet model is competitive even to detectors that use additional data
(optical flow) at test time.
Fast traffic sign recognition using color segmentation and deep convolutional networks
The use of Computer Vision techniques for the automatic recognition of road
signs is fundamental for the development of intelligent vehicles and advanced
driver assistance systems. In this paper, we describe a procedure based on
color segmentation, Histogram of Oriented Gradients (HOG), and Convolutional
Neural Networks (CNN) for detecting and classifying road signs. Detection is
speeded up by a preprocessing step to reduce the search space, while
classification is carried out by using a Deep Learning technique. A
quantitative evaluation of the proposed approach has been conducted on the
well-known German Traffic Sign data set and on the novel Data set of Italian
Traffic Signs (DITS), which is publicly available and contains challenging
sequences captured in adverse weather conditions and in an urban scenario at
night-time. Experimental results demonstrate the effectiveness of the proposed
approach in terms of both classification accuracy and computational speed.
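The color-segmentation preprocessing step can be illustrated with a crude red-dominance threshold that keeps only pixels plausibly belonging to a red sign border, shrinking the search space before HOG/CNN classification. The specific thresholds and the toy image are assumptions, not the paper's values:

```python
import numpy as np

def red_sign_mask(rgb):
    """Keep pixels whose red channel clearly dominates green and blue.

    A stand-in for the paper's (unspecified) color-segmentation rule;
    the 100/40 thresholds are illustrative.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 100) & (r > g + 40) & (r > b + 40)

# A 4x4 black image with a 2x2 red patch standing in for a sign border.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [200, 30, 30]
mask = red_sign_mask(img)
print(int(mask.sum()))  # 4 pixels survive the threshold
```

Connected regions of the surviving mask would then be cropped and passed to the HOG + CNN stage, which is where the reported classification accuracy comes from.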