Person localization using sensor information fusion
Nowadays, the remarkable growth of the mobile device market has led to the need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices only have a GPS (Global Positioning System) chip to retrieve location. To overcome this limitation and to provide localization everywhere (even where no structured environment exists), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost and low-power hardware components. The system's innovation is the information fusion and the use of probabilistic methods to learn a person's gait behavior in order to correct, in real time, the drift errors given by the sensors.

This work is part-funded by ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-028980 (PTDC/EEI-SII/1386/2012). Ricardo also acknowledges FCT for the support of his work through the PhD grant (SFRH/DB/70248/2010).
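The pedestrian dead reckoning / GPS combination described above can be sketched as a minimal step-and-heading model with a complementary filter. The step length, blend weight, and function names below are illustrative assumptions for the sketch, not the paper's actual method (which learns gait behavior probabilistically):

```python
import math

def pdr_step(pos, step_length, heading_rad):
    """Advance position by one detected step (dead reckoning)."""
    x, y = pos
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))

def fuse_with_gps(pdr_pos, gps_pos, alpha=0.9):
    """Complementary filter: blend the drifting PDR estimate
    with a noisy but drift-free GPS fix (alpha is assumed)."""
    return tuple(alpha * p + (1 - alpha) * g
                 for p, g in zip(pdr_pos, gps_pos))

# Walk three 0.7 m steps heading north-east, then correct with a GPS fix.
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_step(pos, step_length=0.7, heading_rad=math.radians(45))
pos = fuse_with_gps(pos, gps_pos=(1.5, 1.5))
```

In a real system the blend weight would depend on GPS signal quality, and the step length would be adapted per user rather than fixed.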
People tracking by cooperative fusion of RADAR and camera sensors
Accurate 3D tracking of objects from a monocular camera is challenging due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR Range-Azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a Particle Filter tracker. Depending on the association outcome, particles are updated either using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets continue to be tracked even if the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method clearly outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
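The joint-likelihood step can be illustrated with a toy computation: a camera detection constrains azimuth only (depth is lost), so its back-projection is a ridge over range, while a RADAR return peaks at a specific range-azimuth cell. The grid sizes and Gaussian noise models below are assumptions for the sketch, not the paper's parameters:

```python
import numpy as np

# Toy Range-Azimuth grid (assumed resolution; illustration only).
ranges = np.linspace(1.0, 30.0, 60)               # metres
azimuths = np.radians(np.linspace(-45, 45, 91))   # radians, 1 deg spacing

def radar_likelihood(target_r, target_az, sigma_r=0.5, sigma_az=np.radians(2)):
    """Gaussian bump in the Range-Azimuth map around a RADAR return."""
    R, A = np.meshgrid(ranges, azimuths, indexing="ij")
    return np.exp(-0.5 * (((R - target_r) / sigma_r) ** 2
                          + ((A - target_az) / sigma_az) ** 2))

def camera_likelihood(bearing, sigma_az=np.radians(3)):
    """A camera detection constrains azimuth only, so its
    back-projection is constant over range."""
    A = np.meshgrid(ranges, azimuths, indexing="ij")[1]
    return np.exp(-0.5 * ((A - bearing) / sigma_az) ** 2)

# Fuse a RADAR return at (10 m, 5 deg) with a camera bearing of 5 deg.
joint = radar_likelihood(10.0, np.radians(5)) * camera_likelihood(np.radians(5))
r_idx, a_idx = np.unravel_index(np.argmax(joint), joint.shape)
peak = (ranges[r_idx], np.degrees(azimuths[a_idx]))  # candidate target
```

Peaks like `peak` would then seed the Particle Filter; sampling `joint` directly corresponds to the Tracking Before Detection mode mentioned above.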
Cooperative Relative Positioning of Mobile Users by Fusing IMU Inertial and UWB Ranging Information
Relative positioning between multiple mobile users is essential for many applications, such as search and rescue in disaster areas or human social interaction. The inertial measurement unit (IMU) is promising for determining the change of position over short periods of time, but it is very sensitive to error accumulation over long-term runs. By equipping the mobile users with a ranging unit, e.g. ultra-wideband (UWB), it is possible to achieve accurate relative positioning by trilateration-based approaches. Compared to vision- or laser-based sensors, UWB does not need to be within line-of-sight and provides accurate distance estimation. However, UWB does not provide any bearing information and its communication range is limited; thus UWB alone cannot determine the user location without ambiguity. In this paper, we propose an approach that combines IMU inertial and UWB ranging measurements for relative positioning between multiple mobile users without knowledge of the infrastructure. We incorporate the UWB and IMU measurements into a probabilistic framework, which allows a group of mobile users to be positioned cooperatively and to recover from positioning failures. We have conducted extensive experiments to demonstrate the benefits of incorporating IMU inertial and UWB ranging measurements.

Comment: accepted by ICRA 201
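The trilateration step that UWB ranging enables can be sketched as a linear least-squares solve: subtracting the first range equation from the others removes the quadratic terms. The peer positions below are hypothetical, and a real system would weight the ranges by their noise and fuse the result with the IMU prediction:

```python
import math

def trilaterate(anchors, dists):
    """2-D position from ranges to peers at known relative positions.
    Linearized by subtracting the first range equation (standard trick)."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations A^T A p = A^T b by hand (no deps).
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * v for r, v in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y

# Three peers at known relative positions; exact ranges to a user at (2, 1).
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
dists = [math.dist(a, (2.0, 1.0)) for a in anchors]
est = trilaterate(anchors, dists)
```

With fewer than three in-range peers this system is underdetermined, which is exactly the ambiguity the abstract says the IMU measurements help resolve.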
RGB-D datasets using Microsoft Kinect or similar sensors: a survey
RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, and the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the introduction of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and the difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
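As a quick illustration of what the depth channel adds, a depth map can be back-projected into a 3-D point cloud with the pinhole camera model, a common first step in the SLAM and pose-estimation applications the survey covers. The intrinsics below are rough Kinect-v1-like placeholders, not values from the survey:

```python
def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (nested list of metres, indexed [v][u])
    into 3-D points using assumed pinhole intrinsics."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # zero depth = missing measurement in Kinect data
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A toy 2x2 depth patch at 1 m.
pts = depth_to_points([[1.0, 1.0], [1.0, 1.0]])
```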
Multispectral Palmprint Encoding and Recognition
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the features, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of the palmprint as a reliable and promising biometric. All source code is publicly available.

Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
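The binary hash table idea can be sketched with toy fixed-width codes matched by Hamming distance. The 16-bit code width and prefix-bucketing scheme below are illustrative assumptions, not the paper's actual structure:

```python
from collections import defaultdict

CODE_BITS = 16      # toy code width; real palmprint codes are far longer
PREFIX_BITS = 4     # bucket key = the top 4 bits of each code

def build_table(gallery):
    """gallery: {identity: CODE_BITS-wide int} -> prefix-bucketed table."""
    table = defaultdict(list)
    for ident, code in gallery.items():
        table[code >> (CODE_BITS - PREFIX_BITS)].append((ident, code))
    return table

def match(table, probe):
    """Search only the probe's bucket; rank candidates by Hamming distance."""
    bucket = table[probe >> (CODE_BITS - PREFIX_BITS)]
    return min(bucket, key=lambda ic: bin(ic[1] ^ probe).count("1"),
               default=None)

gallery = {"alice": 0b1010110011001111, "bob": 0b0101001100110000}
table = build_table(gallery)
best = match(table, 0b1010110011001101)  # 1 bit away from alice's code
```

Note that naive prefix bucketing misses matches whose prefix bits are corrupted; a practical design would probe neighboring buckets or use multiple hash tables.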