On using gait to enhance frontal face extraction
Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. Surveillance environments require handling pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory, with super-resolution analysis. We use region- and distance-based refinement of head pose estimation, and we develop a direct mapping to relate the 2-D image with a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
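The abstract's "direct mapping to relate the 2-D image with a 3-D model" and the "looming effect" can be illustrated with a minimal pinhole-camera projection. This is a generic sketch, not the paper's actual model; the focal length, principal point, and 3-D points are illustrative assumptions.

```python
# Minimal pinhole-camera sketch of a 2-D/3-D mapping (illustrative only;
# focal length f and principal point (cx, cy) are assumed values).
import numpy as np

def project(point_3d, f=500.0, cx=320.0, cy=240.0):
    """Project a 3-D point (camera coordinates, Z pointing forward)
    onto the image plane, returning pixel coordinates."""
    X, Y, Z = point_3d
    return np.array([f * X / Z + cx, f * Y / Z + cy])

# The "looming effect": as the subject walks toward the camera (Z
# shrinks), the same head offset projects farther from the image centre,
# so the face region grows and the crop must scale accordingly.
near = project((0.1, -0.2, 2.0))   # subject 2 m away
far = project((0.1, -0.2, 4.0))    # subject 4 m away
print(near, far)
```

The nearer point lands farther from the principal point than the farther one, which is why a fixed-size face crop fails as the subject approaches.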
Acoustical Ranging Techniques in Embedded Wireless Sensor Networked Devices
Location sensing provides endless opportunities for a wide range of applications in GPS-obstructed environments, where there is typically a need for a higher degree of accuracy. In this article, we focus on robust range estimation, an important prerequisite for fine-grained localization. Motivated by the promise of acoustics in delivering high ranging accuracy, we present the design, implementation, and evaluation of acoustic (both ultrasound and audible) ranging systems. We distill the limitations of acoustic ranging and present efficient signal designs and detection algorithms to overcome the challenges of coverage, range, accuracy/resolution, tolerance to the Doppler effect, and audible intensity. We evaluate our proposed techniques experimentally on TWEET, a low-power platform purpose-built for acoustic ranging applications. Our experiments demonstrate an operational range of 20 m (outdoors) and an average accuracy of 2 cm in the ultrasound domain. Finally, we present the design of an audible-range acoustic tracking service that combines the benefits of a near-inaudible acoustic broadband chirp with an approximately twofold increase in Doppler tolerance to achieve better performance.
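Chirp-based acoustic ranging of the kind described above typically works by matched filtering: cross-correlate the received signal with the transmitted chirp template and convert the correlation peak's delay into a distance via the speed of sound. The sketch below is a generic illustration under assumed parameters (sample rate, chirp band, temperature-dependent speed of sound); it is not the TWEET platform's implementation.

```python
# Generic chirp time-of-flight ranging sketch (assumed parameters, not
# the paper's actual signal design).
import numpy as np

FS = 48_000    # sample rate in Hz (assumed)
C = 343.0      # speed of sound in air, m/s at ~20 C (assumed)

def make_chirp(f0=18_000.0, f1=22_000.0, dur=0.01):
    """Linear chirp sweeping f0 -> f1 Hz over `dur` seconds."""
    t = np.arange(int(FS * dur)) / FS
    k = (f1 - f0) / dur                       # sweep rate, Hz/s
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def estimate_range(rx, chirp, skip):
    """Estimate one-way range from the correlation peak delay.

    rx    : received signal, index 0 aligned with the emission time
    chirp : transmitted template
    skip  : samples to ignore at the start (direct path / ring-down)
    """
    corr = np.correlate(rx, chirp, mode="valid")
    peak = skip + np.argmax(np.abs(corr[skip:]))
    tof = peak / FS                           # time of flight, seconds
    return tof * C

# Synthetic check: place the chirp at the delay of a 2 m one-way path.
chirp = make_chirp()
delay = int(2.0 / C * FS)
rx = np.zeros(delay + chirp.size + 100)
rx[delay:delay + chirp.size] = chirp
print(estimate_range(rx, chirp, 50))  # close to 2.0 m
```

In practice the received signal is noisy and Doppler-shifted, which is why the article's signal designs trade off chirp bandwidth and duration; this sketch only shows the core correlation step.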
Robust automatic target tracking based on a Bayesian ego-motion compensation framework for airborne FLIR imagery
Automatic target tracking in airborne FLIR imagery is currently a challenge due to camera ego-motion. This phenomenon distorts the spatio-temporal correlation of the video sequence, which dramatically reduces tracking performance. Several works address this problem using ego-motion compensation strategies. They use a deterministic approach to compensate for the camera motion, assuming a specific model of geometric transformation. However, in real sequences a single geometric transformation cannot accurately describe the camera ego-motion for the whole sequence, and as a consequence the performance of the tracking stage can decrease significantly, or even fail completely. The optimum transformation for each pair of consecutive frames depends on the relative depth of the elements that compose the scene and on their degree of texturization. In this work, a novel Particle Filter framework is proposed to efficiently manage several hypotheses of geometric transformation: Euclidean, affine, and projective. Each type of transformation is used to compute candidate locations of the object in the current frame. Then, each candidate is evaluated by the measurement model of the Particle Filter using appearance information. This approach is able to adapt to different camera ego-motion conditions and thus perform the tracking satisfactorily. The proposed strategy has been tested on the AMCOM FLIR dataset, showing high efficiency in tracking different types of targets in real working conditions.
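The core idea above, letting each particle carry both a transformation-type hypothesis (Euclidean, affine, or projective) and that transformation's parameters, can be sketched as follows. This is an illustrative skeleton, not the paper's code: the appearance likelihood is a stub, and parameter counts and noise scales are assumptions.

```python
# Sketch of a particle filter over mixed geometric-transformation
# hypotheses (illustrative; the likelihood is a placeholder, not the
# paper's appearance model).
import numpy as np

rng = np.random.default_rng(0)
TRANSFORMS = ("euclidean", "affine", "projective")
N_PARAMS = {"euclidean": 3, "affine": 6, "projective": 8}

def init_particles(n):
    """Each particle: a transform hypothesis, its parameters, a weight."""
    return [{"kind": rng.choice(TRANSFORMS),
             "params": rng.normal(0, 0.1, N_PARAMS[rng.choice(TRANSFORMS)]),
             "w": 1.0 / n} for _ in range(n)]

def likelihood(kind, params, frame):
    """Stub appearance model. In the paper this scores the warped target
    template against the current frame; here it just favours small
    motion, purely for illustration."""
    return np.exp(-np.sum(params ** 2))

def step(particles, frame):
    # Predict: diffuse each particle's transformation parameters.
    for p in particles:
        p["params"] = p["params"] + rng.normal(0, 0.05, p["params"].size)
        p["w"] *= likelihood(p["kind"], p["params"], frame)
    # Normalise weights and resample (multinomial, for simplicity).
    w = np.array([p["w"] for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [{"kind": particles[i]["kind"],
             "params": particles[i]["params"].copy(),
             "w": 1.0 / len(particles)} for i in idx]

parts = init_particles(200)
for _ in range(5):
    parts = step(parts, frame=None)
```

Because resampling operates over the joint (hypothesis, parameters) space, transformation types that explain the current ego-motion well accumulate particles, which is the adaptation mechanism the abstract describes.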
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
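The event representation described above, a stream of (time, location, sign) tuples, is often preprocessed by accumulating events over a time window into a 2-D frame before applying conventional vision algorithms. A minimal sketch with synthetic data (not any specific camera's API or the survey's code):

```python
# Accumulate a synthetic event stream into a per-pixel polarity image
# (illustrative preprocessing step; data and sizes are made up).
import numpy as np

H, W = 4, 6
# Each event: (timestamp_us, x, y, polarity); +1 = brightness increase,
# -1 = brightness decrease.
events = np.array([
    (10, 1, 2, +1),
    (15, 1, 2, +1),
    (20, 3, 0, -1),
    (90, 5, 3, +1),
])

def accumulate(events, t0, t1, shape):
    """Sum event polarities per pixel within the window [t0, t1)."""
    img = np.zeros(shape, dtype=np.int32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            img[y, x] += p
    return img

frame = accumulate(events, 0, 50, (H, W))
print(frame[2, 1], frame[0, 3])  # prints: 2 -1
```

The choice of window length trades temporal resolution against signal density, which is one reason the survey devotes attention to alternative event representations.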
Extended Object Tracking: Introduction, Overview and Applications
This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, where four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted.
Comment: 30 pages, 19 figures
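The random matrix approach mentioned above models an extended object's spatial extent as a symmetric positive-definite matrix (an ellipse in 2-D) estimated alongside the kinematic state. The toy sketch below only blends a prior extent with the measurement scatter matrix; it is a simplified illustration under an assumed forgetting factor, not the full inverse-Wishart recursion of the random matrix literature.

```python
# Toy extent update in the spirit of the random matrix approach
# (simplified illustration; alpha and the data are assumptions).
import numpy as np

def update_extent(X_prior, Z, alpha=0.9):
    """Blend a prior 2x2 SPD extent with the measurement scatter.

    X_prior : 2x2 SPD prior extent estimate
    Z       : (n, 2) array of point measurements from one extended object
    alpha   : forgetting factor weighting the prior (assumed value)
    """
    zbar = Z.mean(axis=0)
    S = (Z - zbar).T @ (Z - zbar) / len(Z)   # measurement scatter matrix
    return alpha * X_prior + (1 - alpha) * S, zbar

# Synthetic measurements from an object elongated along the x-axis.
rng = np.random.default_rng(1)
Z = rng.multivariate_normal([0, 0], [[4.0, 0.0], [0.0, 1.0]], size=200)
X, center = update_extent(np.eye(2), Z)
```

After the update, the estimated extent is stretched along x, reflecting the measurement spread; a full tracker would additionally propagate the extent through a dynamics model and couple it to the kinematic Kalman update.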