Correlation Filters with Limited Boundaries
Correlation filters take advantage of specific properties in the Fourier
domain allowing them to be estimated efficiently: O(NDlogD) in the frequency
domain, versus O(D^3 + ND^2) spatially where D is signal length, and N is the
number of signals. Recent extensions to correlation filters, such as MOSSE,
have reignited interest in their use in the vision community due to their
robustness and attractive computational properties. In this paper we
demonstrate, however, that this computational efficiency comes at a cost.
Specifically, we demonstrate that only a 1/D proportion of shifted examples is
unaffected by boundary effects, which dramatically degrades
detection/tracking performance. In this paper, we propose a novel approach to
correlation filter estimation that: (i) takes advantage of inherent
computational redundancies in the frequency domain, and (ii) dramatically
reduces boundary effects. Impressive object tracking and detection results are
presented in terms of both accuracy and computational efficiency.
Comment: 8 pages, 6 figures, 2 tables
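The closed-form frequency-domain estimate that gives correlation filters their O(ND log D) cost can be sketched for the 1-D case as below. This is a minimal MOSSE-style ridge regression in NumPy; the regularizer `lam` and all function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def mosse_filter(examples, targets, lam=1e-2):
    """Estimate a correlation filter in the frequency domain (MOSSE-style).

    examples: (N, D) array of N training signals of length D.
    targets:  (N, D) array of desired correlation outputs.
    The closed-form solution costs O(N D log D) via the FFT, versus
    O(D^3 + N D^2) for the equivalent spatial-domain regression.
    """
    F = np.fft.fft(examples, axis=1)            # per-example spectra
    G = np.fft.fft(targets, axis=1)
    num = np.sum(np.conj(F) * G, axis=0)        # cross power spectrum
    den = np.sum(np.conj(F) * F, axis=0) + lam  # auto power + regularizer
    return num / den                            # filter, frequency domain

def correlate(h_hat, signal):
    """Apply the learned filter by pointwise multiplication of spectra."""
    return np.real(np.fft.ifft(h_hat * np.fft.fft(signal)))
```

Note that because training and evaluation are both elementwise products of length-D spectra, every shifted version of each example is implicitly used, which is exactly where the boundary-effect issue discussed above arises.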
Multi-Bernoulli Sensor-Control via Minimization of Expected Estimation Errors
This paper presents a sensor-control method that chooses the best next state
of the sensor(s) to provide accurate estimation results in a multi-target
tracking application. The proposed solution is formulated for a multi-Bernoulli
filter and works via minimization of a new estimation error-based cost
function. Simulation results demonstrate that the proposed method can
outperform the state-of-the-art methods in terms of computation time and
robustness to clutter while delivering similar accuracy.
Edge and Line Feature Extraction Based on Covariance Models
Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage, since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performance of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
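The “log-likelihood ratio” image mentioned above can be sketched for the 1-D case, assuming two zero-mean Gaussian window models whose covariance matrices stand in for the paper's covariance functions. All names, the window scheme, and the Gaussian assumption are illustrative, not taken from the paper:

```python
import numpy as np

def llr_signal_1d(signal, C_edge, C_flat):
    """Sliding-window log-likelihood ratio for 'edge' vs 'no edge'.

    Each length-w window x is scored with the Gaussian LLR
        0.5 * (x^T (C_flat^-1 - C_edge^-1) x + logdet C_flat - logdet C_edge),
    so positive values favour the edge hypothesis.
    """
    w = C_edge.shape[0]
    P_edge = np.linalg.inv(C_edge)
    P_flat = np.linalg.inv(C_flat)
    _, ld_edge = np.linalg.slogdet(C_edge)
    _, ld_flat = np.linalg.slogdet(C_flat)
    out = np.empty(len(signal) - w + 1)
    for i in range(len(out)):
        x = signal[i:i + w]
        out[i] = 0.5 * (x @ (P_flat - P_edge) @ x + ld_flat - ld_edge)
    return out
```

The output plays the role of the feature image: thresholding or non-maximum suppression of these scores is then left to the edge detection stage.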
People tracking by cooperative fusion of RADAR and camera sensors
Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, the joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR Range-Azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a Particle Filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets are tracked continuously even if the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method clearly outperforms camera-only tracking. Experiments in a real-world urban environment further confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
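The back-projection step, mapping a camera detection onto the RADAR range-azimuth plane via an assumed average person height, can be sketched as below. The pinhole-model depth estimate, the 1.7 m default, and the function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detection_to_range_azimuth(u, box_h_px, K, h_person=1.7):
    """Back-project a camera person detection onto the ground plane.

    u:        horizontal pixel coordinate of the detection centre.
    box_h_px: bounding-box height in pixels.
    K:        3x3 camera intrinsic matrix.
    Depth follows from the pinhole model and an assumed average person
    height: z ~= fy * h_person / box_h_px.
    Returns (range, azimuth) suitable for indexing a RADAR
    Range-Azimuth likelihood map.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx = K[0, 2]
    z = fy * h_person / box_h_px             # depth from apparent height
    x = (u - cx) / fx * z                    # lateral ground-plane offset
    return np.hypot(x, z), np.arctan2(x, z)  # (range, azimuth)
```

In a fusion scheme like the one described, a camera likelihood centred at this (range, azimuth) point would then be multiplied with the raw RADAR likelihood to form the joint detection likelihood.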