Resource-Constrained Adaptive Search and Tracking for Sparse Dynamic Targets
This paper considers the problem of resource-constrained and noise-limited
localization and estimation of dynamic targets that are sparsely distributed
over a large area. We generalize an existing framework [Bashan et al., 2008] for
adaptive allocation of sensing resources to the dynamic case, accounting for
time-varying target behavior such as transitions to neighboring cells and
varying amplitudes over a potentially long time horizon. The proposed adaptive
sensing policy is driven by minimization of a modified version of the
previously introduced ARAP objective function, which is a surrogate function
for mean squared error within locations containing targets. We provide
theoretical upper bounds on the performance of adaptive sensing policies by
analyzing solutions with oracle knowledge of target locations, gaining insight
into the effect of target motion and amplitude variation as well as sparsity.
Exact minimization of the multi-stage objective function is infeasible, but
myopic optimization yields a closed-form solution. We propose a simple
non-myopic extension, the Dynamic Adaptive Resource Allocation Policy (D-ARAP),
that allocates a fraction of resources for exploring all locations rather than
solely exploiting the current belief state. Our numerical studies indicate that
D-ARAP has the following advantages: (a) it is more robust than the myopic
policy to noise, missing data, and model mismatch; (b) it performs comparably
to well-known approximate dynamic programming solutions but at significantly
lower computational complexity; and (c) it improves greatly upon non-adaptive
uniform resource allocation in terms of estimation error and probability of
detection.

Comment: 49 pages, 1 table, 11 figures
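The exploration/exploitation split at the heart of D-ARAP can be illustrated with a minimal sketch. The function name, the proportional-to-belief weighting, and the `explore_frac` parameter are our own simplifications for illustration, not the paper's exact allocation rule:

```python
import numpy as np

def allocate_effort(belief, budget, explore_frac=0.1):
    """Split a sensing budget between exploration and exploitation.

    A fraction `explore_frac` of the budget is spread uniformly over all
    cells (exploration); the remainder is allocated in proportion to the
    current belief that each cell contains a target (exploitation).
    Illustrative only -- not the exact D-ARAP policy from the paper.
    """
    belief = np.asarray(belief, dtype=float)
    n = belief.size
    # Uniform exploration share for every cell.
    explore = explore_frac * budget / n * np.ones(n)
    # Exploitation share, weighted by the belief state.
    total = belief.sum()
    weights = belief / total if total > 0 else np.ones(n) / n
    exploit = (1.0 - explore_frac) * budget * weights
    return explore + exploit

# Example: 5 cells, most belief mass on cell 2.
effort = allocate_effort([0.05, 0.1, 0.7, 0.1, 0.05], budget=1.0)
```

Even a cell with zero belief receives a small exploration share, which is what makes the policy robust to missing data and model mismatch relative to a purely myopic allocation.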
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have great potential for robotics and
computer vision in scenarios that challenge traditional cameras, such as those
demanding low latency, high speed, or high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle through the sensors that are
currently available to the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
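The event-generation principle described above (per-pixel brightness changes emitting timestamped, signed events) can be sketched as an idealized simulator. The function name, the fixed contrast threshold, and the per-pixel reference update are simplifying assumptions; real sensors additionally exhibit noise, latency, and refractory effects:

```python
import numpy as np

def brightness_to_events(log_frames, timestamps, threshold=0.2):
    """Convert a sequence of log-intensity frames into a list of events.

    Emits an event (t, x, y, polarity) whenever the log brightness at a
    pixel has changed by at least `threshold` since the last event at
    that pixel -- an idealized model of an event camera.
    """
    ref = log_frames[0].astype(float).copy()  # per-pixel reference level
    events = []
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, pol))
            ref[y, x] += pol * threshold  # move reference toward new level
    return events
```

A brightening pixel yields positive-polarity events and a darkening pixel negative ones; static pixels produce no output at all, which is the source of the sensor's sparsity and low power consumption.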