Tracking Target Signal Strengths on a Grid using Sparsity
Multi-target tracking is mainly challenged by the nonlinearity present in the
measurement equation, and the difficulty in fast and accurate data association.
To overcome these challenges, the present paper introduces a grid-based model
in which the state captures target signal strengths on a known spatial grid
(TSSG). This model leads to \emph{linear} state and measurement equations,
which bypass data association and can afford state estimation via
sparsity-aware Kalman filtering (KF). Leveraging the grid-induced sparsity of
the novel model, two types of sparsity-cognizant TSSG-KF trackers are
developed: one effects sparsity through $\ell_1$-norm regularization, and the
other invokes sparsity as an extra measurement. Iterative extended KF and
Gauss-Newton algorithms are developed for reduced-complexity tracking, along
with accurate error covariance updates for assessing performance of the
resultant sparsity-aware state estimators. Based on TSSG state estimates, more
informative target position and track estimates can be obtained in a follow-up
step, ensuring that track association and position estimation errors do not
propagate back into TSSG state estimates. The novel TSSG trackers do not
require knowing the number of targets or their signal strengths, and exhibit
considerably lower complexity than the benchmark hidden Markov model filter,
especially for a large number of targets. Numerical simulations demonstrate
that sparsity-cognizant trackers enjoy improved root mean-square error
performance at reduced complexity when compared to their sparsity-agnostic
counterparts. Comment: Submitted to IEEE Trans. on Signal Processing.
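The sparsity-aware Kalman filtering idea above can be sketched as a standard predict/update cycle followed by a soft-thresholding step that shrinks small grid weights to zero. This is a minimal illustration of the $\ell_1$-regularization route under a linear-Gaussian model, not the paper's iterative EKF or Gauss-Newton trackers; all matrices and the threshold `lam` below are illustrative placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1-norm: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_kf_step(x_prev, P_prev, y, F, H, Q, R, lam=0.1):
    """One Kalman predict/update cycle followed by soft-thresholding,
    promoting sparsity in the grid of target signal strengths."""
    # Predict
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (y - H @ x_pred)
    P_upd = (np.eye(len(x_upd)) - K @ H) @ P_pred
    # Sparsity step: most grid cells hold no target, so shrink them to zero
    return soft_threshold(x_upd, lam), P_upd
```

Because most grid cells carry no target, the thresholding step suppresses small spurious entries while leaving strong target responses essentially intact.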
Position and Orientation Estimation through Millimeter Wave MIMO in 5G Systems
Millimeter wave signals and large antenna arrays are considered enabling
technologies for future 5G networks. While their benefits for achieving
high data-rate communications are well-known, their potential advantages for
accurate positioning are largely undiscovered. We derive the Cram\'{e}r-Rao
bound (CRB) on position and rotation angle estimation uncertainty from
millimeter wave signals from a single transmitter, in the presence of
scatterers. We also present a novel two-stage algorithm for position and
rotation angle estimation that attains the CRB for average to high
signal-to-noise ratio. The algorithm is based on multiple measurement vectors
matching pursuit for coarse estimation, followed by a refinement stage based on
the space-alternating generalized expectation maximization algorithm. We find
that accurate position and rotation angle estimation is possible using signals
from a single transmitter, in either line-of-sight, non-line-of-sight, or
obstructed-line-of-sight conditions. Comment: The manuscript has been revised, and increased from 27 to 31 pages.
Also, Fig. 2, Fig. 10 and Table I are added.
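For a generic Gaussian observation model y = s(theta) + n with n ~ N(0, sigma^2 I), the Cramér-Rao bound is the inverse of the Fisher information matrix, and its diagonal lower-bounds the variance of any unbiased estimator of each parameter. A minimal generic sketch, not the paper's mmWave-specific derivation; the Jacobian and noise level are placeholders:

```python
import numpy as np

def crb(jacobian, sigma2):
    """Cramer-Rao bound for y = s(theta) + n, n ~ N(0, sigma2 * I).
    FIM = J^T J / sigma2; the CRB is its inverse, whose diagonal
    lower-bounds the variance of each parameter estimate."""
    fim = jacobian.T @ jacobian / sigma2
    return np.linalg.inv(fim)
```

Stacking the sensitivities of the received signal to position and rotation angle into the Jacobian yields per-parameter bounds of the kind derived in the paper.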
Dynamic Underwater Glider Network for Environmental Field Estimation
A coordinated dynamic sensor network of autonomous underwater gliders to estimate three-dimensional time-varying environmental fields is proposed and tested. Integration with a network of surface relay nodes and asynchronous consensus are used to distribute local information and achieve the global field estimate. Field spatial sparsity is considered, and field samples are acquired by compressive sensing devices. Tests on simulated and real data demonstrate the feasibility of the approach, with relative error performance within 10%.
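Reconstructing a spatially sparse field from few compressive measurements is typically posed as an $\ell_1$-regularized least-squares problem. A minimal iterative shrinkage-thresholding (ISTA) solver, with a random matrix standing in for the compressive-sensing front end (all sizes and penalties illustrative, not the paper's setup):

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=800):
    """ISTA for the lasso:  min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    A is the measurement matrix, y the compressive measurements."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

With enough incoherent measurements, the sparse field is recovered from far fewer samples than grid points.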
Inverse Modeling for MEG/EEG data
We provide an overview of the state-of-the-art for mathematical methods that
are used to reconstruct brain activity from neurophysiological data. After a
brief introduction on the mathematics of the forward problem, we discuss
standard and recently proposed regularization methods, as well as Monte Carlo
techniques for Bayesian inference. We classify the inverse methods based on the
underlying source model, and discuss advantages and disadvantages. Finally we
describe an application to the pre-surgical evaluation of epileptic patients. Comment: 15 pages, 1 figure.
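Among the standard regularization methods such an overview covers, the classical $\ell_2$ (minimum-norm / Tikhonov) estimate has a closed form. A sketch with a random stand-in for the lead-field matrix (dimensions and regularization weight illustrative):

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=0.1):
    """Classical l2-regularized (minimum-norm / Tikhonov) source estimate
    x = L^T (L L^T + lam * I)^{-1} y, where the lead-field matrix L maps
    source amplitudes to sensor measurements (the forward problem)."""
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), y)
```

The problem is heavily underdetermined (far more sources than sensors), so the regularizer selects the smallest-norm source configuration consistent with the data.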
Mitigation of Through-Wall Distortions of Frontal Radar Images using Denoising Autoencoders
Radar images of humans and other concealed objects are considerably distorted
by attenuation, refraction and multipath clutter in indoor through-wall
environments. While several methods have been proposed for removing target
independent static and dynamic clutter, there still remain considerable
challenges in mitigating target dependent clutter especially when the knowledge
of the exact propagation characteristics or analytical framework is
unavailable. In this work we focus on mitigating wall effects using a machine
learning based solution -- denoising autoencoders -- that does not require
prior information of the wall parameters or room geometry. Instead, the method
relies on the availability of a large volume of training radar images gathered
in through-wall conditions and the corresponding clean images captured in
line-of-sight conditions. During the training phase, the autoencoder learns how
to denoise the corrupted through-wall images in order to resemble the free
space images. We have validated the performance of the proposed solution for
both static and dynamic human subjects. The frontal radar images of static
targets are obtained by processing wideband planar array measurement data with
two-dimensional array and range processing. The frontal radar images of dynamic
targets are simulated using narrowband planar array data processed with
two-dimensional array and Doppler processing. In both simulation and
measurement processes, we incorporate considerable diversity in the target and
propagation conditions. Our experimental results, from both simulation and
measurement data, show that the denoised images are considerably more similar
to the free-space images when compared to the original through-wall images.
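The core training idea, mapping corrupted inputs to clean targets, can be illustrated with a single-hidden-layer network in plain NumPy. This is a toy stand-in for the paper's image-denoising networks; the architecture, synthetic data, and hyperparameters below are illustrative only:

```python
import numpy as np

def train_denoising_autoencoder(X_noisy, X_clean, n_hidden=16,
                                lr=0.05, n_epochs=2000, seed=0):
    """Minimal denoising autoencoder: tanh encoder, linear decoder,
    trained by full-batch gradient descent to map noisy inputs
    to their *clean* counterparts."""
    rng = np.random.default_rng(seed)
    n_samples, n_in = X_noisy.shape
    W1 = rng.normal(0.0, 0.3, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.3, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(n_epochs):
        H = np.tanh(X_noisy @ W1 + b1)       # encoder
        X_rec = H @ W2 + b2                  # linear decoder
        err = X_rec - X_clean                # denoising objective
        dH = (err @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
        W2 -= lr * (H.T @ err) / n_samples
        b2 -= lr * err.mean(axis=0)
        W1 -= lr * (X_noisy.T @ dH) / n_samples
        b1 -= lr * dH.mean(axis=0)
    return lambda X: np.tanh(X @ W1 + b1) @ W2 + b2
```

As in the paper, the clean targets (free-space images there, noiseless samples here) are needed only at training time; at test time the network denoises unseen corrupted inputs directly.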
Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells
Adherent cells exert traction forces on to their environment, which allows
them to migrate, to maintain tissue integrity, and to form complex
multicellular structures. This traction can be measured in a perturbation-free
manner with traction force microscopy (TFM). In TFM, traction is usually
calculated via the solution of a linear system, which is complicated by
undersampled input data, acquisition noise, and large condition numbers for
some methods. Therefore, standard TFM algorithms either employ data filtering
or regularization. However, these approaches require a manual selection of
filter- or regularization parameters and consequently exhibit a substantial
degree of subjectiveness. This shortcoming is particularly serious when cells
in different conditions are to be compared because optimal noise suppression
needs to be adapted for every situation, which invariably results in systematic
errors. Here, we systematically test the performance of new methods from
computer vision and Bayesian inference for solving the inverse problem in TFM.
We compare two classical schemes, L1- and L2-regularization, with three
previously untested schemes, namely Elastic Net regularization, Proximal
Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that
Elastic Net regularization, which combines L1 and L2 regularization,
outperforms all other methods with regard to accuracy of traction
reconstruction. Next, we develop two methods, Bayesian L2 regularization and
Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization.
Using artificial data and experimental data, we show that these methods enable
robust reconstruction of traction without requiring a difficult selection of
regularization parameters specifically for each data set. Thus, Bayesian
methods can mitigate the considerable uncertainty inherent in comparing
cellular traction forces.
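The Proximal Gradient Elastic Net scheme named above alternates a gradient step on the data-fidelity term with the elastic-net proximal operator, i.e. soft-thresholding followed by multiplicative shrinkage. A minimal sketch on a generic sparse inverse problem, not the TFM-specific solver; sizes and penalty weights are illustrative:

```python
import numpy as np

def prox_grad_elastic_net(A, y, lam1=0.005, lam2=0.005, n_iter=1000):
    """Proximal-gradient solver for
    min_x 0.5 * ||A x - y||^2 + lam1 * ||x||_1 + 0.5 * lam2 * ||x||^2,
    combining l1 (sparsity) and l2 (stability) regularization."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data fit
        # Elastic-net prox: soft-threshold (l1), then shrink (l2)
        x = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0) / (1.0 + lam2 / L)
    return x
```

The $\ell_2$ term stabilizes the ill-conditioned system while the $\ell_1$ term keeps the reconstructed traction field sparse, which is why the combination can outperform either penalty alone.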
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
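A common first step when processing the event stream described above is to accumulate events, each a (timestamp, x, y, polarity) tuple, into a signed event-count image that frame-based algorithms can consume. A minimal sketch (the tuple format is an assumption for illustration, not a specific camera driver's API):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate events (t, x, y, polarity) into a signed count image:
    +1 per positive-polarity event, -1 per negative-polarity event."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame
```

Such fixed-window accumulation discards the fine temporal structure that makes event cameras attractive, which is precisely why the survey's specialized representations and learning-based methods exist.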