Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing
The random demodulator is a recent compressive sensing architecture providing
efficient sub-Nyquist sampling of sparse band-limited signals. The compressive
sensing paradigm requires an accurate model of the analog front-end to enable
correct signal reconstruction in the digital domain. In practice, hardware
devices such as filters deviate from their desired design behavior due to
component variations. Existing reconstruction algorithms are sensitive to such
deviations, which fall into the more general category of measurement matrix
perturbations. This paper proposes a model-based technique that aims to
calibrate filter model mismatches to facilitate improved signal reconstruction
quality. The mismatch is considered to be an additive error in the discretized
impulse response. We identify the error by sampling a known calibrating signal,
enabling least-squares estimation of the impulse response error. The error
estimate and the known system model are used to calibrate the measurement
matrix. Numerical analysis demonstrates the effectiveness of the calibration
method even for highly deviating low-pass filter responses. The performance of
the proposed method is also compared with a state-of-the-art method based on
discrete Fourier transform trigonometric interpolation.
Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing
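The least-squares identification step described in the abstract can be sketched as follows. This is a simplified, noiseless illustration (the filter length, the Hamming-window nominal response, and all signal sizes are assumptions, not the paper's setup): the response of the true filter to a known calibration signal is compared with the nominal model's prediction, and the residual is solved for the additive impulse-response error.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 8                                     # filter length (assumed)
h_nominal = np.hamming(L)                 # nominal (designed) impulse response
e_true = 0.05 * rng.standard_normal(L)    # unknown deviation from component variations
h_true = h_nominal + e_true

# Known calibration signal, long enough to make the problem overdetermined.
x = rng.standard_normal(128)

# Convolution matrix of the calibration signal, so that y = X @ h.
N = len(x) + L - 1
X = np.zeros((N, L))
for k in range(L):
    X[k:k + len(x), k] = x

y = X @ h_true                            # samples observed through the real filter

# Least-squares estimate of the additive impulse-response error.
residual = y - X @ h_nominal
e_hat, *_ = np.linalg.lstsq(X, residual, rcond=None)
h_calibrated = h_nominal + e_hat          # calibrated model used to rebuild the measurement matrix
```

In the noiseless overdetermined case the estimate is exact; with measurement noise the same least-squares step returns the minimum-mean-square error over the calibration record.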
Identification of Parametric Underspread Linear Systems and Super-Resolution Radar
Identification of time-varying linear systems, which introduce both
time-shifts (delays) and frequency-shifts (Doppler-shifts), is a central task
in many engineering applications. This paper studies the problem of
identification of underspread linear systems (ULSs), whose responses lie within
a unit-area region in the delay-Doppler space, by probing them with a known
input signal. It is shown that sufficiently-underspread parametric linear
systems, described by a finite set of delays and Doppler-shifts, are
identifiable from a single observation as long as the time-bandwidth product of
the input signal is proportional to the square of the total number of
delay-Doppler pairs in the system. In addition, an algorithm is developed that
enables identification of parametric ULSs from an input train of pulses in
polynomial time by exploiting recent results on sub-Nyquist sampling for time
delay estimation and classical results on recovery of frequencies from a sum of
complex exponentials. Finally, application of these results to super-resolution
target detection using radar is discussed. Specifically, it is shown that the
proposed procedure makes it possible to distinguish between multiple targets in
very close proximity in the delay-Doppler space, resulting in a resolution that
substantially exceeds that of standard matched-filtering-based techniques
without introducing the leakage effects inherent in recently proposed
compressed-sensing-based radar methods.
Comment: Revised version of a journal paper submitted to IEEE Trans. Signal Processing: 30 pages, 17 figures
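The classical building block cited in the abstract, recovery of frequencies from a sum of complex exponentials, can be illustrated with Prony's method. This is a minimal noiseless sketch (the number of exponentials, signal length, and frequencies are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3                                          # number of exponentials (assumed known)
freqs_true = np.array([0.11, 0.23, 0.37])      # normalized frequencies (illustrative)
amps = rng.standard_normal(K) + 1j * rng.standard_normal(K)

n = np.arange(32)
s = (amps[None, :] * np.exp(2j * np.pi * n[:, None] * freqs_true[None, :])).sum(axis=1)

# Prony: find the annihilating-polynomial coefficients c satisfying
#   s[n] + c_1 s[n-1] + ... + c_K s[n-K] = 0  for all n >= K.
A = np.column_stack([s[K - 1 - k: len(s) - 1 - k] for k in range(K)])
b = -s[K:]
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# Roots of z^K + c_1 z^{K-1} + ... + c_K are the exponentials z_k = e^{j 2 pi f_k}.
z = np.roots(np.concatenate(([1.0], c)))
freqs_est = np.sort(np.angle(z) / (2 * np.pi) % 1.0)
```

With noise, subspace variants (e.g. ESPRIT or the matrix pencil method) are preferred over raw Prony, but the shift-invariance principle is the same.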
MIMO Radar Waveform Optimization With Prior Information of the Extended Target and Clutter
The concept of multiple-input multiple-output (MIMO) radar allows each transmitting antenna element to transmit an arbitrary waveform. This provides extra degrees of freedom compared to the traditional transmit beamforming approach. It has been shown in the recent literature that MIMO radar systems have many advantages. In this paper, we consider the joint optimization of waveforms and receiving filters in MIMO radar for the case of an extended target in clutter. A novel iterative algorithm is proposed to optimize the waveforms and receiving filters such that the detection performance is maximized. Corresponding iterative algorithms are also developed for the case where only the statistics or the uncertainty set of the target impulse response is available. These algorithms guarantee that the SINR performance improves at each iteration step. Numerical results show that the proposed methods achieve better SINR performance than existing design methods.
Dynamic Decomposition of Spatiotemporal Neural Signals
Neural signals are characterized by rich temporal and spatiotemporal dynamics
that reflect the organization of cortical networks. Theoretical research has
shown how neural networks can operate at different dynamic ranges that
correspond to specific types of information processing. Here we present a data
analysis framework that uses a linearized model of these dynamic states in
order to decompose the measured neural signal into a series of components that
capture both rhythmic and non-rhythmic neural activity. The method is based on
stochastic differential equations and Gaussian process regression. Through
computer simulations and analysis of magnetoencephalographic data, we
demonstrate the efficacy of the method in identifying meaningful modulations of
oscillatory signals corrupted by structured temporal and spatiotemporal noise.
These results suggest that the method is particularly suitable for the analysis
and interpretation of complex temporal and spatiotemporal neural signals.
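The idea of decomposing a signal with a linearized dynamic-state model can be sketched with a single damped stochastic oscillator tracked by a Kalman filter. All parameter values below are illustrative assumptions, and this two-dimensional state stands in for the paper's full SDE/GP framework:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretized damped stochastic oscillator (a linearized dynamic state):
#   x[t+1] = rho * Rot(theta) @ x[t] + process noise
f, fs, rho = 10.0, 250.0, 0.98        # oscillation freq (Hz), sampling rate, damping
theta = 2 * np.pi * f / fs
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
Q = 0.1 * np.eye(2)                   # process-noise covariance (assumed)
r_obs = 1.0                           # observation-noise variance (assumed)

# Simulate a noisy oscillation.
T = 500
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal([0, 0], Q)
y = x[:, 0] + np.sqrt(r_obs) * rng.standard_normal(T)

# Kalman filter: posterior mean of the oscillatory component given y.
m, P = np.zeros(2), np.eye(2)
H = np.array([1.0, 0.0])
est = np.zeros(T)
for t in range(T):
    m, P = A @ m, A @ P @ A.T + Q                 # predict
    S = H @ P @ H + r_obs                          # innovation variance
    K = P @ H / S                                  # Kalman gain
    m = m + K * (y[t] - H @ m)                     # update mean
    P = P - np.outer(K, H @ P)                     # update covariance
    est[t] = m[0]
```

Running several such oscillators (plus a slow non-rhythmic component) in parallel in one state-space model yields the series of rhythmic and non-rhythmic components the abstract describes.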
Data driven optimal filtering for phase and frequency of noisy oscillations: application to vortex flowmetering
A new method for extracting the phase of oscillations from noisy time series
is proposed. To obtain the phase, the signal is filtered in such a way that the
filter output has minimal relative variation in the amplitude (MIRVA) over all
filters with complex-valued impulse response. The argument of the filter output
yields the phase. Implementation of the algorithm and interpretation of the
result are discussed. We argue that the phase obtained by the proposed method
has a low susceptibility to measurement noise and a low rate of artificial
phase slips. The method is applied to the detection and classification of mode
locking in vortex flowmeters. A novel measure for the strength of mode locking
is proposed.
Comment: 12 pages, 10 figures
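The basic mechanism, convolving the signal with a complex-valued impulse response and reading the phase off the argument of the filter output, can be sketched as follows. Here a fixed complex Morlet-type filter stands in for the MIRVA-optimized filter, and the frequencies and bandwidths are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

fs, f0 = 1000.0, 40.0                 # sampling rate, oscillation frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)
signal = np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)

# Fixed complex Morlet-type filter as a stand-in for the optimized filter:
# a complex carrier at f0 under a Gaussian envelope gives a complex-valued
# output whose argument tracks the instantaneous phase.
tau = np.arange(-0.1, 0.1, 1 / fs)
h = np.exp(2j * np.pi * f0 * tau) * np.exp(-(tau ** 2) / (2 * 0.02 ** 2))
h /= np.abs(h).sum()

z = np.convolve(signal, h, mode="same")
phase = np.angle(z)                   # argument of the filter output = phase estimate
```

The paper's contribution is in choosing the filter itself, by minimizing the relative amplitude variation of z over all complex-valued filters, rather than fixing it a priori as done here.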
Outlier robust system identification: a Bayesian kernel-based approach
In this paper, we propose an outlier-robust regularized kernel-based method
for linear system identification. The unknown impulse response is modeled as a
zero-mean Gaussian process whose covariance (kernel) is given by the recently
proposed stable spline kernel, which encodes information on regularity and
exponential stability. To build robustness to outliers, we model the
measurement noise as realizations of independent Laplacian random variables.
The identification problem is cast in a Bayesian framework, and solved by a new
Markov Chain Monte Carlo (MCMC) scheme. In particular, exploiting the
representation of the Laplacian random variables as scale mixtures of
Gaussians, we design a Gibbs sampler which quickly converges to the target
distribution. Numerical simulations show a substantial improvement in the
accuracy of the estimates over state-of-the-art kernel-based methods.
Comment: 5 figures
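The Gaussian-noise baseline that the robust method extends can be sketched as follows: a first-order stable spline kernel serves as the GP prior on the impulse response, and under Gaussian noise the posterior mean is available in closed form. Sizes, hyperparameters, and the test system below are illustrative assumptions, and the Laplacian-noise Gibbs sampler itself is beyond this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 50                               # impulse-response length (assumed)
alpha = 0.9                          # stability hyperparameter (assumed)

# First-order stable spline kernel: K[i, j] = alpha^max(i, j),
# which encodes exponentially decaying (BIBO-stable) impulse responses.
idx = np.arange(1, n + 1)
K = alpha ** np.maximum.outer(idx, idx)

# Illustrative true system: a decaying impulse response identified from I/O data.
h_true = 0.8 ** idx * np.sin(0.5 * idx)
u = rng.standard_normal(200)
U = np.zeros((len(u), n))            # Toeplitz regression matrix so that y = U @ h
for k in range(n):
    U[k:, k] = u[:len(u) - k]
sigma2 = 0.01
y = U @ h_true + np.sqrt(sigma2) * rng.standard_normal(len(u))

# GP posterior mean under Gaussian noise (the non-robust baseline):
#   h_hat = K U^T (U K U^T + sigma2 I)^{-1} y
h_hat = K @ U.T @ np.linalg.solve(U @ K @ U.T + sigma2 * np.eye(len(u)), y)
```

The robust variant replaces the Gaussian likelihood with a Laplacian one; writing each Laplacian noise sample as a Gaussian with its own exponentially distributed variance turns the model back into a conditionally Gaussian one, which is what makes the Gibbs sampler's conditional updates tractable.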