Mathematical Theory of Atomic Norm Denoising In Blind Two-Dimensional Super-Resolution (Extended Version)
This paper develops a new mathematical framework for denoising in blind
two-dimensional (2D) super-resolution using the atomic norm. The framework
denoises a signal that consists of a weighted sum of an unknown number of
time-delayed and frequency-shifted unknown waveforms from its noisy
measurements. Moreover, the framework also provides an approach for estimating
the unknown parameters in the signal. We prove that when the number of
observed samples satisfies a certain lower bound, which is a function of the
system parameters, the noise-free signal can be estimated with very high
accuracy by solving a regularized least-squares atomic norm minimization problem. We
derive the theoretical mean-squared error of the estimator, and we show that it
depends on the noise variance, the number of unknown waveforms, the number of
samples, and the dimension of the low-dimensional space where the unknown
waveforms lie. Finally, we verify the theoretical findings of the paper
through extensive simulation experiments.
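The signal model underlying this framework can be sketched in a few lines of numpy; the sample count, subspace dimension, on-grid delays, and noise level below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64          # number of observed samples (assumed)
K = 3           # number of unknown waveforms (assumed)
L = 4           # dimension of the low-dimensional waveform subspace (assumed)

# Each unknown waveform is assumed to lie in a known low-dimensional
# subspace spanned by the columns of B (N x L).
B = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))

n = np.arange(N)
y = np.zeros(N, dtype=complex)
for _ in range(K):
    coeffs = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    g = B @ coeffs                       # an unknown waveform in the subspace
    tau = rng.integers(0, N)             # time delay (on-grid here for simplicity)
    f = rng.uniform(0, 1)                # normalized frequency shift
    c = rng.standard_normal() + 1j * rng.standard_normal()
    y += c * np.roll(g, tau) * np.exp(2j * np.pi * f * n)

# noisy measurements; the denoising framework estimates y from `noisy`
sigma = 0.1
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noisy = y + sigma * noise
```

The atomic norm machinery then estimates the noise-free `y` (and the delays and frequency shifts) from `noisy` without knowing the waveforms themselves.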
Adaptive Interference Removal for Un-coordinated Radar/Communication Co-existence
Most existing approaches to co-existing communication/radar systems assume
that the radar and communication systems are coordinated, i.e., they share
information, such as relative position, transmitted waveforms and channel
state. In this paper, we consider an un-coordinated scenario in which a
communication receiver must operate in the presence of a number of radars,
only a subset of which may be active. This poses the problem of estimating the
active waveforms and their relevant parameters, so that they can be cancelled
prior to demodulation. Two algorithms are proposed for such a joint waveform
estimation/data demodulation problem, both exploiting sparsity of a proper
representation of the interference and of the vector containing the errors of
the data block, so as to implement an iterative joint interference removal/data
demodulation process. The former algorithm is based on classical on-grid
compressed sensing (CS), while the latter enforces an atomic norm (AN)
constraint: in both cases the radar parameters and the communication
demodulation errors can be estimated by solving a convex problem. We also
propose a way to improve the efficiency of the AN-based algorithm. The
performance of these algorithms is demonstrated through extensive simulations,
taking into account a variety of conditions concerning both the interferers and
the respective channel states.
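As a rough illustration of the on-grid CS step, the sketch below recovers a sparse coefficient vector over a dictionary via orthogonal matching pursuit; the random dictionary, sparsity level, and greedy solver are generic stand-ins, not the algorithm proposed in the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        # re-fit by least squares on the enlarged support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 128)) + 1j * rng.standard_normal((60, 128))
A /= np.linalg.norm(A, axis=0)               # unit-norm atoms
x_true = np.zeros(128, dtype=complex)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]       # sparse interference representation
x_hat = omp(A, A @ x_true, k=3)
```

In the paper's setting the dictionary columns would be gridded radar waveform parameters, and the sparse vector would jointly cover the interference representation and the demodulation errors.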
Multi-Antenna Dual-Blind Deconvolution for Joint Radar-Communications via SoMAN Minimization
Joint radar-communications (JRC) has emerged as a promising technology for
efficiently using the limited electromagnetic spectrum. In JRC applications
such as secure military receivers, often the radar and communications signals
are overlaid in the received signal. In these passive listening outposts, the
signals and channels of both radar and communications are unknown to the
receiver. The ill-posed problem of recovering all signal and channel parameters
from the overlaid signal is termed dual-blind deconvolution (DBD). In this
work, we investigate a more challenging version of DBD with a multi-antenna
receiver. We model the radar and communications channels with a few (sparse)
continuous-valued parameters such as time delays, Doppler velocities, and
directions-of-arrival (DoAs). To solve this highly ill-posed DBD, we propose to
minimize the sum of multivariate atomic norms (SoMAN) that depends on the
unknown parameters. To this end, we devise an exact semidefinite program using
theories of positive hyperoctant trigonometric polynomials (PhTP). Our
theoretical analyses show that the minimum number of samples and antennas
required for perfect recovery is logarithmically dependent on the maximum of
the number of radar targets and communications paths rather than their sum. We
show that our approach is easily generalized to include several practical
issues such as gain/phase errors and additive noise. Numerical experiments show
exact parameter recovery for different JRC scenarios.
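The multi-antenna delay-Doppler-DoA channel model can be sketched as follows; the grid sizes, half-wavelength uniform linear array, and phase conventions are illustrative assumptions, not the paper's exact measurement model:

```python
import numpy as np

rng = np.random.default_rng(5)
Nf, Nt, Na = 16, 12, 8   # frequency samples, slow-time pulses, antennas (assumed)
K = 3                    # total radar targets + communication paths (assumed)

n = np.arange(Nf)[:, None, None]   # frequency index
l = np.arange(Nt)[None, :, None]   # slow-time index
p = np.arange(Na)[None, None, :]   # antenna index

Y = np.zeros((Nf, Nt, Na), dtype=complex)
for _ in range(K):
    tau = rng.uniform(0, 1)                      # normalized time delay
    nu = rng.uniform(0, 1)                       # normalized Doppler
    theta = rng.uniform(-np.pi / 2, np.pi / 2)   # direction of arrival
    c = rng.standard_normal() + 1j * rng.standard_normal()
    # each path contributes a separable phase ramp:
    # delay across frequency, Doppler across slow time,
    # DoA across a half-wavelength ULA
    Y += c * (np.exp(-2j * np.pi * n * tau)
              * np.exp(2j * np.pi * l * nu)
              * np.exp(1j * np.pi * p * np.sin(theta)))
```

The SoMAN approach exploits exactly this structure: each path is a single continuous-valued atom, so the number of atoms (and hence the sample complexity) scales with the number of paths rather than with any grid size.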
Blind Two-Dimensional Super-Resolution and Its Performance Guarantee
In this work, we study the problem of identifying the parameters of a linear
system from its response to multiple unknown input waveforms. We assume that
the system response, which is the only given information, is a scaled
superposition of time-delayed and frequency-shifted versions of the unknown
waveforms. Such a problem is severely ill-posed and does not yield a
unique solution without introducing further constraints. To fully characterize
the linear system, we assume that the unknown waveforms lie in a common known
low-dimensional subspace that satisfies certain randomness and concentration
properties. Then, we develop a blind two-dimensional (2D) super-resolution
framework that applies to a large number of applications such as radar imaging,
image restoration, and indoor source localization. In this framework, we show
that under a minimum separation condition between the time-frequency shifts,
all the unknowns that characterize the linear system can be recovered precisely
and with very high probability, provided that a lower bound on the total
number of observed samples is satisfied. The proposed framework is based on a
2D atomic norm minimization problem, which we show can be reformulated and
solved efficiently via semidefinite programming. Simulation results that
confirm the theoretical findings of the paper are provided.
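The minimum separation condition refers to the smallest wrap-around distance between the 2D time-frequency shifts. A small helper to compute that quantity (the exact constant the theory requires is in the paper and is not reproduced here):

```python
import numpy as np

def min_separation(shifts):
    """Minimum pairwise wrap-around (torus) distance, in the max norm,
    between 2D time-frequency shifts given as points in [0, 1)^2."""
    pts = np.asarray(shifts, dtype=float)
    best = np.inf
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = np.abs(pts[i] - pts[j])
            d = np.minimum(d, 1.0 - d)   # wrap-around on the unit torus
            best = min(best, d.max())
    return best

# three shifts; the closest pair differs by 0.1 in time and 0.05 in frequency
shifts = [(0.10, 0.20), (0.20, 0.25), (0.60, 0.90)]
sep = min_separation(shifts)  # -> 0.1
```

Recovery guarantees of this type require `sep` to exceed a threshold inversely proportional to the number of observed samples.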
Interference Removal for Radar/Communication Co-existence: the Random Scattering Case
In this paper we consider an un-cooperative spectrum sharing scenario,
wherein a radar system is to be overlaid to a pre-existing wireless
communication system. Given the order of magnitude of the transmitted powers in
play, we focus on the issue of interference mitigation at the communication
receiver. We explicitly account for the reverberation produced by the
(typically high-power) radar transmitter whose signal hits scattering centers
(whether targets or clutter) producing interference onto the communication
receiver, which is assumed to operate in an un-synchronized and un-coordinated
scenario. We first show that receiver design amounts to solving a non-convex
problem of joint interference removal and data demodulation: next, we introduce
two algorithms, both exploiting sparsity of a proper representation of the
interference and of the vector containing the errors of the data block. The
first algorithm is basically a relaxed constrained Atomic Norm minimization,
while the second relies on a two-stage processing structure and is based on
alternating minimization. The merits of these algorithms are demonstrated
through extensive simulations: interestingly, the two-stage alternating
minimization algorithm turns out to achieve satisfactory performance with
moderate computational complexity.
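A toy version of the alternating idea: a frequency-sparse (two-tone) interferer is overlaid on a QPSK data block, and the receiver alternates between estimating the interference from the residual and re-demodulating. The constellation, tone interferer, and hard-thresholding rule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# unknown QPSK data block plus a frequency-sparse radar interferer
data = qpsk[rng.integers(0, 4, N)]
F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # unitary DFT matrix
spec = np.zeros(N, dtype=complex)
spec[[30, 100]] = [12.0, 9.0]                   # sparse interference spectrum
y = data + F.conj().T @ spec                    # received block

def demod(z):
    """Nearest-point QPSK decision."""
    return qpsk[np.argmin(np.abs(z[:, None] - qpsk[None, :]), axis=1)]

x_hat = demod(y)                                # initial demodulation (error-prone)
for _ in range(10):
    # stage 1: estimate the sparse interference from the current residual
    r_spec = F @ (y - x_hat)
    r_spec[np.abs(r_spec) < 4.0] = 0.0          # hard threshold keeps strong tones
    # stage 2: re-demodulate after removing the interference estimate
    x_hat = demod(y - F.conj().T @ r_spec)
```

Each stage can only reduce (or keep) the residual it is given, which is why the alternation tends to settle quickly; the paper's algorithms replace the crude hard threshold with sparsity-aware estimators over the radar waveform parameters.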
Learning to process with spikes and to localise pulses
In the last few decades, deep learning with artificial neural networks (ANNs) has emerged as one of the most widely used techniques in tasks such as classification and regression, achieving competitive results and in some cases even surpassing human-level performance. Nonetheless, as ANN architectures are optimised towards empirical results and depart from their biological precursors, how exactly human brains process information using short electrical pulses called spikes remains a mystery. Hence, in this thesis, we explore the problem of learning to process with spikes and to localise pulses.
We first consider spiking neural networks (SNNs), a type of ANN that more closely mimics biological neural networks in that neurons communicate with one another using spikes. This unique architecture allows us to look into the role of heterogeneity in learning. Since it is conjectured that information is encoded by the timing of spikes, we are particularly interested in the heterogeneity of the time constants of neurons. We then train SNNs for classification tasks on a range of visual and auditory neuromorphic datasets, which contain streams of events (spike times) instead of conventional frame-based data, and show that overall performance is improved by allowing the neurons to have different time constants, especially on tasks with richer temporal structure. We also find that the learned time constants are distributed similarly to those experimentally observed in some mammalian cells. In addition, we demonstrate that learning with heterogeneity improves robustness against hyperparameter mistuning. These results suggest that heterogeneity may be more than a byproduct of noisy processes and perhaps serves a key role in learning in changing environments, yet it has been overlooked in basic artificial models.
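The effect of heterogeneous time constants can be illustrated with a minimal leaky integrate-and-fire simulation (a generic LIF model with assumed parameter ranges, not the thesis's trained SNNs): given the same constant drive, neurons with shorter membrane time constants charge faster and fire more often.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, steps, dt = 5, 200, 1e-3

# heterogeneous membrane time constants, one per neuron (assumed 5-50 ms range)
tau = rng.uniform(5e-3, 50e-3, n_neurons)
v = np.zeros(n_neurons)
threshold, v_reset = 1.0, 0.0
drive = 1.2                                  # constant input current (arbitrary units)

spike_counts = np.zeros(n_neurons, dtype=int)
for _ in range(steps):
    v += dt * (drive - v) / tau              # leaky integration with per-neuron tau
    fired = v >= threshold
    spike_counts += fired                    # record spikes
    v[fired] = v_reset                       # reset after firing
```

A population with a spread of `tau` values thus spans a range of effective temporal filters, which is one intuition for why heterogeneity helps on tasks with rich temporal structure.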
While neuromorphic datasets, which are often captured by neuromorphic devices that closely model the corresponding biological systems, have enabled us to explore the more biologically plausible SNNs, there still exists a gap in understanding how spike times encode information in actual biological neural networks like human brains, as such data is difficult to acquire due to the trade-off between timing precision and the number of cells that can be recorded electrically at the same time. Instead, what we usually obtain is low-rate discrete samples of trains of filtered spikes. Hence, in the second part of the thesis, we focus on a different type of problem involving pulses, namely retrieving the precise pulse locations from these low-rate samples. We make use of finite rate of innovation (FRI) sampling theory, which states that perfect reconstruction is possible for classes of continuous non-bandlimited signals that have a small number of free parameters. However, existing FRI methods break down under very noisy conditions due to the so-called subspace swap event. Thus, we present two novel model-based learning architectures: Deep Unfolded Projected Wirtinger Gradient Descent (Deep Unfolded PWGD) and the FRI Encoder-Decoder Network (FRIED-Net). The former is based on an existing iterative denoising algorithm for subspace-based methods, while the latter directly models the relationship between the samples and the pulse locations using an autoencoder-like network. Using a stream of K Diracs as an example, we show that both algorithms are able to overcome the breakdown inherent in existing subspace-based methods. Moreover, we extend our FRIED-Net framework beyond conventional FRI methods by considering the case where the pulse shape is unknown, and show that the pulse shape can be learned using backpropagation. This coincides with the application of spike detection from real-world calcium imaging data, where we achieve competitive results.
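The baseline that these learned architectures improve upon is the classical annihilating-filter (Prony) method for a stream of K Diracs; the noiseless sketch below is the textbook case, not the noisy regime the thesis targets:

```python
import numpy as np

K = 2                                   # number of Diracs
t_true = np.array([0.2, 0.55])          # locations in [0, 1)
a_true = np.array([1.0, 0.7])           # amplitudes

# Fourier coefficients of the Dirac stream: s[m] = sum_k a_k e^{-j 2 pi m t_k}
M = 2 * K + 1                           # minimal number of coefficients (2K+1)
m = np.arange(M)
s = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# Annihilating filter h (length K+1): (h * s)[m] = 0 for m = K, ..., 2K.
A = np.array([[s[K + i - l] for l in range(K + 1)] for i in range(K + 1)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()                       # null vector of the convolution system

# The filter's roots are e^{-j 2 pi t_k}: read off the Dirac locations.
roots = np.roots(h)
t_vals = np.mod(-np.angle(roots) / (2 * np.pi), 1.0)
order = np.argsort(t_vals)
t_hat = t_vals[order]

# Amplitudes from the Vandermonde system s = V a.
V = roots[order][None, :] ** m[:, None]
a_hat, *_ = np.linalg.lstsq(V, s, rcond=None)
```

Under noise, the null vector estimated from the SVD can lock onto the wrong singular subspace (the subspace swap event), which is precisely the breakdown the proposed model-based learning architectures are designed to overcome.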
Finally, we explore beyond canonical FRI signals and demonstrate that FRIED-Net is able to reconstruct streams of pulses with different shapes.