Phase-Quantized Block Noncoherent Communication
Analog-to-digital conversion (ADC) is a key bottleneck in scaling DSP-centric
receiver architectures to multi-Gigabit/s speeds. Recent information-theoretic
results, obtained under ideal channel conditions (perfect synchronization, no
dispersion), indicate that low-precision ADC (1-4 bits) could be a suitable
choice for designing such high speed systems. In this work, we study the impact
of employing low-precision ADC in a {\it carrier asynchronous} system.
Specifically, we consider transmission over the block noncoherent Additive
White Gaussian Noise (AWGN) channel, and investigate the achievable performance
under low-precision output quantization. We focus attention on an architecture
in which the receiver quantizes {\it only the phase} of the received signal:
this has the advantage of being implementable without automatic gain control,
using multiple 1-bit ADCs preceded by analog multipliers. For standard uniform
Phase Shift Keying (PSK) modulation, we study the structure of the transition
density of the resulting phase-quantized block noncoherent channel. Several
results, based on the symmetry inherent in the channel model, are provided to
characterize this transition density. Low-complexity procedures for computing
the channel capacity, and for block demodulation, are obtained using these
results. Numerical computations are performed to compare the performance of
quantized and unquantized systems, for different quantization precisions, and
different block lengths. It is observed, for example, that with QPSK
modulation, 8-bin phase quantization of the received signal recovers about
80-85% of the capacity attained with unquantized observations, while 12-bin
phase quantization recovers more than 90% of the unquantized capacity.
Dithering the constellation is shown to improve performance in the face of
such drastic quantization.
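As a rough illustration of the receiver front end described above, the sketch below (a hypothetical helper, not code from the paper) maps complex baseband samples to 8 uniform phase sectors; only the phase is used, so no automatic gain control is needed. A minimal sketch assuming numpy:

```python
import numpy as np

def quantize_phase(y, n_bins=8):
    """Map each complex sample to one of n_bins uniform phase sectors.

    Only the phase of y is used, so the quantizer needs no automatic
    gain control; in hardware it corresponds to multiple 1-bit ADCs
    preceded by analog multipliers.
    """
    step = 2 * np.pi / n_bins
    return np.floor(np.angle(y) / step).astype(int) % n_bins

# Toy example: a block of QPSK symbols seen through an unknown,
# block-constant carrier phase plus AWGN.
rng = np.random.default_rng(0)
symbols = np.exp(1j * np.pi / 4) * 1j ** rng.integers(0, 4, size=6)
theta = rng.uniform(0, 2 * np.pi)              # unknown carrier phase
noise = 0.1 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))
bins = quantize_phase(symbols * np.exp(1j * theta) + noise, n_bins=8)
```

Block demodulation and capacity computation then operate on these bin indices rather than on the full complex observations.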
Interference management and capacity analysis for mm-wave picocells in urban canyons
Millimeter (mm) wave picocellular networks are a promising approach for
delivering the 1000-fold capacity increase required to keep up with projected
demand for wireless data: the available bandwidth is orders of magnitude larger
than that in existing cellular systems, and the small carrier wavelength
enables the realization of highly directive antenna arrays in compact form
factor, thus drastically increasing spatial reuse. In this paper, we carry out
an interference analysis for mm-wave picocells in an urban canyon with a dense
deployment of base stations. Each base station sector can serve multiple
simultaneous users, which implies that both intra- and inter-cell interference
must be managed. We propose a \textit{cross-layer} approach to interference
management based on (i) suppressing interference at the physical layer and (ii)
managing the residual interference at the medium access control layer. We
provide an estimate of network capacity and establish that a 1000-fold capacity
increase relative to conventional LTE cellular networks is indeed feasible.
Compressive spectral embedding: sidestepping the SVD
Spectral embedding based on the Singular Value Decomposition (SVD) is a
widely used "preprocessing" step in many learning tasks, typically leading to
dimensionality reduction by projecting onto a number of dominant singular
vectors and rescaling the coordinate axes (by a predefined function of the
singular value). However, the number of such vectors required to capture
problem structure grows with problem size, and even partial SVD computation
becomes a bottleneck. In this paper, we propose a low-complexity {\it compressive}
spectral embedding algorithm, which employs random projections and finite-order
polynomial expansions to compute approximations to SVD-based embedding. For an
$m \times n$ matrix with $T$ non-zeros, its time complexity is $O((T+m+n)\log(m+n))$,
and the embedding dimension is $O(\log(m+n))$, both of which are independent of
the number of singular vectors whose effect we wish to capture. To the best of
our knowledge, this is the first work to circumvent this dependence on the
number of singular vectors for general SVD-based embeddings. The key to
sidestepping the SVD is the observation that, for downstream inference tasks
such as clustering and classification, we are only interested in using the
resulting embedding to evaluate pairwise similarity metrics derived from the
Euclidean norm, rather than capturing the effect of the underlying matrix on
arbitrary vectors as a partial SVD tries to do. Our numerical results on
network datasets demonstrate the efficacy of the proposed method, and motivate
further exploration of its application to large-scale inference tasks.
Comment: NIPS 201
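The idea of replacing a partial SVD with random probes filtered through a polynomial of the matrix can be sketched as follows. This is a toy (the crudest polynomial choice, a plain matrix power, standing in for the paper's finite-order expansions; the function name is hypothetical), assuming numpy:

```python
import numpy as np

def compressive_embedding(A, d=16, order=8, seed=0):
    """Toy compressive spectral embedding of a symmetric matrix A.

    Rather than computing singular vectors, apply a polynomial of A
    (here simply A^order) to d random Gaussian probe vectors; rows of
    the result serve as embedding coordinates for pairwise-similarity
    computations.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((A.shape[0], d)) / np.sqrt(d)  # random projection
    for _ in range(order):
        X = A @ X
        X /= np.linalg.norm(X)      # rescale iterates to avoid overflow
    return X

# Two disconnected 4-node blocks: rows within a block receive identical
# embedding coordinates, while cross-block similarity stays incoherent.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
E = compressive_embedding(A, d=4, order=3)
```

The cost per probe is one sparse matrix-vector product per polynomial term, which is what yields the near-linear overall complexity quoted in the abstract.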
Cooperative localization using angle of arrival measurements: sequential algorithms and non-line-of-sight suppression
We investigate localization of a source based on angle of arrival (AoA)
measurements made at a geographically dispersed network of cooperating
receivers. The goal is to efficiently compute accurate estimates despite
outliers in the AoA measurements due to multipath reflections in
non-line-of-sight (NLOS) environments. Maximum likelihood (ML) location
estimation in such a setting requires exhaustive testing of estimates from all
possible subsets of "good" measurements, which has exponential complexity in
the number of measurements. We provide a randomized algorithm that approaches
ML performance with linear complexity in the number of measurements. The
building block for this algorithm is a low-complexity sequential algorithm for
updating the source location estimates under line-of-sight (LOS) environments.
Our Bayesian framework can exploit the ability to resolve multiple paths in
wideband systems to provide significant performance gains over narrowband
systems in NLOS environments, and easily extends to accommodate additional
information such as range measurements and prior information about location.
Comment: 31 pages, 11 figures, related to MELT'08 Workshop proceedings
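The LOS building block behind such schemes can be sketched as a linear least-squares triangulation from bearings (a hypothetical helper, not the paper's sequential Bayesian algorithm), assuming numpy:

```python
import numpy as np

def aoa_least_squares(positions, bearings):
    """Least-squares source location from AoA bearings (LOS assumed).

    Receiver i at positions[i] measures bearing bearings[i] (radians
    from the x-axis) toward the source; each bearing pins the source to
    a line, and the stacked line equations are solved in the
    least-squares sense.
    """
    positions = np.asarray(positions, dtype=float)
    s, c = np.sin(bearings), np.cos(bearings)
    A = np.column_stack([s, -c])                   # line normals
    b = s * positions[:, 0] - c * positions[:, 1]
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Three receivers with exact bearings toward a source at (3, 4)
pos = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
brg = np.arctan2(4.0 - pos[:, 1], 3.0 - pos[:, 0])
est = aoa_least_squares(pos, brg)
```

A randomized NLOS-suppression scheme in the spirit of the abstract would repeatedly run an estimator like this on random measurement subsets and keep the hypothesis most consistent with the remaining bearings; the sketch covers only the LOS step.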
Noncoherent compressive channel estimation for mm-wave massive MIMO
Millimeter (mm) wave massive MIMO has the potential for delivering orders of
magnitude increases in mobile data rates, with compact antenna arrays providing
narrow steerable beams for unprecedented levels of spatial reuse. A fundamental
technical bottleneck, however, is rapid spatial channel estimation and beam
adaptation in the face of mobility and blockage. Recently proposed compressive
techniques which exploit the sparsity of mm wave channels are a promising
approach to this problem, with overhead scaling linearly with the number of
dominant paths and logarithmically with the number of array elements. Further,
they can be implemented with RF beamforming with low-precision phase control.
However, these methods make implicit assumptions on long-term phase coherence
that are not satisfied by existing hardware. In this paper, we propose and
evaluate a noncoherent compressive channel estimation technique which can
estimate a sparse spatial channel based on received signal strength (RSS)
alone, and is compatible with off-the-shelf hardware. The approach is based on
cascading phase retrieval (i.e., recovery of complex-valued measurements from
RSS measurements, up to a scalar multiple) with coherent compressive
estimation. While a conventional cascade scheme would multiply two measurement
matrices to obtain an overall matrix whose entries are in a continuum, a key
novelty in our scheme is that we constrain the overall measurement matrix to be
implementable using coarsely quantized pseudorandom phases, employing a virtual
decomposition of the matrix into a product of measurement matrices for phase
retrieval and compressive estimation. Theoretical and simulation results show
that our noncoherent method scales almost as well with array size as its
coherent counterpart, thus inheriting the scalability and low overhead of the
latter.
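The key hardware constraint, that every measurement-matrix entry be a unit-magnitude coarsely quantized phase, is easy to illustrate (a toy sketch with a hypothetical helper, not the paper's cascaded scheme), assuming numpy:

```python
import numpy as np

def quantized_phase_matrix(m, n, bits=2, seed=0):
    """Pseudorandom measurement matrix with coarsely quantized phases.

    Every entry has unit magnitude and a phase on a 2**bits-point grid,
    so the matrix is realizable with RF phase shifters alone (no
    per-element gain control).
    """
    rng = np.random.default_rng(seed)
    levels = 2 ** bits
    return np.exp(2j * np.pi * rng.integers(0, levels, size=(m, n)) / levels)

# RSS-only measurements of a sparse beamspace channel with two paths
x = np.zeros(64, dtype=complex)
x[5], x[40] = 1.0, 0.5j                 # two dominant path gains
A = quantized_phase_matrix(32, 64)
rss = np.abs(A @ x)                     # phase information is discarded
```

The noncoherent scheme in the abstract would recover x (up to a scalar) from measurements like `rss` by cascading phase retrieval with coherent compressive estimation.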
Robust Wireless Fingerprinting via Complex-Valued Neural Networks
A "wireless fingerprint" which exploits hardware imperfections unique to each
device is a potentially powerful tool for wireless security. Such a fingerprint
should be able to distinguish between devices sending the same message, and
should be robust against standard spoofing techniques. Since the information in
wireless signals resides in complex baseband, in this paper, we explore the use
of neural networks with complex-valued weights to learn fingerprints using
supervised learning. We demonstrate that, while there are potential benefits to
using sections of the signal beyond just the preamble to learn fingerprints,
the network cheats when it can, using information such as transmitter ID (which
can be easily spoofed) to artificially inflate performance. We also show that
noise augmentation by inserting additional white Gaussian noise can lead to
significant performance gains, which indicates that this counter-intuitive
strategy helps in learning more robust fingerprints. We provide results for two
different wireless protocols, WiFi and ADS-B, demonstrating the effectiveness
of the proposed method.
Comment: Accepted at IEEE Global Communications Conference (Globecom) 201
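The noise-augmentation step described above amounts to adding complex white Gaussian noise at a controlled SNR to each training capture. A minimal sketch (hypothetical helper name), assuming numpy:

```python
import numpy as np

def awgn_augment(iq, snr_db, rng=None):
    """Return a noisy copy of a complex-baseband capture at a target SNR.

    Noise power is set relative to the empirical signal power; training
    on such noisy copies discourages the network from latching onto
    fragile, easily spoofed features such as residual transmitter-ID
    content.
    """
    rng = np.random.default_rng() if rng is None else rng
    p_sig = np.mean(np.abs(iq) ** 2)
    p_noise = p_sig / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(p_noise / 2.0) * (rng.standard_normal(iq.shape)
                                      + 1j * rng.standard_normal(iq.shape))
    return iq + noise
```

In a training pipeline, each minibatch would draw fresh noise realizations, so the network never sees the same noisy copy twice.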
Newtonized Orthogonal Matching Pursuit: Frequency Estimation over the Continuum
We propose a fast sequential algorithm for the fundamental problem of
estimating frequencies and amplitudes of a noisy mixture of sinusoids. The
algorithm is a natural generalization of Orthogonal Matching Pursuit (OMP) to
the continuum using Newton refinements, and hence is termed Newtonized OMP
(NOMP). Each iteration consists of two phases: detection of a new sinusoid, and
sequential Newton refinements of the parameters of already detected sinusoids.
The refinements play a critical role in two ways: (1) sidestepping the
potential basis mismatch from discretizing a continuous parameter space, (2)
providing feedback for locally refining parameters estimated in previous
iterations. We characterize convergence, and provide a Constant False Alarm
Rate (CFAR) based termination criterion. By benchmarking against the Cramer Rao
Bound, we show that NOMP achieves near-optimal performance under a variety of
conditions. We compare the performance of NOMP with classical algorithms such
as MUSIC and more recent Atomic norm Soft Thresholding (AST) and Lasso
algorithms, both in terms of frequency estimation accuracy and run time.
Comment: Submitted to IEEE Transactions on Signal Processing (TSP)
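The detect-then-refine step at the heart of the algorithm can be sketched for a single sinusoid (a simplified illustration, not the full NOMP implementation), assuming numpy:

```python
import numpy as np

def detect_and_refine(y, oversample=4, newton_steps=5):
    """Detect one sinusoid in y and Newton-refine its frequency.

    Coarse stage: strongest bin of an oversampled DFT. Refinement:
    Newton ascent on the periodogram J(w) = |<e^{jwn}, y>|^2, which
    sidesteps the basis mismatch of the discrete frequency grid.
    """
    N = len(y)
    n = np.arange(N)
    grid = 2 * np.pi * np.arange(oversample * N) / (oversample * N)
    w = grid[np.argmax(np.abs(np.exp(-1j * np.outer(grid, n)) @ y))]
    for _ in range(newton_steps):
        a = np.exp(1j * w * n)
        z = np.vdot(a, y)                      # correlation <a, y>
        dz = np.vdot(1j * n * a, y)            # dz/dw
        d2z = np.vdot(-(n ** 2) * a, y)        # d2z/dw2
        J1 = 2 * np.real(np.conj(z) * dz)                     # J'(w)
        J2 = 2 * np.real(np.conj(z) * d2z + np.abs(dz) ** 2)  # J''(w)
        if J2 < 0:                             # step only where concave
            w -= J1 / J2
    return w
```

The full NOMP loop would subtract the refined sinusoid from the residual, detect the next one, and cyclically re-refine the parameters of all previously detected sinusoids; this sketch shows only the single-sinusoid detect-plus-refine building block.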
On the information in spike timing: Neural codes derived from polychronous groups
There is growing evidence regarding the importance of spike timing in neural information processing, with even a small number of spikes carrying information, but computational models lag significantly behind those for rate coding. Experimental evidence on neuronal behavior is consistent with the dynamical and state-dependent behavior provided by recurrent connections. This motivates the minimalistic abstraction investigated in this paper, aimed at providing insight into information encoding in spike timing via recurrent connections. We employ information-theoretic techniques for a simple reservoir model which encodes input spatiotemporal patterns into a sparse neural code, translating the polychronous groups introduced by Izhikevich into codewords on which we can perform standard vector operations. We show that the distance properties of the code are similar to those for (optimal) random codes. In particular, the code meets benchmarks associated with both linear classification and capacity, with the latter scaling exponentially with reservoir size.
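The distance properties claimed above can be illustrated with sparse random binary codewords standing in for the activation patterns of polychronous groups (a toy illustration, not the paper's reservoir model), assuming numpy:

```python
import numpy as np

def pairwise_hamming(codes):
    """Pairwise Hamming distances between binary codewords (rows)."""
    codes = np.asarray(codes, dtype=int)
    return np.sum(codes[:, None, :] != codes[None, :, :], axis=-1)

# Sparse random binary codewords: ~5% of 200 units active per codeword,
# a stand-in for which polychronous groups fire for each input pattern.
rng = np.random.default_rng(0)
codes = (rng.random((20, 200)) < 0.05).astype(int)
D = pairwise_hamming(codes)
off_diag = D[~np.eye(20, dtype=bool)]   # distance spread of the code
```

For a good code, the off-diagonal distances concentrate well away from zero, which is the property that supports both linear classification and capacity scaling.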
Joint Routing and Resource Allocation for Millimeter Wave Picocellular Backhaul
Picocellular architectures are essential for providing the spatial reuse
required to satisfy the ever-increasing demand for mobile data. A key
deployment challenge is to provide backhaul connections with sufficiently high
data rate. Providing wired support (e.g., using optical fiber) to pico base
stations deployed opportunistically on lampposts and rooftops is impractical,
hence wireless backhaul becomes an attractive approach. A multihop mesh network
comprised of directional millimeter wave links is considered here for this
purpose. The backhaul design problem is formulated as one of joint routing and
resource allocation, accounting for mutual interference across simultaneously
active links. A computationally tractable formulation is developed by
leveraging the localized nature of interference and the provable existence of a
sparse optimal allocation. Numerical results are provided for millimeter (mm)
wave mesh networks, which are well suited for scaling backhaul data rates due
to abundance of spectrum, and the ability to form highly directional,
electronically steerable beams.
Combating Adversarial Attacks Using Sparse Representations
It is by now well-known that small adversarial perturbations can induce
classification errors in deep neural networks (DNNs). In this paper, we make
the case that sparse representations of the input data are a crucial tool for
combating such attacks. For linear classifiers, we show that a sparsifying
front end is provably effective against $\ell_\infty$-bounded attacks,
reducing output distortion due to the attack by a factor of roughly $K/N$,
where $N$ is the data dimension and $K$ is the sparsity level. We then extend
this concept to DNNs, showing that a "locally linear" model can be used to
develop a theoretical foundation for crafting attacks and defenses.
Experimental results for the MNIST dataset show the efficacy of the proposed
sparsifying front end.
Comment: Accepted at ICLR Workshop 201
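The front-end operation itself, projecting onto an orthonormal basis and keeping only the largest coefficients, can be sketched as follows (a toy with an identity basis and a hypothetical helper, not the paper's exact construction or analysis), assuming numpy:

```python
import numpy as np

def sparsifying_front_end(x, basis, k):
    """Project x onto an orthonormal basis, keep the k largest-magnitude
    coefficients, and reconstruct; perturbation energy outside the
    retained k-dimensional subspace is discarded."""
    coeffs = basis.T @ x
    keep = np.argsort(np.abs(coeffs))[-k:]
    mask = np.zeros_like(coeffs)
    mask[keep] = 1.0
    return basis @ (coeffs * mask)

# Toy demo: a signal that is 5-sparse in the identity basis, hit by a
# small dense perturbation. The front end passes only the perturbation
# energy landing on the retained coordinates.
d, k = 100, 5
basis = np.eye(d)
x = np.zeros(d)
x[:k] = 10.0
attack = 0.1 * np.ones(d)               # dense bounded perturbation
residual = sparsifying_front_end(x + attack, basis, k) - x
```

In this toy, the l2 size of the perturbation reaching the classifier shrinks from sqrt(d)*0.1 to roughly sqrt(k)*0.1; the paper's analysis instead bounds the output distortion of a linear classifier, which this sketch does not reproduce.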