Reduced-Dimension Linear Transform Coding of Correlated Signals in Networks
A model, called the linear transform network (LTN), is proposed to analyze
the compression and estimation of correlated signals transmitted over directed
acyclic graphs (DAGs). An LTN is a DAG network with multiple source and
receiver nodes. Source nodes transmit subspace projections of random correlated
signals by applying reduced-dimension linear transforms. The subspace
projections are linearly processed by multiple relays and routed to intended
receivers. Each receiver applies a linear estimator to approximate a subset of
the sources with minimum mean squared error (MSE) distortion. The model is
extended to include noisy networks with power constraints on transmitters. A
key task is to compute all local compression matrices and linear estimators in
the network to minimize end-to-end distortion. The non-convex problem is solved
iteratively within an optimization framework using constrained quadratic
programs (QPs). The proposed algorithm recovers as special cases the regular
and distributed Karhunen-Loeve transforms (KLTs). Cut-set lower bounds on the
distortion region of multi-source, multi-receiver networks are given for linear
coding based on convex relaxations. Cut-set lower bounds are also given for any
coding strategy based on information theory. The distortion region and
compression-estimation tradeoffs are illustrated for different communication
demands (e.g. multiple unicast) and graph structures.
Comment: 33 pages, 7 figures. To appear in IEEE Transactions on Signal Processing.
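The regular KLT recovered as a special case can be sketched concretely. Below is a minimal single-source, single-receiver illustration (assumptions mine: a noiseless link and a zero-mean correlated Gaussian source), where the MSE-optimal reduced-dimension transform is the Karhunen-Loeve transform and the distortion equals the sum of the discarded eigenvalues:

```python
# Minimal sketch (not the paper's full LTN algorithm): with a noiseless
# link, the MSE-optimal k x n compression matrix is the KLT, i.e. the
# top-k eigenvectors of the source covariance, and the resulting MSE is
# the sum of the discarded eigenvalues.
import numpy as np

def klt_compress(cov, k):
    """Return the k x n KLT compression matrix and its MSE."""
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # sort descending
    C = eigvecs[:, order[:k]].T                 # top-k eigenvectors as rows
    mse = eigvals[order[k:]].sum()              # discarded variance = MSE
    return C, mse

# Toy correlated source with an AR(1)-style covariance (illustrative).
n, k, rho = 4, 2, 0.9
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C, mse = klt_compress(cov, k)

# The receiver's LMMSE estimator for y = C x and its end-to-end MSE.
W = cov @ C.T @ np.linalg.inv(C @ cov @ C.T)
achieved = np.trace(cov - W @ C @ cov)
assert np.isclose(achieved, mse)
```

The same alternating structure (fix the estimators, update one compression matrix via a constrained QP, repeat) extends this to networks with relays and noise.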
Power-Constrained Sparse Gaussian Linear Dimensionality Reduction over Noisy Channels
In this paper, we investigate power-constrained sensing matrix design in a
sparse Gaussian linear dimensionality reduction framework. Our study is carried
out in a single-terminal setup as well as in a multi-terminal setup
consisting of orthogonal or coherent multiple access channels (MAC). We adopt
the mean square error (MSE) performance criterion for sparse source
reconstruction in a system where source-to-sensor channel(s) and
sensor-to-decoder communication channel(s) are noisy. Our proposed sensing
matrix design procedure relies upon minimizing a lower-bound on the MSE in
single- and multiple-terminal setups. We propose a three-stage sensing matrix
optimization scheme that combines semi-definite relaxation (SDR) programming, a
low-rank approximation problem, and power rescaling. Under certain conditions,
we derive closed-form solutions to the proposed optimization procedure. Through
numerical experiments, by applying practical sparse reconstruction algorithms,
we show the superiority of the proposed scheme by comparing it with other
relevant methods. This performance improvement is achieved at the price of
higher computational complexity. Hence, in order to address the complexity
burden, we present an equivalent stochastic optimization method to the problem
of interest that can be solved approximately, while still providing superior
performance over the popular methods.
Comment: Accepted for publication in IEEE Transactions on Signal Processing (16 pages).
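Two of the three stages can be illustrated in isolation (the SDR stage is omitted here, and the "relaxed solution" Q below is a toy stand-in of my own): take the best rank-m approximation of Q by eigendecomposition (Eckart-Young), factor it into an m x n sensing matrix, and rescale to meet the power budget trace(A Aᵀ) = P:

```python
# Hypothetical sketch of the low-rank approximation and power-rescaling
# stages of a three-stage sensing matrix design; the semidefinite
# relaxation producing Q is not shown.
import numpy as np

def lowrank_power_rescale(Q, m, P):
    """Factor the best rank-m PSD approximation of Q into an m x n
    sensing matrix A, then rescale A to use the full power budget P."""
    eigvals, eigvecs = np.linalg.eigh(Q)
    order = np.argsort(eigvals)[::-1][:m]            # keep m largest modes
    lam = np.clip(eigvals[order], 0.0, None)         # keep the PSD part
    A = np.sqrt(lam)[:, None] * eigvecs[:, order].T  # A^T A = rank-m approx of Q
    A *= np.sqrt(P / np.trace(A @ A.T))              # power rescaling
    return A

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
Q = B @ B.T                       # toy full-rank "relaxed" solution
A = lowrank_power_rescale(Q, m=2, P=3.0)
assert A.shape == (2, 5)
assert np.isclose(np.trace(A @ A.T), 3.0)
```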
Random Projections For Large-Scale Regression
Fitting linear regression models can be computationally very expensive in
large-scale data analysis tasks if the sample size and the number of variables
are very large. Random projections are extensively used as a dimension
reduction tool in machine learning and statistics. We discuss the applications
of random projections in linear regression problems, developed to decrease
computational costs, and give an overview of the theoretical guarantees of the
generalization error. It can be shown that the combination of random
projections with least squares regression leads to similar recovery as ridge
regression and principal component regression. We also discuss possible
improvements when averaging over multiple random projections, an approach that
lends itself easily to parallel implementation.
Comment: 13 pages, 3 figures.
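The core idea can be sketched in a few lines (illustrative, not the paper's exact estimator): project the design matrix with a Gaussian random matrix, solve least squares in the reduced space, and map the coefficients back; averaging over independent projections parallelizes naturally:

```python
# Compressed least squares via a Gaussian random projection (sketch;
# names and dimensions are illustrative).
import numpy as np

def compressed_ls(X, y, k, rng):
    """Fit OLS in a k-dimensional random subspace of the p predictors."""
    n, p = X.shape
    R = rng.standard_normal((p, k)) / np.sqrt(k)   # random projection
    Z = X @ R                                      # reduced design, n x k
    gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)  # OLS in reduced space
    return R @ gamma                               # coefficients back in R^p

rng = np.random.default_rng(1)
n, p, k = 200, 50, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0                 # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

beta_hat = compressed_ls(X, y, k, rng)             # single projection
# Averaging over independent projections reduces the variance of the
# estimate, and each fit can run in parallel.
avg = np.mean([compressed_ls(X, y, k, rng) for _ in range(20)], axis=0)
```

The shrinkage induced by the projection is what makes the result behave like ridge or principal component regression rather than plain OLS.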
Optimized Compressed Sensing Matrix Design for Noisy Communication Channels
We investigate a power-constrained sensing matrix design problem for a
compressed sensing framework. We adopt a mean square error (MSE) performance
criterion for sparse source reconstruction in a system where the
source-to-sensor channel and the sensor-to-decoder communication channel are
noisy. Our proposed sensing matrix design procedure relies upon minimizing a
lower-bound on the MSE. Under certain conditions, we derive closed-form
solutions to the optimization problem. Through numerical experiments, by
applying practical sparse reconstruction algorithms, we show the strength of
the proposed scheme by comparing it with other relevant methods. We discuss the
computational complexity of our design method, and develop an equivalent
stochastic optimization method to the problem of interest that can be solved
approximately with a significantly less computational burden. We illustrate
that the low-complexity method still outperforms the popular competing methods.
Comment: Submitted to IEEE ICC 2015 (extended version).
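The flavor of the stochastic formulation can be conveyed with a simplified stand-in (not the paper's method): estimate the reconstruction MSE of candidate power-constrained sensing matrices by Monte Carlo with an oracle LMMSE decoder, using a correlated Gaussian source in place of the sparse one, and keep the best candidate:

```python
# Illustrative random-search sketch of a sampled (stochastic) sensing
# matrix optimization; the source model, decoder, and search strategy
# are all simplifying assumptions of this sketch.
import numpy as np

def empirical_mse(A, cov_x, noise_var, rng, n_samples=500):
    """Monte Carlo MSE of an oracle LMMSE decoder for y = A x + noise."""
    n = cov_x.shape[0]
    L = np.linalg.cholesky(cov_x)
    X = L @ rng.standard_normal((n, n_samples))          # correlated sources
    Y = A @ X + np.sqrt(noise_var) * rng.standard_normal((A.shape[0], n_samples))
    W = cov_x @ A.T @ np.linalg.inv(A @ cov_x @ A.T + noise_var * np.eye(A.shape[0]))
    return np.mean((X - W @ Y) ** 2)

rng = np.random.default_rng(2)
n, m, P, noise_var = 6, 3, 4.0, 0.1
cov_x = 0.8 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

best_A, best_mse = None, np.inf
for _ in range(50):
    A = rng.standard_normal((m, n))
    A *= np.sqrt(P / np.trace(A @ A.T))                  # enforce power budget
    mse = empirical_mse(A, cov_x, noise_var, rng)
    if mse < best_mse:
        best_A, best_mse = A, mse
```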
Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks
Recent advances in electronics are enabling substantial processing to be
performed at each node (robots, sensors) of a networked system. Local
processing enables data compression and may mitigate measurement noise, but it
is slower than processing at a central computer (it entails a larger
computational delay). However, while nodes can process their data in parallel,
centralized computation is sequential in nature. On the other hand, if a
node sends raw data to a central computer for processing, it incurs
communication delay. This leads to a fundamental communication-computation
trade-off, where each node has to decide on the optimal amount of preprocessing
in order to maximize the network performance. We consider a network in charge
of estimating the state of a dynamical system and provide three contributions.
First, we provide a rigorous problem formulation for optimal real-time
estimation in processing networks in the presence of delays. Second, we show
that, in the case of a homogeneous network (where all sensors have the same
computation) that monitors a continuous-time scalar linear system, the optimal
amount of local preprocessing maximizing the network estimation performance can
be computed analytically. Third, we consider the realistic case of a
heterogeneous network monitoring a discrete-time multi-variate linear system
and provide algorithms to decide on suitable preprocessing at each node, and to
select a sensor subset when computational constraints make using all sensors
suboptimal. Numerical simulations show that selecting the sensors is crucial.
Moreover, we show that if the nodes apply the preprocessing policy suggested by
our algorithms, they can substantially improve the network estimation performance.
Comment: 15 pages, 16 figures. Accepted journal version.
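The trade-off can be made concrete with a toy delay model (all constants and functional forms below are illustrative assumptions, not the paper's): compressing by a factor c adds parallel local processing delay, but shrinks both the per-node transmission time and the sequential central workload:

```python
# Toy communication-computation trade-off: total latency as a function
# of the local compression factor c. Constants are illustrative.
import numpy as np

def total_delay(c, D=100.0, a=0.05, bw=20.0, b=0.002, nodes=10):
    t_local = a * c                   # local preprocessing, parallel across nodes
    t_comm = D / (c * bw)             # transmitting the compressed data
    t_central = nodes * b * D / c     # central processing, sequential over nodes
    return t_local + t_comm + t_central

cs = np.linspace(1.0, 50.0, 500)      # candidate compression factors
delays = np.array([total_delay(c) for c in cs])
c_star = cs[np.argmin(delays)]        # optimal amount of preprocessing
```

In this model the delay is a*c + K/c with K = D/bw + nodes*b*D, so the optimum sits at c* = sqrt(K/a): more nodes or slower links push toward heavier local preprocessing.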
Optimal Channel Training in Uplink Network MIMO Systems
We consider a multi-cell frequency-selective fading uplink channel (network
MIMO) from K single-antenna user terminals (UTs) to B cooperative base stations
(BSs) with M antennas each. The BSs, assumed to be oblivious of the applied
codebooks, forward compressed versions of their observations to a central
station (CS) via capacity limited backhaul links. The CS jointly decodes the
messages from all UTs. Since the BSs and the CS are assumed to have no prior
channel state information (CSI), the channel needs to be estimated during its
coherence time. Based on a lower bound of the ergodic mutual information, we
determine the optimal fraction of the coherence time used for channel training,
taking different path losses between the UTs and the BSs into account. We then
study how the optimal training length is impacted by the backhaul capacity.
Although our analytical results are based on a large system limit, we show by
simulations that they provide very accurate approximations for even small
system dimensions.
Comment: 15 pages, 7 figures. To appear in the IEEE Transactions on Signal Processing.
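The training-length optimization can be sketched numerically in a simplified point-to-point model (an assumption of this sketch, not the paper's network-MIMO bound): spending tau of the T coherence symbols on pilots improves the channel estimate, raising the effective SNR, but leaves only T - tau symbols for data:

```python
# Simplified pilot/data trade-off: sweep the training length tau and
# pick the one maximizing an effective-rate lower bound. The estimation
# quality model below is an illustrative assumption.
import numpy as np

def effective_rate(tau, T=100, snr=10.0):
    est_gain = tau * snr / (1.0 + tau * snr)  # estimate quality in [0, 1)
    snr_eff = snr * est_gain                  # SNR after estimation loss
    return (1.0 - tau / T) * np.log2(1.0 + snr_eff)

taus = np.arange(1, 100)
rates = np.array([effective_rate(t) for t in taus])
tau_star = taus[np.argmax(rates)]             # optimal training length
```

At moderate SNR the optimum is a short training phase: the estimate quality saturates quickly, while every extra pilot symbol directly costs data time.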
Boosting Fronthaul Capacity: Global Optimization of Power Sharing for Centralized Radio Access Network
The limited fronthaul capacity imposes a challenge on the uplink of
centralized radio access network (C-RAN). We propose to boost the fronthaul
capacity of massive multiple-input multiple-output (MIMO) aided C-RAN by
globally optimizing the power sharing between channel estimation and data
transmission both for the user devices (UDs) and the remote radio units (RRUs).
Intuitively, allocating more power to the channel estimation will result in
more accurate channel estimates, which increases the achievable throughput.
However, increasing the power allocated to the pilot training will reduce the
power assigned to data transmission, which reduces the achievable throughput.
In order to optimize the powers allocated to the pilot training and to the data
transmission of both the UDs and the RRUs, we assign an individual power
sharing factor to each of them and derive an asymptotic closed-form expression
of the signal-to-interference-plus-noise ratio (SINR) for the massive MIMO aided C-RAN
consisting of both the UD-to-RRU links and the RRU-to-baseband unit (BBU)
links. We then exploit the C-RAN architecture's central computing and control
capability for jointly optimizing the UDs' power sharing factors and the RRUs'
power sharing factors aiming for maximizing the fronthaul capacity. Our
simulation results show that the fronthaul capacity is significantly boosted by
the proposed global optimization of the power allocation between channel
estimation and data transmission both for the UDs and for their host RRUs. In a
specific example with 32 receive antennas (RAs) deployed at the RRU and 128 RAs
deployed at the BBU, the sum-rate of 10 UDs achieved with the optimal power
sharing factors improves by 33% over that attained without optimizing the power
sharing factors.
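A toy version of the power-sharing search (the SINR model here is an illustrative assumption of this sketch, not the paper's asymptotic closed-form expression): each hop splits its power budget between pilots and data via a factor phi, and a grid search jointly maximizes the end-to-end rate over both hops:

```python
# Joint pilot/data power-sharing over two hops (UD-to-RRU and
# RRU-to-BBU), optimized by grid search. The hop SNR model is a
# simplifying assumption.
import numpy as np

def hop_snr(phi, snr=10.0):
    est = phi * snr / (1.0 + phi * snr)   # estimation quality from pilots
    return (1.0 - phi) * snr * est        # effective data SNR of the hop

def end_to_end_rate(phi_ud, phi_rru):
    snr1, snr2 = hop_snr(phi_ud), hop_snr(phi_rru)
    snr_e2e = 1.0 / (1.0 / snr1 + 1.0 / snr2)  # two noisy hops in series
    return np.log2(1.0 + snr_e2e)

grid = np.linspace(0.05, 0.95, 19)
best = max(((end_to_end_rate(a, b), a, b) for a in grid for b in grid))
rate, phi_ud, phi_rru = best              # jointly optimal sharing factors
```

In this symmetric toy model both hops settle on the same sharing factor; the point of the joint (global) optimization in the paper is precisely that asymmetric links end up with different factors.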