Decentralized Estimation over Orthogonal Multiple-access Fading Channels in Wireless Sensor Networks - Optimal and Suboptimal Estimators
Optimal and suboptimal decentralized estimators in wireless sensor networks
(WSNs) over orthogonal multiple-access fading channels are studied in this
paper. Considering multiple-bit quantization before digital transmission, we
develop maximum likelihood estimators (MLEs) with both known and unknown
channel state information (CSI). When training symbols are available, we derive
an MLE that is a special case of the MLE with unknown CSI. It implicitly uses
the training symbols to estimate the channel coefficients and exploits the
estimated CSI in an optimal way. To reduce the computational complexity, we
propose suboptimal estimators. These estimators exploit both signal and data
level redundant information to improve the estimation performance. The proposed
MLEs reduce to traditional fusion-based or diversity-based estimators when
communications or observations are perfect. By introducing a general message
function, the proposed estimators can be applied when various analog or digital
transmission schemes are used. The simulations show that the estimators using
digital communications with multiple-bit quantization outperform the estimator
using analog-and-forwarding transmission in fading channels. When considering
the total bandwidth and energy constraints, the MLE using multiple-bit
quantization is superior to that using binary quantization at medium and high
observation signal-to-noise ratio levels.
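Setting aside the fading channels and CSI estimation the paper actually treats, the core idea of ML estimation from multi-bit quantized sensor data can be sketched as a grid search over the quantized likelihood; the quantizer edges, noise level, and search grid below are illustrative assumptions:

```python
import math
import numpy as np

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mle_from_quantized(idx, edges, sigma, grid):
    """Grid-search MLE of a scalar theta from quantization-cell indices."""
    ext = np.concatenate(([-np.inf], edges, [np.inf]))  # cell boundaries
    best, best_ll = grid[0], -np.inf
    for theta in grid:
        ll = 0.0
        for k in idx:
            # probability that a N(theta, sigma^2) sample falls in cell k
            p = phi((ext[k + 1] - theta) / sigma) - phi((ext[k] - theta) / sigma)
            ll += math.log(max(p, 1e-300))
        if ll > best_ll:
            best, best_ll = theta, ll
    return best

rng = np.random.default_rng(0)
theta_true, sigma = 0.7, 0.5
x = theta_true + sigma * rng.standard_normal(200)   # sensor observations
edges = np.linspace(-1.5, 2.5, 7)                   # 3-bit quantizer (8 cells)
idx = np.searchsorted(edges, x)                     # transmitted messages
theta_hat = mle_from_quantized(idx, edges, sigma, np.linspace(-1.0, 2.0, 301))
```

With perfect channels this reduces to fusing the quantized data alone, mirroring the abstract's remark that the MLEs reduce to traditional fusion-based estimators when communications are perfect.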
Decentralized Random-Field Estimation for Sensor Networks Using Quantized Spatially Correlated Data and Fusion-Center Feedback
In large-scale wireless sensor networks, sensor-processor elements (nodes) are densely deployed to monitor the environment; consequently, their observations form a random field that is highly correlated in space. We consider a fusion sensor-network architecture where, due to the bandwidth and energy constraints, the nodes transmit quantized data to a fusion center. The fusion center provides feedback by broadcasting summary information to the nodes. In addition to saving energy, this feedback ensures reliability and robustness to node and fusion-center failures. We assume that the sensor observations follow a linear-regression model with known spatial covariances between any two locations within a region of interest. We propose a Bayesian framework for adaptive quantization, fusion-center feedback, and estimation of the random field and its parameters. We also derive a simple suboptimal scheme for estimating the unknown parameters, apply our estimation approach to the no-feedback scenario, discuss field prediction at arbitrary locations within the region of interest, and present numerical examples demonstrating the performance of the proposed methods.
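Setting aside quantization and feedback, the "field prediction at arbitrary locations" step with known spatial covariances is a linear (LMMSE, i.e., simple kriging) predictor; the exponential covariance, node locations, and noise variance below are illustrative assumptions:

```python
import numpy as np

def lmmse_predict(x_obs, y_obs, x_new, cov, noise_var):
    """LMMSE (simple kriging) prediction of a zero-mean field at x_new."""
    K = np.array([[cov(a, b) for b in x_obs] for a in x_obs])
    K += noise_var * np.eye(len(x_obs))      # observation noise on the diagonal
    k_star = np.array([cov(x_new, b) for b in x_obs])
    w = np.linalg.solve(K, k_star)           # LMMSE weights
    return w @ y_obs

cov = lambda a, b: np.exp(-abs(a - b))       # assumed exponential covariance
x_obs = np.array([0.0, 1.0, 2.0])            # node locations
y_obs = np.array([1.0, 0.5, -0.2])           # (unquantized) node observations
pred = lmmse_predict(x_obs, y_obs, 1.5, cov, noise_var=0.1)
```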
On Distributed Linear Estimation With Observation Model Uncertainties
We consider distributed estimation of a Gaussian source in a heterogenous
bandwidth constrained sensor network, where the source is corrupted by
independent multiplicative and additive observation noises, with incomplete
statistical knowledge of the multiplicative noise. For multi-bit quantizers, we
derive the closed-form mean-square-error (MSE) expression for the linear
minimum MSE (LMMSE) estimator at the fusion center (FC). For both error-free
and erroneous communication channels, we propose several rate allocation
methods, named longest root-to-leaf path, greedy, and integer relaxation, to
(i) minimize the
MSE given a network bandwidth constraint, and (ii) minimize the required
network bandwidth given a target MSE. We also derive the Bayesian Cramér-Rao
lower bound (CRLB) and compare the MSE performance of our proposed methods
against the CRLB. Our results corroborate that, for low power multiplicative
observation noises and adequate network bandwidth, the gaps between the MSE of
our proposed methods and the CRLB are negligible, while other methods, such as
individual and uniform rate allocation, perform unsatisfactorily.
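The LMMSE estimator in this setting needs only the second-order statistics of the multiplicative and additive noises. A minimal sketch, ignoring quantization and channel errors (the noise moments and network size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                      # number of sensors
s2 = 1.0                    # variance of the zero-mean Gaussian source
mu_h, v_h = 1.0, 0.1        # mean/variance of the multiplicative noise
v_n = 0.25                  # additive noise variance

# one realization of the observation model y_i = h_i * theta + n_i
theta = rng.normal(0.0, np.sqrt(s2))
h = mu_h + np.sqrt(v_h) * rng.standard_normal(N)
y = h * theta + np.sqrt(v_n) * rng.standard_normal(N)

# LMMSE weights computed from second-order statistics only
C = mu_h**2 * s2 * np.ones((N, N)) + (v_h * s2 + v_n) * np.eye(N)  # E[y y^T]
c = mu_h * s2 * np.ones(N)                                         # E[y theta]
w = np.linalg.solve(C, c)
theta_hat = w @ y               # LMMSE estimate
mse_theory = s2 - c @ w         # closed-form Bayesian MSE
```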
Monte Carlo optimization of decentralized estimation networks over directed acyclic graphs under communication constraints
Motivated by the vision of sensor networks, we consider decentralized estimation networks over bandwidth-limited communication links, and are particularly interested in the tradeoff between the estimation accuracy and the cost of communications due to, e.g., energy consumption. We employ a class of in-network processing strategies that admits directed acyclic graph representations and yields a tractable Bayesian risk that comprises the cost of communications and the estimation error penalty. This perspective captures a broad range of possibilities for processing under network constraints and enables a rigorous design problem in the form of constrained optimization. A similar scheme and the structures exhibited by the solutions have been previously studied in the context of decentralized detection. Under reasonable assumptions, the optimization can be carried out in a message passing fashion. We adopt this framework for estimation; however, the corresponding optimization scheme involves integral operators that cannot be evaluated exactly in general. We develop an approximation framework using Monte Carlo methods and obtain particle representations and approximate computational schemes for both the in-network processing strategies and their optimization. The proposed Monte Carlo optimization procedure operates in a scalable and efficient fashion and, owing to its non-parametric nature, can produce results for any distributions provided that samples can be produced from the marginals. In addition, this approach exhibits graceful degradation of the estimation accuracy asymptotically as the communication becomes more costly, through a parameterized Bayesian risk.
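The core Monte Carlo idea, approximating a Bayesian risk that adds a communication cost to the estimation error penalty by sampling and then comparing candidate strategies, can be sketched for a toy one-sensor rule; the rules, estimator gain, and cost weight `lam` are illustrative assumptions:

```python
import numpy as np

def mc_risk(rule, lam, n=5000, seed=0):
    """Monte Carlo estimate of J = E[(theta - est)^2] + lam * E[bits sent]."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n)                 # samples from the prior
    y = theta + 0.5 * rng.standard_normal(n)       # local observations
    est, bits = rule(y)
    return np.mean((theta - est) ** 2) + lam * np.mean(bits)

def sign_rule(y):
    """Transmit a 1-bit sign message; estimator applies an assumed gain."""
    return 0.8 * np.sign(y), np.ones_like(y)

def silent_rule(y):
    """Send nothing; fall back to the prior mean."""
    return np.zeros_like(y), np.zeros_like(y)

j_sign = mc_risk(sign_rule, lam=0.05)
j_silent = mc_risk(silent_rule, lam=0.05)
# comparing j_sign and j_silent trades estimation accuracy against
# communication cost, in the spirit of the parameterized Bayesian risk
```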
High dimensional inference: structured sparse models and non-linear measurement channels
Thesis (Ph.D.)--Boston University. High dimensional inference is motivated by many real-life problems such as medical diagnosis, security, and marketing. In statistical inference problems, n data samples are collected, where each sample contains p attributes. High dimensional inference deals with problems in which the number of parameters, p, is larger than the sample size, n.
To hope for any consistent result within the high dimensional framework, the data are assumed to lie on a low dimensional manifold. This implies that only k ≪ p parameters are required to characterize the p feature variables. One way to impose such a low dimensional structure is a regularization-based approach. In this approach, the statistical inference problem is mapped to an optimization problem in which a regularizer term penalizes the deviation of the model from a specific structure. The choice of appropriate penalizing functions is often challenging. We explore three major problems that arise in the context of this approach.
First, we probe the reconstruction problem under sparse Poisson models. We are motivated by applications in explosive identification and online marketing, where the observations are the counts of a recurring event. We study the amplitude effect, which distinguishes our problem from a conventional linear regression least squares problem. Motivated by applications in decentralized sensor networks and distributed multi-task learning, we study the effect of decentralization on high dimensional inference. Finally, we provide a general framework to study the impact of multiple structured models on the performance of regularization-based reconstruction methods. For each of the aforementioned scenarios, we propose an equivalent optimization problem and specify the conditions under which the optimization problem can be solved. Moreover, we mathematically analyze the performance of such recovery methods in terms of reconstruction error, prediction error, probability of successful recovery, and sample complexity.
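A regularization-based reconstruction for the sparse Poisson setting can be sketched with proximal gradient (ISTA) on an l1-penalized Poisson likelihood with a log link; the design scaling, step size, and penalty weight are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_poisson(X, y, lam, step=0.01, iters=2000):
    """ISTA for min_b mean(exp(Xb) - y*(Xb)) + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (np.exp(X @ beta) - y) / n   # Poisson NLL gradient
        beta = soft(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(2)
n, p, k = 200, 20, 3
X = 0.3 * rng.standard_normal((n, p))      # assumed design scaling
beta_true = np.zeros(p)
beta_true[:k] = 1.0                        # k-sparse ground truth
y = rng.poisson(np.exp(X @ beta_true))     # count observations
beta_hat = sparse_poisson(X, y, lam=0.05)
```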
Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs
The paper considers gossip distributed estimation of a (static) distributed
random field (i.e., a large-scale unknown parameter vector) observed by
sparsely interconnected sensors, each of which only observes a small fraction
of the field. We consider linear distributed estimators whose structure
combines the information \emph{flow} among sensors (the \emph{consensus} term
resulting from the local gossiping exchange among sensors when they are able to
communicate) and the information \emph{gathering} measured by the sensors (the
\emph{sensing} or \emph{innovations} term). This leads to mixed time scale
algorithms--one time scale associated with the consensus and the other with the
innovations. The paper establishes a distributed observability condition
(global observability plus mean connectedness) under which the distributed
estimates are consistent and asymptotically normal. We introduce the
distributed equivalent of the (centralized) Fisher information rate,
which is a bound on the mean square error reduction rate of any distributed
estimator; we show that under the appropriate modeling and structural network
communication conditions (gossip protocol) the distributed gossip estimator
attains this distributed Fisher information rate, asymptotically achieving the
performance of the optimal centralized estimator. Finally, we study the
behavior of the distributed gossip estimator when the measurements fade (noise
variance grows) with time; in particular, we consider the maximum rate at which
the noise variance can grow while keeping the distributed estimator
consistent, showing that, as long as the centralized estimator is consistent,
the distributed estimator remains consistent.
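The mixed time-scale consensus plus innovations update can be sketched on a toy ring of four sensors, each locally unobservable (it sees only one component of the field) while the network is globally observable; the gains, topology, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.array([1.0, -2.0])            # unknown parameter (the field)
# four sensors on a ring; sensors 0 and 2 see component 0, sensors 1 and 3
# see component 1, so no sensor is locally observable but the network is
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]),
     np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

x = [np.zeros(2) for _ in range(4)]      # local estimates
for t in range(2000):
    a_t = 2.0 / (t + 1)                  # innovation gain (decays faster)
    b_t = 0.3 / (t + 1) ** 0.5           # consensus gain (decays more slowly)
    y = [H[i] @ theta + 0.3 * rng.standard_normal(1) for i in range(4)]
    x = [x[i]
         - b_t * sum(x[i] - x[j] for j in neighbors[i])   # consensus term
         + a_t * H[i].T @ (y[i] - H[i] @ x[i])            # innovations term
         for i in range(4)]
err = max(np.linalg.norm(xi - theta) for xi in x)
```

With the innovation gain decaying faster than the consensus gain, each sensor is pulled toward agreement with its neighbors while new measurements keep being folded in, which is the mixed time-scale structure the abstract describes.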