Successive Wyner-Ziv Coding Scheme and its Application to the Quadratic Gaussian CEO Problem
We introduce a distributed source coding scheme called successive Wyner-Ziv
coding. We show that any point in the rate region of the quadratic Gaussian CEO
problem can be achieved via the successive Wyner-Ziv coding. The concept of
successive refinement in the single source coding is generalized to the
distributed source coding scenario, which we refer to as distributed successive
refinement. For the quadratic Gaussian CEO problem, we establish a necessary
and sufficient condition for distributed successive refinement, where the
successive Wyner-Ziv coding scheme plays an important role.
Comment: 28 pages, submitted to the IEEE Transactions on Information Theory
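For orientation, the quadratic Gaussian CEO problem referenced above is usually formulated as follows (a standard textbook statement of the model, not this paper's own notation):

```latex
X \sim \mathcal{N}(0,\sigma_X^2), \qquad
Y_i = X + N_i, \quad N_i \sim \mathcal{N}(0,\sigma_i^2),
\quad i = 1,\dots,L,
```

where agent $i$ encodes its noisy observation $Y_i$ at rate $R_i$ without seeing the other observations, and the CEO forms an estimate $\hat{X}$ from all $L$ codewords; the rate region is the set of tuples $(R_1,\dots,R_L)$ under which a target mean-squared distortion $D = \mathbb{E}[(X-\hat{X})^2]$ is achievable.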
Uplink CoMP under a Constrained Backhaul and Imperfect Channel Knowledge
Coordinated Multi-Point (CoMP) is known to be a key technology for next
generation mobile communications systems, as it makes it possible to overcome
the burden of inter-cell interference. Especially in the uplink, it is likely
that interference exploitation schemes will be used in the near future, as they
can be used with legacy terminals and require little or no change in
standardization. Major drawbacks, however, are the extent of additional
backhaul infrastructure needed, and the sensitivity to imperfect channel
knowledge. This paper jointly addresses both issues in a new framework
incorporating a multitude of proposed theoretical uplink CoMP concepts, which
are then put into perspective with practical CoMP algorithms. This
comprehensive analysis provides new insight into the potential usage of uplink
CoMP in next generation wireless communications systems.
Comment: Submitted to IEEE Transactions on Wireless Communications in February 201
On Distributed Linear Estimation With Observation Model Uncertainties
We consider distributed estimation of a Gaussian source in a heterogeneous
bandwidth constrained sensor network, where the source is corrupted by
independent multiplicative and additive observation noises, with incomplete
statistical knowledge of the multiplicative noise. For multi-bit quantizers, we
derive the closed-form mean-square-error (MSE) expression for the linear
minimum MSE (LMMSE) estimator at the fusion center (FC). For both error-free and
erroneous communication channels, we propose several rate allocation methods,
named longest-root-to-leaf-path, greedy, and integer relaxation, to (i) minimize
the MSE given a network bandwidth constraint, and (ii) minimize the required
network bandwidth given a target MSE. We also derive the Bayesian Cramér-Rao
lower bound (CRLB) and compare the MSE performance of our proposed methods
against the CRLB. Our results corroborate that, for low-power multiplicative
observation noises and adequate network bandwidth, the gaps between the MSE of
our proposed methods and the CRLB are negligible, while the performance of
other methods, such as individual and uniform rate allocation, is not satisfactory.
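A minimal numerical sketch of LMMSE fusion for a Gaussian source observed through heterogeneous sensors may help fix ideas. This simplified stand-in omits the abstract's quantization stage and the uncertain multiplicative noise, using only per-sensor gains and additive noise; all parameter values and variable names here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heterogeneous linear observation model: y_k = h_k * x + n_k
sigma_x2 = 4.0                            # source variance
K = 5                                     # number of sensors
h = rng.uniform(0.5, 1.5, size=K)         # per-sensor observation gains
sigma_n2 = rng.uniform(0.1, 1.0, size=K)  # per-sensor noise variances

# LMMSE fusion rule for a zero-mean source: x_hat = C_xy C_yy^{-1} y
C_xy = sigma_x2 * h
C_yy = sigma_x2 * np.outer(h, h) + np.diag(sigma_n2)
w = np.linalg.solve(C_yy, C_xy)           # fusion weights

# Monte-Carlo check of the closed-form MSE: sigma_x2 - C_xy C_yy^{-1} C_xy^T
T = 20000
x = rng.normal(0.0, np.sqrt(sigma_x2), T)
n = rng.normal(0.0, np.sqrt(sigma_n2), size=(T, K))
y = x[:, None] * h + n
x_hat = y @ w
mse_mc = np.mean((x - x_hat) ** 2)
mse_closed = sigma_x2 - C_xy @ w
print(mse_mc, mse_closed)
```

The empirical MSE of the fused estimate matches the closed-form LMMSE expression, and both sit well below the prior variance of the source.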
Reliable Inference from Unreliable Agents
Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and design strategies to ensure reliable inference in such systems. The inference process can be that of detection or estimation or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable due to a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure a reliable overall inference.
In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions.
Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack.
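The tolerant-coding idea can be illustrated with a toy simulation: each sensor's one-bit local decision is one symbol of a codeword, and the decoder (here plain majority voting, the simplest error-correcting rule) absorbs a minority of Byzantine flips. The attack model and all parameters below are illustrative, not the thesis's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse_majority(true_bit, n_sensors, n_byzantine, p_error):
    """Fuse one-bit local decisions by majority vote.

    Honest sensors report the true bit, flipped with probability
    p_error by sensing noise; Byzantine sensors flip whatever they
    would otherwise have sent."""
    reports = np.full(n_sensors, true_bit)
    reports[rng.random(n_sensors) < p_error] ^= 1   # sensing errors
    reports[:n_byzantine] ^= 1                      # Byzantine flips
    return int(reports.sum() > n_sensors / 2)

def fused_error_rate(n_byzantine, trials=5000, n=15, p=0.1):
    """Empirical probability that the fused decision is wrong."""
    errors = sum(fuse_majority(1, n, n_byzantine, p) != 1
                 for _ in range(trials))
    return errors / trials

# With a minority of Byzantines the majority rule absorbs the attack;
# once they are a majority, the fused decision collapses.
print(fused_error_rate(0), fused_error_rate(3), fused_error_rate(8))
```

With 0 or 3 Byzantines out of 15 sensors the fused error rate stays small; with 8 Byzantines (a majority) the fused decision is wrong more often than not, mirroring the blinding behavior studied in the Byzantine literature.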
The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks, where the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents and the sum rate tend to infinity.
An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. This result can be summarized as the fact that sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is even easier (discrete).
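In rough qualitative terms, with $R$ the sum rate, the two extreme regimes can be sketched as follows (the constants and exact exponents are problem-dependent; these scalings are the standard picture for the discrete and quadratic Gaussian CEO problems, not statements taken from the thesis):

```latex
D_{\text{discrete}}(R) - D_{\min} \;\lesssim\; e^{-cR},
\qquad
D_{\text{Gaussian}}(R) - D_{\min} \;\asymp\; \frac{c'}{R},
```

with the belief-sharing (uniform) CEO problem converging at a rate that falls strictly between these two extremes.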
Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling on the design of large human-machine systems are discussed. Furthermore, an error-correcting codes based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to the majority voting based approach are highlighted using simulated and real datasets.
In the final part of the thesis, a human-machine inference framework is developed where humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for the two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such a collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be further explored.
Source Coding in Networks with Covariance Distortion Constraints
We consider a source coding problem with a network scenario in mind, and
formulate it as a remote vector Gaussian Wyner-Ziv problem under covariance
matrix distortions. We define a notion of minimum for two positive-definite
matrices based on which we derive an explicit formula for the rate-distortion
function (RDF). We then study the special cases and applications of this
result. We show that two well-studied source coding problems, i.e., the remote
vector Gaussian Wyner-Ziv problems with mean-squared-error and mutual-information
constraints, are in fact special cases of our results. Finally, we
apply our results to a joint source coding and denoising problem. We consider a
network with a centralized topology and a given weighted sum-rate constraint,
where the received signals at the center are to be fused to maximize the output
SNR while enforcing no linear distortion. We show that one can design the
distortion matrices at the nodes in order to maximize the output SNR at the
fusion center. We thereby bridge denoising and source coding within this setup.
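As a point of reference, the classical scalar Gaussian Wyner-Ziv rate-distortion function, of which the covariance-constrained vector RDF above is a generalization, is

```latex
R^{\mathrm{WZ}}(D) \;=\; \max\!\left\{0,\; \frac{1}{2}\log_2\frac{\sigma^2_{X\mid Y}}{D}\right\},
```

where $\sigma^2_{X\mid Y}$ is the conditional variance of the source $X$ given the side information $Y$ available at the decoder.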
Successive structuring of source coding algorithms for data fusion, buffering, and distribution in networks
Supervised by Gregory W. Wornell. Also issued as Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 159-165).
We also explore the interactions between source coding and queue management in problems of buffering and distributing distortion-tolerant data. We formulate a general queuing model relevant to numerous communication scenarios, and develop a bound on the performance of any algorithm. We design an adaptive buffer-control algorithm for use in dynamic environments and under finite memory limitations; its performance closely approximates the bound. Our design uses multiresolution source codes that exploit the data's distortion tolerance to minimize end-to-end distortion. Compared to traditional approaches, the performance gains of the adaptive algorithm are significant, improving distortion, delay, and overall system robustness.
by Stark Christiaan Draper
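A toy sketch of the buffer-control idea described above: items arrive as multiresolution descriptions (a base layer plus refinement layers), and under a finite memory budget the controller evicts the finest refinement layers first, so stored descriptions degrade gracefully in distortion rather than being dropped whole. The class and all parameters are illustrative, not the thesis's algorithm.

```python
from collections import deque

class MultiresBuffer:
    """Finite-memory buffer for layered (multiresolution) descriptions.

    Each item is stored as a list of layer ids; layer 0 is the base
    layer. When capacity is exceeded, the finest remaining refinement
    layer of the coarsest-losable item is evicted, trading a little
    distortion on one item for room to admit new ones."""

    def __init__(self, capacity_layers):
        self.capacity = capacity_layers
        self.items = deque()                 # each entry: list of layer ids

    def used(self):
        return sum(len(layers) for layers in self.items)

    def push(self, n_layers):
        self.items.append(list(range(n_layers)))
        # Evict refinement layers (never base layers) until we fit.
        while self.used() > self.capacity:
            victim = max(self.items, key=len)    # most refined item
            if len(victim) <= 1:
                break                            # only base layers remain
            victim.pop()                         # drop its finest layer

buf = MultiresBuffer(capacity_layers=6)
for _ in range(4):
    buf.push(3)                # 4 items x 3 layers = 12 layers offered
print([len(layers) for layers in buf.items])
```

All four items survive within the 6-layer budget, each keeping at least its base layer; a fixed-rate buffer of the same size would instead have had to discard entire items.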