
    Consensus in the Presence of Multiple Opinion Leaders: Effect of Bounded Confidence

    The problem of analyzing the performance of networked agents exchanging evidence in a dynamic network has recently grown in importance. This problem has relevance in signal and data fusion network applications and in studying opinion and consensus dynamics in social networks. Due to its capability of handling a wider variety of uncertainties and ambiguities associated with evidence, we use the framework of Dempster-Shafer (DS) theory to capture the opinion of an agent. We then examine the consensus among agents in dynamic networks in which an agent can utilize either a cautious or receptive updating strategy. In particular, we examine the case of bounded confidence updating where an agent exchanges its opinion only with neighboring nodes possessing 'similar' evidence. In a fusion network, this captures the case in which nodes only update their state based on evidence consistent with the node's own evidence. In opinion dynamics, this captures the notions of Social Judgment Theory (SJT) in which agents update their opinions only with other agents possessing opinions closer to their own. Focusing on the two special DS theoretic cases where an agent state is modeled as a Dirichlet body of evidence and a probability mass function (p.m.f.), we utilize results from matrix theory, graph theory, and networks to prove the existence of consensus agent states in several time-varying network cases of interest. For example, we show the existence of a consensus in which a subset of network nodes achieves a consensus that is adopted by follower network nodes. Of particular interest is the case of multiple opinion leaders, where we show that the agents do not reach a consensus in general, but rather converge to 'opinion clusters'. Simulation results are provided to illustrate the main results.
    Comment: IEEE Transactions on Signal and Information Processing Over Networks, to appear
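The clustering behavior under bounded confidence can be illustrated with a minimal scalar sketch, a Hegselmann-Krause-style update with real-valued opinions rather than the DS-theoretic agent states used above; the function name and numeric values are illustrative assumptions:

```python
import numpy as np

def hk_step(x, eps):
    """One bounded-confidence update: each agent moves to the mean
    opinion of all agents within distance eps of its own opinion."""
    return np.array([x[np.abs(x - xi) <= eps].mean() for xi in x])

# Six agents in two camps; the confidence bound eps is smaller than
# the gap between camps, so no global consensus forms.
x = np.array([0.0, 0.05, 0.1, 0.9, 0.95, 1.0])
for _ in range(20):
    x = hk_step(x, eps=0.2)
# Two 'opinion clusters' remain instead of a single consensus.
```

With a bound larger than the gap (e.g. eps = 1.0), the same dynamics would instead drive all agents to one consensus value.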

    Estimation of Frame Independent and Enhancement Components for Speech Communication over Packet Networks

    In this paper, we describe a new approach to cope with packet loss in speech coders. The idea is to split the information present in each speech packet into two components: one to independently decode the given speech frame and one to enhance it by exploiting interframe dependencies. The scheme is based on sparse linear prediction and a redefinition of the analysis-by-synthesis process. We present Mean Opinion Scores for the proposed coder under different degrees of packet loss and show that it performs similarly to frame-dependent coders for low packet loss probability and similarly to frame-independent coders for high packet loss probability. We also present ideas on how to make the coder work synergistically with the channel loss estimate.

    Enhancing Sparsity in Linear Prediction of Speech by Iteratively Reweighted 1-norm Minimization

    Linear prediction of speech based on 1-norm minimization has already proved to be an interesting alternative to 2-norm minimization. In particular, choosing the 1-norm as a convex relaxation of the 0-norm, the corresponding linear prediction model offers a sparser residual better suited for coding applications. In this paper, we propose a new speech modeling technique based on reweighted 1-norm minimization. The purpose of the reweighted scheme is to overcome the mismatch between 0-norm minimization and 1-norm minimization while keeping the problem solvable with convex estimation tools. Experimental results prove the effectiveness of the reweighted 1-norm minimization, offering better coding properties compared to 1-norm minimization.
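A minimal sketch of the reweighted scheme, assuming each weighted 1-norm subproblem is solved as a linear program via `scipy.optimize.linprog`; the function names and the synthetic regression data are illustrative, not the paper's speech setup:

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_fit(X, y, w):
    """Minimize sum_i w_i |y_i - (X a)_i| over a via the standard LP
    reformulation with slack variables t_i >= |residual_i|."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), w])
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

def reweighted_l1_fit(X, y, n_iter=4, delta=1e-3):
    """Iteratively reweighted 1-norm: entries with small residuals get
    large weights, pushing the solution toward the sparser 0-norm-like
    answer while each subproblem stays convex."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        a = weighted_l1_fit(X, y, w)
        w = 1.0 / (np.abs(y - X @ a) + delta)
    return a

# Synthetic prediction problem whose residual is exactly sparse
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
a_true = np.array([1.5, -0.7])
y = X @ a_true
y[[3, 11, 20]] += 5.0          # a few large residual spikes
a_hat = reweighted_l1_fit(X, y)
```

Because the 1-norm objective is insensitive to the few large spikes, the fit recovers the underlying coefficients, which a 2-norm fit would not.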

    Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate saturations when the filter is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with 1-norm minimization. The first method constrains the roots of the predictor to lie within the unit circle by reducing the numerical range of the shift operator associated with the particular prediction problem considered. The second method uses the alternative Cauchy bound to impose a convex constraint on the predictor in the 1-norm error minimization. These methods are compared with two existing methods: the Burg method, based on 1-norm minimization of the forward and backward prediction error, and iteratively reweighted 2-norm minimization, known to converge to the 1-norm minimization with an appropriate selection of weights. The evaluation demonstrates the effectiveness of the new methods, which perform as well as unconstrained 1-norm based linear prediction for modeling and coding of speech.
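For context, the stability requirement is that all roots of A(z) lie inside the unit circle. A generic post-hoc fix, not the constrained estimators introduced above, is to reflect any unstable root back inside; the function name and values are illustrative:

```python
import numpy as np

def stabilize_allpole(a):
    """Reflect any root of A(z) = 1 - sum_k a_k z^-k that lies outside
    the unit circle to its conjugate-reciprocal position inside it.
    This preserves the magnitude response up to a gain factor."""
    poly = np.concatenate([[1.0], -np.asarray(a, dtype=float)])
    roots = np.roots(poly)
    fixed = np.where(np.abs(roots) > 1.0, 1.0 / np.conj(roots), roots)
    return -np.real(np.poly(fixed))[1:]   # back to predictor coefficients

a_unstable = [2.5]                        # single pole at z = 2.5
a_stable = stabilize_allpole(a_unstable)  # pole reflected to z = 0.4
```

The paper's methods avoid this after-the-fact repair by building the stability constraint directly into the 1-norm estimation problem.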

    Analysis of Wyner-Ziv quantizers for packet loss

    Distributed Source Coding (DSC) has gained wide popularity in applications such as video coding and distributed sensor networks. However, DSC has not been widely explored for low-delay, low-bit-rate applications such as quantization of speech parameters. This is due to the difficulty of designing quantizers with imperfect side information resulting from decoding errors, quantization noise, and packet losses. We address this fundamental problem by modeling the decoder as a three-state system based on packet losses/receptions and correct/incorrect decodings. We then derive expressions for the error variance of each of the states and solve them under stationary conditions. This enables one to model, analyze, and design scalar uniform coset quantizers for imperfect side information. Simulation results verify the improved performance of the designed WZ quantizers over predictive and non-predictive quantization schemes under packet loss.
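The three-state decoder can be sketched as a Markov chain whose stationary distribution gives the long-run probability of each state; the transition probabilities below are illustrative placeholders, not the paper's derived expressions:

```python
import numpy as np

# Hypothetical transition matrix between the three decoder states:
#   state 0: packet received and correctly decoded
#   state 1: packet received but incorrectly decoded
#   state 2: packet lost
p_loss = 0.1
P = np.array([
    [0.85, 0.05, p_loss],   # good side info: decoding errors are rare
    [0.60, 0.30, p_loss],   # corrupted side info raises the error rate
    [0.50, 0.40, p_loss],   # after a loss, side info is least reliable
])

# Stationary state probabilities: the left eigenvector of P for
# eigenvalue 1, normalized to sum to one.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
```

Weighting a per-state error variance by such stationary probabilities is one way to obtain an average distortion to optimize the quantizer against.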

    Quantization for classification accuracy in high-rate quantizers

    Quantization of signals is required for many transmission, storage, and compression applications. The original signal is quantized at the encoder side. At the decoder side, a replica that should resemble the original signal in some sense is recovered. Conventional quantizers aim to reduce the distortion of the signal in the sense of reproduction fidelity. Consider scenarios in which signals are generated from multiple classes. The encoder quantizes the data without any regard to the class of the signal. The quantized signal reaches the decoder, where not only must the signal be recovered but a decision must also be made about its class, based on the quantized version alone. In this paper, we study the design of a scalar quantizer optimized for the task of classification at the decoder. We define the distortion to be the symmetric Kullback-Leibler (KL) divergence between the conditional probabilities of class given the signal before and after quantization. A high-rate analysis of the quantizer is presented and the optimum point density for minimizing the symmetric KL divergence is derived. The performance of this method on synthetically generated data is examined and observed to be superior for the task of classifying signals at the decoder.
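A toy sketch of the distortion measure, assuming two unit-variance Gaussian classes with equal priors and a uniform scalar quantizer; all names, parameters, and values are illustrative, and the paper's optimized point density is not implemented here:

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two p.m.f.s."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum((p - q) * np.log(p / q)))

def posterior(x, mu0=-1.0, mu1=1.0):
    """P(class | x) for two unit-variance Gaussian classes with
    equal priors (a toy stand-in for the class-conditional model)."""
    l0 = np.exp(-0.5 * (x - mu0) ** 2)
    l1 = np.exp(-0.5 * (x - mu1) ** 2)
    return np.array([l0, l1]) / (l0 + l1)

def quantize(x, step):
    return step * np.round(x / step)   # uniform scalar quantizer

# Distortion = divergence between the class posterior at x and the
# class posterior at the quantized value of x.
x = 0.3
d_coarse = symmetric_kl(posterior(x), posterior(quantize(x, 0.5)))
d_fine = symmetric_kl(posterior(x), posterior(quantize(x, 0.1)))
```

The finer quantizer moves x less and therefore perturbs the class posterior less, yielding a smaller classification-oriented distortion.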

    Efficient sensor selection with application to time varying graphs

    This paper addresses the problem of efficiently selecting sensors such that the mean squared estimation error is minimized under jointly Gaussian assumptions. First, we propose an O(n^3) algorithm that yields the same set of sensors as a previously published near mean squared error (MSE) optimal method that runs in O(n^4). Then we show that this approach can be extended to efficient sensor selection in a time-varying graph. We consider a rank-one modification to the graph Laplacian, which captures the cases where a new edge is added or deleted, or an edge weight is changed, for a fixed set of vertices. We show that we can efficiently update the new set of sensors in O(n^2) time in the best case by reusing computations done for the original graph. Experiments demonstrate advantages in computational time and MSE accuracy of the proposed methods compared to recently developed graph sampling methods.
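A naive greedy baseline using the same rank-one conditioning idea can be sketched as follows; this runs in O(kn^3) for k sensors and is not the paper's O(n^3) algorithm, and all names and data are illustrative:

```python
import numpy as np

def greedy_mse_selection(Sigma, k):
    """Greedily pick k sensors to observe, each time minimizing the
    trace of the resulting conditional covariance (the MSE of the
    jointly Gaussian estimate). Conditioning on sensor j is the
    rank-one update  cov - cov[:, j] cov[j, :] / cov[j, j]."""
    n = Sigma.shape[0]
    cov = Sigma.astype(float).copy()
    chosen = []
    for _ in range(k):
        best_j, best_trace = -1, np.inf
        for j in range(n):
            if j in chosen:
                continue
            cand = cov - np.outer(cov[:, j], cov[j, :]) / cov[j, j]
            if np.trace(cand) < best_trace:
                best_j, best_trace = j, np.trace(cand)
        cov -= np.outer(cov[:, best_j], cov[best_j, :]) / cov[best_j, best_j]
        chosen.append(best_j)
    return chosen, cov

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
Sigma = A @ A.T + np.eye(6)        # a valid covariance matrix
sensors, err_cov = greedy_mse_selection(Sigma, k=2)
```

After conditioning on a chosen sensor, its residual variance is zero, and the trace of the remaining conditional covariance is the MSE that the next selection round tries to shrink further.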

    Inferring latent states in a network influenced by neighbor activities: An undirected generative approach

    The problem of inferring the hidden state of individual nodes in social/sensor networks in which node activities affect their neighbors is growing in importance. We present an undirected generative model, a type of probabilistic model that has so far not been used for modeling latent variables influenced by neighbors in a network. We also propose an efficient inference method based on variational inference principles which, in contrast to the sampling methods used in most existing models, is scalable to larger networks. While training is intractable in general, by using stochastic methods to approximate the intractable derivative, we show that our model can be trained using the maximum likelihood method by formulating the model as an exponential family distribution. The results demonstrate that the proposed undirected model can accurately infer latent states compared to baseline methods.