    Recovering the state sequence of hidden Markov models using mean-field approximations

    Inferring the sequence of states from observations is one of the most fundamental problems in Hidden Markov Models. In statistical physics language, this problem is equivalent to computing the marginals of a one-dimensional model with a random external field. While this task can be accomplished through transfer matrix methods, it quickly becomes intractable when the underlying state space is large. This paper develops several low-complexity approximate algorithms to address this inference problem when the state space becomes large. The new algorithms are based on various mean-field approximations of the transfer matrix. Their performance is studied in detail on a simple realistic model for DNA pyrosequencing.
    Comment: 43 pages, 41 figures
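As a point of reference for the approximations described above, the exact computation they target is the standard transfer-matrix (forward-backward) recursion for posterior state marginals. The sketch below is a minimal NumPy implementation of that exact recursion (not of the paper's mean-field algorithms); variable names are illustrative, and its O(N S^2) cost is what becomes prohibitive when the state space S is large.

```python
import numpy as np

def posterior_marginals(T, E, pi, obs):
    """Exact transfer-matrix (forward-backward) marginals for an HMM.

    T   : (S, S) transition matrix, T[i, j] = P(next state j | state i)
    E   : (S, O) emission matrix,   E[i, o] = P(observation o | state i)
    pi  : (S,)   initial state distribution
    obs : length-N sequence of observation indices
    Cost is O(N * S^2), which is what mean-field approximations of the
    transfer matrix aim to reduce for large S.
    """
    N, S = len(obs), len(pi)
    alpha = np.zeros((N, S))
    beta = np.ones((N, S))

    # Forward pass (normalised at every step for numerical stability).
    alpha[0] = pi * E[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, N):
        alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]
        alpha[t] /= alpha[t].sum()

    # Backward pass.
    for t in range(N - 2, -1, -1):
        beta[t] = T @ (E[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()

    # Posterior marginal of the hidden state at each time step.
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```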

    An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation

    In this work we design a receiver that iteratively passes soft information between the channel estimation and data decoding stages. The receiver incorporates sparsity-based parametric channel estimation. State-of-the-art sparsity-based iterative receivers simplify the channel estimation problem by restricting the multipath delays to a grid. Our receiver does not impose such a restriction. As a result, it does not suffer from the leakage effect, which destroys sparsity. Communication at near-capacity rates in the high-SNR regime requires a large modulation order. Due to the close proximity of modulation symbols in such systems, the grid-based approximation is of insufficient accuracy. We show numerically that a state-of-the-art iterative receiver with grid-based sparse channel estimation exhibits a bit-error-rate floor in the high-SNR regime. In contrast, our receiver performs very close to the perfect channel state information bound for all SNR values. We also demonstrate, both theoretically and numerically, that parametric channel estimation works well in dense channels, i.e., when the number of multipath components is large and individual components cannot be resolved.
    Comment: Major revision, accepted for IEEE Transactions on Signal Processing
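The leakage effect mentioned above can be illustrated in a few lines of NumPy: a single multipath component whose delay falls between sampling instants spreads its energy over many discrete channel taps, so the grid-restricted representation is no longer sparse. The snippet below is only an illustration of that effect under arbitrary assumed parameters (subcarrier count, delay value); it is not part of the proposed receiver.

```python
import numpy as np

N = 64        # assumed number of OFDM subcarriers
tau = 10.3    # assumed path delay in sampling intervals (deliberately off-grid)
k = np.arange(N)

# Frequency response of a single unit-gain path across the subcarriers.
H = np.exp(-2j * np.pi * k * tau / N)

# Equivalent discrete-time channel taps via the inverse DFT.
h = np.fft.ifft(H)
energy = np.abs(h) ** 2 / np.sum(np.abs(h) ** 2)

taps_99 = np.sum(np.cumsum(np.sort(energy)[::-1]) < 0.99) + 1
print("taps holding 99% of the energy:", taps_99)
# With tau = 10.0 (on-grid) a single tap carries all the energy; with
# tau = 10.3 the energy leaks over many taps, destroying sparsity in the
# grid-based representation.
```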

    A Message Passing Approach for Decision Fusion in Adversarial Multi-Sensor Networks

    We consider a simple, yet widely studied, set-up in which a Fusion Center (FC) is asked to make a binary decision about a sequence of system states by relying on the possibly corrupted decisions provided by byzantine nodes, i.e. nodes which deliberately alter the result of the local decision to induce an error at the fusion center. When independent states are considered, the optimum fusion rule over a batch of observations has already been derived; however, its complexity prevents its use in conjunction with large observation windows. In this paper, we propose a near-optimal algorithm based on message passing that greatly reduces the computational burden of the optimum fusion rule. In addition, the proposed algorithm retains very good performance in the case of dependent system states. By first focusing on the case of small observation windows, we use numerical simulations to show that the proposed scheme introduces a negligible increase of the decision error probability compared to the optimum fusion rule. We then analyse the performance of the new scheme when the FC makes its decision by relying on long observation windows, considering both independent and Markovian system states, and show that the obtained performance is superior to that of prior suboptimal schemes. As an additional result, we confirm the previous finding that, in some cases, it is preferable for the byzantine nodes to minimise the mutual information between the sequence of system states and the reports submitted to the FC, rather than to always flip the local decision.
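To see why batch-optimal fusion scales poorly, the toy sketch below computes Bayes-optimal per-slot posteriors by brute force under simplified, assumed parameters (i.i.d. equiprobable states, a fixed byzantine probability, honest nodes with a known local error rate, byzantines that flip their decisions). It enumerates all 2^m state sequences, which is exactly the exponential cost that a message-passing scheme is designed to avoid; it is not the paper's algorithm.

```python
import numpy as np
from itertools import product

ALPHA = 0.3   # assumed prior probability that a node is byzantine
EPS   = 0.1   # assumed local decision error rate of an honest node

def brute_force_fusion(reports):
    """Brute-force Bayes-optimal fusion over a batch of reports.

    reports : (n_nodes, m_slots) array of binary reports received by the FC.
    Returns P(s_t = 1 | all reports) for every time slot t, assuming
    i.i.d. equiprobable states. Cost grows as 2^m in the window length m.
    """
    n, m = reports.shape
    post = np.zeros(m)
    evidence = 0.0
    for states in product([0, 1], repeat=m):      # all 2^m state sequences
        states = np.array(states)
        like = 1.0
        for i in range(n):                        # nodes conditionally independent
            match = reports[i] == states
            p_honest = np.prod(np.where(match, 1 - EPS, EPS))
            p_byz    = np.prod(np.where(match, EPS, 1 - EPS))
            like *= (1 - ALPHA) * p_honest + ALPHA * p_byz
        weight = like * 0.5 ** m                  # uniform prior on the sequence
        evidence += weight
        post += weight * states
    return post / evidence

reports = np.array([[1, 1, 0, 1],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]])                # 3 nodes, 4 slots (made-up data)
print(np.round(brute_force_fusion(reports), 3))
```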

    Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity

    Given an $n$-length input signal $\mathbf{x}$, it is well known that its Discrete Fourier Transform (DFT), $\mathbf{X}$, can be computed in $O(n \log n)$ complexity using a Fast Fourier Transform (FFT). If the spectrum $\mathbf{X}$ is exactly $k$-sparse (where $k \ll n$), can we do better? We show that asymptotically in $k$ and $n$, when $k$ is sub-linear in $n$ (precisely, $k \propto n^{\delta}$ where $0 < \delta < 1$), and the support of the non-zero DFT coefficients is uniformly random, we can exploit this sparsity in two fundamental ways: (i) sample complexity: we need only $M = rk$ deterministically chosen samples of the input signal $\mathbf{x}$ (where $r < 4$ when $0 < \delta < 0.99$); and (ii) computational complexity: we can reliably compute the DFT $\mathbf{X}$ using $O(k \log k)$ operations, where the constants in the big Oh are small and are related to the constants involved in computing a small number of DFTs of length approximately equal to the sparsity parameter $k$. Our algorithm succeeds with high probability, with the probability of failure vanishing to zero asymptotically in the number of samples acquired, $M$.
    Comment: 36 pages, 15 figures. To be presented at ISIT 2013, Istanbul, Turkey
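A basic ingredient behind sparse-DFT algorithms of this kind is spectrum aliasing: uniformly subsampling the time-domain signal folds the DFT coefficients modulo the subsampled length, so a sparse spectrum lands in the folded bins with few collisions and can be recovered bin by bin. The snippet below only verifies this folding identity with NumPy under arbitrary assumed sizes and sparsity; it is not the paper's full decoder.

```python
import numpy as np

n, d = 20, 5                       # signal length and subsampled length (d divides n)
rng = np.random.default_rng(0)

# Build a sparse spectrum X and the corresponding time-domain signal x.
X = np.zeros(n, dtype=complex)
support = rng.choice(n, size=3, replace=False)
X[support] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x = np.fft.ifft(X)

# DFT of the subsampled signal equals the spectrum folded modulo d (up to 1/(n/d)).
X_sub = np.fft.fft(x[:: n // d])
X_folded = X.reshape(n // d, d).sum(axis=0) / (n // d)
print(np.allclose(X_sub, X_folded))   # True: subsampling in time aliases in frequency
```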