
    Bayesian Hypothesis Testing for Block Sparse Signal Recovery

    This letter presents a novel Block Bayesian Hypothesis Testing Algorithm (Block-BHTA) for reconstructing block sparse signals with unknown block structures. The Block-BHTA comprises the detection and recovery of the supports, and the estimation of the amplitudes, of the block sparse signal. The support detection and recovery is performed using Bayesian hypothesis testing. Then, based on the detected and reconstructed supports, the nonzero amplitudes are estimated by linear MMSE. The effectiveness of Block-BHTA is demonstrated by numerical experiments.
    Comment: 5 pages, 2 figures. arXiv admin note: text overlap with arXiv:1412.231
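The linear-MMSE amplitude step described in this abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's algorithm: the support is assumed already detected (here it is simply given), and the dimensions, noise level, and amplitude prior are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 50                        # measurements, signal length (illustrative)
sigma_n, sigma_x = 0.01, 1.0         # assumed noise and amplitude standard deviations

A = rng.standard_normal((n, m)) / np.sqrt(n)
support = np.array([3, 4, 5, 20, 21])     # hypothetical detected block support
x = np.zeros(m)
x[support] = sigma_x * rng.standard_normal(support.size)
y = A @ x + sigma_n * rng.standard_normal(n)

# Linear MMSE estimate of the amplitudes restricted to the detected support:
#   x_S = (A_S^T A_S + (sigma_n^2 / sigma_x^2) I)^{-1} A_S^T y
A_S = A[:, support]
reg = (sigma_n / sigma_x) ** 2
x_hat = np.zeros(m)
x_hat[support] = np.linalg.solve(A_S.T @ A_S + reg * np.eye(support.size), A_S.T @ y)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With the support known, the problem reduces to a small regularized least-squares solve, which is why the amplitude step is cheap compared with support detection.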

    Bayesian compressive sensing framework for spectrum reconstruction in Rayleigh fading channels

    Compressive sensing (CS) is a novel digital signal processing technique that has attracted great interest in many applications, including communication theory and wireless communications. In wireless communications, CS is particularly suitable for spectrum sensing in cognitive radios, where the complete spectrum under observation, with many spectral holes, can be modeled as a sparse wide-band signal in the frequency domain. In the initial works exploiting the benefits of Bayesian CS in spectrum sensing, the fading characteristic of wireless channels has not yet been considered to a great extent, although it is an inherent feature of all wireless communications and must be accounted for in the design of any practically viable wireless system. In this paper, we extend the Bayesian CS framework to the recovery of a sparse signal whose nonzero coefficients follow a Rayleigh distribution. It is then demonstrated via simulations that the mean square error improves significantly when an appropriate prior distribution is used for the faded signal coefficients and thus, in turn, the spectrum reconstruction improves. Different parameters of the system model, e.g., sparsity level and number of measurements, are then varied to show the consistency of the results across different cases.
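The signal model in this abstract, a sparse spectrum whose nonzero coefficients have Rayleigh-distributed magnitudes, can be simulated directly. The sketch below recovers it with a generic ISTA solver rather than the paper's Bayesian framework, and all dimensions and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 128, 60, 6                 # signal length, measurements, sparsity (illustrative)

# Sparse "spectrum": nonzero coefficient magnitudes drawn from a
# Rayleigh distribution, the fading model considered in the paper.
x = np.zeros(m)
idx = rng.choice(m, size=k, replace=False)
x[idx] = rng.rayleigh(scale=1.0, size=k) * rng.choice([-1.0, 1.0], size=k)

A = rng.standard_normal((n, m)) / np.sqrt(n)
y = A @ x + 0.01 * rng.standard_normal(n)

# Generic ISTA recovery (a stand-in for the Bayesian CS solver).
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x_hat = np.zeros(m)
for _ in range(500):
    z = x_hat + step * A.T @ (y - A @ x_hat)
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

mse = np.mean((x_hat - x) ** 2)
```

The paper's point is that replacing the generic sparsity prior with the correct Rayleigh prior on the nonzero coefficients lowers this mean square error further; the sketch only reproduces the measurement and fading model.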

    Maximum-a-posteriori estimation with Bayesian confidence regions

    Solutions to inverse problems that are ill-conditioned or ill-posed may have significant intrinsic uncertainty. Unfortunately, analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems. As a result, while most modern mathematical imaging methods produce impressive point estimation results, they are generally unable to quantify the uncertainty in the solutions delivered. This paper presents a new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional. The approximations are derived by using recent concentration of measure results related to information theory for log-concave random vectors. A remarkable property of the approximations is that they can be computed very efficiently, even in large-scale problems, by using standard convex optimisation techniques. In particular, they are available as a by-product in problems solved by maximum-a-posteriori estimation. The approximations also have favourable theoretical properties, namely they outer-bound the true high-posterior-density credibility regions, and they are stable with respect to model dimension. The proposed methodology is illustrated on two high-dimensional imaging inverse problems related to tomographic reconstruction and sparse deconvolution, where the approximations are used to perform Bayesian hypothesis tests and explore the uncertainty about the solutions, and where proximal Markov chain Monte Carlo algorithms are used as a benchmark to compute exact credible regions and measure the approximation error.
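The object being approximated here, a high-posterior-density credibility region, is a sublevel set of the negative log-posterior for log-concave models. The toy below illustrates just that underlying notion on a Gaussian posterior, where the exact region can be found by Monte Carlo; it does not reproduce the paper's fast conservative approximation from the MAP value.

```python
import numpy as np

rng = np.random.default_rng(2)
d, alpha = 10, 0.05                  # dimension and credibility level (illustrative)

# Toy log-concave posterior: standard Gaussian in d dimensions, so
# -log p(x) = ||x||^2 / 2 + const, and the HPD credibility region is the
# sublevel set {x : ||x||^2 <= q}, with q the (1 - alpha) quantile of a
# chi-squared(d) variable (estimated here by Monte Carlo).
samples = rng.standard_normal((200_000, d))
sq_norms = np.sum(samples ** 2, axis=1)
q = np.quantile(sq_norms, 1 - alpha)

# Empirical coverage of the region on fresh posterior samples.
fresh = rng.standard_normal((100_000, d))
coverage = float(np.mean(np.sum(fresh ** 2, axis=1) <= q))
```

In high-dimensional imaging problems this Monte Carlo route is exactly what becomes impractical, which motivates the paper's closed-form outer bound computed from the MAP estimate alone.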

    A distributed compressive sensing technique for data gathering in Wireless Sensor Networks

    Compressive sensing is a new technique for energy-efficient data gathering in wireless sensor networks, characterized by simple encoding and complex decoding. Its strength is the ability to reconstruct sparse or compressible signals from a small number of measurements without requiring any a priori knowledge of the signal structure. Since wireless sensor nodes are often deployed densely, the correlation among them can be exploited for further compression. Utilizing this spatial correlation, we propose a joint sparsity-based compressive sensing technique in this paper. Our approach employs Bayesian inference to build a probabilistic model of the signals and then applies the belief propagation algorithm as a decoding method to recover the common sparse signal. Simulation results show a significant gain in signal reconstruction accuracy and energy consumption for our approach compared with existing approaches.
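The joint-sparsity idea, multiple sensors observing signals with a common support, can be demonstrated with a simple multiple-measurement-vector recovery. The sketch below uses simultaneous orthogonal matching pursuit as a stand-in for the paper's belief-propagation decoder; the shared-support model and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k, J = 100, 40, 5, 8           # length, measurements, sparsity, sensors (illustrative)

support = rng.choice(m, size=k, replace=False)   # support shared by all sensors
A = rng.standard_normal((n, m)) / np.sqrt(n)
X = np.zeros((m, J))
X[support] = rng.standard_normal((k, J))         # per-sensor amplitudes
Y = A @ X + 0.01 * rng.standard_normal((n, J))

# Simultaneous OMP: at each step pick the column of A with the largest
# correlation energy summed across all sensors, exploiting joint sparsity.
residual = Y.copy()
est_support = []
for _ in range(k):
    scores = np.sum((A.T @ residual) ** 2, axis=1)
    if est_support:
        scores[est_support] = 0.0                # do not pick an atom twice
    est_support.append(int(np.argmax(scores)))
    A_S = A[:, est_support]
    coef = np.linalg.lstsq(A_S, Y, rcond=None)[0]
    residual = Y - A_S @ coef

X_hat = np.zeros((m, J))
X_hat[est_support] = coef
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Pooling correlations across sensors is what makes the joint decoder more reliable than recovering each sensor's signal independently, which is the same intuition the paper's probabilistic decoder exploits.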

    Computational Methods for Sparse Solution of Linear Inverse Problems

    The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.

    Discrete and Continuous-time Soft-Thresholding with Dynamic Inputs

    There exist many well-established techniques to recover sparse signals from compressed measurements, with known performance guarantees in the static case. However, only a few methods have been proposed to tackle the recovery of time-varying signals, and even fewer benefit from a theoretical analysis. In this paper, we study the capacity of the Iterative Soft-Thresholding Algorithm (ISTA) and its continuous-time analogue, the Locally Competitive Algorithm (LCA), to perform this tracking in real time. ISTA is a well-known digital solver for static sparse recovery, whose iteration is a first-order discretization of the LCA differential equation. Our analysis shows that the outputs of both algorithms can track a time-varying signal while compressed measurements are streaming, even when no convergence criterion is imposed at each time step. The L2-distance between the target signal and the outputs of both the discrete- and continuous-time solvers is shown to decay to a bound that is essentially optimal. Our analysis is supported by simulations on both synthetic and real data.
    Comment: 18 pages, 7 figures, journal