13,725 research outputs found

    Simultaneous Orthogonal Matching Pursuit With Noise Stabilization: Theoretical Analysis

    Full text link
    This paper studies the joint support recovery of similar sparse vectors on the basis of a limited number of noisy linear measurements, i.e., in a multiple measurement vector (MMV) model. The additive noise signals on each measurement vector are assumed to be Gaussian and to exhibit different variances. The simultaneous orthogonal matching pursuit (SOMP) algorithm is generalized to weight the impact of each measurement vector on the choice of the atoms to be picked according to its noise level. The new algorithm is referred to as SOMP-NS, where NS stands for noise stabilization. To begin with, a theoretical framework to analyze the performance of the proposed algorithm is developed. This framework is then used to build conservative lower bounds on the probability of partial or full joint support recovery. Numerical simulations show that the proposed algorithm outperforms SOMP and that the theoretical lower bound provides great insight into how SOMP-NS behaves when the weighting strategy is modified.
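
    A minimal sketch of a noise-weighted SOMP-style selection rule in the spirit described above (the inverse-variance weights, the fixed number of selected atoms, and all variable names are illustrative assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def somp_ns_sketch(A, Y, noise_vars, n_atoms):
    """Toy noise-weighted SOMP: each column of Y is one measurement vector,
    noise_vars holds the (assumed known) noise variance of that column."""
    w = 1.0 / np.asarray(noise_vars)              # illustrative weights: inverse variances
    support, R = [], Y.copy()                     # residual matrix, one column per vector
    for _ in range(n_atoms):
        scores = np.abs(A.T @ R) @ w              # weighted atom scores across all vectors
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        X_s = np.linalg.lstsq(As, Y, rcond=None)[0]
        R = Y - As @ X_s                          # re-project every vector on the current support
    return sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)); A /= np.linalg.norm(A, axis=0)
X = np.zeros((100, 5)); X[[3, 17, 55], :] = rng.standard_normal((3, 5))
noise_vars = np.array([0.01, 0.01, 0.05, 0.1, 0.5])
Y = A @ X + np.sqrt(noise_vars) * rng.standard_normal((40, 5))
print(somp_ns_sketch(A, Y, noise_vars, n_atoms=3))   # ideally recovers [3, 17, 55]
```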

    Compressed Anomaly Detection with Multiple Mixed Observations

    Full text link
    We consider a collection of independent random variables that are identically distributed, except for a small subset which follows a different, anomalous distribution. We study the problem of detecting which random variables in the collection are governed by the anomalous distribution. Recent work proposes to solve this problem by conducting hypothesis tests based on mixed observations (e.g. linear combinations) of the random variables. Recognizing the connection between taking mixed observations and compressed sensing, we view the problem as recovering the "support" (index set) of the anomalous random variables from multiple measurement vectors (MMVs). Many algorithms have been developed for recovering jointly sparse signals and their support from MMVs. We establish the theoretical and empirical effectiveness of these algorithms at detecting anomalies. We also extend the LASSO algorithm to an MMV version for our purpose. Further, we perform experiments on synthetic data, consisting of samples from the random variables, to explore the trade-off between the number of mixed observations per sample and the number of samples required to detect anomalies. Comment: 27 pages, 9 figures. Incorporates reviewer feedback, additional experiments, and additional figures
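
    A toy sketch of the mixed-observation model and a simple greedy recovery of the anomalous index set (the shifted-mean anomalous distribution, the plain greedy pass standing in for the MMV algorithms studied in the paper, and the dimensions are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, T, k = 50, 15, 200, 3                      # variables, mixed obs per sample, samples, anomalies
anomalous = np.sort(rng.choice(N, size=k, replace=False))

X = rng.standard_normal((N, T))                  # nominal variables ~ N(0, 1)
X[anomalous] += 3.0                              # anomalous variables ~ N(3, 1), an illustrative choice

A = rng.standard_normal((m, N)) / np.sqrt(m)     # mixing matrix: m linear combinations per sample
Y = A @ X                                        # multiple measurement vectors, one column per sample

# The column-averaged observation is approximately A times a k-sparse mean vector,
# so a plain OMP-style loop can flag the anomalous indices.
avg = Y.mean(axis=1)
picked, r = [], avg.copy()
for _ in range(k):
    picked.append(int(np.argmax(np.abs(A.T @ r))))
    As = A[:, picked]
    r = avg - As @ np.linalg.lstsq(As, avg, rcond=None)[0]
print(sorted(picked), anomalous.tolist())
```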

    Learning linear structural equation models in polynomial time and sample complexity

    Full text link
    The problem of learning structural equation models (SEMs) from data is a fundamental problem in causal inference. We develop a new algorithm, which is computationally and statistically efficient and works in the high-dimensional regime, for learning linear SEMs from purely observational data with arbitrary noise distribution. We consider three aspects of the problem: identifiability, computational efficiency, and statistical efficiency. We show that when data is generated from a linear SEM over $p$ nodes and maximum degree $d$, our algorithm recovers the directed acyclic graph (DAG) structure of the SEM under an identifiability condition that is more general than those considered in the literature, and without faithfulness assumptions. In the population setting, our algorithm recovers the DAG structure in $\mathcal{O}(p(d^2 + \log p))$ operations. In the finite sample setting, if the estimated precision matrix is sparse, our algorithm has a smoothed complexity of $\widetilde{\mathcal{O}}(p^3 + pd^7)$, while if the estimated precision matrix is dense, our algorithm has a smoothed complexity of $\widetilde{\mathcal{O}}(p^5)$. For sub-Gaussian noise, we show that our algorithm has a sample complexity of $\mathcal{O}(\frac{d^8}{\varepsilon^2} \log(\frac{p}{\sqrt{\delta}}))$ to achieve $\varepsilon$ element-wise additive error with respect to the true autoregression matrix with probability at most $1 - \delta$, while for noise with bounded $(4m)$-th moment, with $m$ being a positive integer, our algorithm has a sample complexity of $\mathcal{O}(\frac{d^8}{\varepsilon^2} (\frac{p^2}{\delta})^{1/m})$.
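
    A small sketch of the data-generating model the abstract refers to, i.e., a linear SEM $x = Bx + e$ over a DAG with arbitrary (here Laplace) noise; the matrix values are placeholders and the recovery algorithm itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 6, 5000

# Hypothetical autoregression matrix B of a DAG: only the strict lower triangle
# is filled, so 0, 1, ..., p-1 is a valid topological order.
B = np.tril(rng.uniform(0.5, 1.0, (p, p)) * (rng.random((p, p)) < 0.4), k=-1)

# Linear SEM:  x = B x + e   =>   x = (I - B)^{-1} e,  with non-Gaussian noise.
E = rng.laplace(size=(n, p))
X = E @ np.linalg.inv(np.eye(p) - B).T

# For this model the precision matrix equals (I - B)^T D^{-1} (I - B) for a
# diagonal D, so its support reflects the moralized DAG; estimating it is the
# natural first step toward structure recovery.
precision_hat = np.linalg.inv(np.cov(X, rowvar=False))
print(np.round(precision_hat, 2))
```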

    Dynamic Filtering of Time-Varying Sparse Signals via l1 Minimization

    Full text link
    Despite the importance of sparse signal models and the increasing prevalence of high-dimensional streaming data, there are relatively few algorithms for dynamic filtering of time-varying sparse signals. Of the existing algorithms, fewer still provide strong performance guarantees. This paper examines two algorithms for dynamic filtering of sparse signals that are based on efficient $\ell_1$ optimization methods. We first present an analysis for one simple algorithm (BPDN-DF) that works well when the system dynamics are known exactly. We then introduce a novel second algorithm (RWL1-DF) that is more computationally complex than BPDN-DF but performs better in practice, especially in the case where the system dynamics model is inaccurate. Robustness to model inaccuracy is achieved by using a hierarchical probabilistic data model and propagating higher-order statistics from the previous estimate (akin to Kalman filtering) in the sparse inference process. We demonstrate the properties of these algorithms on both simulated data and natural video sequences. Taken together, the algorithms presented in this paper represent the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals as well as state-of-the-art performance in this emerging application. Comment: 26 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1208.032
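
    A hedged sketch of one dynamic-filtering update in the spirit of BPDN-DF: ISTA applied to a data-fidelity term plus an $\ell_1$ penalty plus a quadratic penalty pulling the estimate toward the dynamics-propagated previous state (the objective weights, step rule, and iteration count are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bpdn_df_step(A, y, x_pred, lam=0.05, kappa=0.5, n_iter=300):
    """ISTA on 0.5*||y - A x||^2 + 0.5*kappa*||x - x_pred||^2 + lam*||x||_1,
    where x_pred is the previous estimate pushed through the dynamics model."""
    L = np.linalg.norm(A, 2) ** 2 + kappa         # Lipschitz constant of the smooth part
    x = x_pred.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + kappa * (x - x_pred)
        x = soft(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x_true = np.zeros(80); x_true[[5, 20, 60]] = [1.0, -0.8, 0.6]
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = bpdn_df_step(A, y, x_pred=np.zeros(80))   # with a zero prediction this is plain BPDN
print(np.flatnonzero(np.abs(x_hat) > 0.1))
```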

    Theoretical Bounds on MAP Estimation in Distributed Sensing Networks

    Full text link
    The typical approach for recovery of spatially correlated signals is regularized least squares with a coupled regularization term. In the Bayesian framework, this algorithm is seen as a maximum a posteriori estimator whose postulated prior is proportional to the regularization term. In this paper, we study distributed sensing networks in which a set of spatially correlated signals are measured individually at separate terminals, but recovered jointly via a generic maximum a posteriori estimator. Using the replica method, it is shown that the setting exhibits the decoupling property. For the case with jointly sparse signals, we invoke Bayesian inference and propose the "multi-dimensional soft thresholding" algorithm, which is posed as a linear program. Our investigations show that the proposed algorithm outperforms the conventional $\ell_{2,1}$-norm regularized least squares scheme while enjoying a feasible computational complexity. Comment: 5 pages, 3 figures; To be presented at the 2018 IEEE International Symposium on Information Theory (ISIT)
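
    For reference, a hedged sketch of the conventional $\ell_{2,1}$-regularized least-squares baseline mentioned above, solved by proximal gradient with a row-wise (group) soft threshold; this is the scheme the proposed algorithm is compared against, not the multi-dimensional soft-thresholding algorithm itself:

```python
import numpy as np

def l21_least_squares(A_list, y_list, lam=0.1, n_iter=400):
    """Each terminal k observes y_k = A_k x_k + noise; the columns x_k are
    jointly sparse, i.e., they share a common support (rows of X)."""
    K, n = len(A_list), A_list[0].shape[1]
    X = np.zeros((n, K))
    L = max(np.linalg.norm(A, 2) ** 2 for A in A_list)        # step-size bound
    for _ in range(n_iter):
        G = np.column_stack([A.T @ (A @ X[:, k] - y)
                             for k, (A, y) in enumerate(zip(A_list, y_list))])
        Z = X - G / L
        row_norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
        X = Z * np.maximum(1.0 - (lam / L) / row_norms, 0.0)   # row-wise group soft threshold
    return X

rng = np.random.default_rng(4)
n, m, K = 60, 25, 3
A_list = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(K)]
X_true = np.zeros((n, K)); X_true[[4, 11, 30], :] = rng.standard_normal((3, K))
y_list = [A @ X_true[:, k] + 0.01 * rng.standard_normal(m) for k, A in enumerate(A_list)]
X_hat = l21_least_squares(A_list, y_list)
print(np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 0.1))
```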

    Sparse source travel-time tomography of a laboratory target: accuracy and robustness of anomaly detection

    Full text link
    This study concerned cone-beam travel-time tomography. The focus was on a sparse distribution of signal sources that can be necessary in a challenging in situ environment such as asteroid tomography. The goal was to approximate the minimum number of source positions needed for robust detection of refractive anomalies, e.g., voids within an asteroid or casting defects in concrete. Experimental ultrasonic data were recorded utilizing as a target a 150 mm plastic cast cube containing three stones with diameters between 22 and 41 mm. A signal frequency of 55 kHz (35 mm wavelength) was used. Source counts from one to six were tested for different placements. Based on our statistical inversion approach and analysis of the results, three or four sources were found to lead to reliable inversion. The source configurations investigated were also ranked according to their performance. Our results can be used, for example, in the planning of planetary missions as well as in material testing. Comment: 19 pages, 9 figures

    Channel Estimation and Hybrid Precoding for Distributed Phased Arrays Based MIMO Wireless Communications

    Full text link
    Distributed phased arrays based multiple-input multiple-output (DPA-MIMO) is a newly introduced architecture that enables both spatial multiplexing and beamforming while facilitating highly reconfigurable hardware implementation in millimeter-wave (mmWave) frequency bands. With a DPA-MIMO system, we focus on channel state information (CSI) acquisition and hybrid precoding. Benefiting from a coordinated, open-loop pilot beam pattern design, all the sub-arrays can perform channel sounding with less training overhead compared with the traditional orthogonal operation of each sub-array. Furthermore, two sparse channel recovery algorithms, known as joint orthogonal matching pursuit (JOMP) and joint sparse Bayesian learning with $\ell_2$ reweighting (JSBL-$\ell_2$), are proposed to exploit the hidden structured sparsity in the beam-domain channel vector. Finally, successive interference cancellation (SIC) based hybrid precoding through sub-array grouping is illustrated for the DPA-MIMO system, which decomposes the joint sub-array RF beamformer design into interactive per-sub-array-group design subproblems. Simulation results show that the two proposed channel estimators fully take advantage of the partial coupling characteristic of DPA-MIMO channels to perform channel recovery, and the proposed hybrid precoding algorithm is suitable for such an array-of-sub-arrays architecture with satisfactory performance and low complexity. Comment: accepted by IEEE Transactions on Vehicular Technology
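
    A simplified single-sub-array sketch of the beam-domain sparsity that JOMP and JSBL-$\ell_2$ exploit: the channel is sparse in a DFT (beam) dictionary, pilot sounding yields compressed measurements, and a plain OMP loop recovers the support (JOMP would additionally share support information across sub-arrays; all dimensions and the random pilot pattern are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
N, L_paths, M_pilot = 64, 3, 20                  # antennas in one sub-array, paths, pilot beams

F = np.fft.fft(np.eye(N)) / np.sqrt(N)           # beam-domain (DFT) dictionary
g = np.zeros(N, complex)                         # sparse beam-domain channel
idx = rng.choice(N, L_paths, replace=False)
g[idx] = rng.standard_normal(L_paths) + 1j * rng.standard_normal(L_paths)
h = F @ g                                        # spatial-domain channel

# Pilot sounding with random training beams, standing in for the coordinated
# open-loop pilot beam pattern described in the abstract.
Phi = (rng.standard_normal((M_pilot, N)) + 1j * rng.standard_normal((M_pilot, N))) / np.sqrt(2 * N)
y = Phi @ h + 0.01 * (rng.standard_normal(M_pilot) + 1j * rng.standard_normal(M_pilot))

# Plain OMP over the effective dictionary Phi @ F recovers the beam-domain support.
D = Phi @ F
support, r = [], y.copy()
for _ in range(L_paths):
    support.append(int(np.argmax(np.abs(D.conj().T @ r))))
    Ds = D[:, support]
    r = y - Ds @ np.linalg.lstsq(Ds, y, rcond=None)[0]
print(sorted(support), sorted(idx.tolist()))
```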

    The Random Frequency Diverse Array: A New Antenna Structure for Uncoupled Direction-Range Indication in Active Sensing

    Full text link
    In this paper, we propose a new type of array antenna, termed the Random Frequency Diverse Array (RFDA), for uncoupled indication of target direction and range with low system complexity. In an RFDA, each array element has a narrow bandwidth and a randomly assigned carrier frequency. The beampattern of the array is shown to be stochastic but thumbtack-like, and its stochastic characteristics, such as the mean, variance, and asymptotic distribution, are derived analytically. Based on these two features, we propose two kinds of algorithms for signal processing. One is matched filtering, due to the beampattern's good characteristics. The other is compressive sensing, because the new approach can be regarded as a sparse and random sampling of target information in the spatial-frequency domain. Fundamental limits, such as the Cramér-Rao bound and the observing matrix's mutual coherence, are provided as performance guarantees of the new array structure. The features and performance of RFDA are verified with numerical results. Comment: 13 pages, 10 figures
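
    A toy matched-filtering sketch of the direction-range indication idea, using a deliberately simplified narrowband phase model (per-element random carrier in the range term, half-wavelength spacing in the direction term); the paper's beampattern derivation and parameter values differ, so treat this purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
M, c, fc = 16, 3e8, 10e9                         # elements, speed of light, carrier (illustrative)
d = c / fc / 2                                   # half-wavelength spacing
freqs = fc + rng.uniform(-5e6, 5e6, M)           # randomly assigned per-element frequencies

def steering(theta, r):
    """Toy RFDA manifold: per-element frequency times round-trip delay plus
    the usual linear-array direction term (a simplified stand-in model)."""
    m = np.arange(M)
    phase = 2 * np.pi * (freqs * 2 * r / c + m * d * np.sin(theta) * fc / c)
    return np.exp(-1j * phase) / np.sqrt(M)

# A single target and a matched-filter scan over a direction-range grid.
theta0, r0 = np.deg2rad(20.0), 1500.0
y = steering(theta0, r0) + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

thetas = np.deg2rad(np.linspace(-60, 60, 121))
ranges = np.linspace(1000, 2000, 101)
score = np.array([[np.abs(np.vdot(steering(t, r), y)) for r in ranges] for t in thetas])
ti, ri = np.unravel_index(np.argmax(score), score.shape)
print(np.rad2deg(thetas[ti]), ranges[ri])        # should land near (20 deg, 1500 m)
```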

    Recovering Model Structures from Large Low Rank and Sparse Covariance Matrix Estimation

    Full text link
    Many popular statistical models, such as factor and random effects models, give rise to a certain type of covariance structure that is a sum of low-rank and sparse matrices. This paper introduces a penalized approximation framework to recover such model structures from large covariance matrix estimation. We propose an estimator based on minimizing a non-likelihood loss with separable non-smooth penalty functions. This estimator is shown to exactly recover the rank and sparsity patterns of these two components, and thus partially recovers the model structures. Convergence rates under various matrix norms are also presented. To compute this estimator, we further develop a first-order iterative algorithm to solve a convex optimization problem that contains separable non-smooth functions, and the algorithm is shown to produce a solution within $O(1/t^2)$ of the optimum after any finite number of iterations $t$. Numerical performance is illustrated using simulated data and stock portfolio selection on the S&P 100. Comment: 35 pages, 3 figures. Presented at JSM 2011 and various invited seminars since February 2011. R package available from http://cran.r-project.org/web/packages/lorec/index.htm
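
    A hedged proximal-gradient sketch of the penalized approximation idea: fit the sample covariance by a low-rank plus sparse sum with a nuclear-norm and an entry-wise $\ell_1$ penalty (the paper's accelerated first-order method attains the $O(1/t^2)$ gap; this unaccelerated loop and the penalty weights are only illustrative):

```python
import numpy as np

def low_rank_plus_sparse(Sigma_hat, lam_L=0.5, lam_S=0.05, n_iter=300):
    """Proximal gradient on 0.5*||Sigma_hat - L - S||_F^2 + lam_L*||L||_* + lam_S*||S||_1
    with step size 0.5 (the joint smooth term has Lipschitz constant 2)."""
    p = Sigma_hat.shape[0]
    L, S = np.zeros((p, p)), np.zeros((p, p))
    for _ in range(n_iter):
        R = Sigma_hat - L - S                                        # negative gradient for both blocks
        U, s, Vt = np.linalg.svd(L + 0.5 * R, full_matrices=False)
        L_new = U @ np.diag(np.maximum(s - 0.5 * lam_L, 0.0)) @ Vt   # singular-value thresholding
        Z = S + 0.5 * R
        S = np.sign(Z) * np.maximum(np.abs(Z) - 0.5 * lam_S, 0.0)    # entry-wise soft threshold
        L = L_new
    return L, S

rng = np.random.default_rng(7)
p, r, n = 30, 2, 2000
F = rng.standard_normal((p, r))                                      # factor loadings -> low-rank part
X = rng.standard_normal((n, r)) @ F.T + rng.standard_normal((n, p))  # factor model + diagonal noise
L_hat, S_hat = low_rank_plus_sparse(np.cov(X, rowvar=False))
print(np.sum(np.linalg.svd(L_hat, compute_uv=False) > 1e-6), np.count_nonzero(S_hat))
```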

    Exploiting Restricted Boltzmann Machines and Deep Belief Networks in Compressed Sensing

    Full text link
    This paper proposes a compressed sensing (CS) scheme that exploits the representational power of restricted Boltzmann machines and deep learning architectures to model the prior distribution of the sparsity pattern of signals belonging to the same class. The determined probability distribution is then used in a maximum a posteriori (MAP) approach for the reconstruction. The parameters of the prior distribution are learned from training data. The motivation behind this approach is to model the higher-order statistical dependencies between the coefficients of the sparse representation, with the final goal of improving the reconstruction. The performance of the proposed method is validated on the Berkeley Segmentation Dataset and the MNIST Database of handwritten digits. Comment: Accepted for publication in IEEE Transactions on Signal Processing
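
    A small sketch of how an RBM scores a sparsity pattern: the free energy of a binary support vector is, up to a constant, its negative log-prior, which a MAP reconstruction would add to the data-fidelity term (the parameters below are random placeholders; in the paper they are learned from training data of the signal class):

```python
import numpy as np

def rbm_free_energy(v, W, b, c):
    """Free energy of binary visible vector v under an RBM with weights W,
    visible bias b, hidden bias c:  F(v) = -b.v - sum_j softplus(c_j + (W^T v)_j).
    Lower free energy means a more plausible sparsity pattern under the prior."""
    return -b @ v - np.sum(np.logaddexp(0.0, c + W.T @ v))

rng = np.random.default_rng(8)
n_visible, n_hidden = 20, 8
W = 0.1 * rng.standard_normal((n_visible, n_hidden))   # placeholder, would be learned
b = -2.0 * np.ones(n_visible)                          # bias favouring mostly-zero patterns
c = np.zeros(n_hidden)

sparse = np.zeros(n_visible); sparse[[2, 7, 11]] = 1.0
dense = np.ones(n_visible)
print(rbm_free_energy(sparse, W, b, c), rbm_free_energy(dense, W, b, c))
```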