
    Group Iterative Spectrum Thresholding for Super-Resolution Sparse Spectral Selection

    Recently, sparsity-based algorithms have been proposed for super-resolution spectrum estimation. However, to achieve adequately high resolution in real-world signal analysis, the dictionary atoms have to be close to each other in frequency, resulting in a coherent design. The popular convex compressed sensing methods break down in the presence of high coherence and large noise. We propose a new regularization approach that handles model collinearity and obtains parsimonious frequency selection simultaneously. It takes advantage of the pairing structure of sine and cosine atoms in the frequency dictionary. A probabilistic spectrum screening is also developed for fast computation in high dimensions. A data-resampling version of the high-dimensional Bayesian Information Criterion is used to determine the regularization parameters. Experiments show the efficacy and efficiency of the proposed algorithms in challenging situations with small sample size, high frequency resolution, and low signal-to-noise ratio.
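
    The abstract above turns on treating the sine/cosine atom pair at each candidate frequency jointly. Below is a minimal illustrative sketch of that pairing idea, using a plain ISTA-style gradient step with group soft-thresholding over each (cosine, sine) coefficient pair; it is not the paper's estimator, and the probabilistic screening and BIC-based tuning steps are omitted. All names and parameters are placeholders.

```python
import numpy as np

def group_iterative_thresholding(y, t, freqs, lam=0.1, step=None, n_iter=500):
    """Illustrative sparse spectral selection on a grid of candidate frequencies.

    Each frequency contributes a (cosine, sine) atom pair; the pair's
    coefficients are shrunk jointly (group soft-thresholding), so a frequency
    is either selected with both atoms or dropped entirely.
    """
    # Build the coherent dictionary: columns are cos/sin atoms per frequency.
    X = np.hstack([np.column_stack((np.cos(2 * np.pi * f * t),
                                    np.sin(2 * np.pi * f * t))) for f in freqs])
    n, p = X.shape
    if step is None:
        # Reciprocal of the Lipschitz constant of the least-squares gradient.
        step = 1.0 / np.linalg.norm(X, 2) ** 2

    beta = np.zeros(p)
    for _ in range(n_iter):
        # Gradient step on the least-squares fit.
        beta = beta + step * X.T @ (y - X @ beta)
        # Group soft-thresholding over each (cos, sin) pair.
        for k in range(len(freqs)):
            g = beta[2 * k:2 * k + 2]
            norm_g = np.linalg.norm(g)
            shrink = max(0.0, 1.0 - step * lam / norm_g) if norm_g > 0 else 0.0
            beta[2 * k:2 * k + 2] = shrink * g
    return beta.reshape(-1, 2)  # rows: (cosine coef, sine coef) per frequency
```

    Frequencies whose coefficient pair survives the thresholding are the selected spectral components.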

    Reweighted lp Constraint LMS-Based Adaptive Sparse Channel Estimation for Cooperative Communication System

    This paper studies sparsity-adaptive channel reconstruction in time-varying cooperative communication networks using the amplify-and-forward transmission scheme. A new sparsity-adaptive system identification method is proposed, namely the reweighted $\ell_p$-norm ($0 < p < 1$) penalized least mean square (LMS) algorithm. The main idea of the algorithm is to add a sparsity-inducing $\ell_p$-norm penalty to the cost function of the LMS algorithm. By doing so, the weight factor becomes a balance parameter of the associated $\ell_p$-norm adaptive sparse system identification. Subsequently, the steady state of the coefficient misalignment vector is derived theoretically, and performance upper bounds are provided that serve as a sufficient condition for accurate channel estimation with the reweighted $\ell_p$-norm LMS. With these upper bounds, we prove that the $\ell_p$ ($0 < p < 1$) sparsity-inducing cost function is superior to the reweighted $\ell_1$ norm. An optimal selection of $p$ for the $\ell_p$-norm problem is studied to recover various sparse channel vectors. Several experiments verify that the simulation results agree well with the theoretical analysis, demonstrating that the proposed algorithm has faster convergence and better steady-state behavior than other LMS algorithms.
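
    As a rough illustration of the penalty described above, the sketch below adds a regularized $\ell_p$-norm zero-attracting term to a standard LMS tap update. It assumes the common form in which the penalty gradient $p\,\mathrm{sign}(w_i)|w_i|^{p-1}$ is smoothed by a small $\varepsilon$ so it stays bounded near zero; the step size, penalty weight, and $p$ are placeholders, not the paper's values.

```python
import numpy as np

def lp_lms_identify(x, d, n_taps, mu=0.01, gamma=5e-4, p=0.5, eps=0.05):
    """Illustrative sparse system identification with an lp-norm penalized LMS.

    x : input signal, d : desired (received) signal, n_taps : channel length.
    The update is the usual LMS gradient step plus a zero-attracting term
    coming from the (regularized) gradient of gamma * ||w||_p^p.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        x_vec = x[n - n_taps:n][::-1]          # most recent samples first
        e = d[n] - w @ x_vec                   # a priori estimation error
        # Standard LMS term.
        w += mu * e * x_vec
        # Zero-attracting term: regularized gradient of the lp penalty,
        # which pushes small taps toward zero and encourages sparsity.
        w -= gamma * p * np.sign(w) / (eps + np.abs(w)) ** (1.0 - p)
    return w
```

    In the paper's reweighted variant, each tap's attractor would additionally be scaled by a weight computed from the previous estimate; that refinement is left out of this sketch.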

    A sparse regulatory network of copy-number driven expression reveals putative breast cancer oncogenes

    The influence of DNA cis-regulatory elements on a gene's expression has been intensively studied. However, little is known about expression driven by trans-acting DNA hotspots. DNA hotspots harboring copy-number aberrations are recognized to be important in cancer, as they influence multiple genes on a global scale. The challenge in detecting trans-effects is mainly the computational difficulty of detecting weak and sparse trans-acting signals amidst co-occurring passenger events. We propose an integrative approach to learn a sparse interaction network of DNA copy-number regions with their downstream targets in a breast cancer dataset. Information from this network helps distinguish copy-number-driven from copy-number-independent expression changes on a global scale. Our results further delineate cis- and trans-effects in a breast cancer dataset, in which important oncogenes such as ESR1 and ERBB2 appear to be highly copy-number dependent. Further, our model is shown to be efficient and, in terms of goodness of fit, no worse than other state-of-the-art predictors and network reconstruction models on both simulated and real data. Comment: Accepted at the IEEE International Conference on Bioinformatics & Biomedicine (BIBM 2010).
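
    One simple way to realize the kind of sparse copy-number-to-expression network described above is to regress each gene's expression on all copy-number regions with an L1 penalty, so that only a few regions (cis or trans) receive nonzero weights. The sketch below does exactly that with scikit-learn's Lasso as a stand-in; it is not the paper's integrative model, and the data shapes and penalty value are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_cn_expression_network(copy_number, expression, alpha=0.1):
    """Illustrative sparse network: one L1-penalized regression per gene.

    copy_number : (n_samples, n_regions) matrix of copy-number values
    expression  : (n_samples, n_genes) matrix of expression values
    Returns a (n_regions, n_genes) weight matrix whose nonzero entries link
    a copy-number region (cis or trans) to a downstream gene.
    """
    n_regions = copy_number.shape[1]
    n_genes = expression.shape[1]
    weights = np.zeros((n_regions, n_genes))
    for g in range(n_genes):
        model = Lasso(alpha=alpha, max_iter=10000)
        model.fit(copy_number, expression[:, g])
        weights[:, g] = model.coef_
    return weights
```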

    Scalable Sparse Cox's Regression for Large-Scale Survival Data via Broken Adaptive Ridge

    This paper develops a new scalable sparse Cox regression tool for sparse high-dimensional massive sample size (sHDMSS) survival data. The method is a local $L_0$-penalized Cox regression obtained by repeatedly performing reweighted $L_2$-penalized Cox regression. We show that the resulting estimator enjoys the best of $L_0$- and $L_2$-penalized Cox regressions while overcoming their limitations. Specifically, the estimator is selection consistent, oracle for parameter estimation, and possesses a grouping property for highly correlated covariates. Simulation results suggest that when the sample size is large, the proposed method with pre-specified tuning parameters has comparable or better performance than some popular penalized regression methods. More importantly, because the method naturally enables adaptation of efficient algorithms for massive $L_2$-penalized optimization and does not require costly data-driven tuning parameter selection, it has a significant computational advantage for sHDMSS data, offering an average 5-fold speedup over its closest competitor in empirical studies.
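
    The broken-adaptive-ridge idea above can be illustrated on an ordinary least-squares loss, where each reweighted ridge step has a closed form: the ridge weight on coefficient $j$ is $1/(\beta_j^{(k)})^2$ from the previous iterate, so coefficients drifting toward zero are penalized ever more strongly and collapse to exact zeros. This is a sketch of the iteration only; the paper applies the same reweighting to $L_2$-penalized Cox fits, which is not reproduced here, and the tolerances below are placeholders.

```python
import numpy as np

def broken_adaptive_ridge(X, y, lam=1.0, n_iter=50, eps=1e-8):
    """Illustrative broken-adaptive-ridge (BAR) iteration on a least-squares loss.

    Repeatedly solves the reweighted ridge problem
        beta_{k+1} = argmin ||y - X b||^2 + lam * sum_j b_j^2 / (beta_k_j^2 + eps),
    which approximates L0-penalized estimation: coefficients that shrink
    toward zero receive ever larger ridge weights and collapse to zero.
    """
    n, p = X.shape
    # Initialize with an ordinary ridge fit.
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    for _ in range(n_iter):
        weights = 1.0 / (beta ** 2 + eps)              # adaptive ridge weights
        beta_new = np.linalg.solve(X.T @ X + lam * np.diag(weights), X.T @ y)
        if np.max(np.abs(beta_new - beta)) < 1e-10:    # converged
            break
        beta = beta_new
    beta[np.abs(beta) < 1e-6] = 0.0                    # report the sparse fit
    return beta
```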