    Iterative Log Thresholding

    Sparse reconstruction approaches using the re-weighted $\ell_1$-penalty have been shown, both empirically and theoretically, to provide a significant improvement in recovering sparse signals in comparison to the $\ell_1$-relaxation. However, numerical optimization of such penalties involves solving problems with $\ell_1$-norms in the objective many times. Using the direct link of re-weighted $\ell_1$-penalties to the concave log-regularizer for sparsity, we derive a simple prox-like algorithm for the log-regularized formulation. The proximal splitting step of the algorithm has a closed-form solution, and we call the algorithm 'log-thresholding' in analogy to soft thresholding for the $\ell_1$-penalty. We establish convergence results, and demonstrate that log-thresholding provides more accurate sparse reconstructions than both soft and hard thresholding. Furthermore, the approach extends directly to optimization over matrices with a penalty on rank (i.e. the nuclear norm penalty and its re-weighted version), where we suggest a singular-value log-thresholding approach. Comment: 5 pages, 4 figures
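
    To make the closed-form prox step concrete, here is a minimal NumPy sketch of what such a log-thresholding operator could look like for the penalty lam * log(eps + |x|); the function name, the eps parametrisation, and the explicit objective-comparison rule for choosing between zero and the nonzero root are illustrative assumptions, not necessarily the paper's exact operator.

    ```python
    import numpy as np

    def log_threshold(y, lam, eps):
        """Elementwise prox sketch for the penalty lam * log(eps + |x|):
        solves min_x 0.5*(x - y)**2 + lam*log(eps + |x|) per element.
        Stationary points satisfy a quadratic in |x|; we keep the larger
        root only where it beats x = 0 on the objective (assumed rule).
        """
        y = np.asarray(y, dtype=float)
        mag = np.abs(y)
        disc = (mag + eps) ** 2 - 4.0 * lam       # discriminant of the quadratic
        root = np.where(disc > 0.0,
                        0.5 * ((mag - eps) + np.sqrt(np.maximum(disc, 0.0))),
                        0.0)
        root = np.maximum(root, 0.0)              # keep the nonnegative branch

        def obj(x):
            return 0.5 * (x - mag) ** 2 + lam * np.log(eps + x)

        keep = (disc > 0.0) & (obj(root) < obj(np.zeros_like(mag)))
        return np.sign(y) * np.where(keep, root, 0.0)
    ```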

    Successive Concave Sparsity Approximation for Compressed Sensing

    In this paper, based on a successively accuracy-increasing approximation of the $\ell_0$ norm, we propose a new algorithm for recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that the series of approximations asymptotically coincides with the $\ell_1$ and $\ell_0$ norms as the approximation accuracy moves from the worst fit to the best fit. When measurements are noise-free, we propose an optimization scheme that leads to a sequence of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms. Comment: Submitted to IEEE Trans. on Signal Processing
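
    In the noisy setting the paper proposes iterative thresholding methods; as a rough sketch of that general template (not the paper's specific successive-approximation scheme), a proximal-gradient loop with a pluggable thresholding operator could look as follows. The function names, iteration count, and step-size choice are assumptions.

    ```python
    import numpy as np

    def iterative_thresholding(A, y, prox, lam, n_iter=200):
        """Generic proximal-gradient loop for
            min_x 0.5*||A x - y||**2 + lam * g(x),
        where prox(v, t) is a closed-form thresholding operator for g.
        With soft thresholding this reduces to ISTA; a concave surrogate
        of the l0 norm slots in the same way.
        """
        L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = prox(x + A.T @ (y - A @ x) / L, lam / L)
        return x

    # baseline operator: the soft-thresholding prox of the l1 norm
    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    ```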

    Representation Learning via Cauchy Convolutional Sparse Coding

    In representation learning, Convolutional Sparse Coding (CSC) enables unsupervised learning of features by jointly optimising both an $\ell_2$-norm fidelity term and a sparsity-enforcing penalty. This work investigates using a regularisation term derived from an assumed Cauchy prior for the coefficients of the feature maps of a CSC generative model. The sparsity penalty term resulting from this prior is solved via its proximal operator, which is then applied iteratively, element-wise, on the coefficients of the feature maps to optimise the CSC cost function. The performance of the proposed Iterative Cauchy Thresholding (ICT) algorithm in reconstructing natural images is compared against the common choice of the $\ell_1$-norm optimised via soft and hard thresholding (ISTA and IHT). ICT outperforms both in most of these reconstruction experiments across various datasets, with an average PSNR of up to 11.30 dB and 7.04 dB above ISTA and IHT, respectively. Comment: 19 pages, 9 figures, journal draft
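
    For concreteness, one way such a Cauchy proximal operator can be realised numerically: for the penalty lam * log(gamma**2 + x**2) (the negative log of a Cauchy density, up to constants), the stationary points of the prox objective solve a cubic, so per element we can pick the real root with the lowest cost. The function name and brute-force root selection are illustrative assumptions; the ICT literature derives conditions under which the cubic gives the minimiser directly.

    ```python
    import numpy as np

    def cauchy_prox(y, lam, gamma):
        """Elementwise prox sketch for lam * log(gamma**2 + x**2):
        stationary points of 0.5*(x - v)**2 + lam*log(gamma**2 + x**2)
        satisfy x**3 - v*x**2 + (gamma**2 + 2*lam)*x - v*gamma**2 = 0,
        so evaluate the real roots and keep the cheapest one.
        """
        y = np.atleast_1d(np.asarray(y, dtype=float))
        out = np.empty_like(y)
        for i, v in enumerate(y):
            roots = np.roots([1.0, -v, gamma**2 + 2.0 * lam, -v * gamma**2])
            real = roots[np.abs(roots.imag) < 1e-8].real
            cost = 0.5 * (real - v)**2 + lam * np.log(gamma**2 + real**2)
            out[i] = real[np.argmin(cost)]
        return out
    ```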

    A New Computational Method for the Sparsest Solutions to Systems of Linear Equations

    Deep Learning Designs for Physical Layer Communications

    Wireless communication systems and their underlying technologies have undergone unprecedented advances over the last two decades to meet the ever-increasing demands of various applications and emerging technologies. However, traditional signal processing schemes and algorithms for wireless communications cannot handle the surging complexity associated with fifth-generation (5G) and beyond communication systems due to network expansion, new emerging technologies, high data rates, and the ever-increasing demand for low latency. This thesis extends traditional downlink transmission schemes to deep learning-based precoding and detection techniques that are hardware-efficient and of lower complexity than the current state-of-the-art. The thesis focuses on precoding/beamforming in massive multiple-input multiple-output (MIMO) systems, signal detection, and lightweight neural network (NN) architectures for precoder and decoder designs. We introduce a learning-based precoder design via constructive interference (CI) that performs the precoding on a symbol-by-symbol basis. Instead of conventionally training an NN without considering the specifics of the optimisation objective, we unfold a power-minimisation symbol-level precoding (SLP) formulation based on the interior-point-method (IPM) proximal ‘log’ barrier function. Furthermore, we propose a concept of NN compression, where the weights are quantised to lower-precision numerical formats based on binary and ternary quantisation. We further introduce a stochastic quantisation technique, where parts of the NN weight matrix are quantised while the rest are left at full precision. Finally, we propose a systematic complexity scaling of deep neural network (DNN) based MIMO detectors. The model uses a fraction of the DNN inputs by scaling their values through weights that follow monotonically non-increasing functions. Furthermore, we investigate performance-complexity trade-offs via regularisation constraints on the layer weights such that, at inference, parts of the network layers can be removed with minimal impact on detection accuracy. Simulation results show that our proposed learning-based techniques offer better complexity-vs-BER (bit error rate) and complexity-vs-transmit-power performance than state-of-the-art MIMO detection and precoding techniques.
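
    As a toy illustration of the weight-quantisation idea, the sketch below ternarises a weight matrix to {-alpha, 0, +alpha} using the common threshold heuristic from the ternary-weight-network literature; the thesis's exact binary/ternary and stochastic schemes may differ, and delta_scale is an assumed hyperparameter.

    ```python
    import numpy as np

    def ternary_quantise(W, delta_scale=0.7):
        """Quantise weights to {-alpha, 0, +alpha} (sketch).
        delta decides which small weights are zeroed; alpha is the shared
        magnitude fitted to the surviving weights.
        """
        delta = delta_scale * np.mean(np.abs(W))   # zeroing threshold
        mask = np.abs(W) > delta                   # weights kept nonzero
        alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
        return alpha * np.sign(W) * mask
    ```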

    Sparse Deconvolution with Applications to Spike Sorting

    Chronic extracellular recording is the use of implanted electrodes to measure the electrical activity of nearby neurons over a long period of time. It presents an unparalleled view of neural activity over a broad range of time scales, offering sub-millisecond resolution of single action potentials while also allowing for continuous recording over the course of many months. These recordings pick up a rich collection of neural phenomena -- spikes, ripples, and theta oscillations, to name a few -- that can elucidate the activity of individual neurons and local circuits. However, this also presents an interesting challenge for data analysis. Chronic extracellular recordings contain overlapping signals from multiple sources, requiring these signals to be detected and classified before they can be properly analyzed. The combination of fine temporal resolution with long recording durations produces large datasets, requiring efficient algorithms that can operate at scale. In this thesis, I consider the problem of spike sorting: detecting spikes (the extracellular signatures of individual neurons' action potentials) and clustering them according to their putative source. First, I introduce a sparse deconvolution approach to spike detection, which seeks to detect spikes and represent them as the linear combination of basis waveforms. This approach is able to separate overlapping spikes without the need for source templates, and produces an output that can be used with a variety of clustering algorithms. Second, I introduce a clustering algorithm based around a mixture of drifting t-distributions. This model captures two features of chronic extracellular recordings -- cluster drift over time and heavy-tailed residuals in the distribution of spikes -- that are missing from previous models. This enables us to reliably track individual neurons over longer periods of time. I will also show that this model produces more accurate estimates of classification error, which is an important component of proper interpretation of the spike sorting output. Finally, I present a few theoretical results that may assist in the efficient implementation of sparse deconvolution.
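
    As a simplified, self-contained illustration of the sparse-deconvolution idea (explaining a recorded trace as a linear combination of shifted basis waveforms), the sketch below runs a greedy convolutional matching-pursuit loop; it is a stand-in for intuition, not the thesis's algorithm, and every name and parameter here is an assumption.

    ```python
    import numpy as np

    def sparse_deconv_mp(trace, kernels, max_events=100, tol=1e-3):
        """Greedy sketch: repeatedly find the (kernel, time) pair that
        best explains the residual, subtract its contribution, and
        record the event as (time, kernel_id, amplitude).
        """
        residual = np.asarray(trace, dtype=float).copy()
        norms = [float(np.dot(k, k)) for k in kernels]
        events = []
        for _ in range(max_events):
            best = None                      # (score, time, kernel_id, amp)
            for j, k in enumerate(kernels):
                corr = np.correlate(residual, k, mode="valid")
                t = int(np.argmax(np.abs(corr)))
                score = corr[t] ** 2 / norms[j]    # energy explained
                if best is None or score > best[0]:
                    best = (score, t, j, corr[t] / norms[j])
            score, t, j, amp = best
            if score < tol * float(np.dot(trace, trace)):
                break                        # remaining peaks are noise-level
            residual[t:t + len(kernels[j])] -= amp * np.asarray(kernels[j])
            events.append((t, j, amp))
        return events, residual
    ```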