
    Sparse Signal Processing Concepts for Efficient 5G System Design

    As it becomes increasingly apparent that 4G will not be able to meet the emerging demands of future mobile communication systems, the questions of what could make up a 5G system, what the crucial challenges are, and what the key drivers will be are the subject of intensive, ongoing discussion. Partly due to the advent of compressive sensing, methods that can optimally exploit sparsity in signals have received tremendous attention in recent years. In this paper we will describe a variety of scenarios in which signal sparsity arises naturally in 5G wireless systems. Signal sparsity and the associated rich collection of tools and algorithms will thus be a viable source of innovation in 5G wireless system design. We will describe applications of this sparse signal processing paradigm in MIMO random access, cloud radio access networks, compressive channel-source network coding, and embedded security. We will also emphasize important open problems that may arise in 5G system design, for which sparsity will potentially play a key role in their solution. Comment: 18 pages, 5 figures, accepted for publication in IEEE Access
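    To make the sparsity idea concrete, the sketch below recovers a sparse vector from far fewer random linear measurements than its ambient dimension using orthogonal matching pursuit (OMP), a standard compressive sensing primitive. All dimensions, the sensing matrix, and the signal are illustrative stand-ins, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 256, 5                 # measurements, ambient dim, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random sensing matrix
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                       # compressed measurements (n << m)

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))   # best-matching atom
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ x_s      # residual after least-squares refit
x_hat = np.zeros(m)
x_hat[support] = x_s
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

    With n well below m, exact recovery hinges on the sparsity level k and the incoherence of the sensing matrix, which is what makes random measurement operators attractive in such designs.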

    Incremental and Adaptive L1-Norm Principal Component Analysis: Novel Algorithms and Applications

    L1-norm Principal-Component Analysis (L1-PCA) is known to attain remarkable resistance against faulty/corrupted points among the processed data. However, computing the L1-PCA of “big data” with a large number of measurements and/or dimensions may be computationally impractical. This work proposes new algorithmic solutions for incremental and adaptive L1-PCA. The first algorithm computes L1-PCA incrementally, processing one measurement at a time, with very low computational and memory requirements; it is thus appropriate for big-data and big-streaming-data applications. The second algorithm combines the merits of the first with the additional ability to track changes in the nominal signal subspace by revising the computed L1-PCA as new measurements arrive, demonstrating both robustness against outliers and adaptivity to signal-subspace changes. The proposed algorithms are evaluated in an array of experimental studies on subspace estimation, video surveillance (foreground/background separation), image conditioning, and direction-of-arrival (DoA) estimation.
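    For intuition, the sketch below implements a well-known fixed-point heuristic for the first L1 principal component (maximizing the L1 projection energy over unit-norm directions). It is a generic batch illustration of L1-PCA's outlier resistance, not the incremental or adaptive algorithms proposed in this work.

```python
import numpy as np

def l1_pc(X, iters=100, seed=0):
    """First L1 principal component of X (d x n, columns = measurements)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = np.sign(X.T @ w)          # optimal signs for current direction
        s[s == 0] = 1.0
        w_new = X @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):     # fixed point reached
            break
        w = w_new
    return w

# Synthetic rank-1 data plus noise, corrupted by a few gross outliers.
rng = np.random.default_rng(1)
basis = rng.standard_normal(5)
X = np.outer(basis, rng.standard_normal(200)) + 0.1 * rng.standard_normal((5, 200))
X[:, :5] += 20.0                      # faulty measurements
w = l1_pc(X)
print("alignment with true direction:", abs(w @ basis) / np.linalg.norm(basis))
```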

    Side information in robust principal component analysis: algorithms and applications

    Dimensionality reduction and noise removal are fundamental machine learning tasks that are vital to artificial intelligence applications. Principal component analysis has long been utilised in computer vision to achieve the above-mentioned goals. Recently, it has been enhanced in terms of robustness to outliers in robust principal component analysis. Both convex and non-convex programs have been developed to solve this new formulation, some with exact convergence guarantees. Its effectiveness can be witnessed in image and video applications ranging from image denoising and alignment to background separation and face recognition. However, robust principal component analysis is by no means perfect. This dissertation identifies its limitations, explores various promising options for improvement and validates the proposed algorithms on both synthetic and real-world datasets. Common algorithms approximate the NP-hard formulation of robust principal component analysis with convex envelopes. Though under certain assumptions exact recovery can be guaranteed, the relaxation margin is too big to be squandered. In this work, we propose to apply gradient descent on the Burer-Monteiro bilinear matrix factorisation to squeeze this margin given available subspaces. This non-convex approach improves upon conventional convex approaches both in terms of accuracy and speed. On the other hand, oftentimes there is accompanying side information when an observation is made. The ability to assimilate such auxiliary sources of data can ameliorate the recovery process. In this work, we investigate in depth such possibilities for incorporating side information in restoring the true underlying low-rank component from gross sparse noise. Lastly, tensors, also known as multi-dimensional arrays, represent real-world data more naturally than matrices. It is thus advantageous to adapt robust principal component analysis to tensors. Since there is no exact equivalence between tensor rank and matrix rank, we employ the notions of Tucker rank and CP rank as our optimisation objectives. Overall, this dissertation carefully defines the problems when facing real-world computer vision challenges, extensively and impartially evaluates the state-of-the-art approaches, proposes novel solutions and provides sufficient validations on both simulated data and popular real-world datasets for various mainstream computer vision tasks.
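    As context for the convex baseline the dissertation builds on, the sketch below decomposes a corrupted matrix X into a low-rank part L (via singular-value thresholding) and a sparse part S (via soft thresholding) by simple alternation. The thresholds and iteration count are illustrative choices, not the exact programs studied in the dissertation.

```python
import numpy as np

def soft(M, t):
    """Entrywise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def rpca(X, lam=None, tau=None, iters=200):
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # usual RPCA weight
    tau = tau if tau is not None else 0.1 * np.linalg.norm(X, 2)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U @ np.diag(soft(s, tau)) @ Vt     # singular-value thresholding
        S = soft(X - L, lam * tau)             # sparse residual
    return L, S

# Rank-4 matrix plus sparse gross corruption.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))
S0 = (rng.random((50, 50)) < 0.05) * 10.0
L, S = rpca(M + S0)
print("low-rank error:", np.linalg.norm(L - M) / np.linalg.norm(M))
```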

    Topology optimization for inverse magnetostatics as sparse regression: application to electromagnetic coils for stellarators

    Topology optimization, a technique to determine where material should be placed within a predefined volume in order to minimize a physical objective, is used across a wide range of scientific fields and applications. A general application for topology optimization is inverse magnetostatics: a desired magnetic field is prescribed, and a distribution of steady currents is computed to produce that target field. In the present work, electromagnetic coils are designed by magnetostatic topology optimization, using volume elements (voxels) of electric current, constrained so the current is divergence-free. Compared to standard electromagnet shape optimization, our method has the advantage that the nonlinearity in the Biot-Savart law with respect to position is avoided, enabling convex cost functions and a useful reformulation of topology optimization as sparse regression. To demonstrate, we consider the application of designing electromagnetic coils for a class of plasma experiments known as stellarators. We produce topologically exotic coils for several new stellarator designs and show that these solutions can be interpolated into a filamentary representation and then further optimized.
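    The reformulation rests on the Biot-Savart law being linear in the voxel currents, so matching a target field reduces to a sparse regression (LASSO) problem. The sketch below solves such a problem with the iterative shrinkage-thresholding algorithm (ISTA); the operator A and target field b are random stand-ins for a real magnetostatic discretization, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))   # field response per unit voxel current
b = rng.standard_normal(100)          # target field at the evaluation points
lam = 0.1                             # sparsity weight: prefer few active voxels
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step for the smooth part

x = np.zeros(A.shape[1])              # voxel currents (most should vanish)
for _ in range(500):                  # ISTA: gradient step, then shrinkage
    z = x - step * (A.T @ (A @ x - b))
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
print("active voxels:", np.count_nonzero(x), "of", x.size)
```

    The zero entries of x correspond to voxels carrying no current, which is how sparse regression expresses the "where should material go" question of topology optimization.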

    Support matrix machine: A review

    Support vector machine (SVM) is one of the most studied paradigms in the realm of machine learning for classification and regression problems. It relies on vectorized input data. However, a significant portion of real-world data exists in matrix format, which is given as input to SVM by reshaping the matrices into vectors. The process of reshaping disrupts the spatial correlations inherent in the matrix data. Also, converting matrices into vectors results in input data with a high dimensionality, which introduces significant computational complexity. To overcome these issues in classifying matrix input data, the support matrix machine (SMM) has been proposed. It represents one of the emerging methodologies tailored for handling matrix input data. The SMM method preserves the structural information of the matrix data by using the spectral elastic net penalty, which is a combination of the nuclear norm and the Frobenius norm. This article provides the first in-depth analysis of the development of the SMM model, which can be used as a thorough summary by both novices and experts. We discuss numerous SMM variants, such as robust, sparse, class-imbalance, and multi-class classification models. We also analyze the applications of the SMM model and conclude the article by outlining potential future research avenues and possibilities that may motivate academics to advance the SMM algorithm.
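    The spectral elastic net penalty mentioned above, tau·||W||_* + (1/2)·||W||_F², admits a closed-form proximal map: singular-value thresholding followed by a uniform shrinkage. The sketch below implements that map, the kind of subproblem that arises in ADMM-style SMM solvers; it is a building block under stated assumptions, not a full SMM trainer.

```python
import numpy as np

def prox_spectral_elastic_net(Y, tau, rho=1.0):
    """argmin_W  tau*||W||_* + 0.5*||W||_F^2 + (rho/2)*||W - Y||_F^2.

    Completing the square merges the two Frobenius terms, so the solution
    is singular-value thresholding of Y followed by a uniform rescaling.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.maximum(rho * s - tau, 0.0) / (1.0 + rho)
    return U @ np.diag(s_new) @ Vt

Y = np.arange(12.0).reshape(3, 4)     # toy matrix input
W = prox_spectral_elastic_net(Y, tau=2.0)
print(np.linalg.matrix_rank(W))       # thresholding drops small singular values
```

    Shrinking whole singular values, rather than individual entries, is what lets SMM regularize matrix inputs without first flattening them into vectors.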

    Laterally constrained low-rank seismic data completion via cyclic-shear transform

    A crucial step in seismic data processing consists in reconstructing the wavefields at spatial locations where faulty or absent sources and/or receivers result in missing data. Several developments in seismic acquisition and interpolation strive to restore signals fragmented by sampling limitations; still, seismic data frequently remain poorly sampled in the source, receiver, or both coordinates. An intrinsic limitation of real-life dense acquisition systems, which are often exceedingly expensive, is that they remain unable to circumvent various physical and environmental obstacles, ultimately hindering a proper recording scheme. In many situations, when the preferred reconstruction method fails to render the actual continuous signals, subsequent imaging studies are negatively affected by sampling artefacts. A recent alternative builds on low-rank completion techniques to deliver superior restoration results on seismic data, paving the way for data kernel compression that can potentially unlock multiple modern processing methods so far prohibited in 3D field scenarios. In this work, we propose a novel transform domain that reveals the low-rank character of seismic data while avoiding the inherent matrix enlargement introduced when the data are sorted in the midpoint-offset domain, and we develop a robust extension of the current matrix completion framework that accounts for lateral physical constraints, enforcing a degree of similarity among neighbouring points. Our strategy successfully interpolates missing sources and receivers simultaneously in synthetic and field data.
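    At the heart of such methods is generic low-rank matrix completion. The sketch below restores missing entries by iterative soft-imputation (impute, singular-value threshold, re-impose the observed samples); the cyclic-shear transform and the lateral constraints proposed here are not modeled, and all data are synthetic.

```python
import numpy as np

def complete(X, mask, tau=5.0, iters=100):
    """X holds zeros at missing entries; mask is True where observed."""
    Y = X.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # low-rank estimate
        Y = np.where(mask, X, L)      # keep observed samples, impute the rest
    return Y

rng = np.random.default_rng(0)
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))   # rank 3
mask = rng.random(M.shape) < 0.5      # only half the entries are recorded
X = np.where(mask, M, 0.0)
err = np.linalg.norm(complete(X, mask) - M) / np.linalg.norm(M)
print("relative completion error:", err)
```

    The whole game is choosing a domain in which the data matrix is genuinely low-rank, which is precisely what the proposed cyclic-shear transform is designed to provide.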

    Sparse Proteomics Analysis - A compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data

    Background: High-throughput proteomics techniques, such as mass spectrometry (MS)-based approaches, produce very high-dimensional data-sets. In a clinical setting one is often interested in how mass spectra differ between patients of different classes, for example spectra from healthy patients vs. spectra from patients having a particular disease. Machine learning algorithms are needed to (a) identify these discriminating features and (b) classify unknown spectra based on this feature set. Since the acquired data is usually noisy, the algorithms should be robust against noise and outliers, while the identified feature set should be as small as possible. Results: We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that allows us to identify a minimal discriminating set of features from mass spectrometry data-sets. We show (1) how our method performs on artificial and real-world data-sets, (2) that its performance is competitive with standard (and widely used) algorithms for analyzing proteomics data, and (3) that it is robust against random and systematic noise. We further demonstrate the applicability of our algorithm to two previously published clinical data-sets.
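    As a generic illustration of selecting a minimal discriminating feature set from high-dimensional spectra, the sketch below trains an L1-regularized logistic regression by proximal gradient descent. It is a stand-in for sparsity-based feature selection, not the SPA algorithm itself, and the data are synthetic.

```python
import numpy as np

def l1_logreg(X, y, lam=0.2, step=1e-2, iters=2000):
    """L1-regularized logistic regression; X: (n, d), labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margins = np.clip(y * (X @ w), -30.0, 30.0)        # avoid overflow
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        z = w - step * grad                                # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))    # 80 spectra, 500 m/z features
w_true = np.zeros(500)
w_true[:4] = 3.0                      # four truly discriminating peaks
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(80))
print("selected features:", np.flatnonzero(l1_logreg(X, y)))
```

    The L1 penalty drives most coefficients exactly to zero, so the nonzero entries directly name a small candidate set of discriminating peaks, mirroring goals (a) and (b) above.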