
    Jointly Sparse Support Recovery via Deep Auto-encoder with Applications in MIMO-based Grant-Free Random Access for mMTC

    In this paper, a data-driven approach is proposed to jointly design the common sensing (measurement) matrix and the joint support recovery method for complex signals, using a standard deep auto-encoder for real numbers. The auto-encoder in the proposed approach includes an encoder that mimics the noisy linear measurement process for jointly sparse signals with a common sensing matrix, and a decoder that approximately performs joint sparse support recovery based on the empirical covariance matrix of the noisy linear measurements. The proposed approach can effectively exploit the common support and the properties of the sparsity patterns to achieve high recovery accuracy, and it has significantly shorter computation time than existing methods. We also study an application example, namely device activity detection in Multiple-Input Multiple-Output (MIMO)-based grant-free random access for massive machine-type communications (mMTC). The numerical results show that the proposed approach provides pilot sequences and device activity detection with better detection accuracy and substantially shorter computation time than well-known recovery methods.
    Comment: 5 pages, 8 figures, to be published in IEEE SPAWC 2020. arXiv admin note: text overlap with arXiv:2002.0262
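
    The described architecture lends itself to a compact sketch. Below is a minimal PyTorch illustration, assuming a real-valued formulation for brevity (the paper handles complex signals through a real-valued auto-encoder); the class name, layer sizes, noise level, and training details are all illustrative, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SupportRecoveryAE(nn.Module):
    def __init__(self, n=100, m=30, hidden=256):
        super().__init__()
        # learnable common sensing matrix, shared by all signals
        self.A = nn.Parameter(torch.randn(m, n) / m ** 0.5)
        # decoder: empirical covariance of measurements -> support probabilities
        self.decoder = nn.Sequential(
            nn.Linear(m * m, hidden), nn.ReLU(),
            nn.Linear(hidden, n), nn.Sigmoid(),
        )

    def forward(self, X, noise_std=0.1):
        # X: (batch, n, L) jointly sparse signals; each batch item shares one support
        Y = self.A @ X + noise_std * torch.randn(X.size(0), self.A.size(0), X.size(-1))
        cov = Y @ Y.transpose(1, 2) / Y.size(-1)   # empirical covariance, (m, m)
        return self.decoder(cov.flatten(1))        # entrywise support probability

model = SupportRecoveryAE()
mask = (torch.rand(8, 100, 1) < 0.1).float()       # ~10% active rows (common support)
X = torch.randn(8, 100, 16) * mask
probs = model(X)                                   # train with BCE against the mask
```

    Training end to end with a binary cross-entropy loss against the true support indicator optimizes the sensing matrix and the recovery map together, which is the source of the accuracy and speed gains the abstract reports.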

    Optimized Sparse Matrix Operations for Reverse Mode Automatic Differentiation

    Sparse matrix representations are ubiquitous in computational science and machine learning, yielding significant reductions in compute time, compared with dense representations, for problems with local connectivity. The adoption of sparse representations in leading ML frameworks such as PyTorch is incomplete, however, with support for both automatic differentiation and GPU acceleration missing. In this work, we present an implementation of a CSR-based sparse matrix wrapper for PyTorch with CUDA acceleration for basic matrix operations, as well as automatic differentiability. We also present several applications of the resulting sparse kernels to optimization problems, demonstrating ease of implementation and reporting performance measurements against their dense counterparts.
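
    As a rough sketch of the mechanism such a wrapper relies on (this is an illustration of the general idea, not the paper's actual implementation), one can route a CSR matrix multiply through a custom torch.autograd.Function so reverse-mode AD reaches the dense operand; the backward pass below detours through COO format, since transposed-CSR matmul support varies across PyTorch versions.

```python
import torch

class SparseMatMul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, A_csr, X):
        # A_csr: sparse CSR matrix treated as a constant; X: dense (n, k)
        ctx.save_for_backward(A_csr)
        return A_csr @ X

    @staticmethod
    def backward(ctx, grad_out):
        (A_csr,) = ctx.saved_tensors
        # gradient w.r.t. the dense operand is A^T @ grad_out
        return None, torch.sparse.mm(A_csr.to_sparse_coo().t(), grad_out)

A = torch.randn(50, 50)
A[A.abs() < 1.0] = 0.0                 # zero out most entries
A_csr = A.to_sparse_csr()
X = torch.randn(50, 10, requires_grad=True)
loss = SparseMatMul.apply(A_csr, X).sum()
loss.backward()                        # gradients reach the dense operand X
print(X.grad.shape)                    # torch.Size([50, 10])
```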

    ϵ-sparse representations: generalized sparse approximation and the equivalent family of SVM tasks

    A relation between a family of generalized Support Vector Machine (SVM) problems and the novel ϵ-sparse representation is established. In defining ϵ-sparse representations, we use a natural generalization of the classical ϵ-insensitive cost function for vectors. The insensitivity parameter of the SVM problem is transformed into component-wise insensitivity, and overall sparsification is thus replaced by component-wise sparsification. The connection between the two problems is built through the generalized Moore-Penrose inverse of the Gram matrix associated with the kernel.
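
    For concreteness, one plausible reading of this component-wise generalization (the work's exact definition may differ) is the scalar ϵ-insensitive cost applied coordinate by coordinate, with its own threshold per component:

```latex
% Classical scalar eps-insensitive cost, and a component-wise vector
% generalization with a separate insensitivity threshold per coordinate.
\[
  |x|_{\epsilon} = \max\bigl(0,\, |x| - \epsilon\bigr),
  \qquad
  \|\mathbf{x}\|_{\boldsymbol{\epsilon}}
    = \sum_{i=1}^{n} \max\bigl(0,\, |x_i| - \epsilon_i\bigr).
\]
```

    Under such a cost, a coefficient vector counts as ϵ-sparse when every component of its residual stays inside its own insensitivity band, which is what replaces overall sparsification by component-wise sparsification.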

    Cholesky-factorized sparse kernel in support vector machines

    The Support Vector Machine (SVM) is one of the most powerful machine learning algorithms, owing to its convex optimization formulation and its ability to handle non-linear classification. However, one of its main drawbacks is the long time it takes to train on large data sets. This limitation is aggravated when applying non-linear kernels (e.g. the RBF kernel), which are usually required to obtain better separation for linearly inseparable data sets. In this thesis, we study an approach that aims to speed up training by combining the better performance of RBF kernels with the fast training of a linear solver, LIBLINEAR. The approach uses an RBF kernel with a sparse matrix, which is factorized using the Cholesky decomposition. The method is tested on large artificial and real data sets and compared to the standard RBF and linear kernels, with both accuracy and training time reported. For most data sets, the results show a large reduction in training time, over 90%, whilst maintaining the accuracy.
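
    A rough sketch of this pipeline (details, thresholds, and data differ from the thesis) can be put together with scikit-learn, whose LinearSVC is backed by LIBLINEAR: build the RBF Gram matrix, sparsify it by thresholding, Cholesky-factorize it, and feed the factor's rows to the linear solver as features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC   # LIBLINEAR-backed linear solver

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

K = rbf_kernel(X, X, gamma=0.1)
K[K < 1e-3] = 0.0                            # sparsify: drop near-zero similarities
w = np.linalg.eigvalsh(K)                    # thresholding can break positive
if w[0] < 1e-8:                              # definiteness, so shift the spectrum
    K += (1e-8 - w[0]) * np.eye(len(K))
L = np.linalg.cholesky(K)                    # K = L @ L.T

clf = LinearSVC(max_iter=10000).fit(L, y)    # rows of L become linear features
print("train accuracy:", clf.score(L, y))
```

    Because K = L Lᵀ, a linear classifier on the rows of L behaves like a kernel machine on K, while the training itself runs at the speed of the linear solver.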

    Accelerating Relevance-Vector-Machine-Based Classification of Hyperspectral Image with Parallel Computing

    Benefiting from the kernel trick and the sparsity property, the relevance vector machine (RVM) can acquire a sparse solution with generalization ability comparable to that of the support vector machine. The sparsity means that prediction takes much less time, making the RVM a promising tool for classifying large-scale hyperspectral images. However, the RVM is not widely used because of its slow training procedure. To address this problem, in this paper the classification of hyperspectral images using the RVM is accelerated with parallel computing techniques. Parallelism is exploited at the levels of the multiclass strategy, the ensemble of multiple weak classifiers, and the matrix operations. The parallel RVMs are implemented in C using the parallel routines of linear algebra packages and the message passing interface (MPI) library. The proposed methods are evaluated on the AVIRIS Indian Pines data set on a Beowulf cluster and on multicore platforms. The results show that the parallel RVMs clearly accelerate the training procedure.
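
    The coarsest of these levels, the multiclass strategy, is easy to illustrate: the binary subproblems of a one-vs-rest decomposition are independent, so the binary RVMs can train concurrently. The sketch below uses Python processes rather than the paper's MPI setup, and train_binary_rvm is a hypothetical stand-in for an actual RVM trainer.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def train_binary_rvm(X, y_binary):
    """Hypothetical placeholder: fit one binary RVM via sparse Bayesian
    learning and return its relevance vectors and weights."""
    ...

def train_one_vs_rest(X, y, n_workers=4):
    classes = np.unique(y)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        # the per-class binary subproblems are independent, so they run in parallel
        futures = {c: pool.submit(train_binary_rvm, X, (y == c).astype(int))
                   for c in classes}
        return {c: f.result() for c, f in futures.items()}
```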

    Support matrix machine: A review

    The support vector machine (SVM) is one of the most studied paradigms in machine learning for classification and regression problems. It relies on vectorized input data. However, a significant portion of real-world data exists in matrix form and is fed to the SVM by reshaping the matrices into vectors. This reshaping disrupts the spatial correlations inherent in the matrix data, and converting matrices into vectors yields high-dimensional input, which introduces significant computational complexity. To overcome these issues in classifying matrix input data, the support matrix machine (SMM) was proposed; it represents one of the emerging methodologies tailored for handling matrix input data. The SMM preserves the structural information of the matrix data through the spectral elastic net penalty, a combination of the nuclear norm and the Frobenius norm. This article provides the first in-depth analysis of the development of the SMM model and can serve as a thorough summary for both novices and experts. We discuss numerous SMM variants, including robust, sparse, class-imbalance, and multi-class classification models. We also analyze applications of the SMM model, and we conclude the article by outlining potential future research avenues that may motivate researchers to advance the SMM algorithm further.
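
    In its commonly cited soft-margin form (introduced by Luo et al., 2015), the SMM objective combines the hinge loss with the spectral elastic net penalty mentioned above:

```latex
% Soft-margin SMM: hinge loss plus the spectral elastic net penalty,
% i.e. a Frobenius-norm term and a nuclear-norm term on the weight matrix W.
\[
  \min_{W,\,b}\;
    \tfrac{1}{2}\,\|W\|_F^2
    + \tau\,\|W\|_*
    + C \sum_{i=1}^{N}
        \max\!\Bigl(0,\; 1 - y_i\bigl[\operatorname{tr}(W^{\top} X_i) + b\bigr]\Bigr)
\]
```

    The nuclear norm promotes a low-rank weight matrix, which is how the spatial structure of the matrix input is preserved, while the Frobenius term keeps the problem strongly convex.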