
    Bayesian Optimization of Text Representations

    When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. These choices can have a large effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We propose an approach to optimizing over this space of choices, formulating the problem as one of global optimization. We apply a sequential model-based optimization technique and show that our method makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variable models or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning.
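
    As a rough illustration of the kind of search the abstract describes, here is a minimal sketch of sequential model-based (Bayesian) optimization over text-representation choices, using scikit-optimize and scikit-learn; the particular search space, dataset, and classifier are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: SMBO over text-representation hyperparameters (illustrative).
from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

# Representation choices: n-gram order, binary vs. tf-idf counts, minimum
# document frequency, plus the linear model's regularization strength.
space = [
    Integer(1, 3, name="max_ngram"),
    Categorical([True, False], name="binary"),
    Integer(1, 10, name="min_df"),
    Real(1e-3, 1e2, prior="log-uniform", name="C"),
]

def objective(params):
    max_ngram, binary, min_df, C = params
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, max_ngram), binary=binary, min_df=min_df),
        LogisticRegression(C=C, max_iter=1000),
    )
    # Negate accuracy because gp_minimize minimizes its objective.
    return -cross_val_score(model, data.data, data.target, cv=3).mean()

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best accuracy:", -result.fun, "with", result.x)
```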

    COSINE: Compressive Network Embedding on Large-scale Information Networks

    There has recently been a surge in approaches that learn low-dimensional embeddings of nodes in networks. Since many real-world networks are large-scale, it is inefficient for existing approaches to store huge numbers of parameters in memory and update them edge after edge. Based on the observation that nodes with similar neighborhoods will be close to each other in embedding space, we propose the COSINE (COmpresSIve NE) algorithm, which reduces the memory footprint and accelerates training by sharing parameters among similar nodes. COSINE applies graph partitioning algorithms to a network and builds parameter-sharing dependencies among nodes based on the partitioning result. By sharing parameters among similar nodes, COSINE injects prior knowledge about higher-order structure into the training process, which makes network embedding more efficient and effective. COSINE can be applied to any embedding-lookup method and learns high-quality embeddings with limited memory and shorter training time. We conduct experiments on multi-label classification and link prediction in which the baselines and our model have the same memory usage. Experimental results show that COSINE improves the baselines by up to 23% on classification and up to 25% on link prediction. Moreover, the training time of all representation learning methods decreases by 30% to 70% when using COSINE.
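
    A toy sketch of the parameter-sharing idea (not the released COSINE implementation): partition the graph, keep one shared parameter block per partition, and compose each node's embedding from the blocks its neighborhood touches. The partitioning algorithm, graph, and composition rule here are stand-in assumptions.

```python
# Toy illustration of parameter sharing via graph partitioning (assumed setup).
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
# Stand-in for a real graph-partitioning algorithm (e.g., METIS).
parts = list(nx.algorithms.community.greedy_modularity_communities(G))
part_of = {n: i for i, c in enumerate(parts) for n in c}

dim, n_parts = 16, len(parts)
rng = np.random.default_rng(0)
shared = rng.normal(size=(n_parts, dim))  # one parameter block per partition

def node_embedding(n):
    # Average the shared blocks of the node's own partition and its
    # neighbors' partitions, so similar nodes share (and update) parameters.
    blocks = [part_of[n]] + [part_of[m] for m in G.neighbors(n)]
    return shared[blocks].mean(axis=0)

emb = np.stack([node_embedding(n) for n in G.nodes()])
print(emb.shape)  # (34, 16): 34 nodes share only n_parts * dim parameters
```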

    Optimal Sensor Placement and Enhanced Sparsity for Classification

    The goal of compressive sensing is efficient reconstruction of data from few measurements, sometimes leading to a categorical decision. If only classification is required, reconstruction can be circumvented and the measurements needed are orders-of-magnitude sparser still. We define enhanced sparsity as the reduction in the number of measurements required for classification over reconstruction. In this work, we exploit enhanced sparsity and learn spatial sensor locations that optimally inform a categorical decision. The algorithm solves an l1-minimization problem to find the fewest entries of the full measurement vector that exactly reconstruct the discriminant vector in feature space. Once the sensor locations have been identified from the training data, subsequent test samples are classified with remarkable efficiency, achieving performance comparable to that obtained by discrimination using the full image. Sensor locations may be learned from full images, or from a random subsample of pixels. For classification between more than two categories, we introduce a coupling parameter whose value tunes the number of sensors selected, trading accuracy for economy. We demonstrate the algorithm on example datasets from image recognition using PCA for feature extraction and LDA for discrimination; however, the method can be broadly applied to non-image data and adapted to work with other methods for feature extraction and discrimination. Comment: 13 pages, 11 figures.
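
    A hedged sketch of the core step, under the abstract's PCA/LDA setup: solve min ||s||_1 subject to Psi^T s = w, where Psi holds the PCA feature basis and w is the LDA discriminant, then read sensor locations off the nonzero entries of s. The dataset and the cvxpy solver are illustrative choices, not the authors' code.

```python
# Sketch of sparse sensor selection for classification (assumed setup).
import numpy as np
import cvxpy as cp
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(n_class=2, return_X_y=True)
pca = PCA(n_components=10).fit(X)
Psi = pca.components_.T                     # (n_pixels, r) feature basis
w = LinearDiscriminantAnalysis().fit(pca.transform(X), y).coef_.ravel()

# Sparsest pixel-space vector s whose projection onto the feature basis
# reproduces the LDA discriminant: min ||s||_1  s.t.  Psi^T s = w.
s = cp.Variable(Psi.shape[0])
cp.Problem(cp.Minimize(cp.norm1(s)), [Psi.T @ s == w]).solve()

sensors = np.flatnonzero(np.abs(s.value) > 1e-6)
print(f"{len(sensors)} sensors selected out of {Psi.shape[0]} pixels")
```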

    Compressive hyperspectral imaging via adaptive sampling and dictionary learning

    In this paper, we propose a new sampling strategy for hyperspectral signals that is based on dictionary learning and singular value decomposition (SVD). Specifically, we first learn a sparsifying dictionary from training spectral data using dictionary learning. We then perform an SVD on the dictionary and use the first few left singular vectors as the rows of the measurement matrix to obtain the compressive measurements for reconstruction. The proposed method provides significant improvement over conventional compressive sensing approaches. The reconstruction performance is further improved by reconditioning the sensing matrix using matrix balancing. We also demonstrate that the combination of dictionary learning and SVD is robust by applying it to different datasets.
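
    A minimal sketch of the proposed sampling strategy (dictionary learning, then SVD of the dictionary to form measurement rows), using scikit-learn and synthetic stand-in spectra; matrix balancing and the real hyperspectral data are omitted.

```python
# Sketch: learn a dictionary, build measurements from its SVD (assumed data).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_bands, n_train = 64, 500
train = np.cumsum(rng.normal(size=(n_train, n_bands)), axis=1)  # smooth-ish spectra

# Step 1: learn a sparsifying dictionary D (atoms are rows in sklearn).
D = MiniBatchDictionaryLearning(n_components=32, random_state=0).fit(train).components_

# Step 2: leading left singular vectors of D^T become the measurement rows.
U, _, _ = np.linalg.svd(D.T, full_matrices=False)
m = 8                       # number of compressive measurements
Phi = U[:, :m].T            # (m, n_bands)

# Step 3: measure a test spectrum and reconstruct by sparse coding.
x = np.cumsum(rng.normal(size=n_bands))
y = Phi @ x                                  # compressive measurements
code = orthogonal_mp(Phi @ D.T, y, n_nonzero_coefs=4)
x_hat = D.T @ code
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```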

    Towards Understanding the Invertibility of Convolutional Neural Networks

    Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that, with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and CNNs trained for classification in practical scenarios.
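
    A tiny numerical illustration of the approximate-invertibility phenomenon for a random-weight ReLU layer (a toy demonstration, not the paper's model-based compressive sensing analysis): for Gaussian W, (2/m) W^T relu(W x) concentrates around x as the width m grows.

```python
# Toy check: a random ReLU layer is approximately invertible by its transpose.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 20000                  # input dimension, hidden width
x = rng.normal(size=n)
W = rng.normal(size=(m, n))

z = np.maximum(W @ x, 0.0)        # forward pass through a random ReLU layer
x_hat = (2.0 / m) * (W.T @ z)     # simple transpose-based reconstruction

cos = x @ x_hat / (np.linalg.norm(x) * np.linalg.norm(x_hat))
print("cosine similarity:", cos)  # approaches 1 as the width m grows
```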

    Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling

    Linear encoding of sparse vectors is widely popular, but is commonly data-independent, missing any possible extra (but a priori unknown) structure beyond sparsity. In this paper we present a new method to learn linear encoders that adapt to data, while still performing well with the widely used l1 decoder. The convex l1 decoder prevents gradient propagation as needed in standard gradient-based training. Our method is based on the insight that unrolling the convex decoder into T projected subgradient steps can address this issue. Our method can be seen as a data-driven way to learn a compressed sensing measurement matrix. We compare the empirical performance of 10 algorithms over 6 sparse datasets (3 synthetic and 3 real). Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1-3x) compared to the previous state-of-the-art methods. We illustrate an application of our method in learning label embeddings for extreme multi-label classification, and empirically show that our method is able to match or outperform the precision scores of SLEEC, which is one of the state-of-the-art embedding-based approaches. Comment: 17 pages, 7 tables, 8 figures, published in ICML 2019; part of this work was done while Shanshan was an intern at Google Research, New York.
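
    A conceptual sketch of the unrolling idea: T projected subgradient steps of the l1 decoder written as differentiable operations, so gradients flow back to the measurement matrix A. This is a simplified PyTorch rendering under stated assumptions, not the authors' released code.

```python
# Sketch: unrolled l1 decoding so the measurement matrix A is learnable.
import torch

def l1_decode_unrolled(A, y, T=20, step=0.1):
    # Projection onto {x : A x = y} uses the pseudo-inverse of A.
    A_pinv = torch.linalg.pinv(A)
    x = A_pinv @ y                                # feasible starting point
    for t in range(1, T + 1):
        x = x - (step / t) * torch.sign(x)        # subgradient of ||x||_1
        x = x - A_pinv @ (A @ x - y)              # project back onto A x = y
    return x

torch.manual_seed(0)
n, m, k = 100, 25, 5
A = torch.randn(m, n, requires_grad=True)         # learnable linear encoder

# One training step on a random k-sparse signal.
x_true = torch.zeros(n)
x_true[torch.randperm(n)[:k]] = torch.randn(k)
x_hat = l1_decode_unrolled(A, A @ x_true)
loss = torch.mean((x_hat - x_true) ** 2)
loss.backward()                                   # gradient w.r.t. A exists
print(loss.item(), A.grad.norm().item())
```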

    Simple Classification using Binary Data

    Binary, or one-bit, representations of data arise naturally in many applications, and are appealing in both hardware implementations and algorithm design. In this work, we study the problem of data classification from binary data and propose a framework with low computation and resource costs. We illustrate the utility of the proposed approach through stylized and realistic numerical experiments, and provide a theoretical analysis for a simple case. We hope that our framework and analysis will serve as a foundation for studying similar types of approaches.
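
    One simple instance of classification from binary data (illustrative; the paper's exact framework may differ): take one-bit measurements sign(Ax) with a random A and train an ordinary linear classifier on them.

```python
# Sketch: classify from one-bit random projections of the data (assumed setup).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
m = 128                                  # number of one-bit measurements
A = rng.normal(size=(m, X.shape[1]))     # random measurement matrix

B = np.sign(X @ A.T)                     # keep only the sign of each measurement
Xtr, Xte, ytr, yte = train_test_split(B, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print("accuracy from one-bit data:", clf.score(Xte, yte))
```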

    Deep AutoEncoder-based Lossy Geometry Compression for Point Clouds

    The point cloud is a fundamental 3D representation widely used in real-world applications such as autonomous driving. As a newly developed media format characterized by complexity and irregularity, point clouds create a need for compression algorithms that are more flexible than existing codecs. Recently, autoencoders (AEs) have shown their effectiveness in many visual analysis tasks as well as image compression, which inspires us to employ them in point cloud compression. In this paper, we propose a general autoencoder-based architecture for lossy geometry point cloud compression. To the best of our knowledge, it is the first autoencoder-based geometry compression codec that directly takes point clouds as input rather than voxel grids or collections of images. Compared with handcrafted codecs, this approach adapts much more quickly to previously unseen media contents and formats while achieving competitive performance. Our architecture consists of a PointNet-based encoder, a uniform quantizer, an entropy estimation block, and a nonlinear synthesis transformation module. In lossy geometry compression of point clouds, results show that the proposed method outperforms the test model for categories 1 and 3 (TMC13) published by the MPEG-3DG group at its 125th meeting, achieving an average BD-rate gain of 73.15%.
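
    A structural skeleton of the described architecture in PyTorch: a PointNet-style encoder, a uniform quantizer (with the usual additive-noise stand-in during training), and a nonlinear synthesis decoder. The entropy estimation block, loss, and training loop are omitted, and all layer sizes are assumptions.

```python
# Skeleton of an autoencoder-based point cloud geometry codec (assumed sizes).
import torch
import torch.nn as nn

class PointCloudAE(nn.Module):
    def __init__(self, latent=128, n_points=1024):
        super().__init__()
        # PointNet-style encoder: shared per-point MLP + global max pooling.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent, 1),
        )
        # Nonlinear synthesis transform mapping the code back to points.
        self.decoder = nn.Sequential(
            nn.Linear(latent, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )
        self.n_points = n_points

    def forward(self, pts):                      # pts: (B, N, 3)
        feat = self.point_mlp(pts.transpose(1, 2)).max(dim=2).values
        # Uniform quantizer: hard rounding at test time; additive uniform
        # noise stands in for it during training so gradients pass through.
        code = feat + torch.rand_like(feat) - 0.5 if self.training else torch.round(feat)
        out = self.decoder(code)
        return out.view(-1, self.n_points, 3)

model = PointCloudAE().eval()
recon = model(torch.rand(2, 1024, 3))
print(recon.shape)  # torch.Size([2, 1024, 3])
```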

    Nonlinear Model Reduction for Complex Systems using Sparse Optimal Sensor Locations from Learned Nonlinear Libraries

    We demonstrate the synthesis of sparse sampling and machine learning to characterize and model complex, nonlinear dynamical systems over a range of bifurcation parameters. First, we construct modal libraries using the classical proper orthogonal decomposition to uncover dominant low-rank coherent structures. Nonlinear libraries are also constructed in order to take advantage of the discrete empirical interpolation method and projection, which allows nonlinear terms to be approximated in a low-dimensional way. The selected sampling points are shown to be nearly optimal sensing locations for characterizing the underlying dynamics, stability, and bifurcations of complex systems. The use of empirical interpolation points and sparse representation facilitates a family of local reduced-order models for each physical regime, rather than a higher-order global model, which has the benefit of physical interpretability of energy transfer between coherent structures. In particular, the discrete interpolation points and nonlinear modal libraries are used for sparse representation to classify the dynamic bifurcation regime in the complex Ginzburg-Landau equation. It is shown that nonlinear point measurements are more effective than linear measurements when sensor noise is present. Comment: 10 pages, 6 figures.
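
    The POD and DEIM ingredients here are standard, so a textbook-style sketch is possible (not the paper's full pipeline): POD modes from the SVD of a snapshot matrix, then greedy DEIM selection of near-optimal interpolation (sensor) indices. The synthetic traveling-wave data is a stand-in for the paper's examples.

```python
# Sketch: POD modes via SVD, then DEIM greedy sensor selection (assumed data).
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection from a POD basis U of shape (n, r)."""
    n, r = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, r):
        c = np.linalg.solve(U[p, :j], U[p, j])
        res = U[:, j] - U[:, :j] @ c        # residual after interpolation
        p.append(int(np.argmax(np.abs(res))))
    return np.array(p)

# Synthetic snapshot matrix: a decaying traveling wave on a 1-D grid.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 80)
snapshots = np.stack([np.sin(x - ti) * np.exp(-0.1 * ti) for ti in t], axis=1)

U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 4                               # truncation rank from the singular values
sensors = deim_indices(U[:, :r])
print("sensor locations (grid indices):", sensors)
```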

    Robust flow field reconstruction from limited measurements via sparse representation

    In many applications it is important to estimate a fluid flow field from limited and possibly corrupt measurements. Current methods in flow estimation often use least squares regression to reconstruct the flow field, finding the minimum-energy solution that is consistent with the measured data. However, this approach may be prone to overfitting and sensitive to noise. To address these challenges, we instead seek a sparse representation of the data in a library of examples. Sparse representation has been widely used for image recognition and reconstruction, and it is well suited to structured data with limited, corrupt measurements. We explore sparse representation for flow reconstruction on a variety of fluid data sets with a wide range of complexity, including vortex shedding past a cylinder at low Reynolds number, a mixing layer, and two geophysical flows. In addition, we compare several measurement strategies and consider various types of noise and corruption over a range of intensities. We find that sparse representation yields considerably improved estimation accuracy and robustness to noise and corruption compared with least squares methods. We also introduce a sparse estimation procedure on local spatial patches for complex multiscale flows that preclude a global sparse representation. Based on these results, sparse representation is a promising framework for extracting useful information from complex flow fields with realistic measurements.
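
    A conceptual sketch of reconstruction by sparse representation in a library of examples, solving min ||s||_1 with y close to C Psi s via Lasso; the synthetic library, random point sensors, and solver choice are illustrative assumptions rather than the paper's setup.

```python
# Sketch: reconstruct a full field from few point sensors via a sparse
# combination of library examples (assumed synthetic data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, n_lib, m = 500, 60, 40            # grid points, library size, sensors

Psi = rng.normal(size=(n, n_lib))    # library of example fields (columns)
s_true = np.zeros(n_lib)             # the test field is a sparse combination
s_true[rng.choice(n_lib, 3, replace=False)] = rng.normal(size=3)
x_true = Psi @ s_true

C = np.zeros((m, n))                 # point measurements at random locations
C[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
y = C @ x_true + 0.01 * rng.normal(size=m)   # limited, noisy measurements

# Sparse coefficients from the measured library, then a full-field estimate.
s = Lasso(alpha=1e-3, max_iter=50000).fit(C @ Psi, y).coef_
x_hat = Psi @ s
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```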