
    Endogenous Sparse Recovery

    Sparsity has proven to be an essential ingredient in the development of efficient solutions to a number of problems in signal processing and machine learning. In all of these settings, sparse recovery methods are employed to recover signals that admit sparse representations in a pre-specified basis. Recently, sparse recovery methods have been employed in an entirely new way; instead of finding a sparse representation of a signal in a fixed basis, a sparse representation is formed "from within" the data. In this thesis, we study the utility of this endogenous sparse recovery procedure for learning unions of subspaces from collections of high-dimensional data. We provide new insights into the behavior of endogenous sparse recovery, develop sufficient conditions that describe when greedy methods will reveal local estimates of the subspaces in the ensemble, and introduce new methods to learn unions of overlapping subspaces from local subspace estimates.
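    For intuition, here is a minimal sketch of the self-expressive, greedy flavor of endogenous sparse recovery described above: each data point is approximated as a sparse combination of the other points via orthogonal matching pursuit, so the selected supports suggest local subspace estimates. The function name endogenous_omp, the normalization, and the fixed sparsity level k are illustrative assumptions, not the thesis's exact algorithm.

    import numpy as np

    def endogenous_omp(X, k):
        """For each column x_i of the d x n data matrix X, greedily select k
        *other* columns whose span best approximates x_i; return the n x n
        coefficient matrix whose supports give local subspace estimates."""
        n = X.shape[1]
        Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # unit-norm columns
        C = np.zeros((n, n))
        for i in range(n):
            residual = Xn[:, i].copy()
            support = []
            coeffs = np.zeros(0)
            for _ in range(k):
                corr = np.abs(Xn.T @ residual)   # correlation with every point
                corr[i] = -np.inf                # never select the point itself
                if support:
                    corr[support] = -np.inf      # or an already-selected point
                support.append(int(np.argmax(corr)))
                # re-fit coefficients on the current support by least squares
                coeffs, *_ = np.linalg.lstsq(Xn[:, support], Xn[:, i], rcond=None)
                residual = Xn[:, i] - Xn[:, support] @ coeffs
            C[support, i] = coeffs
        return C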

    Half-Hop: A graph upsampling approach for slowing down message passing

    Message passing neural networks have shown considerable success on graph-structured data. However, there are many instances where message passing leads to over-smoothing, or fails when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message passing neural networks. Our approach upsamples edges in the original graph by adding "slow nodes" at each edge that mediate communication between a source and a target node. Our method modifies only the input graph, making it plug-and-play and easy to use with existing models. To understand the benefits of slowing down message passing, we provide theoretical and empirical analyses. We report results on several supervised and self-supervised benchmarks and show improvements across the board, most notably in heterophilic conditions where adjacent nodes are more likely to have different labels. Finally, we show how our approach can be used to generate augmentations for self-supervised learning, where slow nodes are randomly introduced into different edges of the graph to generate multi-scale views with variable path lengths. Comment: Published as a conference paper at ICML 2023.
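    As a rough illustration (not the authors' reference implementation), the sketch below shows the graph-only nature of the augmentation: every directed edge (u, v) is rerouted as u -> s and s -> v through a new "slow node" s, whose feature is assumed here to be a simple interpolation of the endpoint features; the function name add_slow_nodes and the interpolation weight alpha are illustrative choices.

    import numpy as np

    def add_slow_nodes(x, edge_index, alpha=0.5):
        """x: (n, f) node features; edge_index: (2, e) array of directed edges.
        Returns augmented features of shape (n + e, f) and edges of shape (2, 2e)."""
        n = x.shape[0]
        src, dst = edge_index
        e = src.shape[0]
        slow_ids = np.arange(n, n + e)           # one new slow node per edge
        # assumed initialization: interpolate between source and target features
        slow_x = alpha * x[src] + (1.0 - alpha) * x[dst]
        new_x = np.concatenate([x, slow_x], axis=0)
        new_edges = np.concatenate(
            [np.stack([src, slow_ids]),          # u -> s
             np.stack([slow_ids, dst])],         # s -> v
            axis=1)
        return new_x, new_edges

    Because the output is just another feature matrix and edge list, any existing message passing model can consume the augmented graph without modification.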

    A Unified, Scalable Framework for Neural Population Decoding

    Our ability to use deep learning approaches to decipher neural activity would likely benefit from greater scale, in terms of both model size and datasets. However, integrating many neural recordings into one unified model is challenging, because each recording contains the activity of different neurons from different individual animals. In this paper, we introduce a training framework and architecture designed to model the population dynamics of neural activity across diverse, large-scale neural recordings. Our method first tokenizes individual spikes within the dataset to build an efficient representation of neural events that captures the fine temporal structure of neural activity. We then employ cross-attention and a PerceiverIO backbone to construct a latent tokenization of neural population activity. Using this architecture and training framework, we build a large-scale multi-session model trained on data from seven nonhuman primates, spanning over 158 recording sessions, more than 27,373 neural units, and over 100 hours of recordings. Across a number of tasks, we demonstrate that our pretrained model can be rapidly adapted to new, unseen sessions with unspecified neuron correspondence, enabling few-shot performance with minimal labels. This work presents a powerful new approach for building deep learning tools to analyze neural data and stakes out a clear path to training at scale. Comment: Accepted at NeurIPS 2023.
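    The sketch below is a hypothetical, simplified illustration of the general recipe described above: each spike becomes a token built from a learned per-unit embedding plus a time embedding, and a small fixed set of latent tokens cross-attends to the variable-length spike sequence, PerceiverIO-style, so recordings with different neuron counts map into a common latent space. The class name SpikeLatentEncoder, the embedding sizes, and the single attention layer are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class SpikeLatentEncoder(nn.Module):
        def __init__(self, n_units, dim=128, n_latents=64):
            super().__init__()
            self.unit_emb = nn.Embedding(n_units, dim)   # one embedding per recorded unit
            self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.GELU(), nn.Linear(dim, dim))
            self.latents = nn.Parameter(torch.randn(n_latents, dim) * 0.02)
            self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

        def forward(self, unit_ids, spike_times):
            # unit_ids: (batch, n_spikes) long; spike_times: (batch, n_spikes) float
            tokens = self.unit_emb(unit_ids) + self.time_mlp(spike_times.unsqueeze(-1))
            queries = self.latents.unsqueeze(0).expand(unit_ids.shape[0], -1, -1)
            latent_out, _ = self.cross_attn(queries, tokens, tokens)  # latents attend to spikes
            return latent_out  # (batch, n_latents, dim) summary of population activity

    # usage sketch: 2 trials, 1000 spikes each, from a hypothetical 500-unit recording
    enc = SpikeLatentEncoder(n_units=500)
    ids = torch.randint(0, 500, (2, 1000))
    times = torch.rand(2, 1000)
    z = enc(ids, times)   # shape: (2, 64, 128)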