
    Half-Hop: A graph upsampling approach for slowing down message passing

    Full text link
    Message passing neural networks have shown considerable success on graph-structured data. However, there are many instances where message passing can lead to over-smoothing, or fail when neighboring nodes belong to different classes. In this work, we introduce a simple yet general framework for improving learning in message passing neural networks. Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge that can mediate communication between a source and a target node. Our method only modifies the input graph, making it plug-and-play and easy to use with existing models. To understand the benefits of slowing down message passing, we provide theoretical and empirical analyses. We report results on several supervised and self-supervised benchmarks, and show improvements across the board, notably in heterophilic conditions where adjacent nodes are more likely to have different labels. Finally, we show how our approach can be used to generate augmentations for self-supervised learning, where slow nodes are randomly introduced into different edges in the graph to generate multi-scale views with variable path lengths. Comment: Published as a conference paper at ICML 2023.
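    As a concrete illustration of the edge-upsampling idea, below is a minimal NumPy sketch that inserts a "slow node" on each edge with some probability and rewires that edge through it. The function name, the probability argument, and the choice to initialize slow-node features as the mean of the endpoint features are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def half_hop(x, edge_index, p=1.0, rng=None):
    """Insert a slow node on each directed edge with probability p.

    x:          (N, F) node features
    edge_index: (2, E) array of [source; target] indices
    Returns augmented node features and the rewired edge list.
    """
    rng = np.random.default_rng() if rng is None else rng
    src, dst = edge_index
    mask = rng.random(src.shape[0]) < p          # edges that receive a slow node
    keep_src, keep_dst = src[~mask], dst[~mask]  # edges left untouched

    n = x.shape[0]
    k = int(mask.sum())
    slow_ids = np.arange(n, n + k)               # indices of the new slow nodes
    # Illustrative choice: initialize each slow node as the mean of its endpoints.
    x_slow = 0.5 * (x[src[mask]] + x[dst[mask]])

    # Rewire: source -> slow node -> target, so the slow node mediates the hop.
    new_src = np.concatenate([keep_src, src[mask], slow_ids])
    new_dst = np.concatenate([keep_dst, slow_ids, dst[mask]])
    return np.vstack([x, x_slow]), np.stack([new_src, new_dst])
```

    Because the augmentation only edits the graph, any existing message passing model can consume the returned features and edge list unchanged.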

    A Unified, Scalable Framework for Neural Population Decoding

    Full text link
    Our ability to use deep learning approaches to decipher neural activity would likely benefit from greater scale, in terms of both model size and datasets. However, the integration of many neural recordings into one unified model is challenging, as each recording contains the activity of different neurons from different individual animals. In this paper, we introduce a training framework and architecture designed to model the population dynamics of neural activity across diverse, large-scale neural recordings. Our method first tokenizes individual spikes within the dataset to build an efficient representation of neural events that captures the fine temporal structure of neural activity. We then employ cross-attention and a PerceiverIO backbone to further construct a latent tokenization of neural population activities. Using this architecture and training framework, we construct a large-scale multi-session model trained on datasets from seven nonhuman primates, spanning more than 158 recording sessions, over 27,373 neural units, and over 100 hours of recordings. In a number of different tasks, we demonstrate that our pretrained model can be rapidly adapted to new, unseen sessions with unspecified neuron correspondence, enabling few-shot performance with minimal labels. This work presents a powerful new approach for building deep learning tools to analyze neural data and stakes out a clear path to training at scale. Comment: Accepted at NeurIPS 2023.
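    A minimal PyTorch sketch of the two ingredients named above: one token per spike (a unit embedding plus a time embedding), and a cross-attention step that compresses a variable-length spike sequence into a fixed set of learned latent tokens, in the spirit of a PerceiverIO encoder. All module names and sizes here are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class SpikeLatentEncoder(nn.Module):
    def __init__(self, n_units, d_model=128, n_latents=64, n_heads=4):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)   # which neuron fired
        self.time_proj = nn.Linear(1, d_model)           # when it fired
        self.latents = nn.Parameter(torch.randn(n_latents, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)

    def forward(self, unit_ids, spike_times):
        # unit_ids: (B, S) long tensor, spike_times: (B, S) float seconds.
        # One token per spike: identity embedding plus projected spike time.
        tokens = self.unit_emb(unit_ids) + self.time_proj(spike_times.unsqueeze(-1))
        q = self.latents.expand(unit_ids.shape[0], -1, -1)  # (B, L, D) queries
        latents, _ = self.cross_attn(q, tokens, tokens)     # latents attend to spikes
        return latents                                       # fixed-size population summary

enc = SpikeLatentEncoder(n_units=200)
out = enc(torch.randint(0, 200, (2, 500)), torch.rand(2, 500))
print(out.shape)  # torch.Size([2, 64, 128])
```

    The fixed latent set is what makes recordings with different neuron counts comparable: any number of spike tokens is squeezed into the same latent shape.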

    Large-Scale Representation Learning on Graphs via Bootstrapping

    Full text link
    Self-supervised learning provides a promising path towards eliminating the need for costly label information in representation learning on graphs. However, to achieve state-of-the-art performance, methods often need large numbers of negative examples and rely on complex augmentations. This can be prohibitively expensive, especially for large graphs. To address these challenges, we introduce Bootstrapped Graph Latents (BGRL), a graph representation learning method that learns by predicting alternative augmentations of the input. BGRL uses only simple augmentations, alleviates the need for contrasting with negative examples, and is thus scalable by design. BGRL outperforms or matches prior methods on several established benchmarks, while achieving a 2-10x reduction in memory costs. Furthermore, we show that BGRL can be scaled up to extremely large graphs with hundreds of millions of nodes in the semi-supervised regime, achieving state-of-the-art performance and improving over supervised baselines where representations are shaped only through label information. In particular, our solution centered on BGRL constituted one of the winning entries to the Open Graph Benchmark - Large Scale Challenge at KDD Cup 2021, on a graph orders of magnitude larger than all previously available benchmarks, thus demonstrating the scalability and effectiveness of our approach.
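    The bootstrapping objective can be sketched in a few lines: an online encoder plus predictor tries to match a slowly moving target encoder's embedding of an alternative augmentation, with no negative examples. The sketch below assumes generic encoder/predictor modules and two pre-computed graph views; the EMA update and cosine loss follow the standard bootstrapped (BYOL-style) recipe rather than BGRL's exact code.

```python
import copy
import torch
import torch.nn.functional as F

def bgrl_step(online, predictor, target, view1, view2, tau=0.99):
    # Online branch embeds one view; the predictor tries to match the
    # target branch's embedding of the other view (no negatives needed).
    h1 = predictor(online(view1))
    with torch.no_grad():
        h2 = target(view2)
    loss = 2 - 2 * F.cosine_similarity(h1, h2.detach(), dim=-1).mean()

    # The target network trails the online network via an exponential
    # moving average instead of receiving gradients.
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(tau).add_((1 - tau) * p_o)
    return loss

# The target starts as a frozen copy of the online encoder, e.g.:
# target = copy.deepcopy(online)
# for p in target.parameters(): p.requires_grad_(False)
```

    Dropping negatives is what gives the memory savings: the loss touches only the two embedded views, with no pairwise similarity matrix over the batch.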

    Learning Behavior Representations Through Multi-Timescale Bootstrapping

    Full text link
    Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales. While some success has been found in building representations of behavior under constrained or simplified task-based conditions, many of these models cannot be applied to free and naturalistic settings because they assume a single scale of temporal dynamics. In this work, we introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale representation learning model for behavior: we combine a pooling module that aggregates features extracted by encoders with different temporal receptive fields, and design a set of latent objectives to bootstrap the representations in each respective space to encourage disentanglement across different timescales. We first apply our method to a dataset of quadrupeds navigating different terrain types, and show that our model captures the temporal complexity of behavior. We then apply our method to the MABe 2022 Multi-Agent Behavior Challenge, where our model ranks 3rd overall and 1st on two subtasks, demonstrating the importance of incorporating multiple timescales when analyzing behavior.
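    To make the multi-timescale design concrete, here is a minimal PyTorch sketch of two temporal convolution encoders with short and long receptive fields, whose features are pooled into a single representation. Kernel sizes, the dilation, and concatenation as the pooling step are illustrative assumptions, not the exact BAMS configuration.

```python
import torch
import torch.nn as nn

class MultiTimescaleEncoder(nn.Module):
    def __init__(self, in_dim, d=32):
        super().__init__()
        # Short receptive field: fast, fine-grained dynamics.
        self.fast = nn.Conv1d(in_dim, d, kernel_size=3, padding=1)
        # Long receptive field via dilation: slow, drifting dynamics
        # (effective kernel of 33 time steps).
        self.slow = nn.Conv1d(in_dim, d, kernel_size=3, padding=16, dilation=16)

    def forward(self, x):                          # x: (B, C, T) behavior features
        z_fast = self.fast(x)                      # (B, d, T)
        z_slow = self.slow(x)                      # (B, d, T)
        return torch.cat([z_fast, z_slow], dim=1)  # pooled multi-scale embedding

enc = MultiTimescaleEncoder(in_dim=8)
print(enc(torch.randn(4, 8, 256)).shape)  # torch.Size([4, 64, 256])
```

    Keeping the fast and slow feature spaces separate until the final pooling is what lets per-space bootstrapping objectives push each branch toward its own timescale.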

    Transcriptomic cell type structures in vivo neuronal activity across multiple timescales

    No full text
    Summary: Cell type is hypothesized to be a key determinant of a neuron's role within a circuit. Here, we examine whether a neuron's transcriptomic type influences the timing of its activity. We develop a deep-learning architecture that learns features of interevent intervals across timescales (ms to >30 min). We show that transcriptomic cell-class information is embedded in the timing of single-neuron activity in the intact brain of behaving animals (calcium imaging and extracellular electrophysiology), as well as in a bio-realistic model of the visual cortex. Further, a subset of excitatory cell types is distinguishable, and these types can be classified with higher accuracy when cortical layer and projection class are taken into account. Finally, we show that computational fingerprints of cell types may be universalizable across structured stimuli and naturalistic movies. Our results indicate that transcriptomic class and type may be imprinted in the timing of single-neuron activity across diverse stimuli.
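    As a rough illustration of the kind of input described above, the sketch below computes interevent intervals from a spike train and summarizes them in a log-spaced histogram spanning roughly 1 ms to beyond 30 min. The binning and normalization are illustrative assumptions; the paper learns such multi-timescale features with a deep network rather than hand-crafting them.

```python
import numpy as np

def isi_log_histogram(spike_times_s, n_bins=64):
    """spike_times_s: sorted 1-D array of event times in seconds."""
    isis = np.diff(spike_times_s)                  # interevent intervals
    # Log-spaced bins from 1 ms to 40 min, covering the ms-to->30-min range.
    edges = np.logspace(np.log10(1e-3), np.log10(2400), n_bins + 1)
    hist, _ = np.histogram(isis, bins=edges)
    return hist / max(hist.sum(), 1)               # normalized interval distribution

rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(0.25, size=10_000))  # toy Poisson-like train
print(isi_log_histogram(spikes)[:5])
```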

    The brain: a new frontier for intensive care medicine

    No full text
    The annual seminar of the SRLF Translational Research Commission took place in Paris on December 3, 2019. This seminar is a privileged moment of exchange between clinicians and scientists around the research themes specific to intensive care medicine. The sixth edition focused on the challenges and promises inherent to translational research on acute brain injury. Illustrating the latest advances in this field, researchers presented and discussed work based on complementary approaches, ranging from the study of isolated nerve cells to that of complex brain networks, which sit at the crossroads of numerous systemic modulations. A substantial portion of the presentations was dedicated to new developments in the study of coma and acquired disorders of consciousness. Promising research directions concerning the neurological conditions managed in intensive care, such as delirium, traumatic brain injury, and metabolic or autoimmune encephalopathies, were also discussed. Finally, many speakers highlighted the promises and limitations of the new technologies currently available for studying the critically ill human brain in vivo.

    Energetic dysfunction in sepsis: a narrative review

    No full text
    Background: Growing evidence associates organ dysfunction(s) with impaired metabolism in sepsis. Recent research has increased our understanding of the role of substrate utilization and mitochondrial dysfunction in the pathophysiology of sepsis-related organ dysfunction. The purpose of this review is to present this evidence as a coherent whole and to highlight future research directions. Main text: Sepsis is characterized by systemic and organ-specific changes in metabolism. Alterations of oxygen consumption, increased levels of circulating substrates, impaired glucose and lipid oxidation, and mitochondrial dysfunction are all associated with organ dysfunction and poor outcomes in both animal models and patients. The pathophysiological relevance of bioenergetics and metabolism is also described for specific examples: sepsis-related immunodeficiency, cerebral dysfunction, cardiomyopathy, acute kidney injury, and diaphragmatic failure. Conclusions: Recent insights into substrate utilization and mitochondrial dysfunction may pave the way for new diagnostic and therapeutic approaches. These findings could help physicians identify distinct subgroups of sepsis and develop personalized treatment strategies. Their use as bioenergetic targets for identifying metabolism- and mitochondria-targeted treatments needs to be evaluated in future studies.