
    TcellSubC: An Atlas of the Subcellular Proteome of Human T Cells

    We have curated an in-depth subcellular proteomic map of primary human CD4+ T cells, divided into cytosolic, nuclear, and membrane fractions generated by an optimized fractionation and HiRIEF-LC-MS/MS workflow for limited amounts of primary cells. The subcellular proteome of T cells was mapped under steady-state conditions as well as after 15 min and 1 h of T cell receptor (TCR) stimulation. We quantified the subcellular distribution of 6,572 proteins and identified a subset of 237 potentially translocating proteins, including both well-known examples and novel ones. Microscopy confirmed the localization of selected proteins, covering both previously known and previously unreported localizations. We further provide the data in an easy-to-use web platform to facilitate reuse, as the data can be relevant for basic research as well as for clinical exploitation of T cells as therapeutic targets.

    Spike-timing-dependent plasticity: common themes and divergent vistas

    Recent experimental observations of spike-timing-dependent synaptic plasticity (STDP) have revitalized the study of synaptic learning rules. The most surprising aspect of these experiments lies in the observation that synapses activated shortly after the occurrence of a postsynaptic spike are weakened. Thus, synaptic plasticity is sensitive to the temporal ordering of pre- and postsynaptic activation. This temporal asymmetry has been suggested to underlie a range of learning tasks. In the first part of this review we highlight some of the common themes from a range of findings in the framework of predictive coding. As an example of how this principle can be used in a learning task, we discuss a recent model of cortical map formation. In the second part of the review, we point out some of the differences between STDP models and their functional consequences. We discuss how differences in the weight dependence, the time constants, and the nonlinear properties of learning rules give rise to distinct computational functions. In light of the computational issues raised, we review current experimental findings and suggest further experiments to resolve some of the controversies.
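
    To make the temporally asymmetric rule concrete, here is a minimal sketch of the canonical exponential STDP window; the amplitudes and time constants are illustrative placeholders, not values taken from the review.

        import numpy as np

        def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012,
                               tau_plus=20.0, tau_minus=20.0):
            # dt = t_post - t_pre in ms. Pre-before-post (dt > 0) potentiates,
            # post-before-pre (dt < 0) depresses: the temporal asymmetry
            # described above. Parameter values are illustrative only.
            return np.where(dt > 0,
                            a_plus * np.exp(-dt / tau_plus),
                            -a_minus * np.exp(dt / tau_minus))

        # A pre spike 5 ms before the post spike strengthens the synapse;
        # 5 ms after, it weakens it.
        print(stdp_weight_change(np.array([5.0, -5.0])))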

    Evolving Neural Networks through a Reverse Encoding Tree

    NeuroEvolution is one of the most competitive evolutionary learning frameworks for designing novel neural networks for use in specific tasks, such as logic circuit design and digital gaming. However, benchmark methods such as NeuroEvolution of Augmenting Topologies (NEAT) remain challenging to apply because of their computational cost and search-time inefficiency. This paper advances a method that incorporates a type of topological edge coding, named Reverse Encoding Tree (RET), for evolving scalable neural networks efficiently. Using RET, two approaches -- NEAT with binary-search encoding (Bi-NEAT) and NEAT with golden-section-search encoding (GS-NEAT) -- have been designed to solve problems in benchmark continuous learning environments such as logic gates, CartPole, and Lunar Lander, and tested against classical NEAT and FS-NEAT as baselines. Additionally, we conduct a robustness test to evaluate the resilience of the proposed NEAT algorithms. The results show that the two proposed strategies deliver improved performance, characterized by (1) a higher accumulated reward within a finite number of time steps; (2) fewer episodes needed to solve problems in the targeted environments; and (3) adaptive robustness under noisy perturbations, outperforming the baselines in all tested cases. Our analysis also suggests that RET opens up potential future research directions in dynamic environments. Code is available at https://github.com/HaolingZHANG/ReverseEncodingTree.
    Comment: Accepted to IEEE Congress on Evolutionary Computation (IEEE CEC) 2020. Lecture presentation.
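
    The golden-section step that gives GS-NEAT its name is an ordinary bracketing search; the sketch below shows that scalar primitive on its own, under the assumption that it is easier to follow here than embedded in the RET encoding, where the paper actually applies it.

        import math

        def golden_section_search(f, lo, hi, tol=1e-6):
            # Minimize a unimodal f on [lo, hi] by shrinking the bracket
            # by the golden ratio each iteration.
            inv_phi = (math.sqrt(5.0) - 1.0) / 2.0  # ~0.618
            c = hi - inv_phi * (hi - lo)
            d = lo + inv_phi * (hi - lo)
            while hi - lo > tol:
                if f(c) < f(d):
                    hi, d = d, c
                    c = hi - inv_phi * (hi - lo)
                else:
                    lo, c = c, d
                    d = lo + inv_phi * (hi - lo)
            return (lo + hi) / 2.0

        # Example: locate the minimum of (x - 2)^2 on [0, 5].
        print(golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0))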

    A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model

    In this work, we explore Parameter-Efficient-Learning (PEL) techniques to repurpose a General-Purpose-Speech (GSM) model for Arabic dialect identification (ADI). Specifically, we investigate different setups for incorporating trainable features into a multi-layer encoder-decoder GSM formulation with frozen pre-trained weights. Our architecture includes residual adapters and model reprogramming (input prompting). We design a token-level label mapping to condition the GSM for ADI, which is challenging due to the high variation in vocabulary and pronunciation among the numerous regional dialects. We achieve new state-of-the-art accuracy on the ADI-17 dataset with vanilla fine-tuning, and we further reduce the training budget with the PEL method, which comes within 1.86% of the fine-tuning accuracy while using only 2.5% extra trainable network parameters. Our study demonstrates how to identify Arabic dialects using a small dataset and limited computation with open-source code and pre-trained models.
    Comment: Accepted to Interspeech. Code is available at https://github.com/Srijith-rkr/KAUST-Whisper-Adapter under the MIT license.
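
    As a rough illustration of the residual-adapter idea, the following sketch trains only a small bottleneck on top of frozen features; the hidden size and bottleneck width are placeholders, not the paper's configuration.

        import torch
        import torch.nn as nn

        class ResidualAdapter(nn.Module):
            # A bottleneck adapter inserted after a frozen encoder layer:
            # only the small down/up projections are trained, while the
            # pre-trained weights stay untouched. Sizes are assumptions.
            def __init__(self, d_model=768, bottleneck=64):
                super().__init__()
                self.down = nn.Linear(d_model, bottleneck)
                self.act = nn.GELU()
                self.up = nn.Linear(bottleneck, d_model)

            def forward(self, h):
                # The skip connection preserves the frozen representation.
                return h + self.up(self.act(self.down(h)))

        # Adapt hidden states of shape (batch, time, d_model).
        adapter = ResidualAdapter()
        out = adapter(torch.randn(2, 100, 768))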

    Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding

    Performing driving behaviors based on causal reasoning is essential to ensure driving safety. In this work, we investigated how state-of-the-art 3D Convolutional Neural Networks (CNNs) perform at classifying driving behaviors based on causal reasoning. We proposed a perturbation-based visual explanation method to inspect the models' performance visually. By examining the video attention saliency, we found that existing models could not precisely capture the causes (e.g., a traffic light) of specific actions (e.g., stopping). Therefore, we proposed a Temporal Reasoning Block (TRB) and introduced it into the models. With the TRB models, we achieved an accuracy of 86.3%, outperforming the state-of-the-art 3D CNNs from previous works. The attention saliency also demonstrated that TRB helped the models focus on the causes more precisely. With both numerical and visual evaluations, we concluded that our proposed TRB models provide accurate driving behavior prediction by learning the causal reasoning behind the behaviors.
    Comment: Submitted to IEEE ICASSP 2020; PyTorch code will be released soon.
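
    A minimal sketch of temporal self-attention in the spirit of the TRB, assuming per-frame CNN features of size 512; the paper's actual block design may differ. The returned attention weights over frames are what one would inspect for interpretability.

        import torch
        import torch.nn as nn

        class TemporalSelfAttention(nn.Module):
            # Frames of a clip attend to each other, so the attention map
            # indicates which time steps (e.g., a traffic light changing)
            # drive the prediction. Feature size and head count are
            # assumptions for illustration.
            def __init__(self, d_feat=512, n_heads=4):
                super().__init__()
                self.attn = nn.MultiheadAttention(d_feat, n_heads,
                                                  batch_first=True)

            def forward(self, x):
                # x: (batch, frames, d_feat)
                out, weights = self.attn(x, x, x, need_weights=True)
                return out, weights

        trb = TemporalSelfAttention()
        feats, attn = trb(torch.randn(2, 16, 512))  # a 16-frame clip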

    IHCV: Discovery of Hidden Time-Dependent Control Variables in Non-Linear Dynamical Systems

    Discovering non-linear dynamical models from data is at the core of science. Recent progress hinges upon sparse regression of observables using extensive libraries of candidate functions. However, it remains challenging to model hidden, non-observable control variables that govern switching between different dynamical regimes. Here we develop a data-efficient, derivative-free method, IHCV, for the Identification of Hidden Control Variables. First, the performance and robustness of IHCV against noise are evaluated by benchmarking it on well-known bifurcation models (saddle-node, transcritical, pitchfork, Hopf). Next, we demonstrate that IHCV discovers hidden driver variables in the Lorenz, van der Pol, Hodgkin-Huxley, and FitzHugh-Nagumo models. Finally, IHCV generalizes to the case when only partial observations are given, as demonstrated using the toggle-switch model, the genetic repressilator oscillator, and a Waddington landscape model. Our proof of principle illustrates that utilizing normal forms could facilitate the data-efficient and scalable discovery of hidden variables controlling transitions between different dynamical regimes and non-linear models.
    Comment: 12 pages, 2 figures.
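
    As a toy version of the benchmark setting, the sketch below simulates the saddle-node normal form with a slowly drifting control parameter; recovering mu(t) from the observed trajectory alone is the kind of task IHCV addresses. The drift rate and clipping bounds are arbitrary choices for illustration.

        import numpy as np

        def simulate_saddle_node(mu_of_t, x0=1.0, t_max=8.0, dt=1e-3):
            # Euler simulation of the saddle-node normal form
            #   dx/dt = mu(t) - x**2,
            # where mu(t) plays the hidden, time-dependent control variable.
            ts = np.arange(0.0, t_max, dt)
            xs = np.empty_like(ts)
            x = x0
            for i, t in enumerate(ts):
                xs[i] = x
                # Clip to keep the explicit Euler step finite after the
                # fixed points collide and vanish (mu < 0).
                x = float(np.clip(x + (mu_of_t(t) - x * x) * dt,
                                  -10.0, 10.0))
            return ts, xs

        # The hidden control drifts from +1 to -0.6; the dynamical regime
        # switches when mu crosses zero near t = 5.
        ts, xs = simulate_saddle_node(lambda t: 1.0 - 0.2 * t)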