107 research outputs found

    Synaptic actions of amyotrophic-lateral-sclerosis-associated G85R-SOD1 in the squid giant synapse

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published as: Song, Y. Synaptic actions of amyotrophic-lateral-sclerosis-associated G85R-SOD1 in the squid giant synapse. eNeuro (2020): ENEURO.0369-19.2020, doi: 10.1523/ENEURO.0369-19.2020.

    Altered synaptic function is thought to play a role in many neurodegenerative diseases, but little is known about the underlying mechanisms of synaptic dysfunction. The squid giant synapse (SGS) is a classical model for studying synaptic electrophysiology and ultrastructure, as well as molecular mechanisms of neurotransmission. Here, we conduct a multidisciplinary study of the synaptic actions of misfolded human G85R-SOD1, which causes familial amyotrophic lateral sclerosis (fALS). G85R-SOD1, but not WT-SOD1, inhibited synaptic transmission, altered presynaptic ultrastructure, and reduced both the size of the readily releasable pool (RRP) of synaptic vesicles and vesicle mobility from the reserve pool (RP) to the RRP. Unexpectedly, intermittent high-frequency stimulation (iHFS) blocked the inhibitory effects of G85R-SOD1 on synaptic transmission, suggesting that aberrant Ca2+ signaling may underlie G85R-SOD1 toxicity. Ratiometric Ca2+ imaging showed a significant G85R-SOD1-induced increase in presynaptic Ca2+ that preceded synaptic dysfunction. Chelating Ca2+ with EGTA prevented synaptic inhibition by G85R-SOD1, confirming the role of aberrant Ca2+ in mediating G85R-SOD1 toxicity. These results extend earlier findings in mammalian motor neurons and advance our understanding by suggesting molecular mechanisms and therapeutic targets for synaptic dysfunction in ALS, as well as providing a unique model for further studies.

    Funding: Grass Foundation, HHMI, MGH Jack Satter Foundation, Harvard University ALS and Alzheimer's Endowed Research Fund, Harvard Brain Science Initiative.
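    The ratiometric Ca2+ imaging mentioned in the abstract is conventionally converted to a free-Ca2+ concentration with the standard Grynkiewicz equation. A minimal sketch follows; the calibration constants (Kd, Rmin, Rmax, Sf2/Sb2) are illustrative fura-2-style placeholders, not values from this study:

    ```python
    def ca_from_ratio(R, Kd=224.0, Rmin=0.2, Rmax=8.0, Sf2_over_Sb2=5.0):
        """Grynkiewicz equation for ratiometric Ca2+ indicators:

            [Ca2+] = Kd * (R - Rmin) / (Rmax - R) * (Sf2 / Sb2)

        R is the measured fluorescence ratio (e.g. F340/F380 for fura-2).
        Default calibration constants are illustrative only; Kd is in nM,
        so the result is in nM.
        """
        return Kd * (R - Rmin) / (Rmax - R) * Sf2_over_Sb2

    # A rise in the measured ratio maps to a rise in presynaptic [Ca2+]:
    baseline = ca_from_ratio(0.5)   # resting ratio (hypothetical)
    elevated = ca_from_ratio(1.0)   # elevated ratio (hypothetical)
    print(f"baseline ~ {baseline:.1f} nM, elevated ~ {elevated:.1f} nM")
    ```

    Note the conversion is nonlinear: as R approaches Rmax (indicator saturation), small ratio changes imply large concentration changes, which is why calibration at both extremes matters.
    
    
    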

    From Deterministic to Generative: Multi-Modal Stochastic RNNs for Video Captioning

    Video captioning is, in essence, a complex natural process affected by various uncertainties stemming from video content, subjective judgment, etc. In this paper, we build on recent progress in encoder-decoder frameworks for video captioning and address what we find to be a critical deficiency of existing methods: most decoders propagate deterministic hidden states. Such complex uncertainty cannot be modeled efficiently by deterministic models. We therefore propose a generative approach, referred to as the multi-modal stochastic RNN network (MS-RNN), which models the uncertainty observed in the data using latent stochastic variables. As a result, MS-RNN can improve captioning performance and generate multiple sentences describing a video under different random factors. Specifically, a multi-modal LSTM (M-LSTM) is first proposed to interact with both visual and textual features to capture a high-level representation. Then, a backward stochastic LSTM (S-LSTM) is proposed to support uncertainty propagation by introducing latent variables. Experimental results on the challenging MSVD and MSR-VTT datasets show that the proposed MS-RNN approach outperforms state-of-the-art video captioning methods.
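    The core idea of propagating uncertainty through a recurrent state via latent stochastic variables can be sketched with a toy, scalar recurrent step using the reparameterization trick (z = mu + sigma * eps). This is a hypothetical simplification, not the paper's M-LSTM/S-LSTM architecture: gates, vectors, and the backward pass are all omitted.

    ```python
    import math
    import random

    def stochastic_rnn_step(h_prev, x, params, rng):
        """One toy stochastic recurrent step.

        A latent z is sampled from a state-dependent Gaussian via the
        reparameterization trick, then fed into the recurrence, so
        randomness flows into the hidden state. Scalar weights in
        `params` are illustrative placeholders.
        """
        # State-dependent Gaussian over the latent variable.
        mu = math.tanh(params["w_mu"] * h_prev + params["b_mu"])
        sigma = math.exp(params["w_s"] * h_prev + params["b_s"])

        eps = rng.gauss(0.0, 1.0)
        z = mu + sigma * eps  # reparameterized sample

        # Deterministic recurrence now also consumes the stochastic z.
        h = math.tanh(params["w_h"] * h_prev
                      + params["w_x"] * x
                      + params["w_z"] * z)
        return h, z

    params = {"w_mu": 0.5, "b_mu": 0.0, "w_s": 0.1, "b_s": -1.0,
              "w_h": 0.8, "w_x": 0.6, "w_z": 0.4}
    frames = [0.2, -0.1, 0.4]  # stand-in for per-frame visual features

    # Re-running the same input with different random draws yields
    # different final states -- the property that lets a stochastic
    # decoder emit multiple candidate captions for one video.
    for seed in (0, 1):
        rng = random.Random(seed)
        h = 0.0
        for x in frames:
            h, _ = stochastic_rnn_step(h, x, params, rng)
        print(f"seed {seed}: final h = {h:.3f}")
    ```

    At training time such models are typically fit with a variational (ELBO-style) objective so that the latent distribution stays informative; the sketch above only shows the forward sampling path.
    
    
    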