    Medial Ganglionic Eminence Progenitors Transplanted into Hippocampus Integrate in a Functional and Subtype-Appropriate Manner.

    Medial ganglionic eminence (MGE) transplantation rescues disease phenotypes in various preclinical models with interneuron deficiency or dysfunction, including epilepsy. While the underlying mechanism(s) remain unclear, a simple explanation is that appropriate synaptic integration of MGE-derived interneurons elevates GABA-mediated inhibition and modifies the firing activity of excitatory neurons in the host brain. However, given the complexity of interneurons and the potential for transplant-derived interneurons to integrate into or alter the host network in unexpected ways, it remains unexplored whether the synaptic connections formed by transplant-derived interneurons faithfully mirror those of endogenous interneurons. Here, we combined optogenetics, interneuron-specific Cre driver mouse lines, and electrophysiology to study the synaptic integration of MGE progenitors. We demonstrate that MGE-derived interneurons, when transplanted into the hippocampus of neonatal mice, migrate within the host brain, differentiate into mature inhibitory interneurons, and form appropriate synaptic connections with native pyramidal neurons. Both endogenous and transplant-derived MGE-lineage interneurons preferentially formed inhibitory synaptic connections onto pyramidal neurons but not onto endogenous interneurons. These findings demonstrate that transplanted MGE progenitors functionally integrate into the postnatal hippocampal network.

    Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

    Visual language grounding is widely studied in modern neural image captioning systems, which typically adopt an encoder-decoder framework consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a recurrent neural network (RNN) for language caption generation. To study the robustness of language grounding to adversarial perturbations in machine vision and perception, we propose Show-and-Fool, a novel algorithm for crafting adversarial examples in neural image captioning. The proposed algorithm provides two evaluation approaches, which check whether neural image captioning systems can be misled into outputting randomly chosen captions or keywords. Our extensive experiments show that our algorithm can successfully craft visually similar adversarial examples with randomly targeted captions or keywords, and that these adversarial examples can be made highly transferable to other image captioning systems. Consequently, our approach yields new robustness implications for neural image captioning and novel insights into visual language grounding.
    Comment: Accepted by the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). Hongge Chen and Huan Zhang contributed equally to this work.
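    The core of a targeted attack in the Show-and-Fool style can be summarized as a regularized optimization over an image perturbation. Below is a minimal Python sketch under stated assumptions: `caption_loss_grad` is a hypothetical callable standing in for the gradient of the captioner's differentiable loss with respect to the image (the loss being low when the model emits the attacker's target caption); neither the names nor the hyperparameter values come from the paper.

```python
import numpy as np

def show_and_fool_targeted(x, caption_loss_grad, c=1.0, lr=0.005, steps=1000):
    """Sketch of a targeted-caption attack in the Show-and-Fool style.

    Minimizes  c * caption_loss(x + delta) + ||delta||_2^2  by gradient
    descent, so the perturbed image is captioned with the attacker's
    target while staying visually close to the original image x
    (pixels assumed to lie in [0, 1]).
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Gradient of the weighted caption loss plus the L2 penalty on delta.
        grad = c * caption_loss_grad(x + delta) + 2.0 * delta
        delta -= lr * grad
        # Project so the adversarial image stays a valid image.
        delta = np.clip(x + delta, 0.0, 1.0) - x
    return x + delta
```

    The paper additionally searches over the trade-off constant c to find the smallest effective distortion; it is fixed here for brevity.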

    ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

    Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern about their robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of well-trained DNNs by demonstrating the ability to generate adversarial images, barely noticeable to both humans and machines, that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack on DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, rather than leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack, and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks on the targeted DNN can be accomplished, sparing the need to train substitute models and avoiding the loss in attack transferability. Experimental results on MNIST, CIFAR10, and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack and significantly outperforms existing black-box attacks via substitute models.
    Comment: Accepted by the 10th ACM Workshop on Artificial Intelligence and Security (AISec), held with the 24th ACM Conference on Computer and Communications Security (CCS).
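    The gradient-estimation idea at the heart of ZOO is simple to state: approximate each partial derivative of the attack loss with a symmetric finite difference of the black-box scores, then run stochastic coordinate descent on the pixels. The following is a minimal numpy sketch, assuming a black-box `f` that maps a flattened image to a vector of confidence scores; the loss is a Carlini-Wagner style targeted loss on log scores as in the paper, while the step sizes and uniform coordinate sampling (the paper uses importance sampling and dimension reduction) are illustrative simplifications.

```python
import numpy as np

def zoo_attack_step(x, f, target_class, h=1e-4, lr=0.01, n_coords=128):
    """One zeroth-order coordinate-descent step of a ZOO-style attack.

    x            : flattened image as a numpy array with pixels in [0, 1]
    f            : black-box model mapping an image to confidence scores
    target_class : index of the class the attack tries to force
    h            : finite-difference step for gradient estimation
    lr           : coordinate update step size
    n_coords     : number of randomly sampled coordinates per step
    """
    def loss(img):
        scores = np.log(f(img) + 1e-12)
        # Targeted CW-style loss: drive the target log-score above the rest.
        other = np.max(np.delete(scores, target_class))
        return max(other - scores[target_class], 0.0)

    x_adv = x.copy()
    coords = np.random.choice(x.size, size=n_coords, replace=False)
    for i in coords:
        e = np.zeros_like(x_adv)
        e[i] = h
        # Symmetric-difference estimate of the partial derivative at pixel i.
        g = (loss(x_adv + e) - loss(x_adv - e)) / (2.0 * h)
        x_adv[i] = np.clip(x_adv[i] - lr * g, 0.0, 1.0)
    return x_adv
```

    Each step costs two model queries per sampled coordinate, which is why the paper's dimension reduction and hierarchical attack matter for high-resolution inputs such as ImageNet.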

    On the Adversarial Robustness of Vision Transformers

    Following the success in advancing natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision. This work provides the first comprehensive study of the robustness of vision transformers (ViTs) against adversarial perturbations. Tested in various white-box and transfer attack settings, we find that ViTs possess better adversarial robustness than convolutional neural networks (CNNs). This observation also holds for certified robustness. We summarize the following main observations contributing to the improved robustness of ViTs: 1) Features learned by ViTs contain less low-level information and are more generalizable, which contributes to superior robustness against adversarial perturbations. 2) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy, but at the cost of adversarial robustness. 3) Increasing the proportion of transformers in the model structure (when the model consists of both transformer and CNN blocks) leads to better robustness, but for a pure transformer model, simply increasing the size or adding layers does not guarantee a similar effect. 4) Pre-training on larger datasets does not significantly improve adversarial robustness, though it is critical for training ViTs. 5) Adversarial training is also applicable to ViTs for training robust models. Furthermore, feature visualization and frequency analysis are conducted for explanation. The results show that ViTs are less sensitive to high-frequency perturbations than CNNs, and that there is a high correlation between how well a model learns low-level features and its robustness against different frequency-based perturbations.
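    To make the frequency analysis concrete, one way to probe the high-frequency sensitivity claim is to perturb inputs with noise restricted to a single spatial-frequency band and compare the accuracy drop of a ViT against a CNN. The sketch below generates such band-limited noise for a single-channel image; the function name, cutoff `radius`, and budget `eps` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def band_limited_noise(shape, radius, high=True, eps=8 / 255, seed=0):
    """Noise restricted to high or low spatial frequencies for an (h, w) image.

    A minimal probe of frequency sensitivity: add the filtered noise to
    inputs and compare how accuracy degrades for a ViT versus a CNN.
    """
    h, w = shape
    rng = np.random.default_rng(seed)
    # White noise in the Fourier domain, shifted so DC sits at the center.
    spec = np.fft.fftshift(np.fft.fft2(rng.standard_normal(shape)))
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = dist > radius if high else dist <= radius  # keep one band only
    noise = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
    # Normalize to a fixed L-infinity budget so the two bands are comparable.
    return eps * noise / (np.abs(noise).max() + 1e-12)
```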

    R1 in the Shaker S4 occupies the gating charge transfer center in the resting state

    During voltage-dependent activation in Shaker channels, four arginine residues in the S4 segment (R1–R4) cross the transmembrane electric field. It has been proposed that R1–R4 movement is facilitated by a “gating charge transfer center” comprising a phenylalanine (F290) in S2 plus two acidic residues, one each in S2 and S3. According to this proposal, R1 occupies the charge transfer center in the resting state, defined as the conformation in which S4 is maximally retracted toward the cytoplasm. However, other evidence suggests that R1 is located extracellular to the charge transfer center, near I287 in S2, in the resting state. To investigate the resting position of R1, we mutated I287 to histidine (I287H), paired it with histidine mutations of key voltage sensor residues, and determined the effect of extracellular Zn2+ on channel activity. In I287H+R1H, Zn2+ generated a slow component of activation with a maximum amplitude (A_slow,max) of ∼56%, indicating that only a fraction of voltage sensors can bind Zn2+ at a holding potential of −80 mV. A_slow,max decreased after applying either depolarizing or hyperpolarizing prepulses from −80 mV. The decline of A_slow,max after negative prepulses indicates that R1 moves inward to abolish ion binding, going beyond the point where reorientation of the I287H and R1H side chains would reestablish a binding site. These data support the proposal that R1 occupies the charge transfer center upon hyperpolarization. Consistent with this, pairing I287H with A359H in the S3–S4 loop generated a Zn2+-binding site. At saturating concentrations, A_slow,max reached 100%, indicating that Zn2+ traps the I287H+A359H voltage sensor in an absorbing conformation. Transferring I287H+A359H into a mutant background that stabilizes the resting state significantly enhanced Zn2+ binding at −80 mV. Our results strongly support the conclusion that R1 occupies the gating charge transfer center in the resting conformation.