3 research outputs found

    Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding Spiking Neural Network Facilitate Predictions of Neural Computation

    Despite their better bio-plausibility, goal-driven spiking neural networks (SNNs) have not achieved practical performance in classifying biological spike trains, and have shown few bio-functional similarities compared with traditional artificial neural networks. In this study, we proposed the motorSRNN, a recurrent SNN topologically inspired by the neural motor circuit of primates. By employing the motorSRNN to decode spike trains from the primary motor cortex of monkeys, we achieved a good balance between classification accuracy and energy consumption. The motorSRNN engaged with its input by capturing and cultivating more cosine tuning, an essential property of neurons in the motor cortex, and this tuning remained stable during training. Such training-induced cultivation and persistence of cosine tuning were also observed in our monkeys. Moreover, the motorSRNN reproduced additional bio-functional similarities at the single-neuron, population, and circuit levels, demonstrating its biological authenticity. Furthermore, ablation studies on the motorSRNN suggested that long-term stable feedback synapses contribute to the training-induced cultivation observed in the motor cortex. Beyond these novel findings and predictions, we offer a new framework for building authentic models of neural computation.
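    For context, the cosine tuning the abstract refers to is the classic motor-cortical property that a neuron's firing rate varies as r(θ) = b0 + m·cos(θ − θ_pref) with movement direction θ. Below is a minimal sketch, not the paper's own analysis, of fitting that standard model to synthetic data; all data and parameter values are illustrative assumptions.

```python
# Fit the standard cosine-tuning model r(theta) = b0 + m*cos(theta - theta_pref)
# to synthetic firing rates. Only the model form comes from the motor-cortex
# literature; the data here are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulate one neuron: preferred direction 60 deg, baseline 10 Hz, depth 8 Hz.
theta = rng.uniform(0, 2 * np.pi, 200)            # movement directions (rad)
rates = 10 + 8 * np.cos(theta - np.deg2rad(60))   # cosine-tuned mean rate
rates += rng.normal(0, 2, theta.size)             # trial-to-trial noise

# Linear form of the model: r = b0 + b1*cos(theta) + b2*sin(theta),
# solved by ordinary least squares.
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b0, b1, b2 = np.linalg.lstsq(X, rates, rcond=None)[0]

pref_dir = np.degrees(np.arctan2(b2, b1)) % 360   # recovered preferred direction
depth = np.hypot(b1, b2)                          # modulation depth (Hz)
print(f"preferred direction ~ {pref_dir:.1f} deg, depth ~ {depth:.1f} Hz")
```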

    Copyright-Certified Distillation Dataset: Distilling One Million Coins into One Bitcoin with Your Private Key

    The rapid development of dataset distillation for neural networks in recent years has opened new directions in areas such as continual learning, neural architecture search, and privacy preservation. Dataset distillation is a highly effective method for compressing a large training dataset into a small synthetic one, such that the test accuracy of models trained on the synthetic dataset matches that of models trained on the full dataset. Dataset distillation is therefore commercially valuable in its own right, reducing not only storage costs but also the training costs of deep learning. However, copyright protection for dataset distillation has not yet been proposed, so we present the first method to protect this intellectual property by embedding watermarks during the dataset distillation process. Our approach not only promotes the adoption of dataset distillation, but also lets the owner authenticate a distilled dataset through models trained on it.
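    The abstract does not describe the embedding scheme itself, so the sketch below shows one common way such ownership verification can work: a trigger pattern derived from a private key is stamped into a few distilled samples, and any model trained on them inherits a key-detectable response. Every name, parameter, and the choice of a backdoor-style watermark are illustrative assumptions, not the paper's method.

```python
# Hypothetical key-based watermark on a distilled dataset: a secret-key-seeded
# trigger is embedded in a few samples, so models trained on the data respond
# to the trigger, which only the key holder can reconstruct.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
PRIVATE_KEY = 1234567  # hypothetical owner's secret key

# Stand-in for a distilled dataset: 100 synthetic 8x8 "images", 2 classes.
X_distilled = rng.normal(0, 1, (100, 64))
y_distilled = rng.integers(0, 2, 100)

# Derive a sparse trigger pattern deterministically from the private key.
key_rng = np.random.default_rng(PRIVATE_KEY)
trigger_idx = key_rng.choice(64, size=6, replace=False)

def stamp(x):
    """Embed the key-derived trigger into a batch of samples."""
    x = x.copy()
    x[:, trigger_idx] = 3.0  # out-of-distribution pixel values
    return x

# Watermark: append triggered copies mapped to a fixed target label (1).
X_wm = np.vstack([X_distilled, stamp(X_distilled[:10])])
y_wm = np.concatenate([y_distilled, np.ones(10, dtype=int)])

# Anyone training on the watermarked distilled set inherits the backdoor.
model = LogisticRegression(max_iter=1000).fit(X_wm, y_wm)

# Ownership check: the key holder stamps fresh inputs and looks for a
# near-unanimous response; a clean model would respond near chance.
probe = stamp(rng.normal(0, 1, (50, 64)))
print("trigger response rate:", (model.predict(probe) == 1).mean())
```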

    Volitional Generation of Reproducible, Efficient Temporal Patterns

    One of the extraordinary characteristics of the biological brain is how little energy it expends to implement a wide variety of functions and intelligence, compared with modern artificial intelligence (AI). Spike-based, energy-efficient temporal codes have long been suggested as one contributor to the brain's low energy expenditure. Although such codes have been widely reported in the sensory cortex, whether they can be implemented in other brain areas to serve broader functions, and how they evolve throughout learning, have remained open questions. In this study, we designed a novel brain–machine interface (BMI) paradigm. By learning the paradigm, two macaques could volitionally generate reproducible, energy-efficient temporal patterns in the primary motor cortex (M1). Moreover, most neurons that were not directly assigned to control the BMI did not increase their excitability, so the task was performed in an overall energy-efficient manner. Over the course of learning, we found that the firing rates and temporal precision of the selected neurons co-evolved to generate the energy-efficient temporal patterns, suggesting that a cohesive rather than dissociable process underlies their refinement.
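    The two quantities this abstract revolves around, reproducibility of a temporal pattern and energy cost, can be made concrete with simple proxies. The sketch below uses trial-to-trial correlation of binned spike trains for reproducibility and total spike count for energy; the simulated data and the metric choices are illustrative assumptions, not the paper's own analysis.

```python
# Proxy metrics for a reproducible, energy-efficient temporal spike pattern:
# mean pairwise trial correlation (reproducibility) and spikes per trial
# (energy). Data are simulated, not from the study.
import numpy as np

rng = np.random.default_rng(0)
n_trials, duration, bin_ms = 20, 1000, 10        # 1 s trials, 10 ms bins
n_bins = duration // bin_ms

# Simulate a precise, sparse temporal pattern: the same 8 spike times on
# every trial, each jittered by a few milliseconds.
template = np.sort(rng.uniform(0, duration, 8))
trials = [np.clip(template + rng.normal(0, 3, 8), 0, duration - 1)
          for _ in range(n_trials)]

# Bin each trial's spike times into a count vector.
binned = np.stack([np.histogram(t, bins=n_bins, range=(0, duration))[0]
                   for t in trials]).astype(float)

# Reproducibility: mean pairwise correlation between trials.
corr = np.corrcoef(binned)
reproducibility = corr[np.triu_indices(n_trials, k=1)].mean()

# Energy proxy: average spikes per trial (lower = more efficient).
spikes_per_trial = binned.sum(axis=1).mean()
print(f"reproducibility ~ {reproducibility:.2f}, "
      f"spikes/trial ~ {spikes_per_trial:.0f}")
```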