
    The ELM Neuron: an Efficient and Expressive Cortical Neuron Model Can Solve Long-Horizon Tasks

    Traditional large-scale neuroscience models and machine learning utilize simplified models of individual neurons, relying on collective activity and properly adjusted connections to perform complex computations. However, each biological cortical neuron is inherently a sophisticated computational device, as corroborated in a recent study where it took a deep artificial neural network with millions of parameters to replicate the input-output relationship of a detailed biophysical model of a cortical pyramidal neuron. We question the necessity of so many parameters and introduce the Expressive Leaky Memory (ELM) neuron, a biologically inspired, computationally expressive, yet efficient model of a cortical neuron. Remarkably, our ELM neuron requires only 8K trainable parameters to match the aforementioned input-output relationship accurately. We find that an accurate model necessitates multiple memory-like hidden states and intricate nonlinear synaptic integration. To assess the computational ramifications of this design, we evaluate the ELM neuron on various tasks with demanding temporal structures, including a sequential version of the CIFAR-10 classification task, the challenging Pathfinder-X task, and a new dataset based on the Spiking Heidelberg Digits dataset. Our ELM neuron outperforms most transformer-based models on the Pathfinder-X task with 77% accuracy, demonstrates competitive performance on Sequential CIFAR-10, and outperforms classic LSTM models on the variant of the Spiking Heidelberg Digits dataset. These findings indicate the potential of biologically motivated, computationally efficient neuronal models to enhance performance in challenging machine learning tasks. Comment: 23 pages, 10 figures, 9 tables, submitted to NeurIPS 202
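    The two ingredients the abstract identifies, multiple leaky memory-like hidden states and nonlinear synaptic integration, can be illustrated with a minimal sketch. All names, sizes, and the exact update rule below are assumptions for illustration; this is not the authors' ELM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class LeakyMemoryNeuron:
    """Illustrative sketch of a single neuron with several leaky memory
    states and a small MLP for nonlinear synaptic integration.
    Hypothetical structure -- not the published ELM neuron code."""

    def __init__(self, n_inputs, n_memory, n_hidden):
        # Per-state leak factors in (0, 1): each memory decays at its own rate
        self.lam = 1.0 / (1.0 + np.exp(-rng.normal(size=n_memory)))
        # Small MLP that mixes current inputs with the previous memory states
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs + n_memory))
        self.W2 = rng.normal(scale=0.1, size=(n_memory, n_hidden))
        self.w_out = rng.normal(scale=0.1, size=n_memory)
        self.m = np.zeros(n_memory)  # memory-like hidden states

    def step(self, x):
        z = np.concatenate([x, self.m])
        # Nonlinear synaptic integration of inputs and memory
        s = np.tanh(self.W2 @ np.tanh(self.W1 @ z))
        # Leaky update: each state blends its decayed past with new drive
        self.m = self.lam * self.m + (1.0 - self.lam) * s
        return float(self.w_out @ self.m)  # scalar "somatic" readout

neuron = LeakyMemoryNeuron(n_inputs=4, n_memory=8, n_hidden=16)
outputs = [neuron.step(rng.normal(size=4)) for _ in range(100)]
```

    The slow-decaying states (leak factors near 1) are what let a single unit retain information over long horizons, which is the property the long-range tasks above probe.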

    Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks

    Throughout history, the development of artificial intelligence, particularly artificial neural networks, has been open to and constantly inspired by an ever-deepening understanding of the brain, as exemplified by the neocognitron, the pioneering work behind convolutional neural networks. In line with the motives of the emerging field of NeuroAI, a great amount of neuroscience knowledge can help catalyze the next generation of AI by endowing networks with more powerful capabilities. As we know, the human brain contains numerous morphologically and functionally different neurons, while artificial neural networks are almost exclusively built on a single neuron type. In the human brain, neuronal diversity is an enabling factor for all kinds of biological intelligent behaviors. Since an artificial network is a miniature of the human brain, introducing neuronal diversity should be valuable for addressing essential problems of artificial networks such as efficiency, interpretability, and memory. In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron. Then, we review studies on designing new neurons for artificial networks. Next, we discuss what gains neuronal diversity can bring to artificial networks, with exemplary applications in several important fields. Lastly, we discuss the challenges and future directions of neuronal diversity in exploring the potential of NeuroAI.

    Artificial Dendritic Neuron: A Model of Computation and Learning Algorithm

    Dendrites are root-like extensions from the neuron cell body and have long been thought to serve as the predominant input structures of neurons. Since the early twentieth century, neuroscience research has attempted to define the dendrite's contribution to neural computation and signal integration. This body of experimental and modeling research strongly indicates that dendrites are not just input structures but are crucial to neural processing. Dendritic processing consists of both active and passive elements that utilize the spatial, electrical, and connective properties of the dendritic tree. This work presents a neuron model based on the structure and properties of dendrites. This research assesses the computational benefits and requirements of adding dendrites to a spiking artificial neuron model. A list of the computational properties of actual dendrites that have shaped this work is given. An algorithm capable of generating and training a network of dendritic neurons is created as an investigative tool through which computational challenges and attributes are explored. This work assumes that dendrites provide a necessary and beneficial function to biological intelligence (BI) and that their translation into the artificial intelligence (AI) realm would broaden the capabilities and improve the realism of artificial neural network (ANN) research. To date there have been only a few instances in which neural network-based AI research has ventured beyond the point neuron; therefore, the work presented here should be viewed as exploratory. The contribution to AI made by this work is an implementation of the artificial dendritic (AD) neuron model and an algorithm for training AD neurons with spatially distributed inputs and dendrite-like connectivity.
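    The contrast the abstract draws between a point neuron and a dendritic neuron can be sketched with a common two-layer abstraction, in which each dendritic branch nonlinearly integrates its own spatially grouped inputs before the soma sums the branch outputs. The function name, branch count, and nonlinearity below are illustrative assumptions, not the thesis's actual AD model or training algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def dendritic_neuron_step(branch_inputs, branch_weights, soma_threshold=1.0):
    """Two-layer sketch of a dendritic neuron (hypothetical, for illustration):
    each branch applies a saturating nonlinearity to its own input subset,
    and the soma sums branch outputs and spikes above a threshold."""
    branch_out = [np.tanh(w @ x) for w, x in zip(branch_weights, branch_inputs)]
    soma = sum(branch_out)          # passive summation at the soma
    return soma >= soma_threshold, soma  # (spike?, somatic potential)

# Three branches, each receiving its own spatially grouped inputs
weights = [rng.normal(scale=0.5, size=4) for _ in range(3)]
inputs = [rng.normal(size=4) for _ in range(3)]
spike, soma = dendritic_neuron_step(inputs, weights)
```

    Because each branch saturates independently, clustered inputs on one branch and the same inputs scattered across branches produce different somatic potentials, which is one computational property a point neuron with a single weighted sum cannot express.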

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: Keynote: Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control is © 2019 Crown copyright and so is licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information or by including it in their own product or application. Where you do any of the above you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/

    This book is the record of abstracts submitted and accepted for presentation at the Inaugural Engineering and Computer Science Research Conference held 17th April 2019 at the University of Hertfordshire, Hatfield, UK. This conference is a local event aiming to bring together research students, staff, and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.

    Large-scale Foundation Models and Generative AI for BigData Neuroscience

    Recent advances in machine learning have made revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large language models (LLMs) have recently achieved human-like performance on many tasks, thanks to big data. With the help of self-supervised learning (SSL) and transfer learning, these models may potentially reshape the landscape of neuroscience research and make a significant impact on the future. Here we present a mini-review of recent advances in foundation models and generative AI models as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shifting framework will open new avenues for many neuroscience research directions, and we discuss the accompanying challenges and opportunities.

    The genetic basis of inter-individual variation in recovery from traumatic brain injury.

    Traumatic brain injury (TBI) is one of the leading causes of death among young people, and is increasingly prevalent in the aging population. Survivors of TBI face a spectrum of outcomes from short-term non-incapacitating injuries to long-lasting serious and deteriorating sequelae. TBI is a highly complex condition to treat; many variables can account for the observed heterogeneity in patient outcome. The limited success of neuroprotection strategies in the clinic has led to a new emphasis on neurorestorative approaches. In TBI, it is well recognized clinically that patients with similar lesions, age, and health status often display differences in recovery of function after injury. Despite this heterogeneity of outcomes in TBI, restorative treatment has remained generic. There is now a new emphasis on developing a personalized medicine approach in TBI, and this will require an improved understanding of how genetics impacts long-term outcomes. Studies in animal model systems indicate clearly that the genetic background plays a role in determining the extent of recovery following an insult. A candidate gene approach in human studies has led to the identification of factors that can influence recovery. Here we review studies of the genetic basis for individual differences in functional recovery in the CNS in animals and man. The application of in vitro modeling with human cells and organoid cultures, along with whole-organism studies, will help to identify genes and networks that account for individual variation in recovery from brain injury, and will point the way towards the development of new therapeutic approaches.

    Disentangling the molecular landscape of genetic variation of neurodevelopmental and speech disorders


    Pathophysiology of aniridia-associated keratopathy: Developmental aspects and unanswered questions

    Aniridia, a rare congenital disease, is often characterized by a progressive, pronounced limbal insufficiency and ocular surface pathology termed aniridia-associated keratopathy (AAK). Due to the characteristics of AAK and its bilateral nature, clinical management is challenging and complicated by the multiple coexisting ocular and systemic morbidities in aniridia. Although it is primarily assumed that AAK originates from a congenital limbal stem cell deficiency, in recent years AAK and its pathogenesis have been questioned in the light of new evidence and a refined understanding of ocular development and the biology of limbal stem cells (LSCs) and their niche. Here, by consolidating and comparing the latest clinical and preclinical evidence, we discuss key unanswered questions regarding ocular developmental aspects crucial to AAK. We also highlight hypotheses on the potential role of LSCs and the ocular surface microenvironment in AAK. The insights thus gained lead to a greater appreciation of the role of developmental and cellular processes in the emergence of AAK. They also highlight areas for future research to enable a deeper understanding of aniridia, and thereby the potential to develop new treatments for this rare but blinding ocular surface disease.