
    A Biologically Plausible Learning Rule for Deep Learning in the Brain

    Researchers have proposed that deep learning, which is producing important progress on a wide range of highly complex tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning in artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, such as AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers, and can they handle tasks of higher complexity? We demonstrate the learning scheme on classical and hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast as direct reward tasks, for fully connected, convolutional and locally connected architectures. We show that our learning rule, Q-AGREL, performs comparably to supervised learning via error backpropagation, with this type of trial-and-error reinforcement learning requiring only 1.5-2.5 times more epochs, even when classifying 100 different classes as in CIFAR100. Our results provide new insights into how deep learning may be implemented in the brain.

    Developing a spiking neural model of Long Short-Term Memory architectures

    Current advances in Deep Learning have shown significant improvements in common Machine Learning applications such as image, speech and text recognition. Specifically, in order to process time series, deep Neural Networks (NNs) with Long Short-Term Memory (LSTM) units are widely used in sequence recognition problems to store recent information and use it for future predictions. The efficiency of data analysis, especially when large data sets are involved, can be greatly improved thanks to ongoing research on NNs and Machine Learning for many applications in Physics. However, whenever acquisition and processing of data at different time resolutions is required, a synchronization problem arises in which the same piece of information is processed multiple times, and the efficiency advantage of NNs, which lack a natural notion of time, ceases to exist. Spiking Neural Networks (SNNs) are the next generation of NNs that allow efficient information coding and processing by means of spikes, i.e. binary pulses propagating between neurons. In this way, information can be encoded in time, and communication is activated only when the input to the neurons changes, giving higher efficiency. In the present work, analog neurons are used for training and are then substituted with spiking neurons in order to perform the tasks. The aim of this project is to find a transfer function that allows a simple and accurate switch between analog and spiking neurons, and then to show that the resulting network performs well on different tasks. At first, an analytical transfer function for more biologically plausible values of some neuronal parameters is derived and tested. Subsequently, the stochastic nature of biological neurons is implemented in the neuronal model used.
A new transfer function is then approximated by studying the stochastic behavior of artificial neurons, allowing us to implement a simplified description of the gates and the input cell in the LSTM units. The stochastic LSTM networks are then tested on Sequence Prediction and T-Maze, two typical memory-involving Machine Learning tasks, showing that almost all of the resulting spiking networks correctly compute the original tasks. The main conclusion drawn from this project is that, by means of a neuronal model comprising a stochastic description of the neuron, it is possible to obtain an accurate mapping from analog to spiking memory networks that performs well on Machine Learning tasks.

Spiking neurons communicate with each other by means of a telegraph-like mechanism: a message is encoded by a neuron in binary events, called spikes, that are sent to another neuron, which decodes the incoming spikes using the same coding originally used by the sending neuron. The problem addressed in this project was therefore: is it possible to make a group of such neurons remember things in the short term, but for long enough that they can solve tasks that require memory? Imagine you are driving to work along a road you have never taken before, and your task is to turn right at the next traffic light. The memory tasks we wanted the neural networks to learn and solve are of this sort, and no spiking networks exist that can do this. To this end, the approach we opted for was to train a network of standard artificial neurons and then, once the network had learned to perform the task, switch the standard neurons with our modeled spiking neurons. This imposes some constraints: in particular, the two types of neurons (standard and spiking) have to encode signals in the same way, meaning that they need to share the same coding policy.
In this project, I had to find an adequate coding policy for the spiking neurons, give the same policy to a network of standard neurons, and test this substitution. It turned out that, after the standard networks had learned the tasks and their units were switched with spiking ones, the spiking neurons were indeed able to remember short-term information (such as looking for a traffic light before turning right) and to perform well on such memory tasks, allowing useful computation over time. One of the scientific fields in need of improvement is, in fact, signal processing over time. Nowadays most detection instruments collect signal during a time window, meaning that the signal collected in a small time range is considered as a whole, instead of being detected in continuous time. In the first case, a buffer of the history of the time windows (the information gathered before reaching the traffic light) is stored, while when information is processed in continuous time only the relevant information (the time at which the traffic light is encountered) is needed. Being able to classify signals as soon as they are detected is a characteristic of asynchronous detection, examples of which are our sight and hearing. The brain, in fact, is one of the most efficient and powerful systems in existence. So why not study a computation method inspired by the brain? Spiking neurons are exactly that: artificial units performing brain-like computation. Hence, these neurons potentially offer efficient computation and an advantageous method for continuous-time signal processing, which will hopefully be adopted in many research fields in the future.
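The analog-to-spiking substitution described above hinges on matching a spiking unit's expected firing rate to the analog transfer function. A minimal sketch of that idea, assuming a softplus activation, a fixed time step and a simple rate-matching scheme (all illustrative choices, not the thesis's actual neuron model):

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_unit(x):
    """Analog activation: a smooth ReLU-like transfer function (softplus)."""
    return np.log1p(np.exp(x))

def spiking_rate(x, n_steps=100_000, dt=1e-2):
    """Estimate the firing rate of a stochastic spiking unit whose spike
    probability per time step is chosen so that the expected rate matches
    the analog transfer function."""
    p = np.clip(analog_unit(x) * dt, 0.0, 1.0)   # spike probability per step
    spikes = rng.random(n_steps) < p              # binary spike train
    return spikes.sum() / (n_steps * dt)          # empirical firing rate

for x in [-2.0, 0.0, 2.0]:
    print(f"x={x:+.1f}  analog={analog_unit(x):.3f}  spiking~{spiking_rate(x):.3f}")
```

With enough time steps the empirical rate converges to the analog output, which is the sense in which analog units can be swapped for spiking ones after training.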

    A biologically plausible learning rule for deep learning in the brain

    Researchers have proposed that deep learning, which is producing important progress on a wide range of highly complex tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning in artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, such as AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers, and can they handle tasks of higher complexity? Here, we demonstrate that these learning schemes indeed generalize to deep networks, if we include an attention network that propagates information about the selected action to lower network levels. The resulting learning rule, called Q-AGREL, is equivalent to a particular form of error backpropagation that trains one output unit at any one time. To demonstrate the utility of the learning scheme for larger problems, we trained networks with two hidden layers on the MNIST dataset, a standard and interesting Machine Learning task. Our results demonstrate that the capability of Q-AGREL is comparable to that of error backpropagation, although learning is 1.5-2 times slower because the network has to learn by trial and error and updates the action value of only one output unit at a time. Our results provide new insights into how deep learning can be implemented in the brain.
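The mechanism can be sketched in a few lines: the network selects one output unit, receives a scalar reward, and feedback from only that unit gates which synapses change. The toy task, layer sizes, learning rate and exploration rate below are illustrative assumptions; this is a simplified sketch of the attention-gated idea, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny network: one-hot input -> sigmoid hidden layer -> action values.
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
lr = 0.1

def trial(x, target_action):
    global W1, W2
    h = sigmoid(W1 @ x)                         # hidden activity
    q = W2 @ h                                  # action values
    a = int(rng.integers(n_out)) if rng.random() < 0.1 else int(np.argmax(q))
    r = 1.0 if a == target_action else 0.0      # scalar reward, no teacher
    delta = r - q[a]                            # global reward prediction error
    fb = W2[a] * h * (1.0 - h)                  # feedback from the chosen unit only
    W2[a] += lr * delta * h                     # update one output unit at a time
    W1 += lr * delta * np.outer(fb, x)          # hidden update gated by feedback
    return r

# Toy task: one-hot input i should map to action i mod n_out.
X = np.eye(n_in)
for _ in range(500):
    for i in range(n_in):
        trial(X[i], i % n_out)

correct = sum(int(np.argmax(W2 @ sigmoid(W1 @ X[i]))) == i % n_out
              for i in range(n_in))
print(f"{correct}/{n_in} inputs mapped to the rewarded action")
```

Note how the weight change for the hidden layer equals the backpropagation update for the selected output unit alone, which is the equivalence the abstract describes.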

    Gating sensory noise in a spiking subtractive LSTM

    Spiking neural networks are being investigated both as biologically plausible models of neural computation and as a potentially more efficient type of neural network. Recurrent neural networks in the form of networks of gating memory cells have been central to state-of-the-art solutions in problem domains that involve sequence recognition or generation. Here, we design an analog Long Short-Term Memory (LSTM) cell whose neurons can be substituted with efficient spiking neurons, and where we use subtractive gating (following the subLSTM in [1]) instead of multiplicative gating. Subtractive gating allows for a less sensitive gating mechanism, which is critical when using spiking neurons. By using fast-adapting spiking neurons with a smoothed Rectified Linear Unit (ReLU)-like effective activation function, we show that an accurate conversion from an analog subLSTM to a continuous-time spiking subLSTM is possible. This architecture results in memory networks that compute very efficiently, with low average firing rates comparable to those of biological neurons, while operating in continuous time.
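The subtractive-gating idea can be written down compactly. A minimal NumPy sketch of a subLSTM-style cell, with layer sizes and initialization chosen only for illustration (see the subLSTM of reference [1] in the abstract for the actual formulation):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SubLSTMCell:
    """subLSTM-style cell with subtractive gating:
        c_t = f_t * c_{t-1} + z_t - i_t
        h_t = sigmoid(c_t) - o_t
    All gates are sigmoids, so each subtraction stays bounded -- a less
    sensitive mechanism than multiplicative gating when units spike."""

    def __init__(self, n_in, n_hid):
        k = 1.0 / np.sqrt(n_hid)
        # One weight matrix producing all four gate pre-activations.
        self.W = rng.uniform(-k, k, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)

    def step(self, x, h, c):
        gates = self.W @ np.concatenate([x, h]) + self.b
        z, i, f, o = np.split(sigmoid(gates), 4)
        c = f * c + z - i          # subtractive input gating
        h = sigmoid(c) - o         # subtractive output gating
        return h, c

cell = SubLSTMCell(n_in=3, n_hid=5)
h, c = np.zeros(5), np.zeros(5)
for t in range(10):
    h, c = cell.step(rng.normal(size=3), h, c)
print("h range:", h.min(), h.max())
```

Because the output is sigmoid(c) minus a sigmoid gate, the hidden state is always bounded in (-1, 1), which is part of what makes the analog-to-spiking conversion well behaved.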

    Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation

    Much recent work has focused on biologically plausible variants of supervised learning algorithms. However, there is no teacher in the motor cortex that instructs the motor neurons, and learning in the brain depends on reward and punishment. We demonstrate a biologically plausible reinforcement learning scheme for deep networks with an arbitrary number of layers. The network chooses an action by selecting a unit in the output layer and uses feedback connections to assign credit to the units in successively lower layers that are responsible for this action. After the choice, the network receives reinforcement, and there is no teacher correcting the errors. We show that the new learning scheme, Attention-Gated Brain Propagation (BrainProp), is mathematically equivalent to error backpropagation for one output unit at a time. We demonstrate successful learning of deep fully connected, convolutional and locally connected networks on classical and hard image-classification benchmarks: MNIST, CIFAR10, CIFAR100 and Tiny ImageNet. BrainProp achieves an accuracy equivalent to that of standard error backpropagation, and better than state-of-the-art biologically inspired learning schemes. Additionally, the trial-and-error nature of learning incurs only limited additional training time, with BrainProp being a factor of 1-3.5 times slower. Our results thereby provide new insights into how deep learning may be implemented in the brain.

    Opsonin-deficient nucleoproteic corona endows unPEGylated liposomes with stealth properties in vivo

    For several decades, surface-grafted polyethylene glycol (PEG) has been a go-to strategy for preserving the synthetic identity of liposomes in physiological milieu and preventing clearance by immune cells. However, the limited clinical translation of PEGylated liposomes is mainly due to protein corona formation and the subsequent modification of the liposomes' synthetic identity, which affects their interactions with immune cells and blood residency. Here we exploit the electric charge of DNA to generate unPEGylated liposome/DNA complexes that, upon exposure to human plasma, get covered with an opsonin-deficient protein corona. The final product of the synthetic process is a biomimetic nanoparticle type covered by a proteonucleotidic corona, or "proteoDNAsome", which maintains its synthetic identity in vivo and is able to slip past the immune system more efficiently than PEGylated liposomes. Accumulation of proteoDNAsomes in the spleen and the liver was lower than that of PEGylated systems. Our work highlights the importance of generating stable biomolecular coronas in the development of stealth unPEGylated particles, thus providing a connection between the biological behavior of particles in vivo and their synthetic identity.

    Association of Variants in the SPTLC1 Gene With Juvenile Amyotrophic Lateral Sclerosis

    Importance: Juvenile amyotrophic lateral sclerosis (ALS) is a rare form of ALS characterized by age of symptom onset less than 25 years and a variable presentation. Objective: To identify the genetic variants associated with juvenile ALS. Design, Setting, and Participants: In this multicenter family-based genetic study, trio whole-exome sequencing was performed to identify the disease-associated gene in a case series of unrelated patients diagnosed with juvenile ALS and severe growth retardation. The patients and their family members were enrolled at academic hospitals and a government research facility between March 1, 2016, and March 13, 2020, and were observed until October 1, 2020. Whole-exome sequencing was also performed in a series of patients with juvenile ALS. A total of 66 patients with juvenile ALS and 6258 adult patients with ALS participated in the study. Patients were selected for the study based on their diagnosis, and all eligible participants were enrolled in the study. None of the participants had a family history of neurological disorders, suggesting de novo variants as the underlying genetic mechanism. Main Outcomes and Measures: De novo variants present only in the index case and not in unaffected family members. Results: Trio whole-exome sequencing was performed in 3 patients diagnosed with juvenile ALS and their parents. An additional 63 patients with juvenile ALS and 6258 adult patients with ALS were subsequently screened for variants in the SPTLC1 gene. De novo variants in SPTLC1 (p.Ala20Ser in 2 patients and p.Ser331Tyr in 1 patient) were identified in 3 unrelated patients diagnosed with juvenile ALS and failure to thrive. A fourth variant (p.Leu39del) was identified in a patient with juvenile ALS for whom parental DNA was unavailable. Variants in this gene have previously been shown to be associated with autosomal-dominant hereditary sensory autonomic neuropathy, type 1A, by disrupting an essential enzyme complex in the sphingolipid synthesis pathway. Conclusions and Relevance: These data broaden the phenotype associated with SPTLC1 and suggest that patients presenting with juvenile ALS should be screened for variants in this gene.

    Allergic Rhinitis and its Impact on Asthma (ARIA) Phase 4 (2018) : Change management in allergic rhinitis and asthma multimorbidity using mobile technology

    Allergic Rhinitis and its Impact on Asthma (ARIA) has evolved from a guideline by using the best approach to integrated care pathways using mobile technology in patients with allergic rhinitis (AR) and asthma multimorbidity. The proposed next phase of ARIA is change management, with the aim of providing an active and healthy life to patients with rhinitis and to those with asthma multimorbidity across the life cycle, irrespective of their sex or socioeconomic status, in order to reduce the health and social inequities incurred by the disease. ARIA has followed the 8-step model of Kotter to assess and implement the effect of rhinitis on asthma multimorbidity and to propose multimorbid guidelines. A second change management strategy is proposed by ARIA Phase 4 to increase self-medication and shared decision making in rhinitis and asthma multimorbidity. An innovation of ARIA has been the development and validation of information technology evidence-based tools (Mobile Airways Sentinel Network [MASK]) that can inform patient decisions on the basis of a self-care plan proposed by the health care professional.