A Multi-In and Multi-Out Dendritic Neuron Model and its Optimization
Artificial neural networks (ANNs), inspired by the interconnection of real
neurons, have achieved unprecedented success in various fields such as computer
vision and natural language processing. Recently, a novel mathematical ANN
model, known as the dendritic neuron model (DNM), has been proposed to address
nonlinear problems by more accurately reflecting the structure of real neurons.
However, the single-output design limits its capability to handle multi-output
tasks, significantly narrowing its range of applications. In this paper, we propose a
novel multi-in and multi-out dendritic neuron model (MODN) to tackle
multi-output tasks. Our core idea is to introduce a filtering matrix to the
soma layer to adaptively select the desired dendrites to regress each output.
Because such a matrix is designed to be learnable, MODN can explore the
relationship between each dendrite and output to provide a better solution to
downstream tasks. We also incorporate a telodendron layer into MODN to better
simulate real neuron behavior. Importantly, MODN is a more general and
unified framework that can be naturally specialized as the DNM by customizing
the filtering matrix. To explore the optimization of MODN, we investigate both
heuristic and gradient-based optimizers and introduce a 2-step training method
for MODN. Extensive experiments on 11 datasets covering both binary and
multi-class classification tasks demonstrate the effectiveness of MODN with
respect to accuracy, convergence, and generality.
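As a rough illustration of the core idea, the sketch below implements a forward pass in which a learnable filtering matrix weights each dendrite's contribution to each output. All sizes, the sigmoidal synapse layer, and the product-based dendrite layer are illustrative assumptions based on common DNM formulations, not the paper's exact equations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_dend, n_out = 4, 6, 3   # inputs, dendrites, outputs (illustrative)

# Synaptic parameters (learnable in the paper; random placeholders here)
w = rng.normal(size=(n_dend, n_in))
theta = rng.normal(size=(n_dend, n_in))
# Learnable filtering matrix: one row of dendrite weights per output.
# Fixing it to a single all-ones row would recover a single-output DNM.
F = rng.normal(size=(n_out, n_dend))

def modn_forward(x):
    # Synapse layer: sigmoidal connection for every (dendrite, input) pair
    Y = sigmoid(w * x[None, :] - theta)   # shape (n_dend, n_in)
    # Dendrite layer: multiplicative interaction of synaptic outputs
    Z = Y.prod(axis=1)                    # shape (n_dend,)
    # Soma layer: the filtering matrix selects/weights dendrites per output
    V = F @ Z                             # shape (n_out,)
    return sigmoid(V)

out = modn_forward(rng.normal(size=n_in))
```

Because `F` is an ordinary dense matrix, it can be trained by gradient descent alongside the synaptic parameters, which is what makes the dendrite-to-output assignment adaptive.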
Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty
Abstract
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon
classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features,
obtained, for each interneuron, from the neuroscientists’ classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons
most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to
the categorical axonal features. We were able to accurately predict interneuronal LBNs.
Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results
indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels.
Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology, and thus may serve as objective counterparts to the subjective, categorical axonal features.
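A drastically simplified sketch of the prediction scheme follows. The actual method encodes full label Bayesian networks and forms a more careful probabilistic consensus; here each label is a flat probability vector over hypothetical types, the data are random placeholders, and plain Euclidean k-nearest neighbours stand in for the similarity measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: 30 interneurons, 18 morphometric predictors (as in the paper),
# and a probability distribution over 5 hypothetical labels per cell
# (in the paper these distributions are encoded as label Bayesian networks).
n_cells, n_feats, n_types = 30, 18, 5
X = rng.normal(size=(n_cells, n_feats))
P = rng.dirichlet(np.ones(n_types), size=n_cells)   # label distributions

def predict_label_distribution(x_new, k=5):
    # Find the k morphometrically most similar interneurons...
    d = np.linalg.norm(X - x_new, axis=1)
    nn = np.argsort(d)[:k]
    # ...and form a probabilistic consensus (here: a simple average)
    return P[nn].mean(axis=0)

p_hat = predict_label_distribution(rng.normal(size=n_feats))
crisp = int(np.argmax(p_hat))   # crisp prediction extracted from the distribution
```

The final two lines mirror the paper's evaluation: predictions stay probabilistic, but a crisp label can be extracted when a single answer is required.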
A Methodology for the Development of Recurrent Networks for Sequence Processing Tasks
Artificial neural networks are increasingly being used for real-world applications. Many of these (e.g. speech recognition) depend on an ability to perform sequence processing. A class of artificial neural networks, known as recurrent networks, have architectures which incorporate feedback connections. This in turn allows a memory mechanism to develop, enabling sequence processing to occur. A large number of recurrent network models have been developed, together with modifications of existing architectures and learning rules. However, comparatively little effort has been made to compare the performance of these models relative to each other. Such comparative studies would show differences in performance between networks and allow an examination of which features of a network give rise to desirable behaviours such as faster learning and superior generalisation ability. This thesis describes the results of a number of existing comparative studies and the results of new research. Three different recurrent networks, both in their original form and with modifications, are tested on four different sequence processing tasks. The results of this research clearly show that recurrent networks vary widely in terms of their performance and lead to a methodology based on the following conclusions:
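The feedback mechanism described above can be sketched as a minimal Elman-style recurrent cell, where the previous hidden state is fed back at each step; sizes, initialisation, and the tanh nonlinearity are illustrative choices, not any specific network from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 3, 5   # illustrative sizes

W_in = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))  # feedback connections
b = np.zeros(n_hid)

def run_sequence(xs):
    """Process a sequence; the hidden state carries memory between steps."""
    h = np.zeros(n_hid)
    states = []
    for x in xs:
        # The feedback term W_rec @ h is what gives the network its memory
        h = np.tanh(W_in @ x + W_rec @ h + b)
        states.append(h)
    return np.array(states)

H = run_sequence(rng.normal(size=(7, n_in)))   # 7 time steps in, 7 states out
```

Comparative studies of the kind the thesis describes vary pieces such as where the feedback connection originates and how the recurrent weights are trained, while keeping this basic state-update loop.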
Neural correlates of cognitive intervention in persons at risk of developing Alzheimer's disease.
Cognitive training is an emergent approach that has begun to receive increased attention in recent years as a non-pharmacological, cost-effective intervention for Alzheimer's disease (AD). There has been increasing behavioral evidence of training-related improvement in cognitive performance in early stages of AD. Although these studies provide important insight into the efficacy of cognitive training, neuroimaging studies are crucial to pinpoint changes in brain structure and function associated with training and to examine their overlap with pathology in AD. In this study, we reviewed the existing neuroimaging studies on cognitive training in persons at risk of developing AD to provide an overview of the overlap between the neural networks rehabilitated by current training methods and those affected in AD. The data suggest a consistent training-related increase in brain activity in medial temporal, prefrontal, and posterior default mode networks, as well as an increase in gray matter structure in frontoparietal and entorhinal regions. This pattern differs from the pattern observed in healthy older adults, which shows a combination of increased and decreased activity in response to training. Detailed investigation of the data suggests that training in persons at risk of developing AD mainly improves compensatory mechanisms and partly restores the affected functions. While current neuroimaging studies are quite helpful in identifying the mechanisms underlying cognitive training, the data call for future multi-modal neuroimaging studies with a focus on multi-domain cognitive training, network-level connectivity, and individual differences in response to training.
Neuroplastic Changes Following Brain Ischemia and their Contribution to Stroke Recovery: Novel Approaches in Neurorehabilitation
Ischemic damage to the brain triggers substantial reorganization of spared areas and pathways, which is associated with limited, spontaneous restoration of function. A better understanding of this plastic remodeling is crucial to develop more effective strategies for stroke rehabilitation. In this review article, we discuss advances in the comprehension of post-stroke network reorganization in patients and animal models. We first focus on rodent studies that have shed light on the mechanisms underlying neuronal remodeling in the perilesional area and contralesional hemisphere after motor cortex infarcts. Analysis of electrophysiological data has demonstrated brain-wide alterations in functional connectivity in both hemispheres, well beyond the infarcted area. We then illustrate the potential use of non-invasive brain stimulation (NIBS) techniques to boost recovery. We finally discuss rehabilitative protocols based on robotic devices as a tool to promote endogenous plasticity and functional restoration.
Hierarchical temporal memory theory approach to stock market time series forecasting
Over the years, and with the emergence of various technological innovations, the relevance of automatic learning methods has increased exponentially, and they now play a key role in society. More specifically, Deep Learning (DL), with its ability to recognize audio and images and to make time series predictions, has helped to solve various types of problems. This paper applies Hierarchical Temporal Memory (HTM) theory to stock market prediction. HTM is based on the biological functions of the brain as well as its learning mechanism. The results are of significant relevance and show a low percentage of errors in the predictions made over time. The learning curve of the algorithm is fast, identifying trends in the stock market for all seven data universes using the same network. Although the algorithm's performance suffered when a pandemic was declared, it was able to adapt and return to good predictions. HTM proved to be a good continuous learning method for predicting time series datasets.

This work is funded by "FCT—Fundação para a Ciência e Tecnologia" within the R&D Units Project Scope UIDB/00319/2020. The grant of R.S. is supported by the European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project n. 039479; Funding Reference: POCI-01-0247-FEDER-039479].
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their metric performance. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures
A Study on a Dendritic Neuron Model with Adaptive Synapses Based on the Differential Evolution Algorithm
University of Toyama, doctoral dissertation 富理工博甲第179号, Zhe Wang, 2020/9/28