
    Glia-augmented artificial neural networks: foundations and applications

    Information processing in the human brain has long been a source of inspiration in Artificial Intelligence; in particular, it has led researchers to develop tools such as artificial neural networks. Recent findings in neurophysiology provide evidence that not only neurons but also isolated astrocytes and networks of astrocytes are responsible for processing information in the human brain. Artificial neural networks (ANNs) model neuron-neuron communications. Artificial neuron-glia networks (ANGNs) model, in addition to neuron-neuron communications, neuron-astrocyte connections. Continuing the research on ANGNs, we first propose and evaluate a model of adaptive neuro-fuzzy inference systems augmented with artificial astrocytes. We then propose a model of ANGNs that captures the communications of astrocytes in the brain; in this model, a network of artificial astrocytes is implemented on top of a typical neural network. The results of implementing both networks show that, for certain combinations of the parameter values specifying astrocytes and their connections, the new networks outperform typical neural networks. This research opens a range of possibilities for future work on designing more powerful artificial neural network architectures based on more realistic models of the human brain.
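
    As a rough illustration of the idea, the sketch below implements one common ANGN-style rule: an artificial astrocyte watches each neuron and, after several consecutive activations, temporarily boosts that neuron's outgoing weights. All names and parameter values here are illustrative assumptions, not the paper's model.

        import numpy as np

        class AstrocyteLayer:
            """One layer whose weights are modulated by per-neuron astrocytes."""
            def __init__(self, w, activation_window=3, boost=1.25, decay_steps=4):
                self.w = w                        # (n_in, n_out) weight matrix
                self.window = activation_window   # consecutive firings that trigger the astrocyte
                self.boost = boost                # multiplicative gain while the astrocyte is active
                self.decay_steps = decay_steps    # how long the boost lasts
                self.active_count = np.zeros(w.shape[0], dtype=int)
                self.boost_left = np.zeros(w.shape[0], dtype=int)

            def forward(self, x):
                fired = x > 0.5                   # treat strong activations as "firing"
                self.active_count = np.where(fired, self.active_count + 1, 0)
                # astrocyte triggers after `window` consecutive firings
                self.boost_left[self.active_count >= self.window] = self.decay_steps
                gain = np.where(self.boost_left > 0, self.boost, 1.0)
                self.boost_left = np.maximum(self.boost_left - 1, 0)
                return x @ (self.w * gain[:, None])   # scale each neuron's outgoing row

        layer = AstrocyteLayer(np.random.randn(4, 2))
        for t in range(6):
            y = layer.forward(np.random.rand(4))

    The paper's reported result, that performance depends on the astrocyte parameters, corresponds here to the choice of activation_window, boost, and decay_steps.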

    Towards an integrated understanding of neural networks

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Mathematics, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 123-136). Neural networks underpin both biological intelligence and modern AI systems, yet there is relatively little theory for how the observed behavior of these networks arises. Even the connectivity of neurons within the brain remains largely unknown, and popular deep learning algorithms lack theoretical justification or reliability guarantees. This thesis aims towards a more rigorous understanding of neural networks. We characterize and, where possible, prove essential properties of neural algorithms: expressivity, learning, and robustness. We show how observed emergent behavior can arise from network dynamics, and we develop algorithms for learning more about the network structure of the brain. By David Rolnick, Ph.D.

    A new approach to Decimation in High Order Boltzmann Machines

    The Boltzmann Machine (BM) is a stochastic neural network with the ability of both learning and extrapolating probability distributions. However, it has never been as widely used as other neural networks such as the perceptron, due to the complexity of both the learning and recalling algorithms, and to the high computational cost of the learning process: the quantities needed at the learning stage are usually estimated by Monte Carlo (MC) methods through the Simulated Annealing (SA) algorithm. This has led to a situation where the BM is rather considered either an evolution of the Hopfield neural network or a parallel implementation of the Simulated Annealing algorithm. Despite this relative lack of success, the neural network community has continued to analyze the dynamics of the model. One remarkable extension is the High Order Boltzmann Machine (HOBM), where weights can connect more than two neurons at a time. Although the learning capabilities of this model have already been discussed by other authors, a formal equivalence between the weights in a standard BM and the high order weights in a HOBM has not yet been established. We analyze this equivalence between a second order BM and a HOBM by proposing an extension of the method known as decimation. Decimation is a common tool in statistical physics that can be applied to some kinds of BMs to obtain analytical expressions for the n-unit correlations required in the learning process; in this way, decimation avoids the time-consuming Simulated Annealing algorithm. However, as first conceived, it could only deal with sparsely connected neural networks. The extension we define in this thesis allows computing the same quantities irrespective of the topology of the network; the method is based on adding enough high order weights to a standard BM to guarantee that the decimation equations can be solved. Next, we establish a direct equivalence between the weights of a HOBM, the probability distribution to be learnt, and Hadamard matrices; the properties of these matrices can be used to easily calculate the values of the weights of the system. Finally, we define a standard BM with a very specific topology that helps us better understand the exact equivalence between hidden units in a BM and high order weights in a HOBM.
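
    For reference, the high order terms the abstract describes enter the standard BM energy function as follows (a textbook form with temperature T; the notation is assumed, not taken from the thesis):

        E(\mathbf{s}) = -\sum_i \theta_i s_i \;-\; \sum_{i<j} w_{ij}\, s_i s_j \;-\; \sum_{i<j<k} w_{ijk}\, s_i s_j s_k \;-\; \cdots,
        \qquad
        p(\mathbf{s}) = \frac{e^{-E(\mathbf{s})/T}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')/T}}

    A second order BM keeps only the theta_i and w_ij terms; the HOBM adds the w_ijk and higher terms. Decimation, as extended in the thesis, yields the correlations such as <s_i s_j> needed for learning analytically, where SA would otherwise estimate them by sampling.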

    Connectome-Constrained Artificial Neural Networks

    In biological neural networks (BNNs), structure provides a set of guard rails by which function is constrained to solve tasks effectively, handle multiple stimuli simultaneously, adapt to noise and input variations, and limit energy expenditure. Such features are desirable for artificial neural networks (ANNs), which are, unlike their organic counterparts, practically unbounded and, in many cases, initialized with random weights or arbitrary structural elements. In this dissertation, we consider an inductive base case for imposing BNN constraints onto ANNs. We select explicit connectome topologies from the fruit fly (one of the smallest BNNs) and impose these onto a multilayer perceptron (MLP) and a reservoir computer (RC), in order to craft "fruit fly neural networks" (FFNNs). We study the impact on performance, variance, and prediction dynamics of using FFNNs compared to non-FFNN models on odour classification, chaotic time-series prediction, and multifunctionality tasks. From a series of four experimental studies, we observe that the fly olfactory brain is well suited to recalling and making predictions from chaotic input data, with a capacity for executing two mutually exclusive tasks from distinct initial conditions, and with low sensitivity to hyperparameter fluctuations that can lead to chaotic behaviour. We also observe that the clustering coefficient of the fly network, and its particular non-zero weight positions, are important for reducing model variance. These findings suggest that BNNs have distinct advantages over arbitrarily weighted ANNs, notably from their structure alone. More work with connectomes drawn from across species will be useful in finding shared topological features which can further enhance ANNs, and Machine Learning overall.
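
    One simple way to realize the idea of imposing a connectome on an MLP is to fix a binary adjacency mask and apply it to the weight matrix at initialization and after every gradient step, so that only anatomically present connections can carry weight. A minimal sketch, assuming a random sparse mask standing in for the fly connectome; this is not the author's code.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_out = 50, 20
        # Binary "connectome" mask; in the dissertation this would come from
        # the fruit-fly connectome rather than a random draw.
        mask = (rng.random((n_in, n_out)) < 0.1).astype(float)
        W = rng.standard_normal((n_in, n_out)) * mask     # masked initialization

        def forward(x):
            return np.tanh(x @ (W * mask))                # mask enforced on every pass

        def sgd_step(grad_W, lr=1e-2):
            global W
            W -= lr * grad_W
            W *= mask          # re-apply mask so pruned edges stay exactly zero

    Re-applying the mask after each update is what keeps the topology (and hence properties like the clustering coefficient) fixed while the surviving weights are free to learn.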

    Development of a real-time classifier for the identification of the Sit-To-Stand motion pattern

    The Sit-to-Stand (STS) movement has significant importance in clinical practice, since it is an indicator of lower limb functionality. As an optimal trade-off between cost and accuracy, accelerometers have recently been used to synchronously recognise the STS transition in various Human Activity Recognition-based tasks. However, beyond the mere identification of the entire action, a major challenge remains the recognition of clinically relevant phases inside the STS motion pattern, due to the intrinsic variability of the movement. This work presents the development of a deep-learning model aimed at recognising specific clinically valid phases in the STS, relying on a pool of 39 young and healthy participants performing the task under self-paced (SP) and controlled-speed (CT) conditions. The movements were registered using a total of 6 inertial sensors, and the accelerometric data were labelled into four sequential STS phases according to the Ground Reaction Force profiles acquired through a force plate. The optimised architecture combined convolutional and recurrent neural networks into a hybrid approach and was able to correctly identify the four STS phases, under both SP and CT movements, relying on the single sensor placed on the chest. The overall accuracy estimate (median [95% confidence interval]) for the hybrid architecture was 96.09 [95.37 - 96.56] in SP trials and 95.74 [95.39 - 96.21] in CT trials. Moreover, the prediction delays (4533 ms) were compatible with the temporal characteristics of the dataset, sampled at 10 Hz (100 ms). These results support the implementation of the proposed model in digital rehabilitation solutions able to synchronously recognise the STS movement pattern, with the aim of effectively evaluating and correcting its execution.
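
    A hedged sketch of the kind of hybrid convolutional-recurrent classifier described above: 1-D convolutions extract local features from a window of tri-axial chest accelerometer samples, an LSTM models their temporal order, and a linear head assigns the window one of the four STS phases. The layer sizes and the 3-channel input are assumptions for illustration, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class STSPhaseNet(nn.Module):
            def __init__(self, n_channels=3, n_phases=4):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.Conv1d(32, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                )
                self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
                self.head = nn.Linear(64, n_phases)

            def forward(self, x):                     # x: (batch, channels, time)
                h = self.conv(x)                      # (batch, 32, time)
                h, _ = self.lstm(h.transpose(1, 2))   # (batch, time, 64)
                return self.head(h[:, -1])            # classify from the last time step

        model = STSPhaseNet()
        window = torch.randn(8, 3, 20)                # 8 windows of 2 s at 10 Hz
        logits = model(window)                        # (8, 4) phase scores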

    STOCHASTIC MOBILITY MODELS IN SPACE AND TIME

    An interesting fact in nature is that if we observe agents (neurons, particles, animals, humans) behaving, or more precisely moving, inside their environment, we can recognize, though at different space or time scales, very specific patterns. The existence of those patterns is quite obvious, since not all things in nature behave totally at random, especially if we take into account thinking species like human beings. While one deeply modeled phenomenon is gas-particle motion, the template of a totally random motion, other phenomena, like the foraging patterns of animals such as albatrosses and specific instances of human mobility, wear some randomness away in favor of deterministic components. Thus, while particle motion may be satisfactorily described by a Wiener process (also called Brownian motion), the others are better described by another kind of stochastic process called Lévy flights. Viewing these phenomena in a unifying way, in terms of the motion of agents, either inanimate like gas particles or animate like albatrosses, the point is that the latter are driven by specific interests, possibly converging into a common task to be accomplished. The whole thesis turns around the concept of agent intentionality at different scales, whose model may be used as a key ingredient in the statistical description of complex behaviors. The two main contributions in this direction are: 1. the development of a "wait and chase" model of human mobility having the same two-phase pattern as animal foraging but with a greater propensity for local stays in place, and therefore a less dispersed general behavior; 2. the introduction of a mobility paradigm for the neurons of a multilayer neural network and a methodology to train this new kind of network to develop a collective behavior. The lead idea is that neurons move toward the most informative mates to better learn how to fulfill their part in the overall functionality of the network. With these specific implementations we have pursued the general goal of attributing both a cognitive and a physical meaning to intentionality, so as to be able in the near future to speak of intentionality as an additional potential in the dynamics of masses (at both the micro- and the macro-scale), and of communication as another network in the force field. This can be seen as a step along the path opened by the past century's physicists with the coupling of thermodynamic and Shannon entropies, in the direction of unifying cognitive and physical laws.
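
    The contrast between the two motion models named above can be made concrete in a few lines: Brownian steps have Gaussian lengths, while Lévy-flight steps draw their lengths from a heavy-tailed power law, so rare long "chase" jumps punctuate many short local moves. The exponent and scales below are arbitrary choices for the sketch, not values from the thesis.

        import numpy as np

        rng = np.random.default_rng(1)

        def brownian_walk(n, sigma=1.0):
            # Gaussian increments in 2D: a discrete Wiener process
            return np.cumsum(rng.normal(0.0, sigma, size=(n, 2)), axis=0)

        def levy_flight(n, alpha=1.5):
            # Power-law step lengths with isotropic random directions
            lengths = rng.pareto(alpha, size=n) + 1.0
            angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
            steps = lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]
            return np.cumsum(steps, axis=0)

        bw = brownian_walk(1000)
        lf = levy_flight(1000)   # same step count, far larger spatial spread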

    29th Annual Computational Neuroscience Meeting: CNS*2020

    Meeting abstracts. This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests. Virtual | 18-22 July 2020

    Supervised and unsupervised weight and delay adaptation learning in temporal coding spiking neural networks

    Artificial neural networks are learning paradigms which mimic the biological neural system. The temporal coding Spiking Neural Network, a relatively new artificial neural network paradigm, is considered to be computationally more powerful than the conventional neural network. Research on networks of spiking neurons is an emerging field and has potential for wider investigation. This research explores alternative learning models with temporal coding spiking neural networks for clustering and classification tasks. Neurons are known to operate in two modes, namely as integrators and as coincidence detectors. Previous temporal coding spiking neural networks realising spiking neurons as integrators were utilised for analytical studies. Temporal coding spiking neural networks applied successfully to clustering and classification tasks realised spiking neurons as coincidence detectors and encoded input information in the connection delays through a weight adaptation technique. These learning models select suitably delayed connections by enhancing the weights of those connections while weakening the others. This research investigates learning in temporal coding spiking neural networks with spiking neurons as integrators and coincidence detectors. Focus is given to both supervised and unsupervised learning, through weight as well as delay adaptation. Three novel models for learning in temporal coding spiking neural networks are presented. The first model, the Self-Organising Weight Adaptation Spiking Neural Network (SOWA_SNN), realises the spiking neuron as an integrator; it adapts and encodes input information in its connection weights. The second learning model, the Self-Organising Delay Adaptation Spiking Neural Network (SODA_SNN), and the third model, the Supervised Delay Adaptation Spiking Neural Network (SDA_SNN), realise the spiking neuron as a coincidence detector; these two models adapt the connection delays in order to detect temporal patterns through coincidence detection. The first two models were developed for clustering applications and the third for classification tasks. All three models employ Hebbian-based learning rules to update the network connection parameters, utilising the difference between the input and output spike times. The proposed temporal coding spiking neural network models were implemented as discrete models in software, and their characteristics and capabilities were analysed through simulations on three benchmark data sets and a high-dimensional data set. All three models were able to cluster or classify the analysed data sets efficiently, with a high degree of accuracy. The performance of the proposed models was found to be better than that of existing spiking neural network models as well as conventional neural networks. The proposed learning paradigms could be applied to a wide range of applications, including manufacturing, business and biomedical domains.
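
    A minimal sketch of Hebbian delay adaptation in the spirit of the SODA_SNN and SDA_SNN models described above: each connection's delay is nudged so that its input spike, after the delay, arrives at the moment the neuron fired, turning the neuron into a coincidence detector. The learning rate and the toy firing rule are assumptions for illustration, not the thesis's exact update.

        import numpy as np

        def adapt_delays(delays, t_in, t_out, lr=0.1):
            """Nudge each connection delay toward alignment with the output spike.

            delays, t_in: per-connection delays and input spike times
            t_out: the output neuron's firing time
            """
            arrival = t_in + delays                 # when each delayed spike arrives
            return delays + lr * (t_out - arrival)  # move arrivals toward t_out

        delays = np.array([1.0, 3.0, 5.0])
        t_in = np.array([0.0, 1.0, 2.0])
        for _ in range(50):
            # toy firing rule: the neuron fires shortly after the first arrival
            t_out = np.min(t_in + delays) + 1.0
            delays = adapt_delays(delays, t_in, t_out)
        # after adaptation, t_in + delays cluster near t_out: coincidence detection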