17 research outputs found

    Optimization of a Hydrodynamic Computational Reservoir through Evolution

    As demand for computational resources reaches unprecedented levels, research is expanding into the use of complex material substrates for computing. In this study, we interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir and optimize its properties using an evolution in materio approach. Input data are encoded as waves applied to our shallow water reservoir, and the readout wave height is obtained at a fixed detection point. We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm, with the objective of maximizing the system's ability to linearly separate observations in the training data by maximizing the readout matrix determinant. Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters. We also applied our approach to a regression task and show that our approach improves out-of-sample accuracy. Results from this study will inform how we interface with the physical reservoir in future work, and we will use these methods to continue to optimize other aspects of the physical implementation of this system as a computational reservoir. Comment: Accepted at the 2023 Genetic and Evolutionary Computation Conference (GECCO 2023). 9 pages, 8 figures.
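The determinant-based fitness described in this abstract can be sketched as follows. The toy reservoir function, the (amplitude, frequency) input encoding, and all parameter names here are illustrative assumptions, not the authors' shallow water model:

```python
# Minimal sketch: evolve readout times so that |det| of the readout matrix
# is maximized, improving linear separability of the training observations.
# The "reservoir" is a stand-in toy function, not the hydrodynamic model.
import numpy as np

rng = np.random.default_rng(0)

def reservoir_readout(inputs, readout_times):
    """Toy stand-in: map each input pattern to wave heights at readout times."""
    t = np.asarray(readout_times)
    return np.array([[np.sin(a * ti + b) for ti in t] for a, b in inputs])

def fitness(readout_times, inputs):
    # A larger |det| of the (square) readout matrix means the rows are
    # farther from linear dependence, i.e. easier to separate linearly.
    R = reservoir_readout(inputs, readout_times)
    return abs(np.linalg.det(R))

# XNOR-style training observations encoded as (amplitude, frequency) pairs.
inputs = [(0.1, 0.1), (0.1, 1.0), (1.0, 0.1), (1.0, 1.0)]

# Simple (1+1) evolutionary hill climb over the readout times.
best = rng.uniform(0, 10, size=len(inputs))
best_fit = fitness(best, inputs)
for _ in range(200):
    child = best + rng.normal(0, 0.5, size=best.shape)
    f = fitness(child, inputs)
    if f > best_fit:
        best, best_fit = child, f
```

A real run would also evolve the amplitude/frequency mapping described in the abstract; this sketch mutates only the readout times for brevity.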

    Assessment and manipulation of the computational capacity of in vitro neuronal networks through criticality in neuronal avalanches

    In this work, we report the preliminary analysis of the electrophysiological behavior of in vitro neuronal networks to identify when the networks are in a critical state based on the size distribution of network-wide avalanches of activity. The results presented here demonstrate the importance of selecting appropriate parameters in the evaluation of the size distribution and indicate that it is possible to perturb networks showing highly synchronized—or supercritical—behavior into the critical state by increasing the level of inhibition in the network. The classification of critical versus non-critical networks is valuable in identifying networks that can be expected to perform well on computational tasks, as criticality is widely considered to be the state in which a system is best suited for computation. In addition to enabling the identification of networks that are well-suited for computation, this analysis is expected to aid in the classification of networks as perturbed or healthy. This study is part of a larger research project, the overarching aim of which is to develop computational models that are able to reproduce target behaviors observed in in vitro neuronal networks. These models will ultimately be used to aid in the realization of these behaviors in nanomagnet arrays to be used in novel computing hardware.
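The avalanche detection underlying this analysis can be sketched as follows: spikes are binned in time, and an avalanche is a run of consecutive non-empty bins. The bin width and the toy spike train below are illustrative assumptions, not the study's data:

```python
# Minimal sketch of network-wide avalanche detection from binned spike times.
import numpy as np

def avalanche_sizes(spike_times, bin_width):
    """Bin spikes; an avalanche is a run of consecutive non-empty bins,
    and its size is the total number of spikes in that run."""
    if len(spike_times) == 0:
        return []
    edges = np.arange(0, max(spike_times) + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Toy spike train (seconds): three bursts separated by silent bins.
spikes = [0.1, 0.12, 0.15, 0.9, 0.92, 2.0]
sizes = avalanche_sizes(spikes, bin_width=0.1)
```

As the abstract emphasizes, the choice of `bin_width` matters: wider bins merge nearby avalanches into fewer, larger ones, which shifts the size distribution used to classify the network.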

    Hallmarks of Criticality in Neuronal Networks Depend on Cell Type and the Temporal Resolution of Neuronal Avalanches

    The human brain has a remarkable capacity for computation, and it has been theorized that this capacity arises from the brain self-organizing into the critical state, a dynamical state poised between ordered and disordered behavior and widely considered to be well-suited for computation. Criticality is commonly identified in in vitro neuronal networks using an analytical approach based on the size distribution of cascades of activity called neuronal avalanches. In this study, criticality analysis was applied to different in vitro neuronal networks with two areas of focus: evaluating the effect of the size of the time bins used for neuronal avalanche detection and observation of the development of networks of neurons derived from human induced pluripotent stem cells. This preliminary study is expected to aid in the construction of models capable of emulating neuronal behaviors identified as well-suited for computation and ultimately inform the development of brain-inspired computing substrates that are better able to keep pace with increased demand for data storage and processing power.
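Criticality is typically diagnosed by testing whether avalanche sizes follow a power law P(s) ~ s^(-alpha). A minimal sketch of the standard continuous maximum-likelihood exponent estimate is below; the synthetic sizes are illustrative, not data from the study:

```python
# Sketch: estimate a power-law exponent from avalanche sizes via the
# continuous MLE, alpha = 1 + n / sum(ln(s / s_min)).
import numpy as np

def powerlaw_alpha(sizes, s_min=1.0):
    """MLE exponent for a continuous power law truncated at s_min."""
    s = np.asarray([x for x in sizes if x >= s_min], dtype=float)
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

# Synthetic sizes drawn from a power law with alpha = 1.5 via
# inverse-transform sampling, then recover the exponent.
rng = np.random.default_rng(1)
u = rng.uniform(size=50_000)
samples = (1 - u) ** (-1 / (1.5 - 1))  # inverse CDF for alpha = 1.5, s_min = 1
alpha_hat = powerlaw_alpha(samples)
```

Because the estimate is computed from binned avalanche sizes, changing the time-bin width used for detection changes the recovered exponent, which is exactly the sensitivity this abstract examines.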

    µSpikeHunter: An advanced computational tool for the analysis of neuronal communication and action potential propagation in microfluidic platforms

    Abstract Understanding neuronal communication is fundamental in neuroscience, but there are few methodologies offering detailed analysis for well-controlled conditions. By interfacing microElectrode arrays with microFluidics (µEF devices), it is possible to compartmentalize neuronal cultures with a specified alignment of axons and microelectrodes. This setup allows the extracellular recording of spike propagation with a high signal-to-noise ratio over the course of several weeks. Addressing these µEF devices, we developed an advanced yet easy-to-use publicly available computational tool, µSpikeHunter, which provides a detailed quantification of several communication-related properties such as propagation velocity, conduction failure, spike timings, and coding mechanisms. The combination of µEF devices and µSpikeHunter can be used in the context of standard neuronal cultures or with co-culture configurations where, for example, communication between sensory neurons and other cell types is monitored and assessed. The ability to analyze axonal signals (in a user-friendly, time-efficient, high-throughput manner) opens the door to new approaches in studies of peripheral innervation, neural coding, and neuroregeneration, among many others. We demonstrate the use of µSpikeHunter in dorsal root ganglion neurons where we analyze the presence of both anterograde and retrograde signals in µEF devices. A fully functional version of µSpikeHunter is publicly available for download from https://github.com/uSpikeHunter
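The propagation-velocity measurement can be sketched as follows: the same action potential is detected on two microelectrodes along a microchannel, and velocity is the electrode spacing divided by the spike-time delay. The spacing, timings, and matching window here are illustrative assumptions, not the µSpikeHunter implementation:

```python
# Sketch: pair spikes across two electrodes and compute propagation velocity.
def propagation_velocity(t_up, t_down, spacing_um, max_delay_ms=1.0):
    """Pair each upstream spike with the first downstream spike within
    max_delay_ms; return velocities in micrometers per millisecond."""
    velocities = []
    for t0 in t_up:
        matches = [t1 for t1 in t_down if 0 < t1 - t0 <= max_delay_ms]
        if matches:
            velocities.append(spacing_um / (matches[0] - t0))
    return velocities

# Two electrodes 200 um apart; anterograde spikes arrive 0.4 ms later,
# giving 500 um/ms (0.5 m/s), a plausible unmyelinated-axon velocity.
ups = [10.0, 25.0, 40.0]    # spike times on the upstream electrode (ms)
downs = [10.4, 25.4, 40.4]  # matching spikes downstream (ms)
v = propagation_velocity(ups, downs, spacing_um=200.0)
```

Retrograde propagation, which the abstract also analyzes, would show up as downstream spikes preceding upstream ones and could be detected by swapping the electrode roles.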

    Criticality, Connectivity, and Neural Disorder: A Multifaceted Approach to Neural Computation

    It has been hypothesized that the brain optimizes its capacity for computation by self-organizing to a critical point. The dynamical state of criticality is achieved by striking a balance such that activity can effectively spread through the network without overwhelming it and is commonly identified in neuronal networks by observing the behavior of cascades of network activity termed “neuronal avalanches.” The dynamic activity that occurs in neuronal networks is closely intertwined with how the elements of the network are connected and how they influence each other’s functional activity. In this review, we highlight how studying criticality with a broad perspective that integrates concepts from physics, experimental and theoretical neuroscience, and computer science can provide a greater understanding of the mechanisms that drive networks to criticality and how their disruption may manifest in different disorders. First, integrating graph theory into experimental studies on criticality, as is becoming more common in theoretical and modeling studies, would provide insight into the kinds of network structures that support criticality in networks of biological neurons. Furthermore, plasticity mechanisms play a crucial role in shaping these neural structures, both in terms of homeostatic maintenance and learning. Both network structures and plasticity have been studied fairly extensively in theoretical models, but much work remains to bridge the gap between theoretical and experimental findings. Finally, information theoretical approaches can tie in more concrete evidence of a network’s computational capabilities. Approaching neural dynamics with all these facets in mind has the potential to provide a greater understanding of what goes wrong in neural disorders. Criticality analysis therefore holds potential to identify disruptions to healthy dynamics, granted that robust methods and approaches are considered.

    Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity

    Neuro-inspired models and systems have great potential for applications in unconventional computing. Often, the mechanisms of biological neurons are modeled or mimicked in simulated or physical systems in an attempt to harness some of the computational power of the brain. However, the biological mechanisms at play in neural systems are complicated and challenging to capture and engineer; thus, it can be simpler to turn to a data-driven approach to transfer features of neural behavior to artificial substrates. In the present study, we used an evolutionary algorithm to produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro. The aim of this approach was to develop a method of producing models capable of exhibiting complex behavior that may be suitable for use as computational substrates. Our models were able to produce a level of network-wide synchrony and showed a range of behaviors depending on the target data used for their evolution, which was from a range of neuronal culture densities and maturities. The genomes of the top-performing models indicate that the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
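The data-driven idea in this abstract can be sketched as evolving a model parameter until a summary statistic of the model's activity matches one measured in vitro. The toy probabilistic spiking network, the synchrony proxy, and the target value below are all illustrative assumptions, not the authors' cellular automaton:

```python
# Sketch: hill-climb a model's excitability so its population-activity
# variance (a crude synchrony proxy) matches an assumed in vitro target.
import numpy as np

rng = np.random.default_rng(2)

def simulate_synchrony(excitability, steps=200, n=100):
    """Toy network: each neuron fires with a probability that rises with
    the fraction active at the previous step; return the variance of the
    population activity trace as a synchrony proxy."""
    active = rng.random(n) < 0.1
    fractions = []
    for _ in range(steps):
        p = 0.05 + excitability * active.mean()
        active = rng.random(n) < p
        fractions.append(active.mean())
    return float(np.var(fractions))

target = 0.02  # synchrony statistic assumed measured from the recording

# (1+1) evolutionary search minimizing the distance to the target statistic.
best, best_err = 0.5, abs(simulate_synchrony(0.5) - target)
for _ in range(50):
    cand = float(np.clip(best + rng.normal(0, 0.1), 0.0, 0.99))
    err = abs(simulate_synchrony(cand) - target)
    if err < best_err:
        best, best_err = cand, err
```

The abstract's finding that excitability and connection density shape activity complexity corresponds here to the single `excitability` knob; a genome in the real study would encode many such parameters.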