30 research outputs found

    Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons

    Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm inspired by cortical Neural Networks (NN), and it is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called the reservoir, whose state is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo-State Network (ESN) [1] and the Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons (and, more recently, Leaky Integrator (LI) neurons) and a normalized random connectivity matrix in the reservoir, whereas the reservoir in the latter is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, that are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their model dynamics and in their inter-neuron communication mechanisms. Nevertheless, RC systems share a striking common property: they exhibit the best performance when the reservoir dynamics are near criticality [1–6], governed by the reservoirs' connectivity parameters (|λmax| ≈ 1 in ESN; λ ≈ 2 and w in LSM), a regime referred to as the edge of chaos in [3–5]. In this study, we explore possible reasons for this commonality, despite the differences that the neuron types impose on the reservoir dynamics. We address this question from the perspective of information representation in both spiking and non-spiking reservoirs. We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with performance on the temporal parity task. We complement our investigation with isolated spoken-digit recognition and spoken-digit sequence-recognition tasks, hypothesizing that a performance analysis of these two tasks will agree with our MI and MCMI results regarding the impact of stable memory on task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter exceeds its critical value), the absence of stable memory in the reservoir appears to be the evident cause of the performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a larger stable memory of the input (quantified by the input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons on the temporal parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition, while the sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and the corresponding output-reservoir MCMIs are 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum mean performance achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons is 97%, 79%, and 2%, respectively; the reservoirs with N = 100 neurons solve the task with 80%, 68%, and 0.9% accuracy, respectively. Our study sheds light on the impact of the reservoir's information representation and memory on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing on spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings also have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
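    As a rough, hypothetical illustration of the kind of setup described above, the following Python sketch drives a small leaky-integrator ESN reservoir with a binary, spike-train-like input and uses a simple histogram estimator for the mutual information between the input and a single reservoir unit. The reservoir size, leak rate, spectral radius, input statistics, and binning are illustrative assumptions only, and the MCMI quantity defined in the paper is not reproduced here.

    # Minimal sketch (not the authors' code): a leaky-integrator ESN driven by a
    # binary spike-train-like input, plus a plug-in histogram estimate of the
    # mutual information between the input and one reservoir unit. All constants
    # below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, leak = 100, 5000, 0.3                 # reservoir size, steps, leak rate (assumed)

    # Random reservoir weights, rescaled so |lambda_max| ~ 0.95 (just below criticality)
    W = rng.normal(0.0, 1.0, (N, N))
    W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.normal(0.0, 1.0, N)

    u = (rng.random(T) < 0.5).astype(float)     # binary, spike-train-like input
    x = np.zeros(N)
    states = np.zeros((T, N))
    for t in range(T):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u[t])
        states[t] = x

    def mutual_information(a, b, bins=16):
        """Plug-in MI estimate (in bits) from a 2-D histogram."""
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

    # MI between the input and one reservoir unit, at a lag of one time step
    print(mutual_information(u[:-1], states[1:, 0]))

    Sweeping the spectral-radius scaling through 1 in a sketch like this is one way to observe how the input-reservoir MI behaves as the reservoir crosses the critical regime.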

    Advances in Microfluidics and Lab-on-a-Chip Technologies

    Advances in molecular biology are enabling rapid and efficient analyses for effective intervention in domains such as biology research, infectious disease management, food safety, and biodefense. The emergence of microfluidics and nanotechnologies has enabled both new capabilities and instrument sizes practical for point-of-care use. It has also introduced new functionality, enhanced sensitivity, and reduced the time and cost involved in conventional molecular diagnostic techniques. This chapter reviews the application of microfluidics to molecular diagnostic methods such as nucleic acid amplification, next-generation sequencing, high-resolution melting analysis, cytogenetics, protein detection and analysis, and cell sorting. We also review microfluidic sample-preparation platforms applied to molecular diagnostics and targeted at sample-in, answer-out capabilities.

    Computational Capabilities of Leaky Integrate-and-Fire Neural Networks for Liquid State Machines

    We analyze the computational capability of Leaky Integrate-and-Fire (LIF) Neural Networks used as a reservoir (liquid) in the framework of Liquid State Machines (LSM). Maass et al. investigated LIF neurons in LSMs and showed that they are capable of noise-robust, parallel, and real-time computation. However, how the network topology affects the computational capability of a reservoir remains an open question. To address this question, we investigate the performance of the reservoir as a function of the average reservoir connectivity. We also show that the dynamics of the LIF reservoir are sensitive to changes in the average network connectivity, which is consistent with results obtained from Random Boolean Network (RBN) reservoirs. Our results are relevant for understanding the computational capabilities of reservoirs made up of biologically realistic neuron models for real-time processing of time-varying inputs.
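    As a hedged illustration of the kind of experiment described above (not the paper's implementation), the sketch below builds a small discrete-time LIF reservoir with, on average, k incoming synapses per neuron and reports the mean firing rate for a few values of k; all constants are assumptions chosen only to show how the average connectivity enters the model.

    # Illustrative sketch only: a small discrete-time leaky integrate-and-fire
    # (LIF) network whose average connectivity k is varied. Thresholds, weights,
    # and k values are assumed, not taken from the paper.
    import numpy as np

    def run_lif_reservoir(k, N=100, T=1000, tau=20.0, v_th=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Random connectivity with on average k incoming synapses per neuron
        mask = rng.random((N, N)) < k / N
        W = mask * rng.normal(0.0, 0.5, (N, N))
        v = np.zeros(N)                          # membrane potentials
        spikes = np.zeros(N)
        rates = np.zeros(T)
        for t in range(T):
            I_ext = (rng.random(N) < 0.05)       # sparse, Poisson-like external drive
            v += -v / tau + W @ spikes + I_ext   # leak, recurrent input, external input
            spikes = (v >= v_th).astype(float)   # threshold crossing emits a spike
            v[spikes == 1] = 0.0                 # reset after a spike
            rates[t] = spikes.mean()
        return rates.mean()

    # Mean firing rate as a crude indicator of how the dynamics change with k
    for k in (1, 2, 4, 8):
        print(k, run_lif_reservoir(k))

    The paper evaluates task performance rather than firing rate; the firing rate here simply makes the dependence of the dynamics on the average connectivity visible.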

    Supplementary Figure 5 from Immune Modulation of Innate and Adaptive Responses Restores Immune Surveillance and Establishes Antitumor Immunologic Memory

    A. The cytolytic effect of VLV-GFP in cancer cells is mediated by the induction of pro-apoptotic genes. Treatment of cancer cells with VLV-GFP is associated with a significant increase in TRAIL and FAS mRNA expression. *, p < 0.005; **, p < 0.001. B. In vivo induction of pro-apoptotic genes. Mice bearing mCherry-TKO tumors were treated twice with VLV-GFP, and the expression of TRAIL and FAS was determined 24 h after the last treatment. Note the significant increase in TRAIL and FAS mRNA expression following VLV-GFP treatment. ***, p < 0.0001.

    Supplementary Figure 4 from Immune Modulation of Innate and Adaptive Responses Restores Immune Surveillance and Establishes Antitumor Immunologic Memory

    A. TKO mouse ovarian cancer cells were treated with increasing concentrations of CARG-2020 or VLV-GFP, and the effect on cell growth and cell death was quantified in real time using Cytation5/BioSpa by measuring mCherry confluence and CellTox™ fluorescence. B. Representative fluorescence images showing PBS control and CARG-2020-treated cultures; red, mCherry; green, CellTox™. C. Human ovarian cancer cells (clone OCSC1-F2) and hTERT-immortalized human endometrial stromal cells were treated with increasing concentrations of CARG-2020 for 72 h, and cell death was quantified using CellTox™ fluorescence. Note that CARG-2020 induces cell death only in cancer cells and not in normal cells.