
    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since its complex dynamics can be realized in various physical hardware implementations and biological devices, which yields greater flexibility and shorter computation times. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies that model the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, the unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
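
    The fixed random reservoir with a trainable linear readout described above can be illustrated with a minimal echo state network sketch in Python (NumPy only); the reservoir size, spectral radius, ridge regularisation, and sine-wave task below are illustrative assumptions, not values taken from the survey.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200                       # input and reservoir sizes (assumed)

    # Random weights, fixed after initialization -- only the readout is trained.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

    def run_reservoir(u):
        """Nonlinearly map a 1-D input sequence into high-dimensional reservoir states."""
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
            states.append(x)
        return np.array(states)

    # Train only the linear readout (ridge regression) to predict the next sample.
    u = np.sin(np.linspace(0, 8 * np.pi, 500))
    X, y = run_reservoir(u[:-1]), u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    print("training MSE:", np.mean((X @ W_out - y) ** 2))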

    Memristor: A New Concept in Synchronization of Coupled Neuromorphic Circuits

    The demonstration of the memristor, as a fourth fundamental circuit element, by researchers at Hewlett-Packard (HP) Labs in 2008 has attracted much interest since then. This is because the memristor opens up new functionalities in electronics and has led to the interpretation of phenomena not only in electronic devices but also in biological systems. Furthermore, many research teams work on projects that use memristors in neuromorphic devices to simulate learning, adaptive, and spontaneous behavior, while other teams work on systems that attempt to simulate the behavior of biological synapses. In this paper, the latest achievements and applications of this newly developed circuit element are presented. The basic features of neuromorphic circuits, in which the memristor can be used as an electrical synapse, are also studied. In this direction, a flux-controlled memristor model is adopted for use as a coupling element between electronic circuits that simulate the behavior of neuron cells. The chosen circuits realize the systems of differential equations of the well-known Hindmarsh-Rose and FitzHugh-Nagumo neuron models. Finally, the simulation results for the use of a memristor as an electric synapse demonstrate the effectiveness of the proposed method and reveal many interesting dynamic phenomena concerning the behavior of coupled neuron cells.
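
    As a hedged illustration of how a flux-controlled memristor can act as an electrical synapse between two neuron circuits, the sketch below couples two FitzHugh-Nagumo models through a cubic memductance W(phi) = a + 3*b*phi^2 and integrates them with a simple Euler scheme; all parameter values are assumptions chosen for demonstration, not the paper's circuit values.

    import numpy as np

    def fhn_memristor_coupled(T=400.0, dt=0.02):
        """Two FitzHugh-Nagumo neurons coupled through a flux-controlled memristor.
        The memductance W(phi) = a + 3*b*phi**2 follows the common cubic q(phi) model;
        all constants here are illustrative."""
        a, b, k = 0.4, 0.02, 0.3            # memristor and coupling constants (assumed)
        eps, alpha, I = 0.08, 0.7, 0.5      # FitzHugh-Nagumo parameters (assumed)
        v1, w1, v2, w2, phi = 0.1, 0.0, -0.1, 0.0, 0.0
        trace = []
        for _ in range(int(T / dt)):
            W = a + 3.0 * b * phi ** 2      # memductance of the coupling element
            c = k * W * (v2 - v1)           # memristive coupling current
            dv1 = v1 - v1 ** 3 / 3 - w1 + I + c
            dw1 = eps * (v1 + alpha - 0.8 * w1)
            dv2 = v2 - v2 ** 3 / 3 - w2 + I - c
            dw2 = eps * (v2 + alpha - 0.8 * w2)
            dphi = v1 - v2                  # flux integrates the voltage difference
            v1, w1 = v1 + dt * dv1, w1 + dt * dw1
            v2, w2 = v2 + dt * dv2, w2 + dt * dw2
            phi += dt * dphi
            trace.append((v1, v2))
        return np.array(trace)

    trace = fhn_memristor_coupled()
    print("mean |v1 - v2|:", np.abs(np.diff(trace, axis=1)).mean())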

    Phase Noise Analyses and Measurements in the Hybrid Memristor-CMOS Phase-Locked Loop Design and Devices Beyond Bulk CMOS

    Phase-locked loops (PLLs) are widely used in analog and mixed-signal integrated circuits. Since there is a growing market for low-noise, high-speed devices, PLLs are widely employed in communications. In this dissertation, we investigated the phase noise, tuning range, jitter, and power performance of different PLL architectures. More energy-efficient devices such as memristors, graphene, and transition metal dichalcogenide (TMDC) materials, and their respective transistors, are introduced into the phase-locked loop design. Subsequently, we modeled the phase noise of a CMOS phase-locked loop as the superposition of the noise from its building blocks, which comprise a voltage-controlled oscillator, loop filter, frequency divider, phase-frequency detector, and the auxiliary input reference clock. A linear time-invariant model with additive noise sources in the frequency domain is used to analyze the phase noise. The modeled phase noise results are then compared with the corresponding phase-locked loop designs in different n-well CMOS processes. With the scaling of CMOS technology and the increase of the electric field, short-channel effects (SCE) have become dominant, degrading the subthreshold slope (SS) and shifting the threshold voltages of nMOS and pMOS transistors in the positive and negative directions, respectively. Various devices have been proposed to continue extending Moore's law and the semiconductor industry roadmap. We employed the tunnel field-effect transistor (TFET) owing to its better performance in terms of SS, leakage current, and power consumption. Applying an appropriate bias voltage to the gate-source region of a TFET aligns the valence band with the conduction band and injects charge carriers; under reverse bias, the two bands are misaligned and no carriers are injected. We implemented graphene TFETs and MoS2 in the PLL design, and the results show improvements in phase noise, jitter, tuning range, and operating frequency. In addition, the power consumption is greatly reduced thanks to the low supply voltage of the tunnel field-effect transistor.
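
    A minimal sketch of the additive linear time-invariant phase-noise model mentioned above: each building block's noise spectrum is shaped by its transfer function to the PLL output, and the contributions are summed in power. The loop parameters and noise levels below are generic textbook assumptions, not the dissertation's measured values.

    import numpy as np

    # Generic second-order charge-pump PLL parameters (illustrative assumptions).
    Kpd  = 1e-3                 # phase detector / charge-pump gain [A/rad]
    Kvco = 2 * np.pi * 50e6     # VCO gain [rad/s/V]
    N    = 64                   # feedback divider ratio
    R, C = 10e3, 1e-9           # series-RC loop filter components

    f = np.logspace(3, 8, 500)              # offset frequencies [Hz]
    s = 1j * 2 * np.pi * f
    F = R + 1.0 / (s * C)                   # loop filter impedance
    G = Kpd * Kvco * F / (N * s)            # open-loop gain
    H_in  = N * G / (1 + G)                 # reference/PFD/divider noise -> output (low-pass)
    H_vco = 1.0 / (1 + G)                   # VCO noise -> output (high-pass)

    # Example noise power spectral densities in rad^2/Hz (assumed levels).
    S_ref = 1e-14 * np.ones_like(f)
    S_vco = 1e-4 / f ** 2                   # -20 dB/dec VCO phase noise

    # Output phase noise: superposition of independently shaped contributions.
    S_out = np.abs(H_in) ** 2 * S_ref + np.abs(H_vco) ** 2 * S_vco
    idx = np.argmin(np.abs(f - 1e6))
    print("phase noise at 1 MHz offset: %.1f dBc/Hz" % (10 * np.log10(S_out[idx])))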

    Some Effective Tight-Binding Models for Electrons in DNA Conduction: A Review

    Quantum transport in DNA conduction has been widely studied, both with interest in its application as a candidate for nanowires and with interest in the underlying scientific mechanism. In this paper, we review recent works concerning the electronic states and the conduction/transfer in DNA polymers. We have mainly investigated the energy band structure and the correlation effects on the localization properties of the two- and three-chain systems (ladder model) with long-range correlation, as a simple model for the electronic properties of double-stranded DNA, using the tight-binding model. In addition, we investigated the localization properties of electronic states in several actual DNA sequences, such as bacteriophages of Escherichia coli and human chromosome 22, and compared them with those of artificial disordered sequences with correlation. The charge transfer properties of poly(dA)-poly(dT) and poly(dG)-poly(dC) DNA polymers are also presented in terms of localization lengths, within the framework of polaron models that account for the coupling between the charge carriers and the lattice vibrations of the DNA double strand. Comment: 25 pages, 18 figures.
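
    The two-chain (ladder) tight-binding description can be made concrete with a short numerical sketch: build the ladder Hamiltonian, diagonalize it, and use the inverse participation ratio as a simple localization measure. The on-site energies and hopping integrals below are illustrative placeholders, not the parameters used in the review.

    import numpy as np

    def ladder_hamiltonian(eps_a, eps_b, t_intra=0.1, t_inter=0.3):
        """Tight-binding Hamiltonian of a two-chain ladder.
        eps_a, eps_b: on-site energy sequences of the two strands (same length)."""
        n = len(eps_a)
        H = np.zeros((2 * n, 2 * n))
        for i in range(n):
            H[2 * i, 2 * i] = eps_a[i]
            H[2 * i + 1, 2 * i + 1] = eps_b[i]
            H[2 * i, 2 * i + 1] = H[2 * i + 1, 2 * i] = t_inter      # inter-strand hopping
            if i + 1 < n:                                            # intra-strand hopping
                H[2 * i, 2 * (i + 1)] = H[2 * (i + 1), 2 * i] = t_intra
                H[2 * i + 1, 2 * (i + 1) + 1] = H[2 * (i + 1) + 1, 2 * i + 1] = t_intra
        return H

    # Uniform poly(dG)-poly(dC)-like ladder with illustrative on-site energies (eV).
    n_sites = 200
    H = ladder_hamiltonian(np.full(n_sites, 8.0), np.full(n_sites, 9.0))
    E, psi = np.linalg.eigh(H)

    # Inverse participation ratio: ~1/N for extended states, O(1) for localized ones.
    ipr = np.sum(np.abs(psi) ** 4, axis=0)
    print("band range: %.2f..%.2f eV, mean IPR: %.4f" % (E.min(), E.max(), ipr.mean()))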

    On the Application of PSpice for Localised Cloud Security

    The work reported in this thesis commenced with a review of methods for creating random binary sequences for encoding data locally by the client before storing it in the Cloud. The first method reviewed investigated evolutionary computing software which generated noise-producing functions from natural noise, a highly speculative and novel idea since noise is stochastic. Nevertheless, a function was created which generated noise to seed chaos oscillators that produced random binary sequences, and this research led to a circuit-based one-time pad key chaos encoder for encrypting data. Circuit-based delay chaos oscillators, initialised with sampled electronic noise, were simulated in a linear circuit simulator called PSpice. Many simulation problems were encountered because of the nonlinear nature of chaos, but they were solved by creating new simulation parts, tools, and simulation paradigms. Simulation data from a range of chaos sources was exported and analysed using Lyapunov analysis, which identified two sources that produced one-time pad sequences with maximum entropy. This led to an encoding system which generated an unlimited supply of unique, infinitely long-period random one-time pad encryption keys matched to the plaintext data length. The keys were studied for maximum entropy and passed a suite of stringent, internationally accepted statistical tests for randomness. A prototype containing two delay chaos sources initialised by electronic noise was produced on a double-sided printed circuit board and generated more than 200 Mbits of one-time pads (OTPs). As shown by Vladimir Kotelnikov in 1941 and Claude Shannon in 1945, one-time pad sequences are theoretically perfect and unbreakable, provided specific rules are adhered to. Two other techniques for generating random binary sequences were researched: a new circuit element, the memristor, incorporated in a Chua chaos oscillator, and a fractional-order Lorenz chaos system with order less than three. Quantum computing will present many problems for cryptographic system security when existing systems are upgraded in the near future. The only existing encoding scheme that will resist cryptanalysis by quantum computers is the unconditionally secure one-time pad.
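
    Once a maximum-entropy key at least as long as the plaintext is available, the encoding step itself is a plain XOR one-time pad, as sketched below; os.urandom stands in for the chaos-oscillator key stream purely for illustration.

    import os

    def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
        """XOR one-time pad: the key must be truly random, at least as long as the
        plaintext, and never reused. Decryption is the same operation applied again."""
        if len(key) < len(plaintext):
            raise ValueError("one-time pad key must match the plaintext length")
        return bytes(p ^ k for p, k in zip(plaintext, key))

    message = b"store this blob in the cloud"
    key = os.urandom(len(message))        # stand-in for the chaos-derived key stream
    ciphertext = otp_encrypt(message, key)
    assert otp_encrypt(ciphertext, key) == message    # XOR is its own inverse
    print(ciphertext.hex())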

    Machine-Learning Methods for Computational Science and Engineering

    The rekindled fascination with machine learning (ML), observed over the last few decades, has also percolated into the natural sciences and engineering. ML algorithms are now used in scientific computing, as well as in data mining and processing. In this paper, we provide a review of the state of the art in ML for computational science and engineering. We discuss ways of using ML to speed up or improve the quality of simulation techniques such as computational fluid dynamics, molecular dynamics, and structural analysis. We explore the ability of ML to produce computationally efficient surrogate models of physical applications that circumvent the need for the more expensive simulation techniques entirely. We also discuss how ML can be used to process large amounts of data, drawing examples from many different scientific fields, such as engineering, medicine, astronomy, and computing. Finally, we review how ML has been used to create more realistic and responsive virtual reality applications.
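
    One recurring pattern in such reviews is the surrogate model: fit a cheap learned map from simulation inputs to outputs, then query the surrogate instead of the expensive solver. The sketch below uses scikit-learn's Gaussian process regressor and a toy stand-in for the costly simulation; both the model choice and the toy function are assumptions for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_simulation(x):
        """Toy stand-in for a costly solver (e.g. a CFD run at design parameter x)."""
        return np.sin(3 * x) + 0.5 * x ** 2

    # Run the expensive code at a handful of design points.
    X_train = np.linspace(0, 2, 12).reshape(-1, 1)
    y_train = expensive_simulation(X_train).ravel()

    # Train a cheap surrogate and query it densely instead of the solver.
    surrogate = GaussianProcessRegressor().fit(X_train, y_train)
    X_query = np.linspace(0, 2, 200).reshape(-1, 1)
    y_pred, y_std = surrogate.predict(X_query, return_std=True)
    print("max surrogate error:",
          np.max(np.abs(y_pred - expensive_simulation(X_query).ravel())))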

    Low Power Memory/Memristor Devices and Systems

    This reprint focuses on achieving low-power computation using memristive devices. The collection is designed as a convenient reference point: it contains a mix of techniques ranging from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and it also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles illustrating how other communities (from conventional CMOS design to photonics) are fighting on their own fronts in the quest for low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within.

    Temporal Data Analysis Using Reservoir Computing and Dynamic Memristors

    Temporal data analysis, including classification and forecasting, is essential in a range of fields from finance to engineering. While static data are largely independent of each other, temporal data exhibit considerable correlation between samples, and exploiting this correlation is central to temporal data analysis. Neural networks offer a general and flexible approach since they do not depend on task-specific parameters but are driven only by the data. In particular, recurrent neural networks have gathered much attention, since the temporal information captured by the recurrent connections improves prediction performance. Recently, reservoir computing (RC), which evolved from recurrent neural networks, has been extensively studied for temporal data analysis, as it offers the efficient temporal processing of recurrent neural networks at a low training cost. This dissertation presents a hardware implementation of the RC system using an emerging device, the memristor, followed by a theoretical study of hierarchical architectures of the RC system. An RC hardware system based on dynamic tungsten oxide (WOx) memristors is first demonstrated. The internal short-term memory of the WOx memristors allows the memristor-based reservoir to nonlinearly map temporal inputs into reservoir states, where the projected features can be readily processed by a simple linear readout function. We use the system to experimentally demonstrate two standard benchmark tasks: isolated spoken-digit recognition with partial inputs and chaotic system forecasting. A high classification accuracy of 99.2% is obtained for spoken-digit recognition, and autonomous long-term forecasting of a chaotic time series is demonstrated. We then investigate the influence of a hierarchical reservoir structure on the properties of the reservoir and the performance of the RC system. Analogous to deep neural networks, stacking sub-reservoirs in series is an efficient way to enhance the nonlinearity of the data transformation into high-dimensional space and to expand the diversity of temporal information captured by the reservoir. These deep reservoir systems offer better performance than simply increasing the size of the reservoir or the number of sub-reservoirs. Low-frequency components are mainly captured by the sub-reservoirs in the later stages of the deep reservoir structure, similar to the observation that more abstract information is extracted by the later layers of deep neural networks. When the total size of the reservoir is fixed, the trade-off between the number of sub-reservoirs and the size of each sub-reservoir needs to be considered carefully, because the ability of individual sub-reservoirs degrades at small sizes. The improved performance of the deep reservoir structure eases the implementation of RC systems in hardware. Beyond temporal data classification and prediction, an interesting application of temporal data analysis is inferring neural connectivity patterns from high-dimensional neural activity recordings. By computing the temporal correlation between neural spikes, connections between neurons can be inferred using statistics-based techniques, but this becomes increasingly expensive computationally for large-scale neural systems. We propose a second-order memristor-based hardware system that uses the natively implemented spike-timing-dependent plasticity learning rule for neural connectivity inference.
    By incorporating biological features such as transmission delay into the neural networks, the proposed concept not only correctly infers the direct connections but also distinguishes direct connections from indirect ones. Effects of additional biophysical properties not considered in the simulation and the challenges of an experimental memristor implementation are also discussed. PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/167995/1/moonjohn_1.pd
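
    As a hedged sketch of spike-timing-dependent-plasticity-based connectivity inference (in software, not on the second-order memristor hardware the dissertation proposes), the code below potentiates a weight when a presynaptic spike shortly precedes a postsynaptic one and depresses it for the reverse order; the spike raster, time constants, and learning rates are illustrative assumptions.

    import numpy as np

    def stdp_infer_connectivity(spikes, a_plus=0.01, a_minus=0.012, tau=5.0):
        """Infer pairwise connectivity from spike rasters with a simple STDP rule.
        spikes: boolean array of shape (time, neurons). A weight w[i, j] (i -> j) is
        potentiated when i fires shortly before j and depressed for the reverse order.
        Parameter values are illustrative assumptions."""
        T, N = spikes.shape
        last_spike = np.full(N, -np.inf)
        w = np.zeros((N, N))
        for t in range(T):
            fired = np.flatnonzero(spikes[t])
            for j in fired:
                dt = t - last_spike           # time since each neuron's last spike
                trace = np.exp(-dt / tau)
                w[:, j] += a_plus * trace     # pre-before-post: potentiate i -> j
                w[j, :] -= a_minus * trace    # post-before-pre: depress j -> i
            last_spike[fired] = t
        np.fill_diagonal(w, 0.0)
        return w

    # Toy network: neuron 0 drives neuron 1 with a 2-step transmission delay.
    rng = np.random.default_rng(0)
    s = rng.random((2000, 3)) < 0.05
    s[2:, 1] |= s[:-2, 0]
    w = stdp_infer_connectivity(s)
    print("inferred 0->1 weight:", round(w[0, 1], 3), "vs 1->0:", round(w[1, 0], 3))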