1,504 research outputs found

    Chain-structure time-delay reservoir computing for synchronizing chaotic signal and an application to secure communication

    In this work, chain-structure time-delay reservoir (CSTDR) computing, a new kind of machine-learning-based recurrent neural network, is proposed for synchronizing chaotic signals. Compared with a single time-delay reservoir, the proposed CSTDR computing shows excellent performance in synchronizing chaotic signals, achieving an order of magnitude higher accuracy. Noise considerations and optimal parameter settings of the model are discussed. Taking CSTDR computing as the core, a novel secure-communication scheme is further designed, in which the “smart” receiver differs from a traditional one in that it can synchronize to the chaotic signal used for encryption in an adaptive manner. The scheme resolves issues of conventional settings, such as the design constraint of identical dynamical systems and the coupling between transmitter and receiver. To further demonstrate the practical significance of the scheme, a digital implementation on a field-programmable gate array is carried out and tested experimentally with real-world examples, including image and video transmission. The work sheds light on developing machine-learning-based signal processing and communication applications.
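
    As a rough illustration of the receiver-side idea (a reservoir whose trained readout tracks a chaotic carrier), the sketch below drives a single random reservoir with one component of a Lorenz signal and fits a ridge-regression readout to reproduce the next value of the drive. It is a minimal stand-in, not the paper's chain-structure reservoir, noise analysis, or FPGA implementation, and every size and constant is an illustrative choice.

    # Minimal sketch: a single random reservoir whose trained linear readout tracks
    # ("synchronizes to") a chaotic drive signal. Not the paper's CSTDR, noise
    # analysis, or FPGA design; every size and constant is an illustrative choice.
    import numpy as np

    rng = np.random.default_rng(0)

    def lorenz_x(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """x-component of a Lorenz trajectory, integrated with a crude Euler step."""
        state = np.array([1.0, 1.0, 1.0])
        xs = np.empty(n_steps)
        for t in range(n_steps):
            x, y, z = state
            state = state + dt * np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])
            xs[t] = state[0]
        return xs

    drive = lorenz_x(5000)                     # the transmitter's chaotic carrier
    drive = (drive - drive.mean()) / drive.std()

    # fixed random reservoir on the receiver side, driven by the received carrier
    N = 300
    W_in = rng.uniform(-0.5, 0.5, N)
    W = rng.normal(0.0, 1.0, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

    states = np.zeros((len(drive), N))
    x = np.zeros(N)
    for t in range(len(drive) - 1):
        x = np.tanh(W @ x + W_in * drive[t])
        states[t + 1] = x

    # ridge-regression readout: reproduce the next value of the drive signal
    washout = 200
    X, y = states[washout:-1], drive[washout:-1]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    nrmse = np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y)
    print("synchronization NRMSE:", round(float(nrmse), 4))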

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices, which yields greater flexibility and shorter computation times. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, the unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
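
    The basic recipe summarized above (a fixed, randomly connected recurrent network feeding a trained linear readout) can be written in a few lines. The sketch below probes the memory capacity mentioned in the abstract by training one readout per delay k to recall the input from k steps back and summing the squared correlations; the network size, spectral radius, and number of delays are arbitrary illustrative choices.

    # Minimal sketch of the RC recipe summarized above: a fixed, randomly connected
    # recurrent network plus a trained linear readout. The readouts here recall the
    # input from k steps in the past, a simple probe of memory capacity. All sizes,
    # scalings, and the number of delays are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 4000, 200
    u = rng.uniform(-1.0, 1.0, T)                 # scalar random input stream

    W_in = rng.uniform(-0.1, 0.1, N)              # fixed input weights
    W = rng.normal(0.0, 1.0, (N, N))              # fixed recurrent weights, never trained
    W *= 0.95 / max(abs(np.linalg.eigvals(W)))    # spectral radius below 1

    x = np.zeros(N)
    X = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])          # high-dimensional nonlinear mapping
        X[t] = x

    washout, capacity = 200, 0.0
    for k in range(1, 31):                        # one linear readout per delay k
        Xk, yk = X[washout:], u[washout - k:T - k]
        w = np.linalg.solve(Xk.T @ Xk + 1e-8 * np.eye(N), Xk.T @ yk)
        capacity += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
    print("memory capacity over the first 30 delays:", round(capacity, 2))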

    OPTIMIZATION OF AN ARTIFICIAL NEURAL NETWORK WITH A GENETIC ALGORITHM FOR PREDICTING THE INFLOW DISCHARGE OF THE SENGGURUH RESERVOIR

    A reservoir is a resource that serves many functions, including as an energy source for hydroelectric power plants (PLTA), irrigation of agricultural land, supply for drinking-water companies, and flood prevention. Given this vital role, a reservoir-management strategy is needed to obtain optimal operating results, one element of which is forecasting the inflow discharge. In this study, an artificial neural network is used as the inflow-forecasting model, trained with a genetic algorithm. Training the neural network with a genetic algorithm is carried out by encoding the network's weights and biases into chromosomes, with the fitness value obtained from the error of the feedforward pass. The study shows that genetic parameters, including the crossover probability and the number of generations, influence the fitness value. In testing, the smallest fitness value obtained was 0.15.
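
    The training strategy described in the abstract, in which the network's weights and biases are flattened into a chromosome and the fitness is taken from the feedforward error, can be sketched as follows. The data, layer sizes, and genetic-algorithm settings are invented placeholders and do not reproduce the study's reported fitness of 0.15.

    # Minimal sketch of the training strategy in the abstract: weights and biases of
    # a small feedforward network are flattened into one chromosome and evolved with
    # a genetic algorithm whose fitness is the feedforward error. The data, layer
    # sizes, and GA settings are invented placeholders, not the study's values.
    import numpy as np

    rng = np.random.default_rng(42)

    # toy "inflow"-like series: predict the next value from the previous 4 values
    series = np.sin(np.linspace(0.0, 20.0, 400)) + 0.1 * rng.normal(size=400)
    X = np.array([series[t - 4:t] for t in range(4, len(series))])
    y = series[4:]

    n_in, n_hid = 4, 6
    n_genes = n_in * n_hid + n_hid + n_hid + 1           # W1, b1, W2, b2

    def feedforward_error(chrom):
        W1 = chrom[:n_in * n_hid].reshape(n_in, n_hid)
        b1 = chrom[n_in * n_hid:n_in * n_hid + n_hid]
        W2 = chrom[n_in * n_hid + n_hid:-1]
        b2 = chrom[-1]
        pred = np.tanh(X @ W1 + b1) @ W2 + b2
        return np.mean((pred - y) ** 2)                  # fitness: lower error is better

    pop = rng.uniform(-1.0, 1.0, (60, n_genes))
    for generation in range(200):                        # number of generations (a GA parameter)
        errors = np.array([feedforward_error(c) for c in pop])
        parents = pop[np.argsort(errors)[:20]]           # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            cut = rng.integers(1, n_genes)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0.0, 0.05, n_genes) * (rng.random(n_genes) < 0.1)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])

    best = min(pop, key=feedforward_error)
    print("best feedforward MSE:", round(feedforward_error(best), 4))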

    An Echo state model of non-Markovian reinforcement learning

    Department Head: Dale H. Grit. 2008 Spring. Includes bibliographical references (pages 137-142).
    There exists a growing need for intelligent, autonomous control strategies that operate in real-world domains. Theoretically, the state-action space must exhibit the Markov property for reinforcement learning to be applicable. Empirical evidence, however, suggests that reinforcement learning also applies to domains where the state-action space is only approximately Markovian, which is the case for the overwhelming majority of real-world domains. These domains, termed non-Markovian reinforcement learning domains, raise a unique set of practical challenges. The reconstruction dimension required to approximate a Markovian state-space is unknown a priori and can potentially be large. Further, the spatial complexity of local function approximation of the reinforcement learning domain grows exponentially with the reconstruction dimension. Parameterized dynamic systems alleviate both embedding-length and state-space-dimensionality concerns by reconstructing an approximate Markovian state-space via a compact, recurrent representation. Yet this representation exacts a cost: modeling reinforcement learning domains via adaptive, parameterized dynamic systems is characterized by instability, slow convergence, and high computational or spatial training complexity. The objectives of this research are to demonstrate a stable, convergent, accurate, and scalable model of non-Markovian reinforcement learning domains. These objectives are fulfilled via fixed point analysis of the dynamics underlying the reinforcement learning domain and the Echo State Network, a class of parameterized dynamic system. Understanding models of non-Markovian reinforcement learning domains requires understanding the interactions between learning domains and their models. Fixed point analysis of the Mountain Car Problem reinforcement learning domain, for both local and nonlocal function approximations, suggests a close relationship between the locality of the approximation and the number and severity of bifurcations of the fixed point structure. This research suggests the likely cause of this relationship: reinforcement learning domains exist within a dynamic feature space in which trajectories are analogous to states. The fixed point structure maps dynamic space onto state-space. This explanation suggests two testable hypotheses: first, that reinforcement learning is sensitive to state-space locality because states cluster as trajectories in time rather than space; second, that models using trajectory-based features should exhibit good modeling performance and few changes in fixed point structure. Analysis of the performance of a lookup table, a feedforward neural network, and an Echo State Network (ESN) on the Mountain Car Problem reinforcement learning domain confirms these hypotheses. The ESN is a large, sparse, randomly generated, unadapted recurrent neural network, which adapts a linear projection of the target domain onto the hidden layer. ESN modeling results on reinforcement learning domains show that it achieves performance comparable to lookup table and neural network architectures on the Mountain Car Problem with minimal changes to fixed point structure. The ESN also achieves lookup-table-caliber performance when modeling Acrobot, a four-dimensional control problem, but is less successful at modeling the lower-dimensional Modified Mountain Car Problem.
    These performance discrepancies are attributed to the ESN's excellent ability to represent complex short-term dynamics and its inability to consolidate long temporal dependencies into a static memory. Without memory consolidation, reinforcement learning domains exhibiting attractors with multiple dynamic scales are unlikely to be well modeled via ESN. To mitigate this problem, a simple ESN memory consolidation method is presented and tested on stationary dynamic systems. These results indicate the potential to improve modeling performance in reinforcement learning domains via memory consolidation.
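
    A minimal sketch of the central idea, with a toy partially observed system standing in for Mountain Car: only the position of a damped oscillator is fed to a fixed sparse reservoir, and a linear readout is trained to recover the unobserved velocity from the reservoir state, so the recurrent state acts as an approximately Markovian feature. The system, sizes, and constants are illustrative choices, not the thesis's experiments.

    # Minimal sketch of the thesis's central idea, with a toy system standing in for
    # Mountain Car: the ESN receives only a partial observation (the position of a
    # damped oscillator), and its recurrent state serves as an approximately
    # Markovian feature from which a linear readout recovers the hidden velocity.
    import numpy as np

    rng = np.random.default_rng(3)

    # simulate a damped oscillator; expose position only, keep velocity hidden
    dt, T = 0.05, 3000
    pos, vel = 1.0, 0.0
    obs, hidden = np.empty(T), np.empty(T)
    for t in range(T):
        vel += dt * (-pos - 0.05 * vel)
        pos += dt * vel
        obs[t], hidden[t] = pos, vel

    # large, sparse, randomly generated, unadapted reservoir (the ESN's recurrent core)
    N = 200
    W_in = rng.uniform(-0.5, 0.5, N)
    W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.1)  # ~10% connectivity
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))

    x = np.zeros(N)
    X = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in * obs[t])
        X[t] = x

    # the only trained part: a linear readout from reservoir state to hidden velocity
    washout = 200
    w = np.linalg.lstsq(X[washout:], hidden[washout:], rcond=None)[0]
    rmse = np.sqrt(np.mean((X[washout:] @ w - hidden[washout:]) ** 2))
    print("hidden-velocity reconstruction RMSE:", round(float(rmse), 4))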

    MACHINE LEARNING AUGMENTATION MICRO-SENSORS FOR SMART DEVICE APPLICATIONS

    Novel smart technologies such as wearable devices and unconventional robotics have been enabled by advancements in semiconductor technologies, which have miniaturized transistors and sensors. These technologies promise great improvements to public health. However, current computational paradigms are ill-suited for use in novel smart technologies, as they fail to meet these technologies' strict power and size requirements. In this dissertation, we present two bio-inspired colocalized sensing-and-computing schemes performed at the sensor level: continuous-time recurrent neural networks (CTRNNs) and reservoir computers (RCs). These schemes arise from the nonlinear dynamics of micro-electro-mechanical systems (MEMS), which facilitate computing, and from the inherent suitability of MEMS devices for sensing. Furthermore, this dissertation addresses the high-voltage requirements of electrostatically actuated MEMS devices using a passive amplification scheme. The CTRNN architecture is emulated using a network of bistable MEMS devices. This bistable behavior is shown in the pull-in, snap-through, and feedback regimes when the devices are excited around the electrical resonance frequency. In these regimes, MEMS devices exhibit key behaviors found in biological neuronal populations. When coupled, networks of MEMS devices are shown to be successful at classification and control tasks. Moreover, MEMS accelerometers are shown to be successful at acceleration-waveform classification without the need for external processors. MEMS devices are additionally shown to perform computing using the RC architecture. Here, a delay-based RC scheme is studied, which uses one MEMS device to emulate the behavior of a large neural network through input modulation. We introduce a modulation scheme that enables colocalized sensing-and-computing by modulating the bias signal. The MEMS RC is shown to successfully perform pure computation and colocalized sensing-and-computing for both classification and regression tasks, even in noisy environments. Finally, we address the high-voltage requirements of electrostatically actuated MEMS devices by proposing a passive amplification scheme that exploits the mechanical and electrical resonances of the MEMS device simultaneously. Using this scheme, an order of magnitude of amplification is reported. Moreover, when only electrical resonance is used, we show that the MEMS device exhibits a computationally useful bistable response. Adviser: Dr. Fadi Alsalee
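
    The delay-based RC scheme mentioned above, in which one device emulates a large network through input modulation, can be sketched in software as follows. A single tanh node with delayed feedback is sampled at N points per input sample ("virtual nodes") through a fixed random mask; the node model, the toy prediction task, and all constants are generic placeholders rather than a model of the MEMS device or its bias-modulation scheme.

    # Minimal software sketch of a delay-based reservoir: a single nonlinear node with
    # delayed feedback stands in for a whole network, and a fixed random mask spreads
    # each input sample over N "virtual nodes" along the delay line. The tanh node,
    # the toy prediction task, and all constants are generic placeholders, not a
    # model of the MEMS device or its bias-modulation scheme.
    import numpy as np

    rng = np.random.default_rng(7)

    # toy task: one-step-ahead prediction of a noisy sine (placeholder for a sensed signal)
    T = 2000
    u = np.sin(0.2 * np.arange(T)) + 0.05 * rng.normal(size=T)

    N = 50                                     # virtual nodes per input sample
    mask = rng.uniform(-0.5, 0.5, N)           # fixed random input mask
    eta, gamma = 0.8, 0.5                      # feedback and input scaling

    delay_line = np.zeros(N)                   # virtual-node states from the previous pass
    states = np.zeros((T, N))
    for t in range(T):
        new_line = np.empty(N)
        for i in range(N):
            # each virtual node sees the masked input plus the node just before it
            # on the delay line (the previous pass's last node feeds the first),
            # which is how one physical node can emulate a recurrent network
            prev = new_line[i - 1] if i > 0 else delay_line[-1]
            new_line[i] = np.tanh(eta * prev + gamma * mask[i] * u[t])
        delay_line = new_line
        states[t] = delay_line

    # linear readout trained by ridge regression, as in other reservoir computers
    washout = 100
    X, y = states[washout:-1], u[washout + 1:]
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    print("prediction NRMSE:", round(float(np.sqrt(np.mean((X @ w - y) ** 2)) / np.std(y)), 4))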

    Echo State Networks: implementation and applications

    Echo State Networks are a model that has been used for supervised learning since the 2000s. This paper presents a theoretical analysis of the equations and behavior of Echo State Networks, a series of replicated experiments, the implementation of an R package for working with Echo State Networks, and some applications in the field of finance.

    Machine learning techniques to forecast non-linear trends in smart environments

    The abstract is provided in the attachment.

    Expanding the theoretical framework of reservoir computing
