
    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access
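
    As a concrete illustration of the structure this abstract describes (a fixed random recurrent network plus a trained linear readout), here is a minimal echo state network sketch in Python/NumPy; the reservoir size, weight scales, leak rate, and ridge parameter are illustrative assumptions, not values from the survey.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes and scales (assumptions, not values taken from the survey).
    n_in, n_res, leak, ridge = 1, 300, 0.3, 1e-6

    # Fixed random input and recurrent weights; only the linear readout is trained.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius ~ 0.9

    def run_reservoir(u):
        """Map a 1-D input sequence into the high-dimensional reservoir state space."""
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy task: one-step-ahead prediction of a sine wave.
    u = np.sin(0.2 * np.arange(1000))
    X, y = run_reservoir(u[:-1]), u[1:]

    # Ridge-regression readout -- the only trained part of the model.
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    print("train MSE:", np.mean((X @ W_out - y) ** 2))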

    Reservoir Computing: computation with dynamical systems

    The research field of Machine Learning studies systems that can learn from examples. Within this field, recurrent neural networks form an important subgroup. These networks are abstract models of the operation of parts of the brain. They are able to solve very complex temporal problems but are generally very difficult to train. Recently, a number of related methods have been proposed that eliminate this training problem. These methods are referred to collectively as Reservoir Computing. Reservoir Computing combines the impressive computational power of recurrent neural networks with a simple training method. Moreover, these training methods turn out not to be limited to neural networks, but can be applied to generic dynamical systems. Why these systems work well and which properties determine their performance is, however, not yet clear. This dissertation investigates the dynamical properties of generic Reservoir Computing systems. It is shown experimentally that the idea of Reservoir Computing is also applicable to non-neural networks of dynamical nodes. Furthermore, a measure is proposed that can be used to quantify the dynamical regime of a reservoir. Finally, an adaptation rule is introduced that, for a broad range of reservoir types, can tune the dynamics of the reservoir towards the desired dynamical regime. The techniques described in this dissertation are demonstrated on several academic and engineering applications.
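
    The measure of the dynamical regime and the adaptation rule proposed in the dissertation are not reproduced here; the sketch below only illustrates the underlying idea with the common spectral-radius heuristic, rescaling a random reservoir towards a chosen target regime (the target value and reservoir size are assumptions).

    import numpy as np

    rng = np.random.default_rng(1)

    def tune_spectral_radius(W, target=0.95):
        """Rescale a reservoir weight matrix so its spectral radius equals `target`.

        This is only a crude, linearized proxy for the dynamical regime; the
        dissertation proposes a more general measure and an online adaptation rule.
        """
        rho = np.max(np.abs(np.linalg.eigvals(W)))
        return W * (target / rho), rho

    W = rng.normal(0.0, 1.0, (200, 200))
    W_tuned, rho_before = tune_spectral_radius(W)
    rho_after = np.max(np.abs(np.linalg.eigvals(W_tuned)))
    print(f"spectral radius: {rho_before:.2f} -> {rho_after:.2f}")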

    Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time Series Forecasting

    Echo state networks are computationally lightweight reservoir models inspired by the random projections observed in cortical circuitry. As interest in reservoir computing has grown, networks have become deeper and more intricate. While these networks are increasingly applied to nontrivial forecasting tasks, there is a need for comprehensive performance analysis of deep reservoirs. In this work, we study how a fixed budget of neurons is best partitioned across reservoirs, and the effect of parallel reservoir pathways, on datasets exhibiting multi-scale and nonlinear dynamics. Comment: 10 pages, 10 figures, Proceedings of the Neuro-inspired Computational Elements Workshop (NICE '19), March 26-28, 2019, Albany, NY, USA
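
    To make the neuron-budget question concrete, the following sketch (not the configurations studied in the paper) splits a fixed budget of reservoir neurons across several parallel sub-reservoirs driven by the same input and trains one shared linear readout on their concatenated states; the sizes, scales, and toy signal are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    budget, n_parallel = 600, 3          # total neuron budget, number of parallel reservoirs
    size = budget // n_parallel          # neurons per sub-reservoir

    def make_reservoir(n, rho=0.9):
        """Random reservoir with input weights and rescaled recurrent weights."""
        W_in = rng.uniform(-0.5, 0.5, (n, 1))
        W = rng.uniform(-0.5, 0.5, (n, n))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
        return W_in, W

    def run(W_in, W, u):
        x, states = np.zeros(W.shape[0]), []
        for u_t in u:
            x = np.tanh(W_in[:, 0] * u_t + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy multiscale signal: product of a fast and a slow oscillation.
    u = np.sin(0.1 * np.arange(800)) * np.sin(0.02 * np.arange(800))
    reservoirs = [make_reservoir(size) for _ in range(n_parallel)]

    # "Wide" layout: parallel pathways driven by the same input, states concatenated.
    X = np.hstack([run(W_in, W, u[:-1]) for W_in, W in reservoirs])
    y = u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
    print("train MSE:", np.mean((X @ W_out - y) ** 2))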

    Nonlinear Black-Box Models of Digital Integrated Circuits via System Identification

    This Thesis concerns the development of numerical macromodels of digital Integrated Circuit input/output buffers. Such models are of paramount importance for the system-level simulations required to assess Signal Integrity and Electromagnetic Compatibility effects in high-performance electronic equipment. In order to obtain accurate and efficient macromodels, we concentrate on the black-box modeling approach, exploiting system identification methods. The present study contributes to the systematic discussion of the IC modeling process, in order to obtain macromodels that build on the strengths and overcome the limitations of the methodologies presented so far. The performance of different parametric representations, such as Sigmoidal Basis Function (SBF) expansions, Echo State Networks (ESN), and Local Linear State-Space (LLSS) models, is investigated. All representations have proven capabilities for the modeling of unknown nonlinear dynamic systems and are good candidates to be used for the modeling problem at hand. For each model representation, the most suitable estimation algorithm is considered and a systematic analysis is performed to highlight advantages and limitations. For this analysis, the modeling process is applied to a synthetic nonlinear device representative of IC ports and designed to generate stiff responses. The tests carried out show that LLSS models provide the best overall performance for the modeling of digital devices, even with strong nonlinear dynamics. LLSS models can be estimated by means of an efficient algorithm providing a unique solution. Local stability of the models is enforced as a precondition and verified a posteriori. The effectiveness of the modeling process based on LLSS representations is verified by applying the proposed technique to the modeling of real devices involved in a realistic data communication link (an RF-to-Digital interface used in mobile phones). The obtained macromodels have been successfully used to predict both the functional signals and the power supply and ground fluctuations. Moreover, they turn out to be very efficient, providing a significant simulation speed-up for the complete data link
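
    As a hedged sketch of the black-box identification workflow, the snippet below fits one of the representations mentioned above (an Echo State Network) to input/output data from a hypothetical synthetic device, a saturating first-order filter standing in for an IC port; it is not the stiff benchmark device or the estimation algorithms analysed in the Thesis.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical synthetic device: a saturating first-order dynamic nonlinearity
    # standing in for an IC output buffer driven by a voltage waveform v(t).
    def device(v):
        i, out = 0.0, []
        for v_t in v:
            i = 0.9 * i + 0.1 * np.tanh(2.0 * v_t)
            out.append(i)
        return np.array(out)

    v = rng.uniform(-1, 1, 400).repeat(5)      # piecewise-constant excitation
    i_meas = device(v)                         # "measured" port current

    # ESN black-box model: fixed random reservoir, least-squares readout.
    n = 200
    W_in = rng.uniform(-0.5, 0.5, (n, 1))
    W = rng.uniform(-0.5, 0.5, (n, n))
    W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))

    x, X = np.zeros(n), []
    for v_t in v:
        x = np.tanh(W_in[:, 0] * v_t + W @ x)
        X.append(x.copy())
    X = np.array(X)

    W_out = np.linalg.lstsq(X, i_meas, rcond=None)[0]
    print("identification MSE:", np.mean((X @ W_out - i_meas) ** 2))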

    The dynamics of neural codes in biological and artificial neural networks

    Advancing our knowledge of how the brain processes information remains a key challenge in neuroscience. This thesis combines three different approaches to the study of the dynamics of neural networks and their encoding representations: a computational approach, which builds upon basic biological features of neurons and their networks to construct effective models that can simulate their structure and dynamics; a machine-learning approach, which draws a parallel with the functional capabilities of brain networks, allowing us to infer the dynamical and encoding properties required to solve certain input-processing tasks; and a final, theoretical treatment, which takes us into the fascinating hypothesis of the "critical" brain as the mathematical foundation that can explain the emergent collective properties arising from the interactions of millions of neurons. Hand in hand with physics, we venture into the realm of neuroscience to explain the existence of quasi-universal scaling properties across brain regions, setting out to quantify the distance of their dynamics from a critical point. Next, we move into the grounds of artificial intelligence, where the very same theory of critical phenomena proves very useful for explaining the effects of biologically-inspired plasticity rules on the forecasting ability of Reservoir Computers. Halfway into our journey, we explore the concept of neural representations of external stimuli, unveiling a surprising link between the dynamical regime of neural networks and the optimal topological properties of such representation manifolds. The thesis ends with the singular problem of representational drift in the process of odor encoding carried out by the olfactory cortex, uncovering the potential synaptic plasticity mechanisms that could explain this recently observed phenomenon. Comment: A dissertation submitted to the University of Granada in partial fulfillment of the requirements for the degree of Doctor of Philosophy
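
    The distance from criticality mentioned above can be made concrete with a toy example: in a branching-process picture of population activity, the branching ratio m equals 1 at the critical point, and a naive estimate of m is the regression slope of activity at time t+1 on activity at time t. The sketch below applies this to synthetic data; it is an illustration only, not the subsampling-robust estimators a careful analysis of real recordings would require, and the parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic population activity from a driven branching process:
    # A_{t+1} ~ Poisson(m * A_t + h), with m < 1 (subcritical) and external drive h.
    m_true, h, T = 0.97, 5.0, 20000
    A = np.empty(T)
    A[0] = h / (1 - m_true)                    # start near the stationary mean
    for t in range(T - 1):
        A[t + 1] = rng.poisson(m_true * A[t] + h)

    # Naive branching-ratio estimate: OLS slope of A_{t+1} versus A_t.
    m_hat = np.polyfit(A[:-1], A[1:], 1)[0]
    print(f"estimated m = {m_hat:.3f}, distance from criticality |1 - m| = {abs(1 - m_hat):.3f}")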

    Digital Twins and Artificial Intelligence for Applications in Electric Power Distribution Systems

    Modern electric power distribution systems (MEPDS) continue to grow in complexity, largely due to the ever-increasing penetration of Distributed Energy Resources (DERs), particularly solar photovoltaics (PVs), at the distribution level. This complexity creates a need to support advanced operational and management tasks, especially in systems whose high renewable penetration depends on complex weather phenomena. Digital twins (DTs), virtual replicas of the system and its assets, enhanced with AI paradigms, can add enormous value to tasks performed by regulators, distribution system operators, and energy market analysts, thereby providing cognition to the system. DTs of MEPDS assets and of the system as a whole can be utilized for real-time and faster-than-real-time operational and management task support, planning studies, scenario analysis, data analytics, and other distribution system studies. This study leverages DTs and AI to enhance DER integration, operational and management (O&M) tasks, and distribution system studies in an MEPDS with high PV penetration. DTs have been used to both estimate and predict the behavior of an existing 1 MW plant at Clemson University by developing asset digital twins of the physical system. Solar irradiance, temperature, and wind-speed variations in the area have been modeled using physical weather stations located in and around the Clemson region to develop ten virtual weather stations. Finally, DTs of the system, along with virtual and physical weather stations, are used to both estimate and predict, in short time intervals, the real-time behavior of potential PV plant installations over the region. Ten virtual PV plants and three hybrid PV plants are studied for enhanced cognition in the system. These physical, hybrid, and virtual PV sources enable situational awareness and situational intelligence of real-time PV production in a distribution system
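
    As a hedged illustration of how a virtual PV plant twin can map weather-station measurements to an expected power output, the sketch below uses a standard irradiance/temperature derating model with the NOCT cell-temperature approximation; the plant rating and coefficients are illustrative assumptions, not parameters of the Clemson system or its digital twins.

    # Minimal virtual-PV-plant sketch: standard irradiance/temperature derating model.
    # All plant parameters below are illustrative assumptions.
    P_RATED_KW = 1000.0    # nameplate rating at STC (e.g. a 1 MW plant)
    G_STC = 1000.0         # STC irradiance, W/m^2
    GAMMA = -0.004         # power temperature coefficient, 1/degC
    NOCT = 45.0            # nominal operating cell temperature, degC

    def pv_power_kw(irradiance_wm2: float, ambient_c: float) -> float:
        """Estimate plant power output from weather-station measurements.

        Cell temperature uses the common NOCT approximation; wind cooling and
        inverter losses are ignored here (a fuller twin would include both).
        """
        t_cell = ambient_c + irradiance_wm2 * (NOCT - 20.0) / 800.0
        return max(0.0, P_RATED_KW * (irradiance_wm2 / G_STC) * (1.0 + GAMMA * (t_cell - 25.0)))

    if __name__ == "__main__":
        # Example: a clear-afternoon reading from one (hypothetical) virtual weather station.
        print(f"{pv_power_kw(850.0, 30.0):.1f} kW")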

    Direct coupling of nonlinear integrated cavities for all-optical reservoir computing

    We consider theoretically a network of directly coupled optical microcavities to implement a space-multiplexed optical neural network in an integrated nanophotonic circuit. Integrating the nonlinear photonic network through direct coupling ensures a highly dense layout, reducing the chip footprint by several orders of magnitude compared to other implementations. Different nonlinear effects inherent to such microcavities are studied when used to realize an all-optical autonomous computing substrate, here based on the reservoir computing concept. We provide an in-depth analysis of the impact of basic microcavity parameters on the computational metrics of the system, namely its dimensionality and consistency. Importantly, we find that the differences between the frequencies and bandwidths of the supermodes formed by direct coupling are the determining factor for the reservoir's dimensionality and its scalability. The network's dimensionality can be improved with frequency-shifting nonlinear effects such as the Kerr effect, while two-photon absorption has the opposite effect. Finally, we demonstrate in simulation that the proposed reservoir is capable of solving the Mackey-Glass prediction and optical signal recovery tasks at GHz timescales
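
    For reference, the Mackey-Glass benchmark mentioned at the end of the abstract is generated from a scalar delay differential equation; the sketch below integrates it with a simple Euler scheme using the customary parameters (tau = 17 for the mildly chaotic regime). It only produces the benchmark series; the photonic reservoir itself is not modeled here, and the step size and sampling are assumptions.

    import numpy as np

    def mackey_glass(n_samples: int, tau: float = 17.0, beta: float = 0.2,
                     gamma: float = 0.1, n: int = 10, dt: float = 0.1) -> np.ndarray:
        """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t),
        sampled once per unit time (every 1/dt Euler steps)."""
        steps_per_sample = int(round(1.0 / dt))
        total = n_samples * steps_per_sample
        delay = int(round(tau / dt))
        x = np.full(total + delay, 1.2)        # constant history as initial condition
        for t in range(delay, total + delay - 1):
            x_tau = x[t - delay]
            x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
        return x[delay::steps_per_sample][:n_samples]

    series = mackey_glass(2000)
    print(series[:5])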

    An overview of artificial intelligence applications for power electronics
