
    Using Conceptors to Transfer Between Long-Term and Short-Term Memory

    We introduce a model of working memory combining short-term and long-term components. For the long-term component, we used Conceptors in order to store constant temporal patterns. For the short-term component, we used the Gated-Reservoir model: a reservoir trained to hold information triggered from an input stream and maintain it in a readout unit. We combined both components in order to obtain a model in which information can pass from long-term memory to short-term memory and vice versa.
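    For reference, the standard conceptor construction (due to Jaeger) computes C = R(R + α⁻²I)⁻¹ from the correlation matrix R of the reservoir states collected while the pattern drives the network, with α the aperture. Below is a minimal numpy sketch of that construction; the reservoir setup and all variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 100      # reservoir size (illustrative)
aperture = 10.0    # conceptor aperture alpha (illustrative)

# Random reservoir and input weights, scaled so the largest singular
# value of W is below 1 (keeps the driven dynamics contracting).
W = rng.normal(size=(n_units, n_units))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]
w_in = rng.normal(size=n_units)

def drive(signal):
    """Collect reservoir states x(t+1) = tanh(W x(t) + w_in u(t))."""
    x = np.zeros(n_units)
    states = []
    for u in signal:
        x = np.tanh(W @ x + w_in * u)
        states.append(x)
    return np.array(states)  # shape (T, n_units)

# Drive the reservoir with a constant temporal (periodic) pattern, then
# store its state distribution as a conceptor: C = R (R + alpha^-2 I)^-1.
pattern = np.sin(2 * np.pi * np.arange(300) / 8.0)
X = drive(pattern)
R = X.T @ X / len(X)
C = R @ np.linalg.inv(R + aperture ** -2 * np.eye(n_units))
```

    Once learned, C acts as a soft projector onto the region of state space the pattern occupies; applying it inside the reservoir's update loop constrains the dynamics to that region, which is what allows a stored pattern to be retrieved later.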

    Stability and Memory-loss go Hand-in-Hand: Three Results in Dynamics & Computation

    The search for universal laws that help establish a relationship between dynamics and computation is driven by recent expansionist initiatives in biologically inspired computing. A general setting in which to understand both such dynamics and computation is a driven dynamical system that responds to a temporal input. Surprisingly, we find that memory-loss, a feature of driven systems that makes them forget their internal states, helps provide unambiguous answers to the following fundamental stability questions that have been unanswered for decades: What is necessary and sufficient so that slightly different inputs still lead to mostly similar responses? How does changing the driven system's parameters affect stability? What is the mathematical definition of the edge-of-criticality? We anticipate our results to be timely for understanding and designing biologically inspired computers, which are entering an era of dedicated hardware implementations for neuromorphic computing and state-of-the-art reservoir computing applications.
    Comment: To appear in the Proceedings of the Royal Society of London, Series
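    The memory-loss property at the center of this abstract can be illustrated numerically: a contracting driven system forgets its initial condition, so two copies of the system driven by the same input converge to the same response. Here is a small numpy sketch under that assumption (tanh units are 1-Lipschitz, so scaling the largest singular value of W below 1 makes the state map a contraction); all names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Contracting reservoir: tanh is 1-Lipschitz, so a largest singular
# value of W below 1 guarantees contraction of the driven state map.
W = rng.normal(size=(n, n))
W *= 0.8 / np.linalg.svd(W, compute_uv=False)[0]
w_in = rng.normal(size=n)

def run(x0, inputs):
    x, traj = x0, []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        traj.append(x)
    return np.array(traj)

# Two very different initial states, the same input sequence.
inputs = rng.normal(size=200)
xa = run(rng.normal(size=n), inputs)
xb = run(rng.normal(size=n), inputs)

# The gap between the trajectories decays toward zero: the driven
# system "forgets" its internal state and keeps only the input's echo.
gap = np.linalg.norm(xa - xb, axis=1)
print(gap[0], gap[-1])  # gap[-1] is near machine precision
```

    Slightly different initial states (or, by a similar argument, slightly different inputs) wash out over time, which is the intuition behind the stability results the abstract announces.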

    Change Point Detection with Conceptors

    Offline change point detection retrospectively locates change points in a time series. Many nonparametric methods that target i.i.d. mean and variance changes fail in the presence of nonlinear temporal dependence, and model-based methods require a known, rigid structure. For the at-most-one-change-point problem, we propose using a conceptor matrix to learn the characteristic dynamics of a baseline training window with arbitrary dependence structure. The associated echo state network acts as a featurizer of the data, and change points are identified from the nature of the interactions between the features and their relationship to the baseline state. This model-agnostic method can suggest potential locations of interest that warrant further study. We prove that, under mild assumptions, the method provides a consistent estimate of the true change point, and quantile estimates are produced via a moving block bootstrap of the original data. The method is evaluated with clustering metrics and Type I error control on simulated data, and applied to publicly available neural data from rats experiencing bouts of non-REM sleep prior to exploration of a radial maze. With sufficient spacing, the framework provides a simple extension to the sparse, multiple change point problem.
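    As a rough illustration of the idea (not the paper's exact statistic), one can learn a conceptor on a baseline window of reservoir states and then flag the first time later states fall outside its soft subspace, scoring each state x by ||(I − C)x||. A hedged numpy sketch with simulated data follows; the threshold rule and all names are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80

# Contracting random reservoir (illustrative, as in standard ESNs).
W = rng.normal(size=(n, n))
W *= 0.8 / np.linalg.svd(W, compute_uv=False)[0]
w_in = rng.normal(size=n)

def states(series):
    x, out = np.zeros(n), []
    for u in series:
        x = np.tanh(W @ x + w_in * u)
        out.append(x)
    return np.array(out)

# Simulated series whose dynamics switch at t = 400 (sinusoid -> noise).
t = np.arange(800)
series = np.where(t < 400, np.sin(t / 5.0), rng.normal(size=800))

# Conceptor learned on a baseline window well before the change.
X = states(series)
X_base = X[50:350]
R = X_base.T @ X_base / len(X_base)
C = R @ np.linalg.inv(R + 10.0 ** -2 * np.eye(n))  # aperture alpha = 10

# Deviation score: how far each state falls outside the conceptor's
# soft subspace; flag the first exceedance of a baseline threshold.
score = np.linalg.norm(X @ (np.eye(n) - C).T, axis=1)
thresh = score[50:350].mean() + 5.0 * score[50:350].std()
cp_hat = 350 + int(np.argmax(score[350:] > thresh))  # search after baseline
print(cp_hat)  # should land near the true change at t = 400
```

    The paper's method builds a proper test statistic with consistency guarantees and bootstrap quantiles on top of features like these; the sketch only conveys why a conceptor trained on baseline dynamics reacts when those dynamics change.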