716 research outputs found

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous-time computation. These theories allow us to understand both the hardness of questions related to continuous-time dynamical systems and the computational power of continuous-time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.

    The cerebellum could solve the motor error problem through error increase prediction

    We present a cerebellar architecture with two main characteristics. The first is that complex spikes respond to increases in sensory errors. The second is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies between this architecture and the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error problem. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory, we show how to extend our model so that it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations. Comment: 34 pages (without bibliography), 13 figures.

    Balancing Privacy and Accuracy in IoT using Domain-Specific Features for Time Series Classification

    ε-Differential Privacy (DP) has been widely used to anonymize data, protecting sensitive information in machine learning (ML) tasks. However, there is a trade-off between privacy and ML accuracy, since ε-DP reduces model accuracy on classification tasks. Moreover, few studies have applied DP to time series from sensors and Internet-of-Things (IoT) devices. In this work, we aim to make the accuracy of ML models trained on ε-DP data as close as possible to that of ML models trained on non-anonymized data, for two different physiological time series. We propose transforming the time series into domain-specific 2D (image) representations such as scalograms, recurrence plots (RP), and their joint representation as inputs for training classifiers. These image transformations are irreversible, which prevents data leaks and makes our proposed approach secure. The images also allow us to apply state-of-the-art image classifiers, exploiting additional information such as textured patterns to obtain accuracy comparable to classifiers trained on non-anonymized data. To achieve classifier performance on anonymized data close to that on non-anonymized data, it is important to identify the value of ε and the input feature. Experimental results demonstrate that the performance of the ML models using scalograms and RP was comparable to that of ML models trained on the non-anonymized versions. Motivated by these promising results, we design an end-to-end IoT ML edge-cloud architecture, capable of detecting input drift, that employs our technique to train ML models on ε-DP physiological data. Our classification approach ensures the privacy of individuals while processing and analyzing the data at the edge securely and efficiently.
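    The recurrence-plot transformation mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the threshold `eps` and the toy sine input are hypothetical choices.

    ```python
    import numpy as np

    def recurrence_plot(x, eps=0.1):
        """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| < eps.

        x   : 1-D array of time-series samples.
        eps : recurrence threshold (illustrative default).
        """
        x = np.asarray(x, dtype=float)
        dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
        # thresholding discards the exact values, so the 2D image
        # cannot be inverted back to the raw signal
        return (dist < eps).astype(np.uint8)

    # toy usage: a short sine wave becomes a textured 2D image
    ts = np.sin(np.linspace(0, 4 * np.pi, 64))
    rp = recurrence_plot(ts, eps=0.2)
    print(rp.shape)  # (64, 64)
    ```

    The resulting binary image could then be fed to any standard image classifier.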

    Excitatory/inhibitory balance emerges as a key factor for RBN performance, overriding attractor dynamics

    Reservoir computing provides a time- and cost-efficient alternative to traditional learning methods. Critical regimes, known as the "edge of chaos," have been found to optimize computational performance in binary neural networks. However, little attention has been devoted to studying reservoir-to-reservoir variability when investigating the link between connectivity, dynamics, and performance. As physical reservoir computers become more prevalent, developing a systematic approach to network design is crucial. In this article, we examine Random Boolean Networks (RBNs) and demonstrate that specific distribution parameters can lead to diverse dynamics near critical points. We identify distinct dynamical attractors and quantify their statistics, revealing that most reservoirs possess a dominant attractor. We then evaluate performance on two challenging tasks, memorization and prediction, and find that a positive excitatory balance produces a critical point with higher memory performance, while a negative (inhibitory) balance delivers another critical point with better prediction performance. Interestingly, we show that the intrinsic attractor dynamics have little influence on performance in either case.
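    As a rough illustration of the excitatory/inhibitory balance the abstract refers to, here is a minimal threshold-update Boolean network sketch. The network size, in-degree, and `p_exc` value are hypothetical, and this is a generic RBN update rule rather than the authors' exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 100, 4      # number of nodes and in-degree (illustrative sizes)
    p_exc = 0.6        # fraction of excitatory (+1) links; > 0.5 = positive balance

    # each node reads K random neighbours through signed (+1/-1) links
    inputs = rng.integers(0, N, size=(N, K))
    signs = np.where(rng.random((N, K)) < p_exc, 1, -1)

    def step(state):
        """Threshold update: a node fires if its signed input sum is positive."""
        drive = (signs * state[inputs]).sum(axis=1)
        return (drive > 0).astype(np.int8)

    # iterate from a random initial state; the trajectory eventually
    # settles onto an attractor (fixed point or cycle)
    state = rng.integers(0, 2, size=N).astype(np.int8)
    for _ in range(50):
        state = step(state)
    ```

    Sweeping `p_exc` above and below 0.5 is one simple way to probe how the excitatory/inhibitory balance shifts the network's dynamical regime.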

    Generation and Recognition of Rhythms Using Reservoir Neural Networks

    The sound files accompanying this document are in MIDI format; the program developed for this work is written in Python. Reservoir computing, the combination of a large fixed recurrent neural network with one or more memoryless readout units, has recently grown in popularity in the machine learning, signal processing, and computational neuroscience communities. Reservoir-based methods have been successfully applied to a wide range of time-series problems [11][64][49][45][38], including music [30], and usually come in two flavours: Echo State Networks (ESN) [29], where the reservoir is composed of mean-rate neurons, and Liquid State Machines (LSM) [43], where the reservoir is composed of spiking neurons. In this work, we propose two new models based on the ESN architecture. The first is a model for rhythm recognition that uses two levels of learning, with which we obtained satisfying results for both recognition and noise resistance. The second is a model for learning and generating periodic sequences. It differs from the classical ESN generative model both in its inputs, since the reservoir is driven by a clock, and in its learning algorithm, which we developed specifically for this task and named "Orbite". Combining these two elements yielded good results for generation, over-fitting, and data extraction. We also believe that a combination of several instances of our model could serve as the basis for an entirely virtual orchestra, and we propose two possible architectures for such an orchestra. In the last part of this work, we briefly present the tools we developed during our research.
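    The clock-driven reservoir described above can be sketched as follows. The reservoir size, sparsity, clock period, and one-hot clock encoding are all assumptions made for illustration; the "Orbite" learning algorithm itself is not specified in the abstract, so only the generic ESN state update is shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_res, period = 200, 16   # reservoir size and clock period (illustrative)

    # sparse random reservoir, rescaled to a sub-unit spectral radius
    # (the usual echo-state condition)
    W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.normal(size=(n_res, period))  # one input column per clock phase

    def run(T):
        """Drive the reservoir with a periodic one-hot clock instead of
        feeding its own output back, as in the classical generative ESN."""
        x = np.zeros(n_res)
        states = []
        for t in range(T):
            clock = np.zeros(period)
            clock[t % period] = 1.0          # one-hot clock signal
            x = np.tanh(W @ x + W_in @ clock)
            states.append(x.copy())
        return np.array(states)

    S = run(200)
    # a linear readout trained on S (e.g. by ridge regression) would then
    # map reservoir states to the target periodic sequence
    ```

    Because the clock, not the generated output, drives the reservoir, the state trajectory settles onto a stable periodic orbit, which is what makes this setup attractive for generating rhythms.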
