    Neural Networks to Diagnose the Parkinson’s Disease

    To identify the presence of Parkinson's disease, this paper presents a neural network system with back propagation together with a majority voting scheme. The data used has a class imbalance with a ratio of 3:1. Previous research on predicting the presence of the disease has reported accuracy rates of up to 92.9% [1], but at the cost of reduced prediction accuracy on the smaller class. The designed neural network system is boosted by filtering, which significantly increases robustness. It is also shown that, by majority voting over eleven parallel networks, recognition rates exceed 90% despite the 3:1 imbalanced class distribution of the Parkinson's disease data set.
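The voting scheme described above can be sketched as follows. This is a minimal illustration of majority voting over an odd-sized ensemble, not the paper's implementation; the network outputs are stand-in binary predictions.

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary predictions of shape (n_models, n_samples) by
    majority vote; with an odd ensemble size (e.g. 11) ties cannot occur."""
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    return (votes > predictions.shape[0] / 2).astype(int)

# Hypothetical outputs of 11 parallel networks for 3 samples:
# a sample is labelled positive (1) only if most networks agree.
preds = [[1, 0, 1]] * 7 + [[0, 1, 1]] * 4
print(majority_vote(preds))  # [1 0 1]
```

Using an odd number of voters such as eleven guarantees a strict majority for every sample, which is one plausible reason for that design choice.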

    Language representations for generalization in reinforcement learning

    The choice of state and action representation in Reinforcement Learning (RL) has a significant effect on agent performance on the training task, but its relationship with generalization to new tasks is under-explored. One approach to improving generalization investigated here is the use of language as a representation. We compare vector states and discrete actions to language representations. We find that agents using language representations generalize better and can solve tasks with more entities, new entities, and more complexity than seen in the training task. We attribute this to the compositionality of language.
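The contrast between the two representations can be made concrete with a toy grid-world encoding. This is an illustrative sketch, not the paper's setup; the entity names and grid layout are hypothetical.

```python
def vector_state(agent, goal, grid=5):
    """Fixed-length numeric encoding: its size is tied to exactly two
    entities, so adding new entity kinds changes the input shape."""
    return [agent[0] / grid, agent[1] / grid, goal[0] / grid, goal[1] / grid]

def language_state(entities):
    """Compositional text encoding: the same vocabulary describes any
    number or kind of entities, which is what supports generalization."""
    return " ".join(f"the {name} is at row {r}, column {c}."
                    for name, (r, c) in entities)

# The vector encoding cannot describe a third entity without retraining,
# while the language encoding absorbs it with no change of interface.
print(language_state([("agent", (0, 0)), ("goal", (4, 4)), ("key", (2, 3))]))
```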

    Echo State Property of Deep Reservoir Computing Networks

    In recent years, the Reservoir Computing (RC) framework has emerged as a state-of-the-art approach for efficient learning in temporal domains. Recently, within the RC context, deep Echo State Network (ESN) models have been proposed. Being composed of a stack of multiple non-linear reservoir layers, deep ESNs potentially make it possible to exploit the advantages of a hierarchical temporal feature representation at different levels of abstraction, while preserving the training efficiency typical of the RC methodology. In this paper, we generalize to the case of deep architectures the fundamental RC conditions related to the Echo State Property (ESP), based on the study of stability and contractivity of the resulting dynamical system. Besides providing a necessary condition and a sufficient condition for the ESP of layered RC networks, the results of our analysis also provide insights on the nature of the state dynamics in hierarchically organized recurrent models. In particular, we find that by adding layers to a deep reservoir architecture, the regime of the network's dynamics can only be driven towards (equally or) less stable behaviors. Moreover, our investigation shows the intrinsic ability of temporal dynamics differentiation at the different levels in a deep recurrent architecture, with higher layers in the stack characterized by less contractive dynamics. Such theoretical insights are further supported by experimental results that show the effect of layering in terms of a progressively increased short-term memory capacity of the recurrent models.
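The layered state update that the analysis above studies can be sketched in a few lines. This is a minimal numpy illustration of a stacked ESN, assuming the usual leaky-free tanh update and spectral-radius rescaling; it is not the paper's implementation, and the sizes and seeds are arbitrary.

```python
import numpy as np

def make_reservoir(n, rho, seed=0):
    """Random recurrent weight matrix rescaled to spectral radius rho;
    rho < 1 is the classical sufficient condition for the ESP of a
    single (shallow) reservoir."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    return W * (rho / max(abs(np.linalg.eigvals(W))))

def deep_esn_step(states, u, W_in_list, W_list):
    """One time step of a stacked ESN: each layer is driven by the state
    of the layer below, and the first layer by the external input u."""
    drive, new_states = u, []
    for x, W_in, W in zip(states, W_in_list, W_list):
        x = np.tanh(W_in @ drive + W @ x)
        new_states.append(x)
        drive = x
    return new_states

# Drive a two-layer stack with a scalar sinusoidal input for 100 steps.
n, rng = 20, np.random.default_rng(1)
W_list = [make_reservoir(n, 0.9, s) for s in (1, 2)]
W_in_list = [rng.standard_normal((n, 1)), rng.standard_normal((n, n))]
states = [np.zeros(n), np.zeros(n)]
for t in range(100):
    states = deep_esn_step(states, np.array([np.sin(t / 5)]), W_in_list, W_list)
```

Because higher layers receive an already-transformed signal rather than the raw input, stacking cannot make the overall dynamics more stable than the layers below, which is the intuition behind the paper's result.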