41 research outputs found

    Self-selected modular recurrent neural networks with postural and inertial subnetworks applied to complex movements.

    It has been shown that dynamic recurrent neural networks successfully identify the complex mapping between full-wave-rectified electromyographic (EMG) signals and limb trajectories during complex movements. These connectionist models include two types of adaptive parameters, the interconnection weights between the units and the time constant associated with each neuron-like unit, and are governed by continuous-time equations. Because of their internal structure, these models are particularly well suited to dynamical tasks with time-varying input and output signals. We show in this paper that the architecture of these recurrent networks can be refined by a modular organization, with privileged communication channels, dedicated to different aspects of the dynamical mapping. We first divide the initial single network into two communicating subnetworks. These two modules receive the same EMG signals as input but are involved in different identification tasks, related to position and acceleration respectively. We then show that introducing an artificial distance into the model (a Gaussian modulation factor on the weights) induces a reduced modular architecture through the self-elimination of null synaptic weights. Moreover, this self-selected reduced model based on two subnetworks performs the identification task better than the original single network while using fewer free parameters (better learning curve and better identification quality). We also show that, after learning, this modular network exhibits several features that can be considered biologically plausible: self-selection of a specific inhibitory communication path between the two subnetworks, the appearance of tonic and phasic neurons, and a coherent distribution of the time-constant values within each subnetwork.
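
    As an illustration of the continuous-time dynamics described above, here is a minimal Euler-integration sketch of such a network, including an optional Gaussian distance modulation of the weights. The function name, the 1-D unit positions, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_drnn(W, tau, inputs, dt=0.01, positions=None, sigma=None):
    """Euler-integrate a continuous-time recurrent network:

        dy_i/dt = (1/tau_i) * (-y_i + sum_j w_ij * tanh(y_j) + I_i(t))

    If 1-D unit `positions` and a Gaussian width `sigma` are given, each
    weight is attenuated by exp(-d_ij^2 / (2 sigma^2)) -- a sketch of the
    distance-based modulation described in the abstract (names and the
    1-D geometry here are illustrative assumptions).
    """
    n_steps, n_units = inputs.shape
    W_eff = W.copy()
    if positions is not None and sigma is not None:
        d = np.abs(positions[:, None] - positions[None, :])
        W_eff = W * np.exp(-d**2 / (2 * sigma**2))
    y = np.zeros(n_units)
    trace = np.empty((n_steps, n_units))
    for t in range(n_steps):
        dy = (-y + W_eff @ np.tanh(y) + inputs[t]) / tau
        y = y + dt * dy
        trace[t] = y
    return trace
```

    With a small sigma, weights between distant units are driven toward zero, mimicking the self-elimination of weak synaptic weights that the abstract reports.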

    Adaptative time constants improve the prediction capability of recurrent neural networks

    Classical statistical prediction techniques reach their limits when the data set contains nonlinearities; neural models can overcome these limitations. In this paper, we present a recurrent neural model in which an adaptive time constant is associated with each neuron-like unit, together with a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate the quality of the prediction, we computed the power spectra of the two signals and the associated fractional error. Results show that introducing adaptive time constants associated with each neuron of a recurrent network improves both the quality of the prediction and the dynamical features of the neural model. Such dynamic recurrent neural networks outperform time-delay neural networks. © 1995 Kluwer Academic Publishers.
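
    The Mackey-Glass benchmark mentioned above is generated from the delay differential equation dx/dt = βx(t−τ)/(1+x(t−τ)^n) − γx(t). A minimal Euler-integration sketch with the standard benchmark parameters follows; the helper name and step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def mackey_glass(n_samples, tau=17, beta=0.2, gamma=0.1, n=10,
                 dt=1.0, x0=1.2):
    """Generate the Mackey-Glass chaotic series by Euler integration of
    dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t).
    tau=17 with these standard parameters gives mildly chaotic dynamics.
    """
    history = int(tau / dt)                     # delayed samples to keep
    x = np.full(history + n_samples, x0)        # constant initial history
    for t in range(history, history + n_samples - 1):
        x_tau = x[t - history]                  # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau**n) - gamma * x[t])
    return x[history:]
```

    A prediction experiment in the spirit of the abstract would train the network on one stretch of this series and compare the power spectra of the predicted and true continuations.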

    Improved identification of the human shoulder kinematics with muscle biological filters

    In this paper, we introduce refinements to the dynamic recurrent neural network (DRNN) approach for identifying, in humans, the relationship between muscle electromyographic (EMG) activity and arm kinematics during the drawing of a figure eight with an extended arm. This identification method allows a clear interpretation of the role of each muscle in any particular movement. We show here that the quality and speed of the complex identification process can be improved by preprocessing the input signals (the raw EMG signals). These treatments yield signals that better reflect muscle forces, which are the real actuators of the movements.
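
    A common form of such preprocessing is full-wave rectification followed by low-pass filtering, which yields a linear envelope that tracks muscle activation more closely than raw EMG. The sketch below uses a simple first-order IIR smoother with an illustrative cut-off frequency; it is a generic stand-in, not the authors' specific muscle "biological filter".

```python
import numpy as np

def emg_envelope(raw_emg, fs, fc=3.0):
    """Full-wave rectify a raw EMG trace, then low-pass filter it with a
    first-order IIR smoother to obtain a linear envelope.
    fs: sampling rate (Hz); fc: cut-off frequency (Hz, illustrative)."""
    rectified = np.abs(raw_emg)
    # first-order low-pass: y[t] = a*x[t] + (1 - a)*y[t-1]
    a = 1.0 - np.exp(-2 * np.pi * fc / fs)
    env = np.empty_like(rectified, dtype=float)
    acc = 0.0
    for i, v in enumerate(rectified):
        acc = a * v + (1 - a) * acc
        env[i] = acc
    return env
```

    In practice a zero-phase Butterworth filter is often preferred over this one-pass smoother, at the cost of offline (non-causal) processing.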

    A dynamic neural network identification of electromyography and arm trajectory relationship during complex movements.

    We propose a new approach based on dynamic recurrent neural networks (DRNN) to identify, in humans, the relationship between muscle electromyographic (EMG) activity and arm kinematics during the drawing of a figure eight with an extended arm. After learning, the DRNN simulations showed the efficiency of the model, and we demonstrated its ability to generalize to unlearned movements. We tested its physiological plausibility by computing the error velocity vectors obtained when small artificial lesions were created in the EMG signals. These lesion experiments demonstrated that the DRNN had identified the preferential direction of physiological action of the studied muscles. The network also identified neural constraints such as the covariation between geometrical and kinematic parameters of the movement. This suggests that the raw EMG signals largely carry the kinematic information stored in the central motor pattern. Moreover, the DRNN approach should allow the feedforward command (central motor pattern) to be dissociated from the feedback effects arising from muscles, skin and joints.
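
    The artificial-lesion test can be sketched generically: zero out one EMG channel and compare the model's predictions with and without the lesion. Here `model` is any callable mapping EMG samples to kinematics, and the function name is an illustrative assumption; the direction of the resulting difference vectors is what reveals a muscle's preferential action.

```python
import numpy as np

def lesion_effect(model, emg, channel):
    """Apply an 'artificial lesion' by zeroing one EMG channel and return
    the error vectors: the difference between the kinematics predicted
    from intact and lesioned inputs.

    model   : callable, (n_samples, n_muscles) -> (n_samples, n_dims)
    emg     : (n_samples, n_muscles) array of EMG envelopes
    channel : index of the muscle channel to silence
    """
    lesioned = emg.copy()
    lesioned[:, channel] = 0.0
    return model(emg) - model(lesioned)
```

    For a linear toy model the effect of silencing one channel is exactly that channel's contribution, which makes the idea easy to verify before applying it to a trained DRNN.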

    Multi-joint coordination strategies for straightening up movement in humans.

    Complex movement execution theoretically involves numerous biomechanical degrees of freedom, leading to the concept of redundancy. The kinematics and kinetics of rapid straightening-up movements from a squatting position were analysed with the optoelectronic ELITE system in 14 subjects. We found multiple acceleration and deceleration peaks for the hip, knee and ankle joints during the early extension phase of the movement. To test the temporal coordination between the angular accelerations of these joints, cross-correlation functions (CCF) between each pair of variables were calculated. We found a bimodal distribution of the maximum CCF, with positive and negative values, suggesting the existence of two distinct strategies for each pair of joints: in-phase and out-of-phase. The hip and knee coordination strategies (in- or out-of-phase) were well conserved within each subject across repeated movements. The combination of joint-pair strategies was more reproducible for the hip-knee/knee-ankle pair than for the other combinations, suggesting that straightening-up strategies are organised around the knee. We conclude that the redundancy problem can be mastered by using coordination strategies characterised by opposed joint acceleration patterns.
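
    A cross-correlation analysis of this kind can be sketched as follows: normalise two joint acceleration traces, scan lags within a window, and read the sign of the largest-magnitude CCF peak (a positive peak suggesting an in-phase strategy, a negative one an out-of-phase strategy). The function name and the lag window are illustrative assumptions.

```python
import numpy as np

def max_ccf(a, b, max_lag):
    """Return the cross-correlation value of largest magnitude between two
    signals over lags in [-max_lag, max_lag]. Both signals are zero-meaned
    and scaled to unit variance so the CCF lies in roughly [-1, 1]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.dot(a[lag:], b[:n - lag]) / (n - lag)
        else:
            r = np.dot(a[:n + lag], b[-lag:]) / (n + lag)
        if abs(r) > abs(best):
            best = r
    return best
```

    Classifying each movement by the sign of this peak, and histogramming the peaks across trials, reproduces the kind of bimodal distribution described in the abstract.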

    Emergence of clusters in the hidden layer of a dynamic recurrent neural network.

    The neural integrator of the oculomotor system is a privileged field for artificial neural network simulation. In this paper, we were interested in improving the biological plausibility of the Arnold-Robinson network. This was done by fixing the sign of the connection weights in the network (to respect Dale's law) and by introducing a notion of distance, in the form of transmission delays between its units. These modifications required the introduction of a general supervisor to train the network to act as a leaky integrator. When examining the lateral connection weights of the hidden layer, the distribution of weight values was found to exhibit a conspicuous structure: the high-value weights were grouped in what we call clusters, while other zones were quite flat and characterized by low-value weights. Clusters are defined as groups of adjoining neurons that have strong, privileged connections with another neighbourhood of neurons. The clusters of the trained network are reminiscent of the small clusters or patches found experimentally in the nucleus prepositus hypoglossi, where the neural integrator is located. A study was conducted to determine the conditions under which these clusters emerge in our network: they include the fixing of the weight signs, the introduction of a distance, and the convergence of information from the hidden layer to the motoneurons. We conclude that this spontaneous emergence of clusters in artificial neural networks performing temporal integration is due to computational constraints that restrict the space of solutions. Thus, information processing could induce the emergence of iterated patterns in biological neural networks.
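
    One simple way to enforce a fixed weight sign (Dale's law) during training is to clip back to zero any weight that a learning step pushes across zero. The sketch below assumes a per-source-unit sign vector (+1 excitatory, −1 inhibitory) applied column-wise; it is an illustrative mechanism, not the authors' exact procedure.

```python
import numpy as np

def clamp_dale(W, sign):
    """Enforce Dale's law on a weight matrix after a learning step.

    W    : (n_post, n_pre) weight matrix, W[i, j] = weight from unit j to i
    sign : (n_pre,) vector of +1 (excitatory) / -1 (inhibitory) per source
    Any weight whose sign disagrees with its source unit's sign is set to 0.
    """
    allowed = sign[None, :]                 # sign applies per presynaptic column
    return np.where(W * allowed < 0, 0.0, W)
```

    Applied after every gradient update, this keeps each unit purely excitatory or purely inhibitory throughout training, the constraint under which the clusters described above emerge.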