51,952 research outputs found

    Adaptive Neural Networks for Control of Movement Trajectories Invariant under Speed and Force Rescaling

    Full text link
    This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: Skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm-deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control. These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
    National Science Foundation (IRI-87-16960); Air Force Office of Scientific Research (90-0128, 90-0175)
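
    Below is a minimal numerical sketch of the VITE difference-vector dynamics described above, assuming the commonly published form of the model: a difference vector V that integrates the gap between the target and present position commands, and an outflow position command P gated by a volitional GO signal. The parameter values, the ramping GO signal, and the two-joint example are illustrative assumptions, not taken from the article.

    import numpy as np

    def vite_trajectory(target, initial, go_scale=1.0, alpha=25.0,
                        dt=0.001, steps=1500):
        """Sketch of VITE dynamics with illustrative parameters.
        V integrates the difference between the target and present position
        commands; the GO signal gates how fast P is updated, so rescaling
        GO rescales movement speed without changing the path."""
        T = np.asarray(target, dtype=float)    # target position command
        P = np.asarray(initial, dtype=float)   # present position command
        V = np.zeros_like(P)                   # difference vector
        path = [P.copy()]
        for k in range(steps):
            go = go_scale * (k * dt)           # simple ramping GO signal (assumed form)
            V += dt * alpha * (-V + T - P)     # dV/dt = alpha * (-V + T - P)
            P += dt * go * np.maximum(V, 0.0)  # dP/dt = GO * [V]^+ (rectified outflow)
            path.append(P.copy())
        return np.array(path)

    # Increasing the GO amplitude rescales the speed of the same reach.
    slow = vite_trajectory(target=[1.0, 0.5], initial=[0.0, 0.0], go_scale=1.0)
    fast = vite_trajectory(target=[1.0, 0.5], initial=[0.0, 0.0], go_scale=2.0)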

    A Multi-objective Exploratory Procedure for Regression Model Selection

    Full text link
    Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and social sciences are commonly characterized by an over-abundance of explanatory variables, non-linearities, and unknown interdependencies between the regressors. An added difficulty is that analysts may have little or no prior knowledge of the relative importance of the variables. To provide a robust method for model selection, this paper introduces the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS), which provides the user with an optimal set of regression models for a given data-set. The algorithm treats the regression problem as a two-objective task and explores the Pareto-optimal (best subset) models, preferring models that have fewer regression coefficients and better goodness of fit. The model exploration can be performed based on in-sample or generalization error minimization. The model selection is proposed to be performed in two steps. First, we generate the frontier of Pareto-optimal regression models by eliminating the dominated models without any user intervention. Second, a decision-making process is executed which allows the user to choose the most preferred model using visualisations and simple metrics. The method has been evaluated on a recently published real dataset on Communities and Crime within the United States.
    Comment: in Journal of Computational and Graphical Statistics, Vol. 24, Iss. 1, 201
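
    As an illustration of the dominance-filtering step described above, the following sketch keeps only the non-dominated models when each candidate is scored by its number of regression coefficients and its error. The candidate subsets and scores are hypothetical; the actual MOGA-VS procedure explores the subsets with a genetic algorithm rather than enumerating them.

    def pareto_front(models):
        """Keep the non-dominated models. Each model is a tuple
        (subset, n_coefficients, error); a model is dominated if another
        model is no worse on both objectives and strictly better on one."""
        front = []
        for i, (s_i, k_i, e_i) in enumerate(models):
            dominated = any(
                k_j <= k_i and e_j <= e_i and (k_j < k_i or e_j < e_i)
                for j, (s_j, k_j, e_j) in enumerate(models) if j != i
            )
            if not dominated:
                front.append((s_i, k_i, e_i))
        return front

    # Hypothetical candidate subsets scored by size and in-sample error.
    candidates = [
        ({"x1"}, 1, 0.42),
        ({"x1", "x3"}, 2, 0.31),
        ({"x2", "x3"}, 2, 0.35),          # dominated by {"x1", "x3"}
        ({"x1", "x2", "x3"}, 3, 0.30),
    ]
    print(pareto_front(candidates))        # three non-dominated models remain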

    Inertial Load Compensation by a Model Spinal Circuit During Single Joint Movement

    Full text link
    Office of Naval Research (N00014-92-J-1309); CONACYT (Mexico) (63462)

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Get PDF
    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
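
    The following sketch illustrates one of the metaheuristic directions surveyed above: optimizing the weights of a small FNN without gradients, using a simple (1+lambda) evolution strategy. The network size, mutation scale, and toy regression task are illustrative assumptions and are not drawn from the review.

    import numpy as np

    rng = np.random.default_rng(0)

    def fnn_forward(weights, X, n_hidden=4):
        """Tiny one-hidden-layer FNN; `weights` is a flat vector unpacked into layers."""
        n_in = X.shape[1]
        w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
        b1 = weights[n_in * n_hidden : n_in * n_hidden + n_hidden]
        w2 = weights[n_in * n_hidden + n_hidden : -1].reshape(n_hidden, 1)
        b2 = weights[-1]
        return np.tanh(X @ w1 + b1) @ w2 + b2

    def mse(weights, X, y):
        return float(np.mean((fnn_forward(weights, X).ravel() - y) ** 2))

    def evolve_weights(X, y, n_hidden=4, generations=200, offspring=20, sigma=0.3):
        """(1+lambda) evolution strategy over the flattened weight vector:
        mutate the current best with Gaussian noise and keep any improvement."""
        dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
        best = rng.normal(0.0, 0.5, size=dim)
        best_err = mse(best, X, y)
        for _ in range(generations):
            for _ in range(offspring):
                cand = best + rng.normal(0.0, sigma, size=dim)
                err = mse(cand, X, y)
                if err < best_err:
                    best, best_err = cand, err
        return best, best_err

    # Toy task (illustrative): fit y = sin(x) on [-2, 2] with no gradient information.
    X = np.linspace(-2, 2, 64).reshape(-1, 1)
    y = np.sin(X).ravel()
    weights, err = evolve_weights(X, y)
    print("final MSE:", err)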