
    Control Reconfiguration of Dynamical Systems for Improved Performance via Reverse-engineering and Forward-engineering

    This paper presents a control reconfiguration approach to improve the performance of two classes of dynamical systems. Motivated by recent research on re-engineering cyber-physical systems, we propose a three-step control retrofit procedure. First, we reverse-engineer a dynamical system to uncover the optimization problem it implicitly solves. Second, we forward-engineer the system by applying a faster algorithm for the same optimization problem. Finally, by comparing the original and accelerated dynamics, we obtain the implementation of the redesigned part (i.e., the extra dynamics). As a result, the convergence speed or transient behavior of the given system can be improved while its control structure is preserved. Three practical applications across the two classes of target systems, namely consensus in multi-agent systems, Internet congestion control, and distributed proportional-integral (PI) control, demonstrate the effectiveness of the proposed approach.
    Comment: 27 pages, 8 figures
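    The recipe is concrete in the consensus case: the standard consensus dynamics x' = -Lx can be reverse-engineered as gradient flow on the Laplacian potential (1/2) x^T L x, and one way to forward-engineer it is to solve the same problem with a momentum-accelerated (heavy-ball) iteration; the momentum term is then the retrofitted extra dynamics. The sketch below illustrates this idea only; the graph, step size, and momentum coefficient are assumptions, not the paper's design.

```python
import numpy as np

# Assumed 4-node path graph; L is its Laplacian (an illustration,
# not the paper's example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

def consensus_original(x0, h=0.05, steps=400):
    """Reverse-engineered view: x' = -Lx is gradient descent
    on the potential V(x) = 0.5 * x @ L @ x."""
    x = x0.copy()
    for _ in range(steps):
        x -= h * (L @ x)              # plain gradient step
    return x

def consensus_accelerated(x0, h=0.05, beta=0.8, steps=400):
    """Forward-engineered view: heavy-ball momentum on the same
    potential; beta * (x_k - x_{k-1}) is the retrofitted extra dynamics."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(steps):
        x_next = x - h * (L @ x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

x0 = np.array([3.0, -1.0, 0.5, 2.0])
print(consensus_original(x0))         # both converge to the average...
print(consensus_accelerated(x0))      # ...the accelerated run gets there faster
```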

    Feedback Control as a Framework for Understanding Tradeoffs in Biology

    Control theory arose from a need to control synthetic systems. From regulating steam engines to tuning radios to devices capable of autonomous movement, it provided a formal mathematical basis for understanding the role of feedback in the stability (or change) of dynamical systems. It provides a framework for understanding any system with feedback regulation, including biological ones such as regulatory gene networks, cellular metabolic systems, sensorimotor dynamics of moving animals, and even ecological or evolutionary dynamics of organisms and populations. Here we focus on four case studies of the sensorimotor dynamics of animals, each of which involves the application of principles from control theory to probe stability and feedback in an organism's response to perturbations. We use examples from aquatic (electric fish station keeping and jamming avoidance), terrestrial (cockroach wall following), and aerial environments (flight control in moths) to highlight how one can use control theory to understand how feedback mechanisms interact with the physical dynamics of animals to determine their stability and response to sensory inputs and perturbations. Each case study is cast as a control problem with sensory input, neural processing, and motor dynamics, the output of which feeds back to the sensory inputs. Collectively, the interaction of these systems in a closed loop determines the behavior of the entire system.
    Comment: Submitted to Integr Comp Bio
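    To make the closed-loop casting concrete, the sketch below simulates a wall-following behavior in the spirit of the cockroach case study: delayed sensory input, a proportional-derivative stand-in for neural processing, and simple body dynamics whose output feeds back to the sensor. All gains, the delay, and the dynamics are hypothetical placeholders, not parameters fitted to any of the four case studies.

```python
import numpy as np

# Hypothetical closed loop: an animal regulating its distance to a wall.
# Sensor: delayed distance reading; controller: PD "neural processing";
# plant: damped lateral body dynamics.
dt, T, delay_steps = 0.01, 5.0, 5        # 50 ms sensory delay (assumed)
kp, kd = 8.0, 1.5                        # assumed feedback gains
target = 1.0                             # desired wall distance

n = int(T / dt)
d = np.zeros(n)                          # distance to wall
v = 0.0                                  # lateral velocity
d[0] = 2.0                               # perturbed initial condition
for k in range(1, n):
    k_sensed = max(0, k - 1 - delay_steps)
    err = target - d[k_sensed]           # delayed sensory input
    u = kp * err - kd * v                # motor command from the PD law
    v += dt * (u - 2.0 * v)              # body dynamics with viscous damping
    d[k] = d[k - 1] + dt * v             # position feeds back to the sensor

print(f"final distance: {d[-1]:.3f} (target {target})")
```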

    Centralized and distributed cognitive task processing in the human connectome

    A key question in modern neuroscience is how cognitive changes in a human brain can be quantified and captured by functional connectomes (FCs). A systematic approach to measuring pairwise functional distance between brain states is lacking. Such a measure would provide a straightforward way to quantify differences in cognitive processing across tasks; it would also help relate these differences in task-based FCs to the underlying structural network. Here we propose a framework, based on the Jensen-Shannon divergence, to map the connectivity distance between task and resting-state FCs. We show how this information-theoretic measure allows for quantifying connectivity changes in distributed and centralized processing in functional networks. We study resting state and seven tasks from the Human Connectome Project dataset to obtain the most distant links across tasks. We investigate how these changes are associated with different functional brain networks, and use the proposed measure to infer changes in information-processing regimes. Furthermore, we show how the FC distance from resting state is shaped by structural connectivity, and to what extent this relationship depends on the task. This framework provides a well-grounded mathematical quantification of the connectivity changes associated with cognitive processing in large-scale brain networks.
    Comment: 22 pages main, 6 pages supplementary, 6 figures, 5 supplementary figures, 1 table, 1 supplementary table. arXiv admin note: text overlap with arXiv:1710.0219
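    A minimal version of the connectivity-distance computation can be sketched as follows: flatten each FC matrix's upper triangle, normalize it into a probability distribution over links, and take the Jensen-Shannon divergence between the two distributions. The construction below (absolute weights, upper triangle) is an illustrative assumption; the paper's exact link-wise definition may differ.

```python
import numpy as np

def fc_to_distribution(fc):
    """Turn a functional-connectivity matrix into a probability
    distribution over links (upper triangle, absolute weights)."""
    iu = np.triu_indices_from(fc, k=1)
    w = np.abs(fc[iu])
    return w / w.sum()

def jensen_shannon(p, q, eps=1e-12):
    """JSD(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m) with m = (p+q)/2;
    bounded in [0, ln 2]."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: distance between a "rest" and a "task" connectome.
rng = np.random.default_rng(0)
base = rng.random((10, 10)); rest = (base + base.T) / 2
task = rest + 0.1 * rng.standard_normal((10, 10)); task = (task + task.T) / 2
print(jensen_shannon(fc_to_distribution(rest), fc_to_distribution(task)))
```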

    Neural Networks: Training and Application to Nonlinear System Identification and Control

    This dissertation investigates training neural networks for system identification and classification. The research makes two main contributions, described below.

    1. Reducing the number of hidden-layer nodes using a feedforward component. This research reduces the number of hidden-layer nodes and the training time of neural networks, making them better suited to online identification and control applications, by adding a parallel feedforward component. Implementing the feedforward component with a wavelet neural network and an echo state network provides good models for nonlinear systems. The wavelet neural network with a feedforward component, together with a model predictive controller, can reliably identify and control a seismically isolated structure during an earthquake; the network model provides the predictions for model predictive control. Simulations of a 5-story seismically isolated structure with conventional lead-rubber bearings showed significant reductions in all response amplitudes for both near-field (pulse) and far-field ground motions, including reduced deformations along with corresponding reductions in acceleration response. The controller effectively regulated the apparent stiffness at the isolation level. The approach is also applied to the online identification and control of an unmanned vehicle. Lyapunov theory is used to prove the stability of the wavelet neural network and the model predictive controller.

    2. Training neural networks using trajectory-based optimization approaches. Training a neural network is a nonlinear, non-convex optimization problem of determining the network's weights. Traditional training algorithms can be inefficient and can become trapped in local minima. Two global optimization approaches are adapted to train neural networks and avoid the local-minima problem; Lyapunov theory is used to prove the stability of the proposed methodology and its convergence in the presence of measurement errors. The first approach transforms the constraint-satisfaction problem into an unconstrained optimization. The constraints define a quotient gradient system (QGS) whose stable equilibrium points are local minima of the unconstrained optimization; the QGS is integrated to determine local minima, and the local minimum with the best generalization performance is chosen as the optimal solution. The second approach uses the QGS together with a projected gradient system (PGS), a nonlinear dynamical system defined from the optimization problem that searches the components of the feasible region for solutions. Lyapunov theory is used to prove the stability of the PGS and QGS, including in the presence of measurement noise.
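    The first contribution's architecture, a nonlinear network in parallel with a direct feedforward path, can be sketched in a few lines: the linear path captures the bulk input-output trend, so the nonlinear part needs fewer hidden nodes to model the residual. The radial-basis stand-in for the wavelet network, the sizes, and the least-squares fit are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_hidden(x, centers, width=0.5):
    """Stand-in for the wavelet/echo-state hidden layer (an assumption):
    Gaussian radial-basis activations around fixed centers."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Toy nonlinear plant to identify: y = 0.8*x + 0.3*sin(3x) + noise.
x = rng.uniform(-2, 2, 200)
y = 0.8 * x + 0.3 * np.sin(3 * x) + 0.01 * rng.standard_normal(200)

centers = np.linspace(-2, 2, 4)          # deliberately few hidden nodes
H = rbf_hidden(x, centers)

# Parallel feedforward component: the design matrix gets a direct linear
# path [x, 1] alongside the hidden layer, so the nonlinear part only has
# to model the residual around the linear trend.
Phi = np.hstack([H, x[:, None], np.ones((200, 1))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
print(f"RMS error with parallel feedforward path: "
      f"{np.sqrt(np.mean((pred - y) ** 2)):.4f}")
```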

    A Model of Emotion as Patterned Metacontrol

    Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: the parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many alternatives for change, and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents (in practical, economical, and evolutionary terms) is to reduce the dimensionality of this configuration space. Emotions play a critical role in this reduction, which is achieved by functionalization, interface minimization, and patterning, i.e., by selection among a predefined set of organizational configurations. This analysis lets us state, in strict functional terms, how autonomy emerges from the integration of cognitive, emotional, and autonomic systems: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we present a general model of how emotion supports functional adaptation and how emotional biological systems operate following this theoretical model. We also show how this model is applicable to the construction of a wide spectrum of artificial systems.
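    The patterning idea, selection among a predefined set of organizational configurations instead of search over the full configuration space, can be caricatured as a metacontroller that maps an appraisal signal to one of a few stored controller configurations. Everything below (pattern names, thresholds, parameters) is a hypothetical illustration, not the article's model.

```python
# Hypothetical sketch of patterned metacontrol: instead of re-tuning a
# controller continuously, an "emotion" label selects one of a small,
# predefined set of configurations (dimensionality reduction by patterning).

CONFIGURATIONS = {
    # pattern name -> (control gain, sensing rate, planning horizon)
    "calm":   {"gain": 1.0, "sense_hz": 5,  "horizon": 20},
    "alert":  {"gain": 2.0, "sense_hz": 20, "horizon": 10},
    "escape": {"gain": 4.0, "sense_hz": 50, "horizon": 2},
}

def appraise(threat_level: float) -> str:
    """Map a scalar appraisal of the situation to an emotion pattern."""
    if threat_level > 0.8:
        return "escape"
    if threat_level > 0.3:
        return "alert"
    return "calm"

def metacontrol(threat_level: float) -> dict:
    """Reconfigure the controller by pattern selection, not by search."""
    return CONFIGURATIONS[appraise(threat_level)]

for threat in (0.1, 0.5, 0.9):
    print(threat, "->", appraise(threat), metacontrol(threat))
```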

    Seven properties of self-organization in the human brain

    The principle of self-organization has acquired fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains of science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purpose may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in light of insights from neurobiology, cognitive neuroscience, Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.
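    Several of the listed properties (unsupervised learning, from-local-to-global organization, plasticity) can be made concrete with a Kohonen-style self-organizing map, a classic relative of ART's competitive dynamics. The 1-D map below is a generic textbook sketch, not a model from the paper; all sizes and rates are arbitrary assumptions.

```python
import numpy as np

# Minimal 1-D self-organizing map: unsupervised, purely local updates,
# yet a global topographic order typically emerges (property 6).
rng = np.random.default_rng(42)
n_units = 10
weights = rng.random(n_units)            # random initial "synapses"

for t in range(2000):
    x = rng.random()                     # unlabeled input (unsupervised)
    winner = np.argmin(np.abs(weights - x))
    lr = 0.5 * (1 - t / 2000)            # decaying plasticity
    for j in range(n_units):
        h = np.exp(-((j - winner) ** 2) / 2.0)   # local neighborhood
        weights[j] += lr * h * (x - weights[j])

ordered = np.all(np.diff(weights) > 0) or np.all(np.diff(weights) < 0)
print("topographically ordered:", bool(ordered))
print(np.round(weights, 2))
```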