
    Learning Optimal Control of Synchronization in Networks of Coupled Oscillators using Genetic Programming-based Symbolic Regression

    Networks of coupled dynamical systems provide a powerful way to model systems with enormously complex dynamics, such as the human brain. Control of synchronization in such networked systems has far-reaching applications in many domains, including engineering and medicine. In this paper, we formulate synchronization control in dynamical systems as an optimization problem and present a multi-objective genetic programming-based approach to infer optimal control functions that drive the system from a synchronized to a non-synchronized state and vice versa. The genetic programming-based controller allows learning optimal control functions in an interpretable symbolic form. The effectiveness of the proposed approach is demonstrated by controlling synchronization in coupled oscillator systems linked in networks of increasing complexity, ranging from a simple coupled oscillator system to a hierarchical network of coupled oscillators. The results show that the proposed method can learn highly effective and interpretable control functions for such systems. Comment: Submitted to nonlinear dynamic
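    Networks of coupled oscillators of this kind are commonly modelled with the Kuramoto model. As an illustrative sketch only (this is not the paper's genetic programming method, and all parameter values below are hypothetical), the following simulates phase oscillators on a network and measures synchronization via the order parameter:

    ```python
    import numpy as np

    def kuramoto_step(theta, omega, K, A, dt, u=0.0):
        # One Euler step: dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i) + u
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        return theta + dt * (omega + K * coupling + u)

    def order_parameter(theta):
        # Kuramoto order parameter r in [0, 1]; r near 1 means synchronized
        return float(np.abs(np.exp(1j * theta).mean()))

    rng = np.random.default_rng(0)
    n = 20
    theta = rng.uniform(0, 2 * np.pi, n)     # random initial phases
    omega = rng.normal(0.0, 0.1, n)          # heterogeneous natural frequencies
    A = np.ones((n, n)) - np.eye(n)          # all-to-all coupling network
    for _ in range(2000):
        theta = kuramoto_step(theta, omega, K=0.5, A=A, dt=0.01)
    r = order_parameter(theta)               # strong coupling drives r toward 1
    ```

    A control function of the sort the paper learns would enter through the input u; here u is left at zero so the network simply synchronizes under its own coupling.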

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.

    The Design and Implementation of a Remote Automatic Control Laboratory: Using PID Control as an Example

    As automatic control systems are widely used in industry, their study is one of the most important introductory courses offered in college-level curricula. In this paper, we propose a networked learning model for remote automatic PID control experiments, including a platform and a networked learning system designed according to competence-based education methods. The online system offers a new approach to practical learning in a virtual laboratory. To evaluate the efficacy of the system, we conducted an experimental study with students enrolled in the automatic control course at Tungnan University in Taiwan. We consider three instructional methods: a traditional method, a remote learning system method, and a competence-based networked learning method. The effect of students' academic performance prior to taking the course on their achievement in PID control learning is also discussed. Thirty students were randomly divided into three groups, and one instructional method was implemented for each group. The students were also divided into two groups (high and low) according to their GPA scores from the previous school year. The data were subjected to two-way ANOVA, and the interaction effect between the two independent variables, i.e., the instructional method and the student's performance prior to taking the course, was observed. We found that both variables have a significant effect on a student's learning outcomes. The results show that our competence-based networked learning system is as effective as the traditional instructional method.
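    For reference, the PID control law used in such laboratory experiments computes u = Kp·e + Ki·∫e dt + Kd·de/dt from the tracking error e. A minimal discrete-time sketch (with hypothetical gains and a hypothetical first-order plant, not the paper's laboratory setup):

    ```python
    class PID:
        # Discrete PID: u = kp*e + ki*integral(e) + kd*de/dt
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = None

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical first-order plant x' = -x + u, regulated to setpoint 1.0
    pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
    x = 0.0
    for _ in range(2000):
        u = pid.update(1.0, x)
        x += 0.01 * (-x + u)              # Euler step of the plant
    ```

    The integral term removes the steady-state error that a proportional-only controller would leave on this plant, which is why x settles at the setpoint.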

    Optimal Network Control in Partially-Controllable Networks

    The effectiveness of many optimal network control algorithms (e.g., BackPressure) relies on the premise that all of the nodes are fully controllable. However, these algorithms may yield poor performance in a partially-controllable network where a subset of the nodes is uncontrollable and uses some unknown policy. Such a partially-controllable model is of increasing importance in real-world networked systems such as overlay-underlay networks. In this paper, we design optimal network control algorithms that can stabilize a partially-controllable network. We first study the scenario where uncontrollable nodes use a queue-agnostic policy, and propose a low-complexity throughput-optimal algorithm, called Tracking-MaxWeight (TMW), which enhances the original MaxWeight algorithm with an explicit learning of the policy used by uncontrollable nodes. Next, we investigate the scenario where uncontrollable nodes use a queue-dependent policy and the problem is formulated as an MDP with unknown queueing dynamics. We propose a new reinforcement learning algorithm, called Truncated Upper Confidence Reinforcement Learning (TUCRL), and prove that TUCRL achieves tunable three-way tradeoffs between throughput, delay, and convergence rate.
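    The MaxWeight/BackPressure rule underlying this line of work schedules, in each slot, the link with the largest differential-backlog weight (queue difference times service rate). A minimal single-commodity sketch with a hypothetical two-node tandem topology (this is the classic rule, not the paper's Tracking-MaxWeight algorithm):

    ```python
    import numpy as np

    def maxweight_schedule(Q, links, rates):
        # Pick the link (i, j) maximizing (Q[i] - Q[j]) * rate; dst -1 is the sink.
        def weight(k):
            i, j = links[k]
            qj = 0.0 if j == -1 else Q[j]
            return (Q[i] - qj) * rates[k]
        best = max(range(len(links)), key=weight)
        return best if weight(best) > 0 else None   # idle if no positive weight

    rng = np.random.default_rng(1)
    Q = np.zeros(2)
    links = [(0, 1), (1, -1)]        # node 0 -> node 1 -> sink
    rates = [1.0, 1.0]
    for _ in range(5000):
        Q[0] += rng.poisson(0.4)     # exogenous arrivals at node 0
        k = maxweight_schedule(Q, links, rates)
        if k is not None:            # serve one link per slot
            i, j = links[k]
            moved = min(Q[i], rates[k])
            Q[i] -= moved
            if j != -1:
                Q[j] += moved
    ```

    With arrival rate 0.4 and two hops per packet, the offered load is 0.8 of the one-link-per-slot capacity, so MaxWeight keeps both queues bounded.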

    Physics-Informed Multi-Agent Reinforcement Learning for Distributed Multi-Robot Problems

    The networked nature of multi-robot systems presents challenges in the context of multi-agent reinforcement learning. Centralized control policies do not scale with increasing numbers of robots, whereas independent control policies do not exploit the information provided by other robots, exhibiting poor performance in cooperative-competitive tasks. In this work we propose a physics-informed reinforcement learning approach that learns distributed multi-robot control policies which are both scalable and make use of all the information available to each robot. Our approach has three key characteristics. First, it imposes a port-Hamiltonian structure on the policy representation, respecting the energy conservation properties of physical robot systems and the networked nature of robot team interactions. Second, it uses self-attention to ensure a sparse policy representation able to handle time-varying information at each robot from the interaction graph. Third, we present a soft actor-critic reinforcement learning algorithm parameterized by our self-attention port-Hamiltonian control policy, which accounts for the correlation among robots during training while overcoming the need for value function factorization. Extensive simulations in different multi-robot scenarios demonstrate the success of the proposed approach, surpassing previous multi-robot reinforcement learning solutions in scalability while achieving similar or superior performance (with average cumulative reward up to 2x greater than the state of the art, using robot teams 6x larger than at training time). Comment: This paper is under review at IEEE T-R
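    The graph-restricted self-attention idea can be sketched in plain NumPy: attention scores are computed only over robots visible in the interaction graph, so each robot aggregates a time-varying neighbor set. This illustrates only the masking pattern; the learned projections, port-Hamiltonian structure, and training algorithm of the paper are omitted:

    ```python
    import numpy as np

    def masked_self_attention(X, adj):
        # X: (n, d) per-robot features; adj: (n, n) boolean visibility mask
        # (include self-loops so every row attends to at least itself)
        d = X.shape[1]
        scores = X @ X.T / np.sqrt(d)               # scaled dot-product scores
        scores = np.where(adj, scores, -np.inf)     # hide non-neighbors
        scores -= scores.max(axis=1, keepdims=True) # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=1, keepdims=True)           # rows sum to 1
        return w @ X                                # aggregate neighbor features

    X = np.arange(12, dtype=float).reshape(4, 3)    # hypothetical robot features
    adj = np.eye(4, dtype=bool)
    adj[0, 1] = True                                # robot 0 also sees robot 1
    out = masked_self_attention(X, adj)
    ```

    A robot whose mask row contains only its self-loop simply passes its own features through, which is the sparsity that keeps the policy independent of total team size.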

    Analyzing controllability of dynamical systems modelling brain neural networks

    The brain structure can be modelled as a deep recurrent complex neuronal network. Networked systems are especially interesting systems to control because of the role of the underlying architecture, which predisposes some components to particular control motions. The concept of brain cognitive control is analogous to the mathematical concept of control used in engineering, where the state of a complex system can be adjusted by a particular input. The in-depth study of the controllability of dynamical systems, despite being very difficult, could help to regulate brain cognitive function. Small advances in this study could aid the understanding of, and action against, learning difficulties such as dyscalculia, or other disturbances such as the phenomenon of forgetting.
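    Controllability analyses of networked linear dynamics typically rest on the Kalman rank condition: x' = Ax + Bu is controllable iff [B, AB, ..., A^(n-1)B] has full rank. A minimal sketch with a hypothetical 3-node chain network (not the paper's brain model):

    ```python
    import numpy as np

    def controllability_matrix(A, B):
        # Kalman controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]
        n = A.shape[0]
        blocks = [B]
        for _ in range(n - 1):
            blocks.append(A @ blocks[-1])
        return np.hstack(blocks)

    def is_controllable(A, B, tol=1e-9):
        # Full rank n of the Kalman matrix <=> the pair (A, B) is controllable
        return np.linalg.matrix_rank(controllability_matrix(A, B), tol=tol) == A.shape[0]

    # Hypothetical 3-node chain; the adjacency matrix serves as the dynamics matrix
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    B_end = np.array([[1.], [0.], [0.]])   # input injected at an end node
    B_mid = np.array([[0.], [1.], [0.]])   # input injected at the middle node
    ```

    On this chain, driving an end node controls the whole network, while driving the middle node does not (the symmetric mode of the two end nodes is unreachable), which is exactly the kind of architecture-dependent effect the abstract refers to.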