
    On controllability of neuronal networks with constraints on the average of control gains

    Control gains play an important role in the control of a natural or a technical system, since they reflect how much resource is required to optimize a certain control objective. This paper is concerned with the controllability of neuronal networks under constraints on the average value of the control gains injected at the driver nodes, constraints that are in accordance with engineering and biological considerations. In order to deal with the constraints on the control gains, the controllability problem is transformed into a constrained optimization problem (COP). Introducing constraints on the control gains unavoidably makes it substantially more difficult both to find feasible solutions and to refine them. As such, a modified dynamic hybrid framework (MDyHF) is developed to solve this COP, based on adaptive differential evolution and the concept of Pareto dominance. By comparing with statistical methods and several recently reported constrained optimization evolutionary algorithms (COEAs), we show that the proposed MDyHF is competitive and promising for studying the controllability of neuronal networks. Based on the MDyHF, we proceed to identify the controlling regions under different levels of constraint. It is revealed that the control gains should be allocated economically when strong constraints are imposed. In addition, we find that as the constraints become more restrictive, the driver nodes are more likely to be selected from the nodes with large degree. The results and methods presented in this paper provide useful insights into developing new techniques to control realistic complex networks efficiently.
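
    The MDyHF itself is specific to that paper, but the general pattern it builds on, differential evolution with a feasibility/dominance rule for handling constraints, can be sketched briefly. The following is a minimal illustrative sketch, not the authors' algorithm; the toy objective, the violation function, and all parameter values are assumptions made up for the example.

```python
import numpy as np

def differential_evolution(objective, violation, bounds, pop_size=40,
                           F=0.5, CR=0.9, generations=200, seed=0):
    """Generic DE/rand/1/bin with feasibility-based selection.

    `objective(x)` is minimized; `violation(x)` returns the total constraint
    violation (0.0 when x is feasible).  Selection prefers feasible solutions
    over infeasible ones and, between two infeasible ones, the smaller
    violation -- a simple stand-in for the dominance-based constraint
    handling described in the abstract.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    vio = np.array([violation(x) for x in pop])

    def better(f1, v1, f2, v2):
        # Feasibility-first rules: feasible beats infeasible, then compare
        # objective values (both feasible) or violations (both infeasible).
        if v1 == 0 and v2 == 0:
            return f1 <= f2
        if v1 == 0 or v2 == 0:
            return v1 == 0
        return v1 <= v2

    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # guarantee at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_t, v_t = objective(trial), violation(trial)
            if better(f_t, v_t, fit[i], vio[i]):
                pop[i], fit[i], vio[i] = trial, f_t, v_t
    best = min(range(pop_size), key=lambda i: (vio[i], fit[i]))
    return pop[best], fit[best], vio[best]

# Toy usage: minimize ||x||^2 subject to a made-up constraint on the
# average gain, mean(x) >= 1, with each gain bounded in [0, 5].
bounds = np.array([[0.0, 5.0]] * 4)
x, f, v = differential_evolution(
    lambda x: float(np.sum(x ** 2)),
    lambda x: max(0.0, 1.0 - float(np.mean(x))),
    bounds)
print(x, f, v)
```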

    Internal stabilization and external $L_p$ stabilization of linear systems subject to constraints

    Over the last decade, several aspects of control design for linear systems subject to magnitude and rate constraints on the control variables have been studied; during the last two years, this research has broadened to include magnitude constraints on state variables as well as on control variables. Recent work by Han et al. (2000), Hou et al. (1998), and Saberi et al. (2002) considered linear systems in a general framework covering both input magnitude constraints and state magnitude constraints. In particular, Saberi et al. consider internal stabilization while Han et al. consider output regulation, in different frameworks, namely global, semiglobal, and regional. These problems require very strong solvability conditions. A main direction for future research is therefore to find a controller with a large domain of attraction and good rejection properties for disturbances restricted to some bounded set.

    Searching for quantum optimal controls under severe constraints

    The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still introduce artificial traps. Proper management of these constraints is an issue of great practical importance for numerical simulations as well as optimization in the laboratory. Using optimal control simulations, we show that constraints on quantities such as the number of control variables, the control duration, and the field strength are potentially severe enough to prevent successful optimization of the objective. For each such constraint, we show that exceeding quantifiable limits can prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization. (Comment: 16 pages, 7 figures.)
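
    As a rough illustration of the kind of numerical experiment described (not the paper's actual simulations), the sketch below runs a finite-difference gradient ascent on the transfer fidelity of a single qubit with a piecewise-constant control field. The drift and control Hamiltonians, the target state, and all limits on the number of control variables, duration, and field strength are assumptions for the toy example; tightening those limits tends to leave the search stuck below unit fidelity, which is the qualitative effect the paper quantifies.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices: sigma_z as drift, sigma_x as control (illustrative qubit model).
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def fidelity(u, T, psi0, psi_target):
    """Propagate a piecewise-constant control u over [0, T] and return
    the transfer fidelity |<psi_target|psi(T)>|^2."""
    dt = T / len(u)
    psi = psi0.copy()
    for uk in u:
        psi = expm(-1j * (sz + uk * sx) * dt) @ psi
    return abs(np.vdot(psi_target, psi)) ** 2

def gradient_ascent(n_controls, T, u_max, steps=300, lr=0.5, eps=1e-6, seed=1):
    """Finite-difference gradient ascent with the field clipped to |u| <= u_max.
    Tight limits on n_controls, T, or u_max can stall the search below 1."""
    rng = np.random.default_rng(seed)
    psi0 = np.array([1, 0], dtype=complex)
    psi_target = np.array([0, 1], dtype=complex)
    u = rng.uniform(-u_max, u_max, n_controls)
    for _ in range(steps):
        grad = np.zeros_like(u)
        f0 = fidelity(u, T, psi0, psi_target)
        for k in range(len(u)):
            up = u.copy()
            up[k] += eps
            grad[k] = (fidelity(up, T, psi0, psi_target) - f0) / eps
        u = np.clip(u + lr * grad, -u_max, u_max)   # enforce the field-strength constraint
    return fidelity(u, T, psi0, psi_target)

# Loose vs. severe constraints on the number of variables, duration, and field strength.
print("loose :", gradient_ascent(n_controls=10, T=4.0, u_max=4.0))
print("severe:", gradient_ascent(n_controls=2, T=0.5, u_max=0.2))
```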

    Discrete time optimal control with frequency constraints for non-smooth systems

    We present a Pontryagin maximum principle for discrete time optimal control problems with (a) pointwise constraints on the control actions and the states, (b) frequency constraints on the control and the state trajectories, and (c) nonsmooth dynamical systems. Pointwise constraints on the states and the control actions represent desired and/or physical limitations on the state and control values; such constraints are important and widely present in the optimal control literature. Constraints of type (b), while less standard in the literature, effectively serve the purpose of describing important spectral properties of inertial actuators and systems. The conjunction of constraints of types (a) and (b) is a relatively new phenomenon in optimal control, but it is important for the synthesis of control trajectories with a high degree of fidelity. The maximum principle established here provides first order necessary conditions for optimality that serve as a starting point for synthesizing, in a computationally tractable fashion, highly accurate control trajectories for a large class of constrained motion planning problems. Moreover, the ability to handle a reasonably large class of nonsmooth dynamical systems that arise in practice ensures broad applicability of our theory, and we include several illustrations of our results on standard problems.
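
    Frequency constraints of type (b) can be made concrete with a small check on a discrete control sequence: bound the energy that the control's DFT places in a given band. The helper below is an illustrative sketch, not part of the paper; the band, the energy budget, and the example signals are assumptions chosen only to show the mechanics.

```python
import numpy as np

def satisfies_constraints(u, u_max, band, band_energy_max, dt=1.0):
    """Check a discrete-time control sequence u[0..N-1] against
    (a) pointwise magnitude bounds |u_k| <= u_max, and
    (b) a frequency constraint: the DFT energy of u in the band
        [band[0], band[1]] (cycles per sample, for sample time dt) must
        not exceed band_energy_max, modeling a spectral limit of an
        inertial actuator as discussed in the abstract."""
    # (a) pointwise constraint
    if np.any(np.abs(u) > u_max):
        return False
    # (b) frequency constraint via the DFT
    U = np.fft.rfft(u)
    freqs = np.fft.rfftfreq(len(u), d=dt)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = np.sum(np.abs(U[mask]) ** 2) / len(u)
    return band_energy <= band_energy_max

# A slowly varying control passes; the same control with a high-frequency
# component injected violates the spectral budget.
t = np.arange(100)
u_slow = 0.5 * np.sin(2 * np.pi * 0.01 * t)
u_fast = u_slow + 0.4 * np.sin(2 * np.pi * 0.30 * t)
print(satisfies_constraints(u_slow, 1.0, (0.25, 0.5), 0.1))   # True
print(satisfies_constraints(u_fast, 1.0, (0.25, 0.5), 0.1))   # False
```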

    A survey of methods of feasible directions for the solution of optimal control problems

    Three methods of feasible directions for optimal control are reviewed: an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and Zoutendijk's method. The categories of continuous optimal control problems considered are: (1) fixed time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed time problems with inequality state space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
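
    For category (1), the Frank-Wolfe idea, linearizing the cost and minimizing the linearization over the simple control constraints in closed form, can be sketched on a toy discrete-time problem. This is a generic conditional-gradient illustration under assumed dynamics and weights, not a reproduction of any of the three surveyed methods.

```python
import numpy as np

def frank_wolfe(grad, lin_min, x0, iters=100):
    """Conditional-gradient (Frank-Wolfe) loop: at each step minimize the
    linearized objective over the feasible set (`lin_min`), then take a
    convex combination with the classic step size 2/(k+2)."""
    x = x0.copy()
    for k in range(iters):
        s = lin_min(grad(x))            # vertex of the feasible set
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s
    return x

# Toy discrete-time problem: drive the scalar system x_{k+1} = a x_k + b u_k
# toward the origin at time N while penalizing control effort, under the
# simple constraint |u_k| <= u_max (category (1) in the survey).
a, b, N, u_max, x_init = 0.9, 0.5, 20, 0.3, 5.0

def terminal_state(u):
    x = x_init
    for uk in u:
        x = a * x + b * uk
    return x

def grad(u):
    # Gradient of terminal_state(u)^2 + 0.01*||u||^2: terminal_state is
    # linear in u, with coefficient b * a^{N-1-k} for u_k.
    xN = terminal_state(u)
    coeffs = b * a ** np.arange(N - 1, -1, -1)
    return 2 * xN * coeffs + 0.02 * u

def lin_min(g):
    # Minimizing <g, u> over the box |u_k| <= u_max picks the corner -u_max*sign(g).
    return -u_max * np.sign(g)

u_opt = frank_wolfe(grad, lin_min, np.zeros(N))
print("terminal state:", terminal_state(u_opt))
```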

    A Platform-Based Software Design Methodology for Embedded Control Systems: An Agile Toolkit

    A discrete control system with stringent hardware constraints is effectively an embedded real-time system and hence requires a rigorous methodology for developing its software. The development methodology proposed in this paper adapts agile principles and patterns to support the building of embedded control systems, focusing on the issues relating to a system's constraints and safety. Strong unit testing to ensure correctness, including the satisfaction of timing constraints, is the foundation of the proposed methodology. A platform-based design approach is used to balance costs and time-to-market against performance and functionality constraints. It is concluded that the proposed methodology significantly reduces design time and costs, as well as leading to better software modularity and reliability.
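
    As a loose illustration of the shape of unit test the methodology relies on (real embedded timing verification would target the actual platform and toolchain, typically in C), here is a sketch with an assumed PID step, an assumed 1 ms timing budget, and an assumed actuator limit; none of these come from the paper.

```python
import time

def pid_step(error, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
    """One iteration of a PID controller; `state` carries the integral
    term and the previous error between calls."""
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    state["integral"], state["prev_error"] = integral, error
    return kp * error + ki * integral + kd * derivative

def test_pid_step_meets_deadline():
    """Timing-constraint unit test: the control step must complete within
    an assumed 1 ms budget on every one of 1000 measured runs."""
    deadline_s = 1e-3
    state = {"integral": 0.0, "prev_error": 0.0}
    worst = 0.0
    for _ in range(1000):
        start = time.perf_counter()
        pid_step(error=0.5, state=state)
        worst = max(worst, time.perf_counter() - start)
    assert worst < deadline_s, f"worst-case step time {worst:.6f}s exceeds budget"

def test_pid_step_output_is_bounded():
    """Correctness unit test: with a bounded error the command stays within
    an assumed actuator saturation limit of +/-10."""
    state = {"integral": 0.0, "prev_error": 0.0}
    for _ in range(100):
        assert abs(pid_step(error=1.0, state=state)) <= 10.0

if __name__ == "__main__":
    test_pid_step_meets_deadline()
    test_pid_step_output_is_bounded()
    print("all checks passed")
```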

    Weak Dynamic Programming for Generalized State Constraints

    We provide a dynamic programming principle for stochastic optimal control problems with expectation constraints. A weak formulation, using test functions and a probabilistic relaxation of the constraint, avoids restrictions related to measurable selection but still implies the Hamilton-Jacobi-Bellman equation in the viscosity sense. We treat open state constraints as a special case of expectation constraints and prove a comparison theorem to obtain the equation for closed state constraints. (Comment: 36 pages; forthcoming in SIAM Journal on Control and Optimization.)
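
    For orientation only, a schematic of the type of problem involved (not the paper's exact weak formulation) is an expectation-constrained value function for a controlled diffusion, together with the Hamilton-Jacobi-Bellman equation that a dynamic programming principle yields in the viscosity sense; the symbols f, g, b, sigma, and U below are generic placeholders.

```latex
% Schematic expectation-constrained problem for a controlled diffusion X^{t,x,\nu}
% (generic placeholders; not the paper's exact formulation):
V(t,x) \;=\; \sup_{\nu}\Big\{\, \mathbb{E}\big[f\big(X_T^{t,x,\nu}\big)\big] \;:\;
   \mathbb{E}\big[g\big(X_T^{t,x,\nu}\big)\big] \ge 0 \,\Big\},
\qquad
dX_s = b(X_s,\nu_s)\,ds + \sigma(X_s,\nu_s)\,dW_s .

% A (weak) dynamic programming principle then yields, in the viscosity sense,
% the associated Hamilton--Jacobi--Bellman equation
-\partial_t V(t,x)
  \;-\; \sup_{u \in U}\Big\{\, b(x,u)\cdot D_x V(t,x)
  \;+\; \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(x,u)\,D_x^{2}V(t,x)\big) \Big\} \;=\; 0 .
```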