
    Minimum Violation Control Synthesis on Cyber-Physical Systems under Attacks

    Cyber-physical systems are conducting increasingly complex tasks, which are often modeled using formal languages such as temporal logic. The system's ability to perform the required tasks can be curtailed by malicious adversaries that mount intelligent attacks. At present, however, synthesis in the presence of such attacks has received limited research attention. In particular, the problem of synthesizing a controller when the required specifications cannot be satisfied completely due to adversarial attacks has not been studied. In this paper, we focus on the minimum violation control synthesis problem under linear temporal logic constraints for a stochastic finite-state discrete-time system in the presence of an adversary. A minimum violation control strategy is one that satisfies the most important tasks defined by the user while violating the less important ones. We model the interaction between the controller and adversary as a concurrent Stackelberg game and formulate a nonlinear programming problem to solve for the optimal control policy. To reduce the computational effort, we develop a heuristic algorithm that solves the problem efficiently, and we demonstrate the proposed approach on a numerical case study.
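
    As a rough illustration of the minimum-violation idea only (the adversary, the LTL machinery, and the Stackelberg structure of the paper are omitted), the hypothetical sketch below encodes task priorities as weighted violation costs on a tiny stochastic system and computes the cost-minimizing policy by value iteration. Every state, transition probability, weight, and the discount factor is made up for the example.

```python
# Minimal sketch of minimum-violation planning (not the paper's formulation):
# task priorities are reduced to per-state violation costs, and a policy that
# minimizes the expected discounted violation cost is found by value iteration.
# All numbers below are hypothetical.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
# P[a, s, s']: probability of moving from state s to s' under action a
P = np.zeros((n_actions, n_states, n_states))
P[0] = [[0.8, 0.2, 0.0, 0.0],
        [0.0, 0.1, 0.9, 0.0],
        [0.0, 0.0, 1.0, 0.0],    # state 2: goal, high-priority task satisfied
        [0.0, 0.0, 0.0, 1.0]]    # state 3: high-priority task violated
P[1] = [[0.5, 0.0, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]]

# Per-state violation cost: violating the important task (state 3) is weighted
# far more heavily than the minor one (state 1), so the policy gives up the
# less important task first when both cannot be satisfied.
cost = np.array([0.0, 1.0, 0.0, 10.0])

V = np.zeros(n_states)
for _ in range(1000):                       # value iteration
    Q = cost[None, :] + gamma * (P @ V)     # Q[a, s]
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=0)
print("expected violation cost per state:", V)
print("minimum-violation policy (action index per state):", policy)
```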

    Solution of Linear Programming Problems using a Neural Network with Non-Linear Feedback

    This paper presents a recurrent neural circuit for solving linear programming problems. The objective is to minimize a linear cost function subject to linear constraints. The proposed circuit employs non-linear feedback, in the form of unipolar comparators, to introduce transcendental terms into the energy function, ensuring fast convergence to the solution. A proof of validity of the energy function is also provided. The hardware complexity of the proposed circuit compares favorably with other circuits proposed for the same task. PSPICE simulation results are presented for a chosen optimization problem and are found to agree with the algebraic solution. Hardware test results for a two-variable problem further serve to strengthen the proposed theory.
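
    The energy-descent idea behind such circuits can be imitated in software. The hypothetical sketch below simulates a penalty-based gradient flow for a small linear program, with the rectifier max(0, .) standing in for comparator-style feedback on constraint violations. It is not the proposed circuit; the example LP, penalty weight, and step size are arbitrary choices made for illustration.

```python
# A minimal sketch (not the paper's circuit): Euler simulation of a gradient
# flow that descends a penalized LP energy function.  Constraint violations are
# rectified, playing the role of the comparator feedback.
import numpy as np

# minimize c^T x  subject to  A x <= b,  x >= 0   (hypothetical example LP)
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 3.0, 2.0])

k, dt = 50.0, 1e-3            # penalty weight and Euler step (illustrative)
x = np.zeros(2)
for _ in range(20000):
    viol = np.maximum(0.0, A @ x - b)                 # rectified violations
    grad = c + k * (A.T @ viol) - k * np.maximum(0.0, -x)
    x = x - dt * grad                                 # gradient descent step

print("approximate LP solution:", x)   # close to the exact optimum (2, 2)
```

    Because the constraints enter through a finite penalty, the trajectory settles slightly outside the feasible set; increasing the penalty weight tightens the approximation at the cost of stiffer dynamics.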

    An Efficient Policy Iteration Algorithm for Dynamic Programming Equations

    We present an accelerated algorithm for the solution of static Hamilton-Jacobi-Bellman equations related to optimal control problems. Our scheme is based on a classic policy iteration procedure, which is known to have superlinear convergence in many relevant cases provided the initial guess is sufficiently close to the solution. When this condition is not met, the behavior degenerates toward that of a value iteration method, with increased computation time. The new scheme circumvents this problem by combining the advantages of both algorithms through an efficient coupling. The method starts with a value iteration phase and then switches to a policy iteration procedure when a certain error threshold is reached. A delicate point is to determine this threshold so as to avoid cumbersome computation in the value iteration phase while remaining reasonably sure that the policy iteration method will converge to the optimal solution. We analyze the method and the efficiency of the coupling on a number of examples in dimensions two, three, and four, illustrating its properties.
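
    A discrete analogue of the coupling is easy to sketch: the hypothetical example below runs value iteration on a small discounted Markov decision process (a stand-in for the discretized Bellman equation) until the residual drops below a coarse threshold, then switches to exact policy iteration from the resulting estimate. The transition data, costs, discount factor, and switching threshold are all made up for illustration and are not taken from the paper.

```python
# Minimal sketch of the value-iteration / policy-iteration coupling on a small
# discounted MDP.  All problem data and thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 20, 3, 0.95
P = rng.dirichlet(np.ones(nS), size=(nA, nS))    # P[a, s, :] transition probs
cost = rng.uniform(0.0, 1.0, size=(nA, nS))      # stage cost c(s, a)

def bellman(V):
    # Q[a, s] = c(s, a) + gamma * sum_s' P[a, s, s'] V(s')
    return cost + gamma * (P @ V)

# Phase 1: value iteration until the residual falls below a coarse threshold.
V = np.zeros(nS)
while True:
    V_new = bellman(V).min(axis=0)
    residual = np.max(np.abs(V_new - V))
    V = V_new
    if residual < 1e-2:          # switching threshold (hypothetical choice)
        break

# Phase 2: policy iteration, started from the value-iteration estimate.
policy = bellman(V).argmin(axis=0)
while True:
    # exact policy evaluation: solve (I - gamma * P_pi) V = c_pi
    P_pi = P[policy, np.arange(nS), :]
    c_pi = cost[policy, np.arange(nS)]
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, c_pi)
    new_policy = bellman(V).argmin(axis=0)
    if np.array_equal(new_policy, policy):
        break                    # policy stable: optimal for this MDP
    policy = new_policy

print("optimal value at the first five states:", V[:5])
```

    The warm start matters: policy iteration's fast local convergence only pays off once the value estimate is good enough, which is exactly the trade-off the switching threshold controls.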
