
    Optimal Attack against Cyber-Physical Control Systems with Reactive Attack Mitigation

    This paper studies the performance and resilience of a cyber-physical control system (CPCS) with attack detection and reactive attack mitigation. It addresses the problem of deriving an optimal sequence of false data injection attacks that maximizes the state estimation error of the system, providing a basic understanding of the limits of the attack impact. The design of the optimal attack is based on a Markov decision process (MDP) formulation, which is solved efficiently using the value iteration method. Using the proposed framework, we quantify the effect of false positives and mis-detections on system performance, which can inform the joint design of attack detection and mitigation. To demonstrate the use of the proposed framework in a real-world CPCS, we consider the voltage control system of power grids and run extensive simulations using PowerWorld, a high-fidelity power system simulator, to validate our analysis. The results show that by carefully designing the attack sequence using our proposed approach, the attacker can cause a large deviation of the bus voltages from the desired setpoint. Further, the results verify the optimality of the derived attack sequence and show that, to cause maximum impact, the attacker must carefully craft the attack to strike a balance between attack magnitude and stealthiness, owing to the simultaneous presence of attack detection and mitigation.
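    The value iteration method named in the abstract can be sketched in a few lines. The two-state attacker model below (detection probability, rewards, discount factor) is a hypothetical toy for illustration, not the paper's CPCS model: "attacking" earns reward but risks moving to a mitigated state where no further damage is possible.

```python
# Minimal value-iteration sketch for a finite MDP. All states, actions,
# transition probabilities and rewards here are illustrative toy values.
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """P[s][a] is a list of (prob, next_state); R[s][a] is the immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best  # in-place (Gauss-Seidel) update
        if delta < tol:
            return V

# Toy model: "attack" yields reward 1 while stealthy, but is detected with
# probability 0.3, after which mitigation makes further attacks worthless.
states = ["stealthy", "mitigated"]
actions = ["attack", "wait"]
P = {
    "stealthy":  {"attack": [(0.7, "stealthy"), (0.3, "mitigated")],
                  "wait":   [(1.0, "stealthy")]},
    "mitigated": {"attack": [(1.0, "mitigated")],
                  "wait":   [(1.0, "mitigated")]},
}
R = {
    "stealthy":  {"attack": 1.0, "wait": 0.0},
    "mitigated": {"attack": 0.0, "wait": 0.0},
}
V = value_iteration(states, actions, P, R)
```

    Even in this toy, the converged values quantify the trade-off the abstract describes: the value of the stealthy state reflects both the attack magnitude (reward) and the detection risk.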

    Parameterized MDPs and Reinforcement Learning Problems -- A Maximum Entropy Principle Based Framework

    We present a framework to address a class of sequential decision making problems. Our framework features learning the optimal control policy with robustness to noisy data, determining the unknown state and action parameters, and performing sensitivity analysis with respect to problem parameters. We consider two broad categories of sequential decision making problems modelled as infinite horizon Markov Decision Processes (MDPs) with (and without) an absorbing state. The central idea underlying our framework is to quantify exploration in terms of the Shannon entropy of the trajectories under the MDP and to determine the stochastic policy that maximizes it while guaranteeing a low value of the expected cost along a trajectory. The resulting policy enhances the quality of exploration early in the learning process, and consequently allows faster convergence rates and robust solutions even in the presence of noisy data, as demonstrated in our comparisons to popular algorithms such as Q-learning, Double Q-learning, and entropy-regularized Soft Q-learning. The framework extends to the class of parameterized MDPs and RL problems, where states and actions are parameter dependent and the objective is to determine the optimal parameters along with the corresponding optimal policy. Here, the associated cost function can be non-convex with multiple poor local minima. Simulation results applied to a 5G small-cell network problem demonstrate successful determination of communication routes and small-cell locations. We also obtain sensitivity measures with respect to problem parameters and robustness to noisy environment data.
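    The entropy-regularized policy idea underlying this line of work can be sketched concretely: given action values Q(s, ·), the soft value is a log-sum-exp and the stochastic policy is a Boltzmann distribution whose temperature trades expected cost against trajectory entropy. This is a generic Soft Q-learning-style sketch, not the paper's specific maximum-entropy formulation, and the action values below are hypothetical.

```python
import math

def soft_value(q_values, beta):
    """Soft-max value: V = (1/beta) * log(sum_a exp(beta * Q(s,a)))."""
    m = max(q_values)  # shift by the max to stabilise the log-sum-exp
    return m + math.log(sum(math.exp(beta * (q - m)) for q in q_values)) / beta

def soft_policy(q_values, beta):
    """Boltzmann policy: pi(a|s) = exp(beta * (Q(s,a) - V(s)))."""
    v = soft_value(q_values, beta)
    return [math.exp(beta * (q - v)) for q in q_values]

# Hypothetical action values for one state: low beta spreads probability
# (more exploration); high beta concentrates on the greedy action.
probs_explore = soft_policy([1.0, 0.0], beta=1.0)
probs_greedy = soft_policy([1.0, 0.0], beta=50.0)
```

    The low-temperature policy stays close to uniform early on, which is the exploration-enhancing behaviour the abstract credits for faster, more robust convergence.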

    A detectability criterion and data assimilation for non-linear differential equations

    In this paper we propose a new sequential data assimilation method for non-linear ordinary differential equations with compact state space. The method is designed so that the Lyapunov exponents of the corresponding estimation-error dynamics are negative, i.e., the estimation error decays exponentially fast. The latter is shown to be the case for generic regular flow maps if and only if the observation matrix H satisfies a detectability condition: the rank of H must be at least as great as the number of nonnegative Lyapunov exponents of the underlying attractor. Numerical experiments illustrate the exponential convergence of the method and the sharpness of the theory for the Lorenz-96 and Burgers equations with incomplete and noisy observations.
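    The rank condition stated in the abstract is simple to check numerically. The sketch below assumes a hypothetical spectrum of Lyapunov exponents (one positive, one zero, two negative, as for a low-dimensional chaotic system) and compares the rank of a candidate observation matrix H against the count of nonnegative exponents.

```python
import numpy as np

def satisfies_rank_criterion(H, lyapunov_exponents, tol=1e-12):
    """True if rank(H) >= number of nonnegative Lyapunov exponents."""
    n_nonneg = sum(1 for lam in lyapunov_exponents if lam >= -tol)
    return np.linalg.matrix_rank(H) >= n_nonneg

# Hypothetical exponent spectrum: one positive and one zero exponent,
# so at least two independent observations are needed.
exponents = [0.9, 0.0, -1.4, -2.7]
H_full = np.eye(4)[:2]   # observe the first two state coordinates: rank 2
H_thin = np.eye(4)[:1]   # observe only one coordinate: rank 1
```

    With these values, H_full meets the criterion while H_thin does not: observing a single coordinate cannot control both the unstable and the neutral direction.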

    LQR based improved discrete PID controller design via optimum selection of weighting matrices using fractional order integral performance index

    This is the author accepted manuscript; the final version is available from Elsevier via the DOI in this record. Continuous- and discrete-time Linear Quadratic Regulator (LQR) theory is used in this paper for the design of optimal analog and discrete PID controllers, respectively. The PID controller gains are formulated as the optimal state-feedback gains corresponding to a standard quadratic cost function involving the state variables and the controller effort. A real-coded Genetic Algorithm (GA) is then used to optimally select the weighting matrices associated with the respective optimal state-feedback regulator design, while minimizing another time-domain integral performance index comprising a weighted sum of the Integral of Time multiplied Squared Error (ITSE) and the controller effort. The proposed methodology is extended to a new kind of fractional-order (FO) integral performance index. The impact of a fractional-order (any arbitrary real order) cost function on the LQR-tuned PID control loops is highlighted in the present work, along with the achievable cost of control. Guidelines for the choice of the integral order of the performance index are given depending on the characteristics of the process to be controlled. This work has been supported by the Dept. of Science & Technology (DST), Govt. of India under the PURSE programme.
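    The LQR step at the core of this tuning approach can be sketched as a discrete Riccati iteration: for chosen weighting matrices Q and R, iterate to the fixed point and read off the state-feedback gain K. The double-integrator plant and the particular Q, R values below are hypothetical stand-ins; in the paper's scheme the GA would search over Q and R rather than fixing them.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain via fixed-point iteration of the Riccati recursion:
    P <- Q + A^T P (A - B K),  with  K = (R + B^T P B)^{-1} B^T P A."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])  # discrete double integrator (toy plant)
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                # state weighting (a GA would tune these)
R = np.array([[0.01]])                 # control-effort weighting
K = dlqr(A, B, Q, R)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

    The resulting closed-loop eigenvalues lie inside the unit circle, confirming the gain is stabilizing; mapping such state-feedback gains onto PID gains is the formulation step the abstract describes.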