107 research outputs found

    Consensus as a Nash Equilibrium of a Dynamic Game

    Consensus formation in a social network is modeled by a dynamic game of a prescribed duration played by members of the network. Each member independently minimizes a cost function that represents his/her motive. An integral cost function penalizes a member's differences of opinion from the others as well as from his/her own initial opinion, weighted by influence and stubbornness parameters. Each member uses his/her rate of change of opinion as a control input. This defines a dynamic non-cooperative game that turns out to have a unique Nash equilibrium. Explicit analytic expressions are derived for the opinion trajectory of each member in two representative cases obtained by suitable assumptions on the graph topology of the network. These trajectories are then examined under different assumptions on the relative sizes of the influence and stubbornness parameters that appear in the cost functions.
    Comment: 7 pages, 9 figures. Preprint from the Proceedings of the 12th International Conference on Signal Image Technology and Internet-based Systems (SITIS), 201
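    A minimal numerical sketch of the setup described above, using a plain gradient flow with assumed weights; this is illustrative only, not the paper's closed-form Nash equilibrium trajectories:

    ```python
    import numpy as np

    def simulate_opinions(x0, w=1.0, s=0.5, T=10.0, dt=0.01):
        # Each member is pulled toward the average opinion of the others
        # (influence weight w) and toward his/her own initial opinion
        # (stubbornness weight s); integrated by forward Euler.
        x0 = np.asarray(x0, dtype=float)
        x = x0.copy()
        n = len(x)
        for _ in range(int(T / dt)):
            mean_others = (x.sum() - x) / (n - 1)
            x = x + dt * (-w * (x - mean_others) - s * (x - x0))
        return x

    final = simulate_opinions([0.0, 1.0, 2.0])
    ```

    With a positive stubbornness weight the opinions contract toward the group mean but do not fully agree, mirroring the qualitative behavior studied in the paper.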

    Sensor Fault Detection and Isolation in Autonomous Nonlinear Systems Using Neural Network-Based Observers

    This paper presents a new observer-based approach to detect and isolate faulty sensors in industrial systems. Two types of sensor faults are considered: complete failure and sensor deterioration. The proposed method is applicable to general autonomous nonlinear systems without assuming a triangular and/or normal form, as is usually done in the observer design literature. The key aspect of our approach is a learning-based design of the Luenberger observer, which uses a neural network to approximate the injective map that transforms the nonlinear system into a stable linear system with output injection. This learning-based Luenberger observer accurately estimates the system's state, allowing sensor faults to be detected through residual generation. The residual is computed as the norm of the difference between the system's measured output vector and the observer's predicted output vector. Fault isolation is achieved by comparing each sensor's measurement with its corresponding predicted value. We demonstrate the effectiveness of our approach in capturing and isolating sensor faults while remaining robust in the presence of measurement noise and system uncertainty. We validate our method through numerical simulations of sensor faults in a network of Kuramoto oscillators.
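    The residual test can be sketched as follows; the predicted outputs below stand in for the learning-based observer's estimates, and the threshold is an assumed value:

    ```python
    import numpy as np

    def detect_faulty_sensors(y_measured, y_predicted, threshold=0.1):
        y_measured = np.asarray(y_measured, dtype=float)
        y_predicted = np.asarray(y_predicted, dtype=float)
        residual = np.abs(y_measured - y_predicted)    # per-sensor residuals
        global_residual = np.linalg.norm(residual)     # detection statistic
        faulty = np.where(residual > threshold)[0]     # isolation by channel
        return global_residual, faulty.tolist()

    # sensor 1 deviates well beyond the threshold; sensors 0 and 2 are healthy
    r, faulty = detect_faulty_sensors([1.0, 2.5, 3.0], [1.02, 2.0, 2.98])
    ```

    Detection uses the norm of the full residual vector, while isolation compares each channel separately, matching the two-step scheme in the abstract.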

    Feedback Design for Devising Optimal Epidemic Control Policies

    For reliable epidemic monitoring and control, this paper proposes a feedback mechanism design to effectively cope with data and model uncertainties. Using past epidemiological data, we describe methods to estimate the parameters of general epidemic models. Because the data could be noisy, the estimated parameters may not be accurate. Therefore, under uncertain parameters and noisy measurements, we provide an observer design method for robust state estimation. Then, using the estimated model and state, we devise optimal control policies by minimizing a predicted cost functional. Finally, the effectiveness of the proposed method is demonstrated through its implementation on a modified SIR epidemic model.
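    A toy version of the parameter-estimation step on a basic SIR model, with an illustrative grid search standing in for the paper's estimation methods:

    ```python
    import numpy as np

    def simulate_sir(beta, gamma=0.1, S0=0.99, I0=0.01, steps=100, dt=0.1):
        # forward-Euler SIR in fractions of the population
        S, I = S0, I0
        traj = [I]
        for _ in range(steps):
            new_inf = beta * S * I * dt
            S -= new_inf
            I += new_inf - gamma * I * dt
            traj.append(I)
        return np.array(traj)

    def estimate_beta(observed, candidates):
        # pick the transmission rate whose trajectory best matches the data
        errors = [np.sum((simulate_sir(b) - observed) ** 2) for b in candidates]
        return candidates[int(np.argmin(errors))]

    data = simulate_sir(beta=0.3)   # stands in for past epidemiological data
    beta_hat = estimate_beta(data, candidates=[0.1, 0.2, 0.3, 0.4])
    ```

    In the paper's pipeline the recovered parameters would then feed an observer and an optimal control computation; here only the fitting step is sketched.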

    Secure Set-Based State Estimation for Linear Systems under Adversarial Attacks on Sensors

    When a strategic adversary can attack multiple sensors of a system and freely choose a different set of sensors at different times, how can we ensure that the state estimate remains uncorrupted by the attacker? The existing literature addressing this problem mandates that the adversary can only corrupt less than half of the total number of sensors. This limitation is fundamental to all point-based secure state estimators because of their dependence on algorithms that rely on majority voting among sensors. However, in reality, an adversary with ample resources may not be limited to attacking less than half of the total number of sensors. This paper avoids the above-mentioned fundamental limitation by proposing a set-based approach that allows attacks on all but one sensor at any given time. We guarantee that the true state is always contained in the estimated set, which is represented by a collection of constrained zonotopes, provided that the system is bounded-input-bounded-state stable and redundantly observable via every combination of sensor subsets with size equal to the number of uncompromised sensors. Additionally, we show that the estimated set is secure and stable irrespective of the attack signals if the process and measurement noises are bounded. To detect the set of attacked sensors at each time, we propose a simple attack detection technique. However, we acknowledge that intelligently designed stealthy attacks may not be detected and, in the worst-case scenario, could even result in exponential growth in the algorithm's complexity. We alleviate this shortcoming by presenting a range of strategies that offer different levels of trade-offs between estimation performance and complexity.
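    A minimal set-based sketch using intervals in place of the paper's constrained zonotopes, showing the containment guarantee for a scalar system; all dynamics and noise bounds are illustrative assumptions:

    ```python
    def propagate(interval, a=0.5, w_max=0.1):
        # image of the interval under x+ = a*x + w with |w| <= w_max (a > 0)
        lo, hi = interval
        return (a * lo - w_max, a * hi + w_max)

    def intersect(i1, i2):
        lo, hi = max(i1[0], i2[0]), min(i1[1], i2[1])
        return (lo, hi) if lo <= hi else None   # None flags an inconsistent sensor

    est = (-1.0, 1.0)          # initial set known to contain the true state
    true_x = 0.2
    for _ in range(5):
        true_x = 0.5 * true_x                  # noise-free true trajectory
        est = propagate(est)
        meas = (true_x - 0.2, true_x + 0.2)    # bounded-noise measurement set
        est = intersect(est, meas)
    ```

    Because propagation over-approximates the reachable set and each measurement set contains the true state, the intersection never loses the true state; an empty intersection would expose an inconsistent (attacked) sensor.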

    Learning-based Design of Luenberger Observers for Autonomous Nonlinear Systems

    The design of Luenberger observers for nonlinear systems involves a state transformation to another coordinate system where the dynamics are asymptotically stable and linear up to output injection. The observer then provides a state estimate in the original coordinates by inverting the transformation map. For general nonlinear systems, however, the main challenge is to find such a transformation and to ensure that it is injective. This paper addresses this challenge by proposing a learning method that employs supervised physics-informed neural networks to approximate both the transformation and its inverse. It is shown that the proposed method exhibits better generalization capabilities than other contemporary methods. Moreover, the observer is shown to be robust under the neural network's approximation error and the system uncertainties.
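    For a linear system the target structure is directly implementable; the sketch below runs a discrete-time Luenberger observer with an assumed gain to show the estimate converging. The paper's contribution is learning the (inverse) transformation that brings a nonlinear system into this form:

    ```python
    import numpy as np

    A = np.array([[0.9, 0.1], [0.0, 0.8]])   # stable system matrix
    C = np.array([[1.0, 0.0]])               # output map
    L = np.array([[0.5], [0.2]])             # observer gain (assumed, illustrative)

    x = np.array([1.0, -1.0])                # true state
    x_hat = np.zeros(2)                      # observer estimate
    for _ in range(100):
        y = C @ x                                  # measured output
        x_hat = A @ x_hat + L @ (y - C @ x_hat)    # prediction + output injection
        x = A @ x                                  # true dynamics step
    err = np.linalg.norm(x - x_hat)
    ```

    The estimation error obeys e+ = (A - LC) e, so any gain placing the eigenvalues of A - LC inside the unit circle drives the error to zero.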

    Secure State Estimation against Sparse Attacks on a Time-varying Set of Sensors

    This paper studies the problem of secure state estimation of a linear time-invariant (LTI) system with bounded noise in the presence of sparse attacks on an unknown, time-varying set of sensors. In other words, at each time, the attacker has the freedom to choose an arbitrary set of no more than p sensors and manipulate their measurements without restraint. To this end, we propose a secure state estimation scheme and guarantee a bounded estimation error subject to 2p-sparse observability and a mild technical assumption that the system matrix has no degenerate eigenvalues. The proposed scheme comprises the design of a decentralized observer for each sensor based on the local observable subspace decomposition. At each time step, the local estimates of the sensors are fused by solving an optimization problem to obtain a secure estimate, which is then followed by a local detection-and-resetting process of the decentralized observers. The estimation error is shown to be upper-bounded by a constant determined only by the system parameters and noise magnitudes. Moreover, we optimize the detector threshold to ensure that the benign sensors do not trigger the detector. The efficacy of the proposed algorithm is demonstrated by its application to a benchmark example, the IEEE 14-bus system.
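    A hedged sketch of the fusion step: a coordinate-wise median stands in for the paper's optimization-based fusion and tolerates corruption of a minority of the local estimates (all values illustrative):

    ```python
    import numpy as np

    def fuse_estimates(local_estimates):
        # coordinate-wise median: robust to a minority of corrupted estimates
        return np.median(np.asarray(local_estimates, dtype=float), axis=0)

    # five sensors estimate a 2-D state near [1.0, 2.0];
    # the sensor at index 4 is attacked and reports arbitrary values
    estimates = [[1.01, 2.0], [0.99, 1.98], [1.0, 2.02], [1.02, 1.99], [50.0, -30.0]]
    fused = fuse_estimates(estimates)
    ```

    The fused estimate stays near the truth despite the outlier; the paper's scheme additionally resets the local observers whose estimates are flagged as attacked.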

    Average observability of large-scale network systems

    This paper addresses observability and detectability of the average state of a network system when only a few gateway nodes are available. To reduce the complexity of the problem, the system is transformed to a lower-dimensional state space by aggregation. The notions of average observability and average detectability are then defined, and the respective necessary and sufficient conditions are provided.
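    The aggregation idea can be sketched for a network whose row sums are uniform, in which case the average state evolves under exact one-dimensional dynamics; the example matrix below is an assumption for illustration:

    ```python
    import numpy as np

    n = 4
    A = np.full((n, n), 0.1) + 0.5 * np.eye(n)   # row sums are uniformly 0.9
    ones = np.ones(n) / n                        # averaging projection 1'/n
    x = np.array([1.0, 2.0, 3.0, 4.0])

    avg = ones @ x                               # aggregated scalar state
    for _ in range(10):
        x = A @ x                                # full n-dimensional network
        avg = 0.9 * avg                          # projected 1-D dynamics
    full_avg = ones @ x
    ```

    The projected scalar trajectory matches the average of the full simulation exactly here, which is the complexity reduction that aggregation is meant to deliver.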

    Scale-free estimation of the average state in large-scale systems

    International audienceThis paper provides a computationally tractable necessary and sufficient condition for the existence of an average state observer for large-scale linear time-invariant (LTI) systems. Two design procedures, each with its own significance, are proposed. When the necessary and sufficient condition is not satisfied, a methodology is devised to obtain an optimal asymptotic estimate of the average state. In particular, the estimation problem is addressed by aggregating the unmeasured states of the original system and obtaining a projected system of reduced dimension. This approach reduces the complexity of the estimation task and yields an observer of dimension one. Moreover, it turns out that the dimension of the system also does not affect the upper bound on the estimation error
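    A sketch of an observer of dimension one tracking the average state from a single measured node. The network matrix, output choice, and gain below are illustrative assumptions, not the paper's design procedure:

    ```python
    import numpy as np

    n = 4
    c, d = 0.1, 0.5
    A = np.full((n, n), c) + (d - c) * np.eye(n)  # coupling c, self-weight d
    a_avg = d + (n - 1) * c                       # the average obeys avg+ = a_avg * avg
    l = 1.0                                       # scalar observer gain (assumed)

    x = np.array([1.0, 2.0, 3.0, 4.0])            # true network state
    z = 0.0                                       # scalar estimate of the average
    for _ in range(30):
        y = x[0]                                  # single measured node
        x = A @ x
        # the measured node obeys x0+ = (d - c)*y + c*n*avg, so the innovation
        # below corrects the scalar observer toward the true average:
        z = a_avg * z + l * (x[0] - (d - c) * y - c * n * z)
    true_avg = x.mean()
    ```

    The observer's state is a single scalar regardless of the network size n, which is the "scale-free" aspect highlighted in the abstract.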

    Modeling and Control of COVID-19 Epidemic through Testing Policies

    Testing for infected cases is one of the most important mechanisms for controlling an epidemic. It enables the isolation of detected infected individuals, thereby limiting disease transmission to the susceptible population. However, despite the significance of testing policies, the recent literature on the subject lacks a control-theoretic perspective. In this work, an epidemic model that incorporates the testing rate as a control input is presented. The proposed model differentiates the undetected infected cases from the detected infected cases, which are assumed to be removed from the disease-spreading process in the population. First, the model is estimated and validated on COVID-19 data from France. Then, two testing policies are proposed: the so-called best-effort strategy for testing (BEST) and the constant optimal strategy for testing (COST). The BEST policy is a suppression strategy that provides a lower bound on the testing rate such that the epidemic switches from a spreading to a non-spreading state. The COST policy is a mitigation strategy that provides an optimal value of the testing rate that minimizes the peak value of the infected population when the total stockpile of tests is limited. Both testing policies are evaluated by predicting the number of active intensive care unit (ICU) cases and the cumulative number of deaths due to COVID-19.
    Comment: 49 pages, 22 figures
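    A toy model separating undetected (Iu) and detected (Id) infected compartments, with the testing rate u as the control input; the parameter values are illustrative, and the comparison mimics the suppression idea behind the BEST policy:

    ```python
    def simulate(u, beta=0.4, gamma=0.1, steps=200, dt=0.1):
        # S: susceptible, Iu: undetected infected, Id: detected (isolated) infected
        S, Iu, Id = 0.99, 0.01, 0.0
        peak = Iu
        for _ in range(steps):
            new_inf = beta * S * Iu * dt    # only undetected cases transmit
            detected = u * Iu * dt          # testing moves cases from Iu to Id
            S = S - new_inf
            Iu = Iu + new_inf - gamma * Iu * dt - detected
            Id = Id + detected - gamma * Id * dt
            peak = max(peak, Iu)
        return peak

    peak_no_testing = simulate(u=0.0)
    peak_with_testing = simulate(u=0.5)   # rate high enough to suppress growth
    ```

    With a sufficiently high testing rate the effective growth rate of the undetected compartment turns negative, so the epidemic never spreads; a BEST-style policy would compute the smallest such rate.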