
    Social Shaping for Multi-Agent Systems

    Multi-agent systems have gained attention due to advances in automation, technology, and AI. In these systems, intelligent agents collaborate over networks to achieve shared goals. Despite these successes, multi-agent systems raise social challenges: agents may find the prices produced by efficient resource allocation unacceptable; interactions may be cooperative or competitive, leading to widely varying outcomes; and sensitive data may be put at risk by information sharing. We group these problems as: 1. Price Acceptance; 2. Agent Cooperation and Competition; 3. Privacy Risks. For Price Acceptance, we model decentralized resource allocation systems as markets. We solve price acceptance in static systems with quadratic utility functions by characterizing the admissible ranges of the quadratic coefficients. For dynamic systems, we present a dynamic competitive-equilibrium computation and propose a horizon strategy for smoothing dynamic prices. Concerning Agent Cooperation and Competition, we study the well-known Regional Integrated Climate-Economy (RICE) model, a dynamic game. We analyze its cooperative and competitive solutions, showing their impact on negotiations and on reaching consensus for regional climate action. Regarding Privacy Risks, we infer network structures from the best-response dynamics of linear-quadratic games to reveal agents' vulnerabilities. We prove that network identifiability is tied to controllability conditions, and we give a stable, sparse system-identification algorithm that learns network structures despite noise. Lastly, we contribute privacy-aware algorithms. For network games in which agents aggregate information, we propose a Laplace linear-quadratic functional perturbation algorithm satisfying differential privacy. A tutorial example demonstrates how privacy requirements can be met through tuning. In summary, this thesis addresses three social challenges in multi-agent systems: Price Acceptance, Agent Cooperation and Competition, and Privacy Risks.
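    The price-acceptance setting can be illustrated with a minimal market sketch, assuming quadratic utilities of the hypothetical form u_i(x) = a_i*x - (b_i/2)*x^2 and a fixed supply C; the coefficients and the clearing rule below are illustrative stand-ins, not the thesis's exact formulation.

    ```python
    def competitive_price(a, b, capacity):
        # Agent i's best response to price p maximizes
        # a[i]*x - 0.5*b[i]*x**2 - p*x, giving x_i(p) = (a[i] - p) / b[i].
        # Market clearing sum_i x_i(p) = capacity yields a closed-form price.
        num = sum(ai / bi for ai, bi in zip(a, b)) - capacity
        den = sum(1.0 / bi for bi in b)
        return num / den

    a, b, C = [4.0, 6.0], [1.0, 2.0], 3.0   # hypothetical parameters
    p = competitive_price(a, b, C)           # clearing price = 8/3
    x = [(ai - p) / bi for ai, bi in zip(a, b)]  # allocations sum to C
    ```

    An agent "accepts" the price when its net utility at this allocation is non-negative, which is the kind of condition the admissible quadratic ranges are meant to guarantee.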

    COOPERATIVE LEARNING FOR THE CONSENSUS OF MULTI-AGENT SYSTEMS

    As multi-agent systems have attracted much attention in recent years, consensus algorithms have gained immense popularity for building fault-tolerant systems in systems and control theory. In general, a consensus algorithm drives a swarm of agents to act as a coherent group that reaches agreement on a certain quantity of interest, which depends on the states of all agents. The most common variant is average consensus, whose final consensus value equals the average of the initial values. If we instead want the agents to find the best region of a particular resource, average consensus fails; the algorithm is thus limited by its inability to solve such optimization problems. In this dissertation, we aim to make the agents more intelligent so that they can handle different optimization problems. Based on this idea, we first design a new consensus algorithm that modifies the general bat algorithm. Since the bat algorithm is a swarm-intelligence method proven suitable for optimization problems, this modification is straightforward: the optimization objective suggests the convergence direction. To accelerate convergence, we also incorporate a term based on a flux function, which plays the role of an energy/mass exchange rate in compartmental modeling or a heat-transfer rate in thermodynamics; this term is inspired by the speed-up and speed-down strategy observed in biological swarms. We prove the stability of the proposed consensus algorithm for linear and nonlinear flux functions using the matrix paracontraction tool and a Lyapunov-based method, respectively. A second direction is to use deep reinforcement learning to train the agents to reach the consensus state. 
    By letting the agents learn their input commands, they become more intelligent without human intervention, and we avoid the complex mathematical modeling usually required to design a protocol for the general consensus problem. The deep deterministic policy gradient algorithm is used to plan the agents' commands in the continuous domain, and mobile-robot systems are used to verify the effectiveness of the approach. Adviser: Qing Hu
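    As context for the consensus discussion, the baseline average-consensus iteration x_{k+1} = W x_k can be sketched as follows; the weight matrix is a hypothetical doubly stochastic choice for a three-agent path graph, not the modified bat algorithm of the dissertation.

    ```python
    # Doubly stochastic, symmetric weights for a 3-agent path graph:
    # each agent averages with its neighbors and itself.
    W = [[0.50, 0.50, 0.00],
         [0.50, 0.25, 0.25],
         [0.00, 0.25, 0.75]]

    def average_consensus(x0, W, steps=200):
        # Repeated local averaging: x_{k+1} = W x_k. For a connected graph
        # with doubly stochastic symmetric W, states converge to the mean.
        x = list(x0)
        for _ in range(steps):
            x = [sum(w * xj for w, xj in zip(row, x)) for row in W]
        return x

    x_final = average_consensus([0.0, 3.0, 6.0], W)  # all entries approach 3.0
    ```

    The limit is always the initial average, which is exactly why this scheme cannot steer the group toward the optimum of some external objective, the limitation the dissertation sets out to remove.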

    Statistical verification and differential privacy in cyber-physical systems

    This thesis studies statistical verification and differential privacy in cyber-physical systems. The first part focuses on the statistical verification of stochastic hybrid systems, a class of formal models for cyber-physical systems. Model-reduction techniques are applied to both discrete-time and continuous-time stochastic hybrid systems to reduce them to discrete-time and continuous-time Markov chains, respectively, and statistical verification algorithms are proposed to verify linear-inequality LTL and Metric Interval Temporal Logic on these discrete probabilistic models. In addition, the advantage of stratified sampling in verifying Probabilistic Computation Tree Logic on labeled discrete-time Markov chains is studied; this method can potentially be extended to other statistical verification algorithms to reduce computational cost. The second part focuses on differential privacy in multi-agent systems that share information to achieve overall control goals. A general formulation of such systems and a notion of differential privacy are proposed, and a trade-off between differential privacy and the tracking performance of the systems is demonstrated. It is further proved that there is a trade-off between differential privacy and the entropy of the unbiased estimator of the private data, and an optimal algorithm achieving the best trade-off is given.
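    The flavor of statistical verification can be sketched with a Chernoff-Hoeffding sample-size bound for estimating the probability that a sampled trace satisfies a property; the toy "model" below is a hypothetical Bernoulli stand-in for a reduced Markov chain, not one of the thesis's algorithms.

    ```python
    import math
    import random

    def smc_estimate(trace_satisfies, eps=0.05, delta=0.01, rng=random.Random(0)):
        # Hoeffding bound: n >= ln(2/delta) / (2*eps**2) i.i.d. samples
        # guarantee |p_hat - p| <= eps with probability at least 1 - delta.
        n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        hits = sum(trace_satisfies(rng) for _ in range(n))
        return hits / n, n

    # Toy stand-in: the property holds on a sampled trace w.p. 0.7.
    p_hat, n = smc_estimate(lambda rng: rng.random() < 0.7)
    ```

    Stratified sampling, as studied in the thesis, aims to shrink the variance of such estimators so that fewer than n samples suffice for the same guarantee.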

    Accuracy-aware privacy mechanisms for distributed computation

    Distributed computing systems involve a network of devices or agents that use locally stored private information to solve a common problem. Distributed algorithms fundamentally require communication between devices, leaving the system vulnerable to "privacy attacks" perpetrated by adversarial agents. In this dissertation, we focus on designing privacy-preserving distributed algorithms for (a) solving distributed optimization problems, (b) computing equilibria of network aggregate games, and (c) solving distributed systems of linear equations. Specifically, we propose a privacy definition for distributed computation, "non-identifiability", that allows us to guarantee privacy and the accuracy of the computed solution simultaneously. Under this definition, the information observed by the adversary is shown to be compatible with several distributed computing problems, and the associated ambiguity provides privacy. Distributed optimization: we propose the Function Sharing strategy, which uses correlated random functions to obfuscate private objective functions, followed by a standard distributed optimization algorithm. We characterize a tight graph-connectivity condition for proving privacy via non-identifiability of the local objective functions, prove the correctness of our algorithm, and show that privacy and accuracy can be achieved simultaneously. Network aggregate games: we design a distributed Nash equilibrium computation algorithm for network aggregate games. The algorithm uses locally balanced correlated random perturbations to hide the information shared with neighbors for aggregate estimation, followed by descent along the negative gradient of the local cost function. We show that if the graph of non-adversarial agents is connected and non-bipartite, the algorithm keeps private local cost information non-identifiable while asymptotically converging to the exact Nash equilibrium. 
    Average consensus and systems of linear equations: finally, we design a finite-time algorithm for solving the average consensus problem over directed graphs with information-theoretic privacy. We use this algorithm to solve a distributed system of linear equations in finite time while protecting the privacy of the local equations. We characterize the computation, communication, memory, and iteration costs of our algorithm, and the graph conditions that guarantee information-theoretic privacy of local data.
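    The accuracy-preserving idea behind balanced correlated perturbations can be sketched as noise terms that cancel exactly in the aggregate; this is a centralized toy for intuition, whereas the dissertation constructs such perturbations distributively over the communication graph.

    ```python
    import random

    def zero_sum_noise(n, scale=1.0, rng=random.Random(1)):
        # Correlated perturbations that sum exactly to zero: each agent's
        # shared value is masked, but the aggregate remains unchanged,
        # so accuracy of aggregate-based computation is preserved.
        r = [rng.gauss(0.0, scale) for _ in range(n)]
        mean = sum(r) / n
        return [ri - mean for ri in r]

    private = [3.0, 5.0, 10.0]                      # hypothetical local data
    noise = zero_sum_noise(len(private))
    masked = [p + e for p, e in zip(private, noise)]  # what agents would share
    ```

    An adversary seeing only the masked values cannot invert individual data, yet any computation that depends on the sum (or average) is unaffected, which is the essence of the privacy/accuracy compatibility claimed above.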

    Compositional analysis of networked cyber-physical systems: safety and privacy

    Cyber-physical systems (CPS) are now commonplace in power grids, manufacturing, and embedded medical devices. Failures and attacks on these systems have caused significant social, environmental, and financial losses. In this thesis, we develop techniques for proving invariance and privacy properties of cyber-physical systems that could aid the development of more robust and reliable systems. The thesis uses three different modeling formalisms capturing different aspects of CPS. Networked dynamical systems are used for modeling (possibly time-delayed) interaction of ordinary differential equations, such as in power-system and biological networks. Labeled transition systems are used for modeling discrete communications and updates, such as in sampled-data control systems. Finally, Markov chains are used for describing distributed cyber-physical systems that rely on randomized algorithms for communication, such as a crowd-sourced traffic monitoring and routing system. Despite the differences in these formalisms, any model of a CPS can be viewed as a mapping from a parameter space (for example, the set of initial states) to a space of behaviors (also called trajectories or executions). In each formalism, we define a notion of sensitivity that captures the change in trajectories as a function of the change in the parameters. We develop approaches for approximating these sensitivity functions, which in turn are used for analysis of invariance and privacy. For proving invariance, we compute an over-approximation of the reach set, which is the set of states visited by any trajectory. We introduce a notion of input-to-state (IS) discrepancy functions for components of large CPS, which roughly captures the sensitivity of a component to its initial state and input. We develop a method for constructing a reduced model of the entire system using the IS discrepancy functions. 
    Then, we show that the trajectory of the reduced model over-approximates the sensitivity of the entire system with respect to the initial states. Using these results we develop a sound and relatively complete algorithm for compositional invariant verification. In systems where distributed components take actions concurrently, there is a combinatorial explosion in the number of different action sequences (or traces). We develop a partial order reduction method for computing the reach set of these systems. Our approach uses the observation that some action pairs are approximately independent: executing such actions in either order results in states that are close to each other. Hence a (large) set of traces can be partitioned into a (small) set of equivalence classes, where equivalent traces are derived by swapping approximately independent action pairs. We quantify the sensitivity of the system with respect to swapping approximately independent action pairs, which upper-bounds the distance between executions with equivalent traces. Finally, we develop an algorithm for precisely over-approximating the reach set of these systems that explores only a reduced set of traces. In many modern systems that allow users to share data, there is a tension between improving global performance and compromising user privacy. We propose a mechanism that guarantees ε-differential privacy for the participants, where each participant adds noise to its private data before sharing. The distributions of the noise are specified by the sensitivity of the agents' trajectories to the private data. We analyze the trade-off between ε-differential privacy and performance, and show that the cost of differential privacy scales quadratically with the privacy level. 
    The thesis illustrates that quantitative bounds on sensitivity can be used for effective reachability analysis, partial order reduction, and the design of privacy-preserving distributed cyber-physical systems.
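    The quadratic privacy cost can be seen directly in the standard Laplace mechanism, where noise of scale b = sensitivity/ε has variance 2b², i.e. expected squared error growing as 1/ε²; this generic sketch is not the thesis's trajectory-sensitivity mechanism.

    ```python
    import math
    import random

    def laplace_noise(scale, rng):
        # Inverse-CDF sampling of the Laplace(0, scale) distribution.
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_release(value, sensitivity, eps, rng):
        # epsilon-differential privacy via Laplace noise of scale
        # sensitivity/eps; its variance 2*(sensitivity/eps)**2 makes the
        # expected squared error quadratic in 1/eps.
        return value + laplace_noise(sensitivity / eps, rng)

    rng = random.Random(0)
    releases = [private_release(10.0, 1.0, 1.0, rng) for _ in range(20000)]
    ```

    Halving ε here would quadruple the noise variance, which matches the quadratic privacy/performance trade-off stated above.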