16,347 research outputs found

    Distributed Optimization with Limited Communication in Networks with Adversaries

    We all hope for the best, but one must sometimes plan for worst-case scenarios, especially in a network with adversaries. This dissertation presents a detailed study of distributed optimization algorithms over a network of agents, some of whom are adversarial. In the model considered, adversarial agents act to subvert the objective of the network. The optimization problems are solved via gradient-based distributed algorithms, and the effects of the adversarial agents on the convergence of these algorithms to the optimal solution are characterized. The analyses establish conditions under which the adversarial agents have enough information to obstruct convergence to the optimal solution by the non-adversarial agents. The adversarial agents act by consuming network bandwidth, forcing the communication of the non-adversarial agents to be constrained. A distributed gradient-based optimization algorithm is explored in which the non-adversarial agents exchange quantized information with one another using fixed and adaptive quantization schemes. Additionally, convergence to a neighborhood of the optimal solution is proved in this communication-constrained environment despite the presence of adversarial agents.
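
    As a concrete illustration, here is a minimal sketch (not the dissertation's algorithm) of distributed gradient descent in which agents exchange quantized copies of their iterates, mimicking the bandwidth-limited setting described above. The two-agent example, the step size, and the fixed uniform quantizer are all illustrative assumptions.

    import numpy as np

    def quantize(x, step=0.1):
        # Fixed uniform quantizer: round each coordinate to the nearest grid point.
        return step * np.round(x / step)

    def distributed_gd(grads, neighbors, x0, rate=0.05, iters=200, q_step=0.1):
        # grads[i]: gradient oracle for agent i's local objective f_i
        # neighbors[i]: agents whose (quantized) iterates agent i receives
        x = [x0.copy() for _ in grads]
        for _ in range(iters):
            sent = [quantize(xi, q_step) for xi in x]  # bandwidth-limited exchange
            x = [np.mean([sent[j] for j in [i] + neighbors[i]], axis=0)
                 - rate * grads[i](x[i]) for i in range(len(grads))]
        return x

    # Two agents jointly minimizing (x - 1)^2 + (x + 1)^2: the iterates settle
    # in a neighborhood of the optimum x = 0, as the quantization error allows.
    grads = [lambda x: 2 * (x - 1), lambda x: 2 * (x + 1)]
    print(distributed_gd(grads, [[1], [0]], np.zeros(1)))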

    A Formalization of Robustness for Deep Neural Networks

    Deep neural networks have been shown to lack robustness to small input perturbations. The process of generating the perturbations that expose this lack of robustness is known as adversarial input generation, and it depends on the goals and capabilities of the adversary. In this paper, we propose a unifying formalization of the adversarial input generation process from a formal methods perspective. We provide a definition of robustness that is general enough to capture different formulations. The expressiveness of our formalization is shown by modeling and comparing a variety of adversarial attack techniques.
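
    For orientation, one common formulation that such a definition must capture is local robustness: a classifier f is robust at input x under budget eps if no perturbation of l-infinity norm at most eps changes its label. The sampling-based falsifier below is only an illustrative sketch, not a sound verification procedure, and the toy classifier is an assumption of this example.

    import numpy as np

    def is_locally_robust(f, x, eps, n_samples=1000, seed=0):
        # f maps an input vector to a class label; eps is the perturbation budget.
        rng = np.random.default_rng(seed)
        label = f(x)
        for _ in range(n_samples):
            delta = rng.uniform(-eps, eps, size=x.shape)  # candidate perturbation
            if f(x + delta) != label:
                return False  # counterexample found: robustness violated at x
        return True  # no violation found (sampling gives no formal guarantee)

    # Toy linear classifier over R^2: robust far from its decision boundary,
    # easily falsified close to it.
    f = lambda x: int(x @ np.array([1.0, -1.0]) > 0)
    print(is_locally_robust(f, np.array([1.0, 0.0]), eps=0.3))  # True
    print(is_locally_robust(f, np.array([0.1, 0.0]), eps=0.3))  # False (w.h.p.)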

    Cooperative Online Learning: Keeping your Neighbors Updated

    We study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated, requested to make a prediction, and pay the corresponding loss. The loss function is then revealed to these agents and also to their neighbors in the network. Our results characterize how much knowing the network structure affects the regret as a function of the model of agent activations. When activations are stochastic, the optimal regret (up to constant factors) is shown to be of order $\sqrt{\alpha T}$, where $T$ is the horizon and $\alpha$ is the independence number of the network. We prove that the upper bound is achieved even when agents have no information about the network structure. When activations are adversarial the situation changes dramatically: if agents ignore the network structure, an $\Omega(T)$ lower bound on the regret can be proven, showing that learning is impossible. However, when agents can choose to ignore some of their neighbors based on knowledge of the network structure, we prove an $O(\sqrt{\overline{\chi}\,T})$ sublinear regret bound, where $\overline{\chi} \ge \alpha$ is the clique-covering number of the network.
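
    The protocol above is straightforward to state in code. Below is a minimal sketch under illustrative assumptions (stochastic activations, a Hedge learner over K actions per agent, a shared loss vector per step) of agents updating on every loss revealed to them, whether through their own activation or a neighbor's.

    import numpy as np

    def cooperative_hedge(losses, neighbors, T, K, p_active=0.5, eta=0.3, seed=0):
        # losses(t): length-K loss vector at time t; neighbors[i]: agent i's neighbors
        rng = np.random.default_rng(seed)
        n = len(neighbors)
        w = np.ones((n, K))                 # one Hedge weight vector per agent
        total = 0.0
        for t in range(T):
            active = [i for i in range(n) if rng.random() < p_active]
            loss_t = losses(t)
            for i in active:
                p = w[i] / w[i].sum()       # activated agent i predicts
                total += p @ loss_t         # and pays the corresponding loss
            informed = set(active) | {j for i in active for j in neighbors[i]}
            for i in informed:              # loss revealed to actives and neighbors
                w[i] = w[i] * np.exp(-eta * loss_t)  # standard Hedge update
        return total

    # Three agents on a path graph with i.i.d. losses over two actions.
    L = np.random.default_rng(1).random((100, 2))
    print(cooperative_hedge(lambda t: L[t], [[1], [0, 2], [1]], T=100, K=2))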

    Information Structure Design in Team Decision Problems

    We consider a problem of information structure design in team decision problems and team games. We propose simple, scalable greedy algorithms for adding a set of extra information links to optimize team performance and resilience to non-cooperative and adversarial agents. We show via a simple counterexample that the set function mapping additional information links to team performance is in general not supermodular. Although this implies that the greedy algorithm is not accompanied by worst-case performance guarantees, we illustrate through numerical experiments that it can produce effective and often optimal or near-optimal information structure modifications.
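
    Here is a minimal sketch of the greedy heuristic described above, assuming a black-box team-performance score over sets of links; the candidate links and the toy score are placeholders, and, consistent with the non-supermodularity caveat, the procedure carries no worst-case guarantee.

    def greedy_links(candidates, score, budget):
        # candidates: potential information links (i, j); score: set -> performance
        chosen = set()
        for _ in range(budget):
            remaining = [l for l in candidates if l not in chosen]
            if not remaining:
                break
            best = max(remaining, key=lambda l: score(chosen | {l}))
            if score(chosen | {best}) <= score(chosen):
                break  # no candidate gives a marginal improvement; stop early
            chosen.add(best)
        return chosen

    # Toy score: each link adds 1, with a bonus when (0, 1) and (1, 2) co-occur.
    links = [(0, 1), (0, 2), (1, 2)]
    score = lambda s: len(s) + (0.5 if {(0, 1), (1, 2)} <= s else 0.0)
    print(greedy_links(links, score, budget=2))  # {(0, 1), (1, 2)}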

    Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets

    Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches, as they are difficult to apply to real-world scenarios where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation jointly. Extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. A video of our experiments is available at http://sites.google.com/view/nips17intentiongan

    Comment: Paper accepted to NIPS 2017
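
    To make the "single multi-modal policy" idea concrete, here is a minimal sketch of how a categorical latent skill variable can enter a GAN-style imitation objective. It assumes PyTorch, continuous actions, and small toy networks, and it omits the policy-gradient step a real implementation would need; it is not the paper's method, only an illustration of the latent-skill wiring.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    STATE_DIM, ACTION_DIM, N_SKILLS = 4, 2, 3

    policy = nn.Sequential(              # pi(a | s, z): state plus one-hot skill
        nn.Linear(STATE_DIM + N_SKILLS, 32), nn.Tanh(), nn.Linear(32, ACTION_DIM))
    disc = nn.Sequential(                # D(s, a): expert vs. policy pairs
        nn.Linear(STATE_DIM + ACTION_DIM, 32), nn.Tanh(), nn.Linear(32, 1))

    def disc_loss(expert_s, expert_a, s):
        # Sample a skill per state so one policy network covers all modes.
        z = F.one_hot(torch.randint(N_SKILLS, (s.shape[0],)), N_SKILLS).float()
        fake_a = policy(torch.cat([s, z], dim=1))
        real = disc(torch.cat([expert_s, expert_a], dim=1))
        fake = disc(torch.cat([s, fake_a], dim=1))
        return (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
                F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))

    # Forward pass on random tensors, just to check that the pieces fit together.
    s = torch.randn(8, STATE_DIM)
    print(disc_loss(torch.randn(8, STATE_DIM), torch.randn(8, ACTION_DIM), s).item())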