
    Probabilistic and Distributed Control of a Large-Scale Swarm of Autonomous Agents

    We present a novel method for guiding a large-scale swarm of autonomous agents into a desired formation shape in a distributed and scalable manner. Our Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) algorithm adopts an Eulerian framework, in which the physical space is partitioned into bins and the swarm's density distribution over the bins is controlled. Each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain. These time-varying Markov matrices are constructed by each agent in real time using feedback on the current swarm distribution, which is estimated in a distributed manner. The PSG-IMC algorithm minimizes the expected transition cost per time instant required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well to large numbers of agents and complex formation shapes, and can also be adapted for area exploration applications. We demonstrate the effectiveness of the proposed swarm guidance algorithm through numerical simulations and hardware experiments with multiple quadrotors. (Comment: submitted to IEEE Transactions on Robotics.)
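    The bin-transition idea can be illustrated with a classical Metropolis-Hastings construction: build a Markov matrix whose stationary distribution is the desired swarm density over the bins, and let every agent transition independently according to it. This is only a minimal, time-homogeneous sketch under our own assumptions — PSG-IMC's matrices are time-varying and built from distributed feedback, which this sketch omits, and all names below are illustrative.

```python
import numpy as np

def metropolis_transition_matrix(pi, adjacency):
    """Markov matrix with stationary distribution pi, restricted to the
    bin adjacency graph, via the Metropolis-Hastings construction.
    (PSG-IMC's matrices are feedback-driven and time-varying; this
    fixed matrix only illustrates the Eulerian idea.)"""
    n = len(pi)
    deg = [int(sum(adjacency[i][j] for j in range(n) if j != i)) for i in range(n)]
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i][j]:
                q_ij, q_ji = 1.0 / deg[i], 1.0 / deg[j]
                # acceptance ratio enforces detailed balance w.r.t. pi
                P[i, j] = q_ij * min(1.0, (pi[j] * q_ji) / (pi[i] * q_ij))
        P[i, i] = 1.0 - P[i].sum()  # remaining mass stays in the bin
    return P

# four bins on a line; target density concentrated in the middle bins
pi = np.array([0.1, 0.4, 0.4, 0.1])
adj = (np.abs(np.subtract.outer(np.arange(4), np.arange(4))) == 1).astype(float)
P = metropolis_transition_matrix(pi, adj)

# propagating any initial swarm distribution drives it toward pi
dist = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(300):
    dist = dist @ P
```

    Because the chain satisfies detailed balance with respect to pi, repeated application of P steers the bin-occupancy distribution toward the desired shape regardless of where the agents start.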

    Potential-based analysis of social, communication, and distributed networks

    In recent years, there has been a wide range of studies on the role of social and distributed networks in various disciplinary areas. In particular, the availability of large amounts of data from online social networks and advances in the control of distributed systems have drawn the attention of many researchers to the connection between evolutionary behaviors in social, communication, and distributed networks. In this thesis, we first revisit several well-known types of social and distributed networks and review relevant results from the literature. Building on this, we present a set of new results related to four types of problems and identify several directions for future research. The study undertaken and the approaches adopted allow us to analyze the evolution of certain types of social and distributed networks, and to identify local and global patterns of their dynamics, using novel potential-theoretic techniques.

    Following the introduction and preliminaries, we focus on a specific distributed algorithm for quantized consensus, known as an unbiased quantized algorithm, in which a set of agents interact locally in a network in order to reach a consensus. We provide tight expressions for the expected convergence time of such dynamics over general static and time-varying networks. We then introduce new protocols based on a special class of Markov chains known as Metropolis chains, and obtain the fastest randomized quantized consensus protocol to date. The bounds provided here considerably improve the state of the art over both static and dynamic networks.

    Next, we build a bridge between two classes of problems: distributed control problems and game problems. We analyze a class of distributed averaging dynamics known as Hegselmann-Krause opinion dynamics. Modeling these dynamics as a non-cooperative game, we elaborate on some of their evolutionary properties. In particular, we answer an open question on the termination time of such dynamics by connecting the convergence time to the spectral gap of the adjacency matrices of the underlying dynamics. This not only improves the best known upper bound, but also removes the dependency of the termination time on the dimension of the ambient space. The approach adopted here can also be leveraged to connect the rate of increase of the so-called kinetic s-energy of multi-agent systems to the spectral gap of their underlying dynamics.

    We then describe a richer class of distributed systems in which the agents act more strategically. Specifically, we consider a class of resource allocation games over networks and study their evolution toward final outcomes such as Nash equilibria. We devise simple distributed algorithms that drive the entire network to a Nash equilibrium in polynomial time for dense and hierarchical networks. In particular, we show that such games have a low price of anarchy and hence can be used to model allocation systems that suffer from a lack of coordination. This fact allows us to devise a distributed approximation algorithm within a constant gap of any pure-strategy Nash equilibrium over general networks.

    Subsequently, we turn our attention to an important problem concerning competition over social networks. We establish a hardness result for finding an equilibrium in a class of games known as competitive diffusion games, and provide necessary conditions for the existence of a pure-strategy Nash equilibrium in such games. In particular, we provide concentration results for the expected utility of the players over random graphs. Finally, we discuss future directions by identifying several interesting open problems and justifying their importance.

    Asynchronous Communication under Reliable and Unreliable Network Topologies in Distributed Multiagent Systems: A Robust Technique for Computing Average Consensus

    Nearly all applications in multiagent systems demand precision, robustness, consistency, and rapid convergence in the design of distributed consensus algorithms. With this in mind, this research proposes a robust consensus protocol for distributed multiagent networks under asynchronous communication, where the agents' state values are updated at different time intervals. The paper addresses asynchronous communication over both reliable and unreliable network topologies. The primary goal is to design local control inputs that achieve time synchronization by processing the update information received by the agents in a communication topology. To establish robust convergence, a convergence analysis is carried out using basic principles of graph and matrix theory together with suitable lemmas. Numerical examples covering four different scenarios are provided; the results validate the robustness and effectiveness of the proposed algorithm. A simulation comparison of the proposed algorithm with existing approaches, across several performance parameters, further supports this claim.
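    The flavor of asynchronous average consensus can be sketched with a standard pairwise gossip scheme, in which at each tick a single edge wakes up and the two incident agents average their values. This is a generic textbook protocol shown only for illustration, not the paper's algorithm.

```python
import random

def gossip_average_consensus(x, edges, steps, seed=0):
    """Asynchronous pairwise gossip: at each tick one edge wakes up and
    the two agents replace their values with the pairwise mean. The
    network average is invariant, and over a connected topology the
    values converge to it."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        i, j = rng.choice(edges)          # only one edge is active at a time
        x[i] = x[j] = (x[i] + x[j]) / 2.0  # local averaging preserves the sum
    return x

# a 4-node ring; the true average of the initial states is 3.0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
final = gossip_average_consensus([0.0, 4.0, 8.0, 0.0], edges, steps=2000)
```

    Each local averaging step leaves the global sum unchanged, so the common limit is exactly the initial average — the invariance that asynchronous consensus protocols must preserve despite agents updating at different times.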

    Consensus with Linear Objective Maps

    A consensus system is a linear multi-agent system in which agents communicate to reach a so-called consensus state, defined as the average of the agents' initial states. We consider a more general situation in which each agent is given a positive weight and the consensus state is defined as the weighted average of the initial conditions. We characterize in this paper the weighted averages that can be evaluated in a decentralized way by agents communicating over a directed graph. Specifically, we introduce a linear function, called the objective map, that defines the desired final state as a function of the agents' initial states. We then provide a complete answer to the question of whether there is a decentralized consensus dynamics over a given digraph that converges to the final state specified by an objective map. In particular, we characterize not only the set of objective maps that are feasible for a given digraph, but also the consensus dynamics that implement them. In addition, we present a decentralized algorithm to design the consensus dynamics.
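    One standard way to realize a weighted-average objective map is to pre-scale a symmetric graph Laplacian by the inverse weights: the weighted sum w^T x is then invariant along the trajectory, so the consensus value is the weighted average. This is a minimal sketch of that classical construction under our own choice of graph and weights; the paper characterizes feasibility over general digraphs.

```python
import numpy as np

def weighted_consensus(x0, L, w, eps=0.1, steps=500):
    """Discrete consensus dynamics x <- x - eps * diag(1/w) L x with a
    symmetric Laplacian L. Since w^T diag(1/w) L = 1^T L = 0, the
    weighted sum w^T x is conserved, and the agents converge to the
    weighted average sum(w_i x_i) / sum(w_i)."""
    x = np.array(x0, dtype=float)
    Winv = np.diag(1.0 / np.asarray(w, dtype=float))
    for _ in range(steps):
        x = x - eps * Winv @ (L @ x)
    return x

# path graph on 3 nodes, weights (1, 2, 1), initial states (4, 1, 2):
# the weighted average is (1*4 + 2*1 + 1*2) / 4 = 2
L = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
w = [1.0, 2.0, 1.0]
x = weighted_consensus([4.0, 1.0, 2.0], L, w)
```

    The step size eps must keep eps times the largest eigenvalue of diag(1/w) L below 2 for stability; here that eigenvalue is 2, so eps = 0.1 is comfortably inside the stable range.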

    Bayesian plug & play methods for inverse problems in imaging

    Doctoral thesis in Applied Mathematics (Université de Paris); Doctoral thesis in Electrical Engineering (Universidad de la República). This thesis deals with Bayesian methods for solving ill-posed inverse problems in imaging with learnt image priors. The first part of this thesis (Chapter 3) concentrates on two particular problems, namely joint denoising and decompression, and multi-image super-resolution. After an extensive study of the noise statistics for these problems in the transformed (wavelet or Fourier) domain, we derive two novel algorithms to solve this inverse problem. The first is based on a multi-scale self-similarity prior and can be seen as a transform-domain generalization of the celebrated Non-Local Bayes algorithm to the case of non-Gaussian noise. The second uses a neural-network denoiser to implicitly encode the image prior, and a splitting scheme to incorporate this prior into an optimization algorithm to find a MAP-like estimator. The second part of this thesis concentrates on the Variational AutoEncoder (VAE) model and some of its variants, which show its capability to explicitly capture the probability distribution of high-dimensional datasets such as images. Based on these VAE models, we propose two ways to incorporate them as priors for general inverse problems in imaging: • The first one (Chapter 4) computes a joint (space-latent) MAP estimator named Joint Posterior Maximization using an Autoencoding Prior (JPMAP). We show theoretical and experimental evidence that the proposed objective function satisfies a weak bi-convexity property, which is sufficient to guarantee that our optimization scheme converges to a stationary point. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach with respect to other non-convex MAP approaches, which more often get stuck in spurious local optima.
    • The second one (Chapter 5) develops a Gibbs-like posterior sampling algorithm for exploring the posterior distributions of inverse problems, using multiple chains and a VAE as the image prior. We show how to use those samples to obtain MMSE estimates and their corresponding uncertainty.
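    The splitting scheme described in the first part — alternating a data-fit step with a denoising step that implicitly encodes the prior — can be sketched on a toy 1-D inpainting problem, with a simple smoothing filter standing in for the learnt neural-network denoiser. All names and parameters below are illustrative, not the thesis implementation.

```python
import numpy as np

def smooth_denoiser(z):
    """Toy stand-in for a learnt denoiser: a [1/4, 1/2, 1/4] filter."""
    zp = np.pad(z, 1, mode="edge")
    return 0.25 * zp[:-2] + 0.5 * zp[1:-1] + 0.25 * zp[2:]

def pnp_hqs_inpainting(y, mask, mu=1.0, iters=100):
    """Plug-and-play half-quadratic splitting for inpainting y = mask * x:
    alternate an exact data-fit step (closed form because the forward
    operator is diagonal) with a denoising step that encodes the prior."""
    x = np.where(mask > 0, y, 0.0)
    z = x.copy()
    for _ in range(iters):
        # data step: argmin_x ||mask * x - y||^2 + mu * ||x - z||^2
        x = (mask * y + mu * z) / (mask + mu)
        # prior step: z = Denoise(x)
        z = smooth_denoiser(x)
    return x

# constant signal with two missing samples; the denoiser fills them in
true = np.full(12, 3.0)
mask = np.ones(12)
mask[[3, 7]] = 0.0
xhat = pnp_hqs_inpainting(mask * true, mask)
```

    The data step is a cheap closed form here only because the forward operator is a diagonal mask; for blur or subsampling operators it becomes a least-squares solve, while the prior step stays a single call to the denoiser — the property that makes plug-and-play schemes attractive.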