
    Robot Patrolling for Stochastic and Adversarial Events

    In this thesis, we present and analyze two robot patrolling problems. The first problem concerns stochastic patrolling strategies in adversarial environments, where intruders use information about the patrolling path to increase their chances of successfully attacking the environment. We use Markov chains to design random patrolling paths on graphs. We present four different intruder models, each of which uses the information about the patrolling path in a different manner, and we characterize the expected reward for each intruder model as a function of the Markov chain used for patrolling. We show that minimizing these reward functions is, in general, a non-convex constrained optimization problem. We then discuss the application of different numerical optimization methods to minimize the expected reward for any given type of intruder and propose a pattern search algorithm to determine a locally optimal patrolling strategy. We also show that, for a certain type of intruder, a deterministic patrolling policy given by an orienteering tour of the graph is the optimal patrolling strategy.

    The second problem that we define and analyze is the Event Detection and Confirmation Problem, in which events arrive randomly on the vertices of a graph and stay active for a random amount of time. Events that stay active longer than a specified threshold are defined to be true events. The monitoring robot traverses the graph to detect newly arrived events and revisits them in order to classify them as true events; the goal is to maximize the number of true events that are correctly classified. We show that the offline version of the problem is NP-hard. We then consider a simple patrolling policy based on a TSP tour of the graph and characterize the probability of correctly classifying a true event. Finally, we investigate the setting in which multiple robots follow the same path and show that the optimal spacing between the robots can be non-uniform.
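    To make the first problem concrete, the following Python sketch (illustrative only, not code from the thesis) simulates a random patrolling path generated by a Markov chain on a small graph and estimates the mean return time to a vertex, the kind of statistic an informed intruder could exploit and that the proposed optimization would aim to shape. The graph topology and the choice of a simple random walk as the patrolling chain are assumptions made purely for this example.

```python
import numpy as np

# Toy 5-vertex graph (a cycle plus one chord); the topology is an
# illustrative assumption, not taken from the thesis.
A = np.array([[0, 1, 1, 0, 1],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

# One possible patrolling chain: a simple random walk on the graph,
# i.e. P[i, j] = A[i, j] / deg(i). The thesis optimizes over such chains;
# here we simply fix one and simulate it.
P = A / A.sum(axis=1, keepdims=True)

def sample_patrol(P, start, steps, rng):
    """Sample one realization of the random patrolling path."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

rng = np.random.default_rng(0)
path = sample_patrol(P, start=0, steps=20_000, rng=rng)

# Empirical mean return time to vertex 0: the kind of statistic an intruder
# who observes the patroller's visits could exploit when timing an attack.
visits = np.flatnonzero(np.array(path) == 0)
print("mean return time to vertex 0:", np.diff(visits).mean())
```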

    Multi-Agent Distributed Optimization and Estimation over Lossy Networks

    Optimization is a pervasive tool employed in many different fields. Due to its flexibility, it can be used to solve many diverse problems, some of which do not at first appear to require an optimization framework, and research on the topic remains active and copious. Another current field of investigation involves multi-agent systems, that is, systems composed of many (possibly different) agents. Research on cyber-physical systems, regarded as one of the challenges of the 21st century, is very extensive and encompasses highly complex systems such as smart cities and smart power grids, as well as much simpler ones, such as wireless sensor networks or camera networks. Since the optimization framework is used extensively in multi-agent contexts, optimization in multi-agent systems is an attractive topic to investigate.

    This thesis focuses on distributed optimization within a multi-agent scenario, i.e., optimization performed by a set of peers among which there is no leader. When these agents have to perform a task formulated as an optimization problem, they must collaborate to solve it, all using the same kind of update rule. Collaboration implies the exchange of messages among the agents, and the focus of the thesis is on the criticalities of this communication step. In particular, communication is not assumed to be reliable, meaning that packets exchanged between two agents can sometimes be lost. Moreover, the sought-for solution must not rely on an acknowledgement protocol: when an agent has to send a packet, it simply sends it and continues its computation, without waiting for confirmation that the receiver has actually received it. Almost all works in the existing literature handle packet losses by employing an acknowledgement (ACK) system; the effort in this thesis is to avoid ACKs, since they can slow down the communication step. This choice, however, makes the development of the optimization algorithms, and especially their convergence proofs, more involved. Apart from being robust to packet losses, the algorithms developed in this dissertation are also asynchronous, that is, the agents do not need to be synchronized to perform their update and communication steps.

    Three types of optimization problems are analyzed in the thesis. The first is the patrolling problem for camera networks; the algorithm developed to solve it has restricted applicability, since it is highly task-dependent. The other two problems are more general, as both concern the minimization of a sum of cost functions, one per agent. In the first case, the local cost functions have a particular form: they are locally coupled, in the sense that the cost function of an agent depends on the variables of the agent itself and on those of its direct neighbors. The sought-for algorithm must satisfy two further properties (apart from asynchronicity and robustness to packet losses): it must require only a single communication exchange per iteration (which also reduces the need for synchronization), and communication must occur only between direct neighbors. In the second case, the local functions all depend on the same variables. The analysis first focuses on the special case of local quadratic cost functions and their strong relationship with the consensus problem. Besides the development of a robust and asynchronous algorithm for the average consensus problem, a comparison of algorithms for minimizing a sum of quadratic cost functions is carried out. Finally, the distributed minimization of a sum of more general local cost functions is tackled, leading to a robust version of the Newton-Raphson consensus. The theoretical tools employed in the thesis to prove convergence of the algorithms mainly rely on Lyapunov theory and the theory of separation of time scales.
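    As a self-contained illustration of how averaging can be made robust to packet losses without acknowledgements, the sketch below implements the generic running-sum (mass-counter) variant of ratio consensus: each agent broadcasts cumulative sums, so any mass missed during a drop is recovered at the next successful reception. This is a simplified sketch under assumed conditions (synchronous rounds, i.i.d. packet losses, a small ring network) and is not the specific algorithm developed in the thesis, which additionally handles asynchronous updates and extends to the Newton-Raphson consensus.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (assumed, not from the thesis): 5 agents on a ring,
# unreliable broadcast links, no acknowledgements.
n = 5
out_neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
deg = np.array([len(out_neighbors[i]) for i in range(n)], dtype=float)
values = rng.normal(size=n)        # local measurements to be averaged
loss_prob = 0.3                    # each packet is lost independently

# Ratio-consensus masses: numerator x and denominator y.
x = values.copy()
y = np.ones(n)
# Running (cumulative) broadcast sums and last-received counters. The
# cumulative bookkeeping is what makes losses harmless: mass missed during a
# drop is recovered the next time a packet from the same sender gets through.
sigma_x = np.zeros(n); sigma_y = np.zeros(n)
rho_x = np.zeros((n, n)); rho_y = np.zeros((n, n))   # rho[i, j]: i's record of j

for _ in range(400):
    # Each agent splits its mass equally between itself and its out-neighbors
    # and adds the outgoing share to its running sums, then broadcasts them.
    share_x, share_y = x / (deg + 1.0), y / (deg + 1.0)
    sigma_x += share_x
    sigma_y += share_y
    new_x, new_y = share_x.copy(), share_y.copy()
    for j in range(n):                       # j broadcasts sigma_x[j], sigma_y[j]
        for i in out_neighbors[j]:
            if rng.random() > loss_prob:     # the packet from j reaches i
                new_x[i] += sigma_x[j] - rho_x[i, j]
                new_y[i] += sigma_y[j] - rho_y[i, j]
                rho_x[i, j], rho_y[i, j] = sigma_x[j], sigma_y[j]
    x, y = new_x, new_y

print("true average:   ", values.mean())
print("agent estimates:", x / y)
```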