
    Quantifying Robotic Swarm Coverage

    In the field of swarm robotics, the design and implementation of spatial density control laws have received much attention, with less emphasis placed on performance evaluation. This work fills that gap by introducing an error metric that provides a quantitative measure of coverage for use with any control scheme. The proposed error metric is continuously sensitive to changes in the swarm distribution, unlike commonly used discretization methods. We analyze the theoretical and computational properties of the error metric and propose two benchmarks to which error metric values can be compared. The first uses the realizable extrema of the error metric to compute the relative error of an observed swarm distribution. We also show that the error metric extrema can be used to help choose the swarm size and effective radius of each robot required to achieve a desired level of coverage. The second benchmark compares the observed distribution of error metric values to the probability density function of the error metric when robot positions are randomly sampled from the target distribution. We demonstrate the utility of this benchmark in assessing the performance of stochastic control algorithms. We prove that the error metric obeys a central limit theorem, develop a streamlined method for performing computations, and place the standard statistical tests used here on a firm theoretical footing. We provide rigorous theoretical development, computational methodologies, numerical examples, and MATLAB code for both benchmarks.
    Comment: To appear in the Springer series Lecture Notes in Electrical Engineering (LNEE). This book contribution is an extension of our ICINCO 2018 conference paper arXiv:1806.02488. 27 pages, 8 figures, 2 tables.
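
    The abstract does not reproduce the metric's formula, so the following Python sketch only illustrates the general idea under stated assumptions: each robot is represented as a Gaussian blob with an effective radius, the blobs are averaged into a swarm density, and coverage error is taken as the L2 distance to a target density on the unit square. The kernel, domain, and function names are hypothetical choices for illustration, not the authors' definition; their paper supplies MATLAB code for the actual metric.

    import numpy as np

    def swarm_coverage_error(positions, target_density_fn, n_grid=100, delta=0.05):
        """Hypothetical coverage error: L2 distance between a Gaussian-blob swarm
        density and a target density on the unit square (illustration only)."""
        xs = np.linspace(0.0, 1.0, n_grid)
        gx, gy = np.meshgrid(xs, xs)                      # grid over the assumed unit-square domain
        grid = np.stack([gx, gy], axis=-1)                # shape (G, G, 2)
        # Swarm density: average of one normalized Gaussian blob per robot (assumed kernel).
        diffs = grid[None, :, :, :] - positions[:, None, None, :]
        sq_dist = np.sum(diffs ** 2, axis=-1)
        blobs = np.exp(-sq_dist / (2 * delta ** 2)) / (2 * np.pi * delta ** 2)
        swarm_density = blobs.mean(axis=0)
        target = target_density_fn(gx, gy)
        cell_area = (xs[1] - xs[0]) ** 2
        return np.sqrt(np.sum((swarm_density - target) ** 2) * cell_area)

    # Example: 100 robots clustered near the center, measured against a uniform target.
    rng = np.random.default_rng(0)
    robots = 0.5 + 0.1 * rng.standard_normal((100, 2))
    print(swarm_coverage_error(robots, lambda x, y: np.ones_like(x)))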

    Planning for Decentralized Control of Multiple Robots Under Uncertainty

    We describe a probabilistic framework for synthesizing control policies for general multi-robot systems, given environment and sensor models and a cost function. Decentralized, partially observable Markov decision processes (Dec-POMDPs) are a general model of decision processes in which a team of agents must cooperate to optimize some objective (specified by a shared reward or cost function) in the presence of uncertainty, but where communication limitations mean that the agents cannot share their state, so execution must proceed in a decentralized fashion. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problems that can be solved practically as Dec-POMDPs. We describe this general model and show how, in contrast to most existing methods that are specialized to a particular problem class, it can synthesize control policies that exploit whatever opportunities for coordination are present in the problem, while balancing uncertainty in outcomes, sensor information, and information about other agents. We use three variations on a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate.
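
    As a reading aid only, here is a minimal Python sketch of the Dec-POMDP tuple described above and of decentralized execution, in which each agent chooses actions from its own observation history without ever seeing the global state. The field names and the toy policy interface are assumptions made for illustration; the macro-action planner from the paper is not reproduced.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class DecPOMDP:
        agents: List[str]
        states: List[str]
        actions: Dict[str, List[str]]                          # per-agent action sets
        observations: Dict[str, List[str]]                     # per-agent observation sets
        transition: Callable[[str, Tuple[str, ...]], str]      # state, joint action -> next state
        observe: Callable[[str, str, Tuple[str, ...]], str]    # agent, next state, joint action -> observation
        reward: Callable[[str, Tuple[str, ...]], float]        # shared team reward

    def run_decentralized(model, policies, s0, horizon):
        """Each agent maps its private observation history to an action (no state sharing)."""
        s, total = s0, 0.0
        histories = {i: [] for i in model.agents}
        for _ in range(horizon):
            joint = tuple(policies[i](tuple(histories[i])) for i in model.agents)
            total += model.reward(s, joint)
            s = model.transition(s, joint)
            for i in model.agents:
                histories[i].append(model.observe(i, s, joint))
        return total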

    Quantitative Assessment of Robotic Swarm Coverage

    This paper studies a generally applicable, sensitive, and intuitive error metric for assessing the performance of robotic swarm density controllers. Inspired by vortex blob numerical methods, it overcomes the shortcomings of a common strategy based on discretization and unifies other continuous notions of coverage. We present two benchmarks against which to compare the error metric value of a given swarm configuration: non-trivial bounds on the error metric, and the probability density function of the error metric when robot positions are sampled at random from the target swarm distribution. We prove that the error metric obeys a central limit theorem, allowing for more efficient numerical approximation of this distribution. For both benchmarks, we present supporting theory, computational methodology, examples, and MATLAB implementation code.
    Comment: Proceedings of the 15th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Porto, Portugal, 29-31 July 2018. 11 pages, 4 figures.
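
    The second benchmark lends itself to a simple Monte Carlo illustration: repeatedly sample robot positions from the target distribution, evaluate the error metric on each sample, and summarize the resulting values, which the paper's central limit theorem says should look increasingly Gaussian for large swarms. The sketch below assumes a uniform target on the unit square and a stand-in metric; it is not the paper's MATLAB implementation.

    import numpy as np

    def metric_distribution(metric, n_robots, n_trials, rng=None):
        """Empirical distribution of a coverage metric when robot positions are
        drawn from the target distribution (uniform on the unit square, assumed)."""
        rng = np.random.default_rng() if rng is None else rng
        values = np.empty(n_trials)
        for t in range(n_trials):
            positions = rng.uniform(0.0, 1.0, size=(n_robots, 2))
            values[t] = metric(positions)
        # Gaussian-style summary of the sampled values (the paper proves a CLT for its metric).
        return values, values.mean(), values.std(ddof=1)

    # Example with a stand-in metric: mean distance of the robots from the square's center.
    vals, mu, sigma = metric_distribution(
        lambda p: np.linalg.norm(p - 0.5, axis=1).mean(), n_robots=50, n_trials=2000)
    print(f"mean = {mu:.3f}, std = {sigma:.3f}")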

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons, known as neuronal assemblies, is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
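
    Since the network model is only named here, the following minimal Python sketch shows the standard Kuramoto dynamics it is inspired by: each oscillator's phase is pulled toward the others through sine coupling, and the order parameter r measures synchronization. The coupling strength, step size, and population size are illustrative choices, not the paper's settings.

    import numpy as np

    def kuramoto_step(theta, omega, k, dt):
        """One Euler step of the Kuramoto model:
        dtheta_i/dt = omega_i + (k / N) * sum_j sin(theta_j - theta_i)."""
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        return theta + dt * (omega + (k / theta.size) * coupling)

    rng = np.random.default_rng(0)
    n, k, dt = 50, 1.5, 0.01                     # illustrative parameters
    theta = rng.uniform(0, 2 * np.pi, n)         # initial phases
    omega = rng.normal(0.0, 1.0, n)              # natural frequencies
    for _ in range(5000):
        theta = kuramoto_step(theta, omega, k, dt)
    r = abs(np.exp(1j * theta).mean())           # order parameter: 1 means full synchrony
    print(f"order parameter r = {r:.2f}")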

    Intrinsic Motivation Systems for Autonomous Mental Development

    Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system that pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
    Keywords: active learning, autonomy, behavior, complexity, curiosity, development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values.
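
    The core idea, rewarding learning progress rather than raw prediction error, can be summarized in a short sketch. The class below is a hypothetical simplification: for each sensorimotor region it tracks recent prediction errors and defines the intrinsic reward as the recent decrease in error, so situations that are already mastered or hopelessly unpredictable both yield little reward. It is not the IAC implementation from the paper.

    from collections import defaultdict, deque

    class LearningProgressReward:
        """Intrinsic reward = recent decrease in prediction error within a region
        (window sizes and region handling are assumptions for illustration)."""
        def __init__(self, window=20):
            self.window = window
            self.errors = defaultdict(lambda: deque(maxlen=2 * window))

        def update(self, region, prediction_error):
            hist = self.errors[region]
            hist.append(prediction_error)
            if len(hist) < 2 * self.window:
                return 0.0                              # not enough history yet
            older = list(hist)[: self.window]
            recent = list(hist)[self.window:]
            # Positive when errors are shrinking, i.e. the robot is learning something here.
            return sum(older) / self.window - sum(recent) / self.window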

    Efficient Contact State Graph Generation for Assembly Applications

    An important aspect of the design of many automated assembly strategies is the ability to automatically generate the set of contact states that may occur during an assembly task. In this paper, we present an efficient means of constructing the set of all geometrically feasible contact states that may occur within a bounded set of misalignments (bounds determined by robot inaccuracy). This set is stored as a graph, referred to as an Assembly Contact State Graph (ACSG), which indicates neighbor relationships between feasible states. An ACSG is constructed without user intervention in two stages. In the first stage, all hypothetical primitive principle contacts (PPCs; all contact states allowing 5 degrees of freedom) are evaluated for geometric feasibility with respect to part-imposed and robot-imposed restrictions on relative positioning (evaluated using optimization). In the second stage, the feasibility of each of the various combinations of PPCs is efficiently evaluated, first using topological existence and uniqueness criteria, then using part-imposed and robot-imposed geometric criteria.
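
    To make the two-stage construction concrete, here is a hedged Python outline: a feasibility predicate (standing in for the optimization-based test described above) first filters the hypothetical primitive principle contacts, then combinations of the surviving PPCs are screened and linked into a graph whose edges connect states differing by a single PPC. The predicate names, the combination cap, and the neighbor rule are assumptions made for illustration, not the paper's algorithm.

    from itertools import combinations

    def build_acsg(ppcs, ppc_feasible, combo_feasible, max_combo=3):
        """Sketch of a two-stage Assembly Contact State Graph construction."""
        # Stage 1: keep only geometrically feasible primitive principle contacts.
        feasible_ppcs = [p for p in ppcs if ppc_feasible(p)]
        # Stage 2: screen combinations of feasible PPCs, then connect neighboring states.
        states = [frozenset([p]) for p in feasible_ppcs]
        for r in range(2, max_combo + 1):
            for combo in combinations(feasible_ppcs, r):
                if combo_feasible(combo):
                    states.append(frozenset(combo))
        # Assumed neighbor rule: two states are adjacent if they differ by exactly one PPC.
        edges = [(a, b) for a, b in combinations(states, 2) if len(a ^ b) == 1]
        return states, edges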

    Certified Impossibility Results for Byzantine-Tolerant Mobile Robots

    We propose a framework to build formal developments for robot networks using the COQ proof assistant, to state and formally prove various properties. We focus in this paper on impossibility proofs, as it is natural to take advantage of the COQ higher-order calculus to reason about algorithms as abstract objects. We present in particular formal proofs of two impossibility results for convergence of oblivious mobile robots when, respectively, more than one half and more than one third of the robots exhibit Byzantine failures, starting from the original theorems by Bouzid et al. Thanks to our formalization, the corresponding COQ developments are quite compact. To our knowledge, these are the first certified (in the sense of formally proved) impossibility results for robot networks.