    Towards temporal verification of swarm robotic systems

    A robot swarm is a collection of simple robots designed to work together to carry out some task. Such swarms rely on the simplicity of the individual robots; the fault tolerance inherent in having a large population of identical robots; and the self-organised behaviour of the swarm as a whole. Although robot swarms present an attractive solution to demanding real-world applications, designing individual control algorithms that can guarantee the required global behaviour is a difficult problem. In this paper we assess and apply formal verification techniques for analysing the emergent behaviours of robotic swarms. These techniques, based on the automated analysis of systems using temporal logics, allow us to analyse whether all possible behaviours within the robot swarm conform to some required specification. In particular, we apply model checking, an automated and exhaustive algorithmic technique, to check whether temporal properties are satisfied on all the possible behaviours of the system. We target a particular swarm control algorithm that has been tested in real robotic swarms, and show how automated temporal analysis can help to refine and analyse such an algorithm. © 2012 Elsevier B.V. All rights reserved.
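
    To make the kind of check concrete, the sketch below exhaustively explores the joint state space of a toy swarm and tests a safety property on every reachable configuration, returning a counterexample when the property fails. The three-state robot model and the property are hypothetical stand-ins for illustration, not the control algorithm verified in the paper.

```python
# Minimal sketch of exhaustive state-space exploration for a toy swarm;
# the robot model (three states, nondeterministic steps) is hypothetical.
from itertools import product

# Nondeterministic successor states for a single robot (illustrative).
STEP = {
    "search": ("search", "grab"),
    "grab": ("grab", "deposit"),
    "deposit": ("search",),
}

def successors(config):
    """All joint successor configurations of the swarm."""
    return set(product(*(STEP[s] for s in config)))

def check_invariant(n_robots, invariant):
    """Exhaustively check a safety property on all reachable states."""
    init = ("search",) * n_robots
    seen, frontier = {init}, [init]
    while frontier:
        config = frontier.pop()
        if not invariant(config):
            return False, config  # counterexample found
        for succ in successors(config):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return True, None

# Hypothetical safety property: never all robots grabbing at once.
ok, cex = check_invariant(3, lambda c: not all(s == "grab" for s in c))
print(ok, cex)  # False ('grab', 'grab', 'grab') on this toy model
```

    On this toy model the property fails and a counterexample configuration is returned; feedback of exactly this shape is what drives the refinement of a swarm control algorithm in the approach described above.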

    A general architecture for robotic swarms

    Swarms are large groups of simplistic individuals that collectively solve disproportionately complex tasks. Individual swarm agents are limited in perception, mechanically simple, have no global knowledge, and are cheap, disposable and fallible. They rely exclusively on local observations and local communications. A swarm has no centralised control. These features are typified by eusocial insects such as ants and termites, which construct nests, forage and build complex societies composed of primitive agents. This project created the basis of a general swarm architecture for the control of insect-like robots. The Swarm Architecture is inspired by threshold models of insect behaviour and attempts to capture the salient features of the hive in a closely defined computer program that is hardware agnostic, indifferent to swarm size and intended to be applicable to a wide range of swarm tasks. This was achieved by exploiting the inherent limitations of swarm agents. Individual insects were modelled as machines capable only of perception, locomotion and manipulation. This approximation reduced behaviour primitives to a fixed, tractable number and abstracted sensor interpretation. Cooperation was achieved through stigmergy, and decisions were made via a behaviour threshold model, as sketched below. The Architecture represents an advance on previous robotic swarms in its generality: swarm control software has often been tied to one task and robot configuration. The Architecture's exclusive focus on swarms sets it apart from existing general cooperative systems, which are not usually explicitly swarm orientated. The Architecture was implemented successfully on both simulated and real-world swarms.
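
    As a concrete illustration of the threshold models the Architecture draws on, the sketch below implements the classic response-threshold rule from the insect-behaviour literature, under which an agent engages a task with probability s^n / (s^n + θ^n) for stimulus s and personal threshold θ. The task names and numeric values are illustrative, not taken from the thesis.

```python
# Sketch of a response-threshold decision rule of the kind used in
# insect-inspired swarm control; all concrete values are illustrative.
import random

def engage_probability(stimulus, threshold, n=2):
    """P(engage) = s^n / (s^n + theta^n): low-threshold agents respond
    to weak stimuli, high-threshold agents only to strong ones."""
    return stimulus**n / (stimulus**n + threshold**n)

class Agent:
    def __init__(self, thresholds):
        self.thresholds = thresholds  # per-task response thresholds

    def choose_task(self, stimuli):
        """Probabilistically commit to at most one task this time step."""
        for task, s in stimuli.items():
            if random.random() < engage_probability(s, self.thresholds[task]):
                return task
        return None  # remain idle

# Two tasks, one agent innately biased towards foraging (hypothetical).
eager_forager = Agent({"forage": 0.2, "build": 0.9})
print(eager_forager.choose_task({"forage": 0.5, "build": 0.5}))
```

    Because thresholds differ between agents, division of labour emerges statistically without any central allocation, which is what makes such rules attractive for hardware-agnostic, swarm-size-indifferent control.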

    Formal Analysis of Artificial Collectives using Parametric Markov Models

    There are many potential applications for the deployment of distributed systems composed of identical autonomous agents, such as swarm robotic systems or wireless sensor networks, including remote monitoring, space exploration, or environmental clean-up. Such systems need to be robust, and the loss of a small number of agents should not compromise the effectiveness of the system, as they will often operate in hostile environments where individual members may suffer failures or communication may be hindered. To address this, these artificial systems are often designed to imitate the behaviour of self-organising systems found in nature, where simple reactive behaviours for individual members of a system can lead to complex global behaviours, and the collective remains robust to the loss of individuals. Despite much research being conducted into the development and evaluation of these systems, the industrial application of these technologies is still low. This issue could be addressed by further demonstrating that they can reliably, and predictably, achieve given objectives. Designing such systems is challenging, and often detailed simulations are developed for their analysis. Simulations give invaluable insight into the behaviour of such a system; however, there are often corner cases that might be overlooked. By developing a formal model of the system using some appropriate formalism, mathematical techniques can be applied during development to ensure that the system behaves correctly with respect to some given specification. These dynamic and inherently stochastic systems can be modelled as Markov processes: memoryless stochastic processes whose behaviour at any moment in time is determined solely by their current state. Model checking is an algorithmic technique to exhaustively check that a representation of a system as a Markov process exhibits some desirable property; furthermore, such an analysis can be extended to systems whose parameters may not be known in advance. However, the analysis of formal models of large systems is limited by the resources required: the size of the model may grow exponentially with the size of the system, and the subsequent analysis may prove impossible due to hardware or time constraints. This thesis investigates the suitability of parametric Markov models for the analysis of swarm robotic systems and wireless sensor networks. The analysis of such models is costly in terms of the size of the formal model representing a system and the computation time required for its subsequent analysis. Modelling techniques and abstractions are developed for the construction of macroscopic models that abstract away from the identities of individual swarm robots or sensor nodes and instead focus on the desirable global behaviours of such a system, resulting in smaller formal models. New techniques are then introduced to facilitate the analysis of large families of such models, where similarities between models that share some parameter values are exploited to speed up their analysis. In addition, new representations for such models are developed that allow larger models to be analysed and also significantly reduce the time required for that analysis.
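
    The sketch below shows, at toy scale, what a parametric analysis produces: for a hypothetical three-state Markov chain with symbolic transition probabilities, the probability of eventually reaching a goal state comes out as a rational function of the parameters, which can then be evaluated for any concrete instantiation without re-running the analysis. Full-scale models of this kind are handled by dedicated probabilistic model checkers such as PRISM or Storm.

```python
# Toy parametric reachability analysis on a three-state chain
# (try / succeed / fail); the model and its parameters are hypothetical.
import sympy as sp

p, q = sp.symbols("p q", positive=True)  # symbolic transition probabilities
x = sp.Symbol("x")  # P(eventually reach "succeed" | start in "try")

# From "try": succeed with p, fail with q, retry with 1 - p - q,
# giving the fixed-point equation x = p + (1 - p - q) * x.
reach = sp.solve(sp.Eq(x, p + (1 - p - q) * x), x)[0]
print(sp.simplify(reach))             # p/(p + q)
print(reach.subs({p: 0.2, q: 0.05}))  # 0.8 for one concrete instantiation
```

    The point of the parametric representation is that the rational function is computed once; checking a whole family of systems then reduces to cheap substitutions rather than one model-checking run per parameter value.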

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Just as the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer scientists and non-technical media to identify them with the whole of AI, symbolic approaches are attracting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in the history of AI, we expect that a revamp of technologies often tagged as “classical AI” (in particular, logic-based ones) will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims to provide a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS perspective and the logic-based one.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
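
    A minimal sketch of the mapper/reducer pair for one generation of Life under strip partitioning follows, in the spirit of the streaming pattern described above. The strip height, the "row_index<TAB>cells" input format, and the per-strip reducer invocation are simplifying assumptions, not the paper's exact algorithms.

```python
# Hadoop-Streaming-style sketch: one Game-of-Life generation with
# strip partitioning; the I/O format and strip height are assumptions.
import sys

H = 4  # rows per strip (illustrative)

def mapper(lines):
    """Route each row to its own strip and to the neighbouring strips
    that need it as boundary context."""
    for line in lines:
        r, cells = line.rstrip("\n").split("\t")
        r = int(r)
        for strip in {(r - 1) // H, r // H, (r + 1) // H}:
            if strip >= 0:
                print(f"{strip}\t{r}\t{cells}")

def reducer(lines, strip):
    """Compute the next generation for the rows owned by one strip,
    using the boundary rows shipped in by the mapper."""
    rows = {}
    for line in lines:
        _, r, cells = line.rstrip("\n").split("\t")
        rows[int(r)] = cells
    for r in range(strip * H, strip * H + H):
        if r not in rows:
            continue  # row beyond the grid
        width = len(rows[r])
        nxt = []
        for c in range(width):
            live = sum(
                rows.get(rr, "0" * width)[cc] == "1"
                for rr in (r - 1, r, r + 1)
                for cc in (c - 1, c, c + 1)
                if 0 <= cc < width and not (rr == r and cc == c)
            )
            alive = rows[r][c] == "1"
            nxt.append("1" if live == 3 or (alive and live == 2) else "0")
        print(f"{r}\t{''.join(nxt)}")

if __name__ == "__main__":
    if sys.argv[1] == "map":
        mapper(sys.stdin)
    else:  # "reduce <strip_id>"; assumes stdin holds one strip's lines
        reducer(sys.stdin, int(sys.argv[2]))
```

    Strip partitioning pays off because each reduce task touches only its H rows plus two boundary rows, so communication scales with strip boundaries rather than grid area, the kind of saving behind the reported speed-up.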

    A Corroborative Approach to Verification and Validation of Human-Robot Teams

    We present an approach for the verification and validation (V&V) of robot assistants in the context of human-robot interactions (HRI), to demonstrate their trustworthiness through corroborative evidence of their safety and functional correctness. Key challenges include the complex and unpredictable nature of the real world in which assistant and service robots operate, the limitations of available V&V techniques when used individually, and the consequent lack of confidence in the V&V results. Our approach, called corroborative V&V, addresses these challenges by combining several different V&V techniques; in this paper we use formal verification (model checking), simulation-based testing, and user validation in experiments with a real robot. We demonstrate our corroborative V&V approach through a handover task, the most critical part of a complex cooperative manufacturing scenario, for which we propose safety and liveness requirements to verify and validate. We construct formal models, simulations and an experimental test rig for the HRI. To capture requirements we use temporal logic properties, assertion checkers and textual descriptions. This combination of approaches allows V&V of the HRI task at different levels of modelling detail and thoroughness of exploration, thus overcoming the individual limitations of each technique. Should the resulting V&V evidence present discrepancies, an iterative process takes place in which the assets (i.e., system and requirement models) are refined and improved to represent the HRI task more truthfully, until corroboration between the V&V techniques is achieved. Corroborative V&V therefore affords a systematic approach to 'meta-V&V', in which different V&V techniques are used to corroborate and check one another, increasing the level of certainty in the V&V results.
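
    To make the combination of requirement formats concrete, the sketch below states one hypothetical handover safety requirement both as a temporal-logic property (in a comment) and as a runtime assertion checker over a recorded trace. The signal names are illustrative, not those of the actual test rig.

```python
# One hypothetical handover requirement, stated two ways.
# LTL-style property: G(released -> (gaze_ok and pressure_ok)), i.e.
# "the robot never releases the object unless the human is looking
#  at it and gripping it firmly enough".

def assertion_monitor(trace):
    """Runtime assertion checker for the same requirement, applied to a
    recorded trace of sensor states (one dict per time step)."""
    for t, state in enumerate(trace):
        if state["released"] and not (state["gaze_ok"] and state["pressure_ok"]):
            return f"violation at step {t}"
    return "property holds on this trace"

trace = [
    {"gaze_ok": True, "pressure_ok": False, "released": False},
    {"gaze_ok": True, "pressure_ok": True, "released": True},
]
print(assertion_monitor(trace))  # property holds on this trace
```

    A model checker would verify the temporal-logic form against all behaviours of a formal model, while the assertion form runs against simulation logs and experiment recordings; agreement between the two is the corroboration the approach seeks.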

    Foundations of Trusted Autonomy

    Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies

    Mindedness: On the minimal conditions for possessing a mind

    This thesis explores the grounds for justifying the ascription of mentality to non-human agents. In the first part, I set my research within the framework of scientific naturalism and the computational theory of mind. I then argue that while the behaviour of certain agents demands a computational explanation, there is no justification for attributing mentality to them. I use these examples to back up my claim that some authors indulge in unnecessary ascription of mentality to certain animals (e.g. insects) mainly on the grounds that they possess computational capacities. The second part of my thesis takes up recent literature exploring the line that divides computational agents with and without mentality. More precisely, I criticise the proposals put forward by Fodor, Dretske, Burge, Bermúdez and Carruthers. My main argument takes the form of a reductio ad absurdum, showing that their criteria apply to artefacts to which the attribution of mentality is unjustified. Overall, I conclude that even though the views advanced by these authors help to elucidate the computational grounds that could make the emergence of a mind possible, they do not offer a satisfactory criterion for ascribing mentality to some computational agents but not others. In the final part I develop my own proposal for grounding the attribution of mentality. My strategy consists in drawing upon the distinction between personal and subpersonal levels of explanation, according to which properly psychological descriptions have whole agents as their subject matter, use a distinctive theoretical vocabulary, and are constrained by norms of rationality. After showing that the personal-subpersonal distinction is compatible with a naturalistic framework, I adapt the distinction so that it can be applied to non-human agents, and conclude that it imposes constraints on cognitive architecture that point in the direction of cognitive access, generality and integration.