1,049 research outputs found

    A methodology for provably stable behaviour-based intelligent control

    This paper presents a design methodology for a class of behaviour-based control systems, arguing its potential for application to safety-critical systems. We propose a formal basis for subsumption-architecture design built on two extensions to Lyapunov stability theory, the Second Order Stability Theorems, and on interpretations of system safety and liveness in Lyapunov stability terms. The subsumption of the new theorems by the classical stability theorems serves as a model of dynamical subsumption, forming the basis of the design methodology. Behaviour-based control also offers the potential for using simple computational mechanisms, which simplifies the safety assurance process. © 2005 Elsevier B.V. All rights reserved.
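As background for the Lyapunov-based framing above, the classical stability condition the paper builds on can be stated as follows (a standard textbook sketch, not the paper's extended Second Order theorems):

```latex
% For a system \dot{x} = f(x) with equilibrium f(0) = 0, the origin is
% stable in the sense of Lyapunov if there exists a function V such that
V(0) = 0, \qquad V(x) > 0 \;\; \forall x \neq 0, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) \le 0 .
% If additionally \dot{V}(x) < 0 for all x \neq 0, the equilibrium is
% asymptotically stable. Safety and liveness are interpreted in these terms.
```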

    Engineering Resilient Collective Adaptive Systems by Self-Stabilisation

    Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication model (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness properties to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour so as to make them provably correct and smoothly composable. Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged. (Comment: to appear in ACM Transactions on Modeling and Computer Simulation.)
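To make the self-stabilisation idea concrete, here is a minimal sketch of one classic building block for information spreading, a hop-count gradient, under a synchronous round model. The names and structure are illustrative assumptions, not the field-calculus toolchain: each node repeatedly recomputes its value from its neighbours, so any transient corruption of state is eventually washed out.

```python
# Self-stabilising "gradient" sketch: the source holds 0, every other
# node takes the minimum neighbour value plus 1. From ANY initial state,
# repeated rounds converge to correct hop-count distances.

def gradient_round(values, neighbours, source):
    return {
        n: 0 if n == source else min(values[m] for m in neighbours[n]) + 1
        for n in neighbours
    }

def stabilise(values, neighbours, source, max_rounds=100):
    for _ in range(max_rounds):
        new = gradient_round(values, neighbours, source)
        if new == values:          # fixed point: network has stabilised
            return new
        values = new
    return values

# Line graph 0-1-2-3, starting from an arbitrary corrupted state.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
corrupted = {0: 7, 1: 7, 2: 7, 3: 7}
print(stabilise(corrupted, neighbours, source=0))  # → {0: 0, 1: 1, 2: 2, 3: 3}
```

The same relaxation recovers after any perturbation of state or topology, which is exactly the eventual-correctness guarantee the abstract describes for its self-stabilising fragment.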

    Towards adaptive multi-robot systems: self-organization and self-adaptation

    This publication is freely accessible with the permission of the rights owner, due to an Alliance licence or a national licence (funded by the DFG, German Research Foundation). The development of complex system ensembles that operate in uncertain environments is a major challenge. The reason for this is that system designers are not able to fully specify the system during specification and development, before it is deployed. Natural swarm systems enjoy similar characteristics, yet, being self-adaptive and able to self-organize, these systems show beneficial emergent behaviour. Similar concepts can be extremely helpful for artificial systems, especially when it comes to multi-robot scenarios, which require such solutions in order to be applicable to highly uncertain real-world applications. In this article, we present a comprehensive overview of state-of-the-art solutions in emergent systems, self-organization, self-adaptation, and robotics. We discuss these approaches in the light of a framework for multi-robot systems and identify similarities, differences, missing links, and open gaps that have to be addressed in order to make this framework possible.

    Rethink the Adversarial Scenario-based Safety Testing of Robots: the Comparability and Optimal Aggressiveness

    This paper studies the class of scenario-based safety testing algorithms in the black-box safety testing configuration. For algorithms sharing the same state-action set coverage but with different sampling distributions, it is commonly believed that prioritizing the exploration of high-risk state-actions leads to better sampling efficiency. We dispute this intuition by introducing an impossibility theorem that provably shows all safety testing algorithms differing only in this way perform equally well, with the same expected sampling efficiency. Moreover, for testing algorithms covering different sets of state-actions, the sampling efficiency criterion is no longer applicable, as different algorithms do not necessarily converge to the same termination condition. We then propose a testing aggressiveness definition based on the almost safe set concept, along with an unbiased and efficient algorithm that compares the aggressiveness between testing algorithms. Empirical observations from the safety testing of bipedal locomotion controllers and vehicle decision-making modules are also presented to support the proposed theoretical implications and methodologies.

    How Society Can Maintain Human-Centric Artificial Intelligence

    Although not a universally held goal, maintaining human-centric artificial intelligence is necessary for society's long-term stability. Fortunately, the legal and technological problems of maintaining control are actually fairly well understood and amenable to engineering. The real problem is establishing the social and political will for assigning and maintaining accountability for artefacts when these artefacts are generated or used. In this chapter we review the necessity and tractability of maintaining human control, and the mechanisms by which such control can be achieved. What makes the problem both most interesting and most threatening is that achieving consensus around any human-centred approach requires at least some measure of agreement on broad existential concerns.

    Specification Patterns for Robotic Missions

    Mobile and general-purpose robots increasingly support our everyday life, requiring dependable robotics control software. Creating such software mainly amounts to implementing their complex behaviors, known as missions. Recognizing the need, a large number of domain-specific specification languages have been proposed. These, in addition to traditional logical languages, allow the use of formally specified missions for synthesis, verification, simulation, or guiding the implementation. For instance, the logical language LTL is commonly used by experts to specify missions as an input for planners, which synthesize the behavior a robot should have. Unfortunately, domain-specific languages are usually tied to specific robot models, while logical languages such as LTL are difficult for non-experts to use. We present a catalog of 22 mission specification patterns for mobile robots, together with tooling for instantiating, composing, and compiling the patterns to create mission specifications. The patterns provide solutions for recurrent specification problems, each detailing the usage intent, known uses, relationships to other patterns, and---most importantly---a template mission specification in temporal logic. Our tooling produces specifications expressed in the LTL and CTL temporal logics to be used by planners, simulators, or model checkers. The patterns originate from 245 realistic textual mission requirements extracted from the robotics literature, and they are evaluated on a total of 441 real-world mission requirements and 1251 mission specifications. Five of these reflect scenarios we defined with two well-known industrial partners developing human-size robots. We validated our patterns' correctness with simulators and two real robots.
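To illustrate what such a pattern template looks like, a recurring "patrolling with avoidance" mission can be written in LTL roughly as follows (illustrative formulas in standard LTL notation, not necessarily the catalog's exact templates):

```latex
% Patrolling: visit each location l_1, l_2, l_3 infinitely often.
\mathbf{G}\,\mathbf{F}\, l_1 \;\wedge\; \mathbf{G}\,\mathbf{F}\, l_2 \;\wedge\; \mathbf{G}\,\mathbf{F}\, l_3
% Global avoidance: never enter the danger zone d.
\mathbf{G}\, \neg d
% A planner receives the conjunction of both formulas and synthesizes
% a controller whose every execution satisfies them.
```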

    Design of Autonomous Landing for a Quadcopter Using Behavior-Based Intelligent Fuzzy Control

    The quadcopter is an unmanned aerial vehicle (UAV) platform that is currently widely researched because of its ability to take off and land vertically. Because it uses four brushless motors as its main actuators, a quadcopter is fairly complex both to model and to control. Landing is one of the quadcopter mechanisms that requires an accurate and safe velocity while maintaining balance. In this study, the authors use Behavior-Based Intelligent Fuzzy Control (BBIFC) as the control basis for autonomous landing on a quadcopter. BBIFC is a high-level control scheme in which the control design consists of several layers. Two layers are used in this study: one for controlling the pitch, roll, and yaw angles, and one for controlling altitude. Each layer has a different control mechanism, designed using an Intelligent Fuzzy Controller and a PID controller. This method yields an algorithm for a safe autonomous-landing mechanism that follows an exponential reference signal, with the quadcopter reaching 0 (zero) metres within 15 seconds; the PID controller stabilises the quadcopter within 7.97 seconds for roll and pitch and within 1.25 seconds for yaw after an angular disturbance is applied.
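The exponential landing reference described above can be sketched in a few lines. This is a minimal illustration under assumed values: the time constant, initial altitude, and PID gains are hypothetical choices, not the paper's tuned controller.

```python
import math

def landing_reference(h0, tau, t):
    """Exponential descent reference: h(t) = h0 * exp(-t / tau)."""
    return h0 * math.exp(-t / tau)

class PID:
    """Textbook discrete PID; gains here are illustrative, not the paper's."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# With tau = 3 s, the reference falls below 1% of h0 by t ≈ 14 s,
# consistent with "reaching 0 m within 15 s".
h0, tau = 10.0, 3.0
print(round(landing_reference(h0, tau, 15.0), 3))  # → 0.067
```

In the paper's scheme the altitude layer would track this reference while the attitude layer (fuzzy plus PID) keeps roll, pitch, and yaw balanced throughout the descent.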

    On Controllability of Artificial Intelligence

    The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. Consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference on the topic of uncontrollability.