A runtime safety analysis concept for open adaptive systems
© Springer Nature Switzerland AG 2019. In the automotive industry, modern cyber-physical systems feature cooperation and autonomy. Such systems share information to enable collaborative functions, allowing dynamic component integration and architecture reconfiguration. Given the safety-critical nature of the applications involved, an approach is needed for addressing safety when reconfiguration impacts functional and non-functional properties at runtime. In this paper, we introduce a concept for runtime safety analysis and decision input for open adaptive systems. We combine static safety analysis with evidence collected during operation to analyse, reason and provide online recommendations that minimize deviation from a system's safe states. We illustrate our concept via an abstract vehicle platooning system use case.
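The combination described above can be sketched in a few lines: rules derived from design-time safety analysis are checked against runtime evidence to produce recommendations. This is a minimal illustrative sketch, not the paper's method; all names (`SafetyRule`, `recommend`) and the threshold values are our own assumptions, loosely themed on the platooning use case.

```python
# Hypothetical sketch: combining statically derived safety rules with
# runtime evidence to recommend actions for a vehicle platoon.
# Rule names, fields and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SafetyRule:
    """A condition derived from design-time safety analysis."""
    name: str
    violated: Callable[[Dict[str, float]], bool]  # checks runtime evidence
    recommendation: str

# Design-time analysis might yield rules such as a minimum headway
# or a communication-timeout fallback.
RULES: List[SafetyRule] = [
    SafetyRule("min_headway",
               lambda ev: ev["gap_m"] < 0.5 * ev["speed_mps"],
               "increase inter-vehicle gap"),
    SafetyRule("comm_timeout",
               lambda ev: ev["ms_since_heartbeat"] > 200,
               "degrade to standalone ACC mode"),
]

def recommend(evidence: Dict[str, float]) -> List[str]:
    """Return online recommendations that steer the system back
    toward its safe states."""
    return [r.recommendation for r in RULES if r.violated(evidence)]

# At 25 m/s the required gap is 12.5 m, so an 8 m gap triggers a rule.
print(recommend({"gap_m": 8.0, "speed_mps": 25.0, "ms_since_heartbeat": 50}))
```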
AERoS: Assurance of Emergent Behaviour in Autonomous Robotic Swarms
The behaviours of a swarm are not explicitly engineered. Instead, they are an
emergent consequence of the interactions of individual agents with each other
and their environment. This emergent functionality poses a challenge to safety
assurance. The main contribution of this paper is a process for the safety
assurance of emergent behaviour in autonomous robotic swarms called AERoS,
following the guidance on the Assurance of Machine Learning for use in
Autonomous Systems (AMLAS). We explore our proposed process using a case study
centred on a robot swarm operating a public cloakroom.
Towards a resilience assurance model for robotic autonomous systems
Applications of autonomous systems are becoming increasingly common across engineered systems, from cars and drones to manufacturing systems and medical devices, addressing prevailing societal changes and, increasingly, consumer demand. Autonomous systems are expected to self-manage and self-certify against risks affecting the mission, safety and asset integrity. While significant progress has been achieved in modelling the safety and safety assurance of autonomous systems, no similar approach is available for resilience that integrates coherently across the cyber and physical parts. This paper presents a comprehensive discussion of resilience in the context of robotic autonomous systems, covering both resilience by design and resilience by reaction, and proposes a conceptual model of a system of learning for resilience assurance in a continuous product development framework. The resilience assurance model is proposed as a composable digital artefact, underpinned by a rigorous model-based resilience analysis at the system design stage, and dynamically monitored and continuously updated at run time in the system operation stage, with machine-learning-based knowledge extraction and validation.
Distributed Graph Queries for Runtime Monitoring of Cyber-Physical Systems
In safety-critical cyber-physical systems (CPS), a service failure may result in severe financial loss or harm to human life. Smart CPSs interact with their environment in complex ways that are rarely known in advance, depend heavily on intelligent data processing carried out over a heterogeneous computation platform, and provide autonomous behavior. This complexity makes design-time verification infeasible in practice, and many CPSs need advanced runtime monitoring techniques to ensure safe operation. Graph queries are a powerful technique used in many industrial CPS design tools; in this paper, we propose to use them to specify safety properties for runtime monitors at a high level of abstraction. Distributed runtime monitoring is carried out by evaluating graph queries over a distributed runtime model of the system which incorporates domain concepts and platform information. We provide a semantic treatment of distributed graph queries using 3-valued logic. Our approach is illustrated, and an initial evaluation is carried out, using an educational CPS demonstrator.
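The 3-valued semantics mentioned above can be illustrated with a toy example. This is our own minimal sketch, not the paper's implementation: a safety property is evaluated as a query over a small runtime model in which some attributes are UNKNOWN because remote nodes have not yet reported; the domain (rail segments) and all names are assumptions.

```python
# Illustrative sketch: evaluating a safety property as a query over a
# runtime model using Kleene 3-valued logic. UNKNOWN captures attribute
# values not yet received from remote platform nodes.
TRUE, UNKNOWN, FALSE = 1, 0, -1  # encode truth values so min/max work

def and3(a, b):
    """Kleene conjunction: the minimum of the two truth values."""
    return min(a, b)

# Runtime model: occupancy of track segments; seg2 is hosted remotely
# and its state has not arrived yet.
model = {
    "seg1": {"occupied": TRUE},
    "seg2": {"occupied": UNKNOWN},
    "seg3": {"occupied": FALSE},
}
adjacent = [("seg1", "seg2"), ("seg2", "seg3")]

def close_trains(m):
    """Query: are two adjacent segments both occupied? (a hazard).
    Returns the strongest answer derivable from partial knowledge."""
    result = FALSE
    for a, b in adjacent:
        result = max(result, and3(m[a]["occupied"], m[b]["occupied"]))
    return result

print(close_trains(model))  # UNKNOWN: the hazard cannot yet be ruled out
```

The point of the 3-valued treatment is exactly this last line: with partial distributed knowledge, the monitor can distinguish "provably safe" from "not yet decidable" instead of defaulting to either.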
Online testing of dynamic reconfigurations with respect to adaptation policies
Self-adaptation of complex systems is a very active domain of research with numerous application domains. Component systems are designed as sets of components that may reconfigure themselves according to adaptation policies, which describe needs for reconfiguration. In this context, an adaptation policy is designed as a set of rules that indicate, for a given set of configurations, which reconfiguration operations can be triggered, with fuzzy values representing their utility. The adaptation policy has to be faithfully implemented by the system, especially w.r.t. the utility values occurring in the rules, which are generally specified to optimize some extra-functional properties (e.g. minimizing resource consumption). In order to validate adaptive systems' behaviour, this paper presents a model-based testing approach, which aims to generate large test suites in order to measure the occurrences of reconfigurations and compare them to the utility values specified in the adaptation rules. This process is based on a usage model of the system, used to stimulate the system and provoke reconfigurations. As the system may reconfigure dynamically, this online test generator observes the system's responses and evolution in order to decide the next appropriate test step to perform. As a result, the relative frequencies of the reconfigurations can be measured to determine whether the adaptation policy is faithfully implemented. To illustrate the approach, the paper reports on experiments on a case study of platoons of autonomous vehicles.
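The measurement step of this approach can be sketched concretely: after a test campaign, the observed relative frequency of each reconfiguration is compared against the utility value its rule declares. The following is a hypothetical sketch of that comparison only, with invented rule names, utilities, observations and tolerance.

```python
# Hypothetical sketch: comparing observed reconfiguration frequencies
# against the fuzzy utility values declared in an adaptation policy.
# Rule names, utilities, counts and the tolerance are invented here.
from collections import Counter

# Utility values (0..1) attached to reconfiguration rules in the policy.
policy_utility = {"add_vehicle": 0.7, "split_platoon": 0.2, "merge": 0.1}

# Reconfigurations observed while the usage model stimulated the system.
observed = ["add_vehicle"] * 60 + ["split_platoon"] * 38 + ["merge"] * 2

def relative_frequencies(events):
    """Map each reconfiguration operation to its observed share."""
    counts = Counter(events)
    total = sum(counts.values())
    return {op: n / total for op, n in counts.items()}

freqs = relative_frequencies(observed)

# Flag rules whose observed frequency deviates from the specified
# utility by more than a tolerance: a hint that the policy is not
# faithfully implemented.
TOL = 0.15
suspicious = {op for op, u in policy_utility.items()
              if abs(freqs.get(op, 0.0) - u) > TOL}
print(suspicious)  # {'split_platoon'}: observed 0.38 vs utility 0.20
```

In the paper's setting this comparison is fed by an online test generator rather than a fixed event list, but the faithfulness check at the end is the same idea.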
REACT-ION: A model-based runtime environment for situation-aware adaptations
Trends such as the Internet of Things lead to a growing number of networked devices and a variety of communication systems. Adding self-adaptive capabilities to these communication systems is one approach to reducing administrative effort and coping with changing execution contexts. Existing frameworks can help reduce development effort but are neither tailored to communication systems nor easily usable without expertise in self-adaptive systems development. Accordingly, in previous work, we proposed REACT, a reusable, model-based runtime environment to complement communication systems with adaptive behavior. REACT addresses heterogeneity and distribution aspects of such systems and reduces development effort. In this article, we propose REACT-ION, an extension of REACT for situation awareness. REACT-ION offers a context management module that is able to acquire, store, disseminate, and reason on context data. The context management module is the basis for (i) proactive adaptation with REACT-ION and (ii) self-improvement of the underlying feedback loop. REACT-ION can be used to optimize adaptation decisions at runtime based on the current situation; it can therefore cope with uncertainty and with situations that were not foreseeable at design time. We show and evaluate in two case studies how REACT-ION's situation awareness enables proactive adaptation and self-improvement.
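The interplay of context management and proactive adaptation can be illustrated with a toy feedback loop. This is a minimal sketch under our own assumptions, not REACT-ION's API: a context store accumulates samples, a trivial reasoner extrapolates the next situation, and the adaptation decision fires before the threshold is actually crossed.

```python
# Illustrative sketch (names and logic are ours, not REACT-ION's API):
# a minimal situation-aware feedback loop that adapts proactively based
# on a predicted, rather than only the current, situation.
class ContextStore:
    """Acquires and stores context samples, with trivial reasoning."""
    def __init__(self):
        self.samples = []

    def acquire(self, load):
        self.samples.append(load)

    def predicted_load(self):
        # Naive trend extrapolation over the last two samples; a real
        # reasoner would use richer context and learned models.
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0.0
        return self.samples[-1] + (self.samples[-1] - self.samples[-2])

def adapt(store, threshold=0.8):
    """Proactive decision: reconfigure before the threshold is crossed."""
    if store.predicted_load() > threshold:
        return "switch_to_lightweight_protocol"
    return "keep"

store = ContextStore()
for load in (0.5, 0.6, 0.75):   # rising network load, still below 0.8
    store.acquire(load)
print(adapt(store))  # predicted 0.9 > 0.8 -> "switch_to_lightweight_protocol"
```

A purely reactive loop would still answer "keep" here, since the current load (0.75) is below the threshold; acting on the predicted situation is what the abstract calls proactive adaptation.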