
    Towards Practical Runtime Verification and Validation of Self-Adaptive Software Systems

    Software validation and verification (V&V) ensures that software products satisfy user requirements and meet their expected quality attributes throughout their lifecycle. While high levels of adaptation and autonomy provide new ways for software systems to operate in highly dynamic environments, developing certifiable V&V methods that guarantee the achievement of self-adaptive software goals remains one of the major challenges facing the entire research field. In this chapter we (i) analyze fundamental challenges and concerns in the development of V&V methods and techniques that provide certifiable trust in self-adaptive and self-managing systems; and (ii) present a proposal for including V&V operations explicitly in feedback loops to ensure the achievement of software self-adaptation goals. Both contributions provide valuable starting points for V&V researchers seeking to advance this field.
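    To make the feedback-loop proposal concrete, the following minimal Python sketch (all names are hypothetical, invented for illustration, not taken from the chapter) shows a MAPE-K-style adaptation step in which an explicit verification gate checks every adaptation plan against declared goals before it is executed, rejecting plans that would violate them.

```python
# Minimal sketch of making V&V operations explicit in the feedback loop:
# a MAPE-K-style step whose verification gate checks each adaptation plan
# against declared goals before execution. All names are hypothetical.

class StubSystem:
    """Stand-in for a managed system exposing sensors and reconfiguration."""
    def __init__(self):
        self.config = {"replicas": 1}

    def read_sensors(self) -> dict:
        return {"latency_ms": 250, "replicas": self.config["replicas"]}

    def apply(self, new_config: dict) -> None:
        self.config = new_config


def analyze(symptoms: dict) -> bool:
    """Analysis: is the adaptation goal (latency below 200 ms) violated?"""
    return symptoms["latency_ms"] > 200


def plan(symptoms: dict) -> dict:
    """Planning: propose a corrective reconfiguration."""
    return {"replicas": symptoms["replicas"] + 1}


def verify(new_config: dict, goals: dict) -> bool:
    """The explicit V&V gate: validate the plan against declared goals
    before execution; a real loop might invoke a model checker here."""
    return new_config["replicas"] <= goals["max_replicas"]


def mape_step(system: StubSystem, goals: dict) -> None:
    symptoms = system.read_sensors()        # Monitor
    if analyze(symptoms):                   # Analyze
        new_config = plan(symptoms)         # Plan
        if verify(new_config, goals):       # V&V made explicit in the loop
            system.apply(new_config)        # Execute
        # else: keep the last verified configuration


system = StubSystem()
mape_step(system, goals={"max_replicas": 4})
assert system.config == {"replicas": 2}     # the plan passed the V&V gate
```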

    Engineering Trustworthy Self-Adaptive Software with Dynamic Assurance Cases

    Building on concepts drawn from control theory, self-adaptive software handles environmental and internal uncertainties by dynamically adjusting its architecture and parameters in response to events such as workload changes and component failures. Self-adaptive software is increasingly expected to meet strict functional and non-functional requirements in applications from areas as diverse as manufacturing, healthcare and finance. To address this need, we introduce a methodology for the systematic ENgineering of TRUstworthy Self-adaptive sofTware (ENTRUST). ENTRUST uses a combination of (1) design-time and runtime modelling and verification, and (2) industry-adopted assurance processes to develop trustworthy self-adaptive software and assurance cases arguing the suitability of the software for its intended application. To evaluate the effectiveness of our methodology, we present a tool-supported instance of ENTRUST and its use to develop proof-of-concept self-adaptive software for embedded and service-based systems from the oceanic monitoring and e-finance domains, respectively. The experimental results show that ENTRUST can be used to engineer self-adaptive software systems in different application domains and to generate dynamic assurance cases for these systems.
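    As a purely illustrative example (not the ENTRUST tooling itself; the property, probabilities, and threshold are assumptions), the sketch below shows the flavor of runtime quantitative verification that such a methodology combines with design-time analysis: re-checking a reliability requirement that backs an assurance-case claim whenever updated component failure probabilities are observed.

```python
# Illustrative sketch (not ENTRUST itself) of runtime quantitative
# verification: re-check a reliability requirement backing an
# assurance-case claim whenever monitored failure probabilities change.

def system_reliability(failure_probs: list[float]) -> float:
    """Reliability of a serial composition: every component must succeed."""
    r = 1.0
    for p in failure_probs:
        r *= (1.0 - p)
    return r


def recheck_requirement(observed_failure_probs: list[float],
                        required_reliability: float = 0.999) -> bool:
    """Runtime check behind an assurance-case claim; if it fails, the
    self-adaptive layer must reconfigure or degrade gracefully."""
    return system_reliability(observed_failure_probs) >= required_reliability


# Example: updated monitoring data arrives for three services.
assert recheck_requirement([1e-4, 2e-4, 5e-5])       # the claim still holds
assert not recheck_requirement([1e-2, 2e-4, 5e-5])   # triggers adaptation
```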

    What Characterizes Safety of Ambient Assisted Living Technologies?

    Ambient assisted living (AAL) technologies aim to increase an individual's safety at home by recognizing, at an early stage, risks or events that might otherwise harm the individual. A clear definition of safety in the context of AAL is still missing, and the facets of safety have yet to be shaped. The objective of this paper is to characterize the facets of AAL-related safety, to identify opportunities and challenges of AAL regarding safety, and to identify open research issues in this context. Papers reporting aspects of AAL-related safety were selected in a literature search. Out of 395 citations retrieved, 28 studies were included in the current review. Two main facets of safety were identified: user safety and system safety. System safety concerns an AAL system's reliability, correctness and data quality. User safety reflects the impact on an individual's physical and mental health. Reported challenges include privacy, data safety and security issues, sensor quality, the integration of sensor data, and technical failures of sensors and systems. To conclude, there is a research gap regarding methods and metrics for measuring user and system safety in the context of AAL technologies.

    A Paradigm for Safe Adaptation of Collaborating Robots

    The dynamic forces that cross the traditional boundaries of system development have led to the emergence of digital ecosystems. Within these, business gains are achieved through the development of intelligent control, which requires a continuous design-time and runtime co-engineering process that is endangered by malicious attacks. The possibility of inserting specially crafted faults capable of exploiting the nature of unknown, evolving intelligent behavior makes malicious behavior detection at runtime a necessity. Adjusting to the needs and opportunities of fast AI development within digital ecosystems, in this paper we envision a novel method and framework for the runtime predictive evaluation of intelligent robots' behavior, for assuring safe cooperative adjustment.
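    As a hedged illustration of runtime predictive evaluation (the motion model, names, and threshold below are invented for this sketch, not taken from the paper), one can compare each robot's observed motion against a nominal behavior model and flag large deviations as potentially malicious, triggering a safe adjustment.

```python
# Minimal sketch (hypothetical model and threshold) of runtime predictive
# evaluation: compare a robot's observed behavior against a prediction
# and flag large deviations as potentially malicious.

import math


def predict_next_position(pos: tuple, velocity: tuple, dt: float) -> tuple:
    """Nominal behavior model: simple constant-velocity prediction."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)


def evaluate_step(pos, velocity, observed, dt=0.1, threshold=0.5) -> bool:
    """Return True if the observed move is consistent with the model;
    False triggers a safe adjustment (e.g., slow down or isolate the robot)."""
    predicted = predict_next_position(pos, velocity, dt)
    return math.dist(predicted, observed) <= threshold


# Example: a robot at (0, 0) moving at (1, 0) m/s should be near (0.1, 0).
assert evaluate_step((0.0, 0.0), (1.0, 0.0), observed=(0.11, 0.02))
assert not evaluate_step((0.0, 0.0), (1.0, 0.0), observed=(0.9, 0.9))
```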

    Model-Based Engineering of Collaborative Embedded Systems

    This Open Access book presents the results of the "Collaborative Embedded Systems" (CrESt) project, which aimed to adapt and complement the modeling techniques of the SPES development methodology to cope with the challenges posed by the dynamic structures of collaborative embedded systems (CESs). Managing the high complexity of the individual systems and of the interaction structures that form dynamically at runtime requires advanced and powerful development methods that extend the current state of the art in the development of embedded systems and cyber-physical systems. The methodological contributions of the project support the effective and efficient development of CESs in dynamic and uncertain contexts, with special emphasis on the reliability and variability of individual systems and on the creation of networks of such systems at runtime. The project was funded by the German Federal Ministry of Education and Research (BMBF), and the case studies were therefore selected from areas that are highly relevant for Germany's economy (automotive, industrial production, power generation, and robotics). The project also supports the digitalization of complex and transformable industrial plants in the context of the German government's "Industry 4.0" initiative, and its results provide a solid foundation for implementing the German government's high-tech strategy "Innovations for Germany" in the coming years.

    Considerations in Assuring Safety of Increasingly Autonomous Systems

    Recent technological advances have accelerated the development and application of increasingly autonomous (IA) systems in civil and military aviation. IA systems can automate complex mission tasks, ranging across reduced crew operations, air-traffic management, and unmanned autonomous aircraft, with most applications calling for collaboration and teaming among humans and IA agents. IA systems are expected to provide benefits in terms of safety, reliability, efficiency, affordability, and previously unattainable mission capability, and they may also improve safety by removing human error. There are, however, several challenges in the safety assurance of these systems due to their highly adaptive and non-deterministic behavior, and due to vulnerabilities arising from the potential divergence of airplane state awareness between the IA system and humans. These systems must deal with external sensors and actuators, and they must respond in time commensurate with the activities of the system in its environment. One of the main challenges is that safety assurance, which currently relies upon transferring authority from an autonomous function to a human to mitigate safety concerns, will need to address mitigation by automation in a collaborative, dynamic context. These challenges have a fundamental, multidimensional impact on the safety assurance methods, system architecture, and V&V capabilities to be employed. The goal of this report is to identify the relevant issues to be addressed in these areas, the potential gaps in current safety assurance techniques, and the critical questions that must be answered to assure the safety of IA systems. We focus on a scenario of reduced crew operation in which an IA system reduces, changes, or eliminates a human's role in the transition from two-pilot operations.
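    As a hedged sketch of the authority-transfer challenge the report raises (the states and timings below are invented, not from the report), the mitigation decision can be framed as follows: hand control to the pilot only if the pilot is responsive and can take over before the hazard materializes; otherwise the automation itself must mitigate.

```python
# Hedged sketch (invented states and timings) of the authority-transfer
# problem: when a hazard is detected, hand control to the pilot only if
# they can take over in time; otherwise the IA system mitigates itself.

from enum import Enum, auto


class Authority(Enum):
    PILOT = auto()
    IA_SYSTEM = auto()


def assign_mitigation_authority(time_to_hazard_s: float,
                                pilot_takeover_s: float,
                                pilot_responsive: bool) -> Authority:
    """Choose who mitigates: handover to the pilot is safe only if the
    pilot is responsive and can assume control before the hazard."""
    if pilot_responsive and pilot_takeover_s < time_to_hazard_s:
        return Authority.PILOT
    return Authority.IA_SYSTEM  # automation mitigates in the dynamic context


# Example: 10 s to hazard but 15 s for the pilot to re-engage -> IA mitigates.
assert assign_mitigation_authority(10.0, 15.0, True) is Authority.IA_SYSTEM
assert assign_mitigation_authority(30.0, 15.0, True) is Authority.PILOT
```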