84,088 research outputs found

    Practical Experiences in using Model-Driven Engineering to Develop Trustworthy Computing Systems

    In this paper, we describe how Motorola has deployed model-driven engineering in product development, in particular for the development of trustworthy and highly reliable telecommunications systems, and outline the benefits obtained. Model-driven engineering has dramatically increased the quality and reliability of the software developed in our organization, as well as the productivity of our software engineers. Our experience demonstrates that model-driven engineering significantly improves the development process for trustworthy computing systems.

    Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI

    Trustworthiness is a central requirement for the acceptance and success of human-centered artificial intelligence (AI). To deem an AI system trustworthy, it is crucial to assess its behaviour and characteristics against a gold standard of Trustworthy AI, consisting of guidelines, requirements, or mere expectations. While AI systems are highly complex, their implementations are still based on software. The software engineering community has a long-established toolbox for the assessment of software systems, especially in the context of software testing. In this paper, we argue for the application of software engineering and testing practices to the assessment of trustworthy AI. We make the connection between the seven key requirements defined by the European Commission’s High-Level Expert Group on AI and established procedures from software engineering, and raise questions for future work.
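
    As a purely illustrative sketch of how an established testing practice could probe one of those requirements (technical robustness), the following Python fragment applies a simple metamorphic-style check: small input perturbations should not flip a classifier's prediction. The model, samples, noise level, and trial count are assumptions introduced here for illustration, not artefacts of the paper.

        import random

        def is_robust(model, sample, noise=0.01, trials=20):
            """Return True if small perturbations never change the predicted label.

            `model` is any hypothetical classifier exposing predict(features) -> label;
            `sample` is a flat list of numeric features.
            """
            baseline = model.predict(sample)
            for _ in range(trials):
                perturbed = [x + random.uniform(-noise, noise) for x in sample]
                if model.predict(perturbed) != baseline:
                    return False
            return True

        def robustness_rate(model, samples):
            """Fraction of samples whose prediction is stable under perturbation."""
            return sum(is_robust(model, s) for s in samples) / len(samples)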

    RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems

    Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU. However, practitioners lack actionable instructions to operationalise ethics during AI systems development. A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them. Furthermore, requirements engineering (RE), which is identified as fostering trustworthiness in the AI development process from the early stages, was observed to be absent from many frameworks that support the development of ethical and trustworthy AI. This incongruous phrasing, combined with a lack of concrete development practices, makes trustworthy AI development harder. To address this concern, we formulated a comparison table for the terminology used and the coverage of the ethical AI principles in major ethical AI guidelines. We then examined the applicability of ethical AI development frameworks for performing effective RE during the development of trustworthy AI systems. A tertiary review and meta-analysis of literature discussing ethical AI frameworks revealed their limitations when developing trustworthy AI. Based on our findings, we propose recommendations to address such limitations during the development of trustworthy AI. (Accepted at TAS '23, the First International Symposium on Trustworthy Autonomous Systems.)

    Engineering Trustworthy Self-Adaptive Autonomous Systems

    Autonomous Systems (AS) are becoming ubiquitous in our society. Some examples are autonomous vehicles, unmanned aerial vehicles (UAV), autonomous trading systems, self-managing telecom networks, and smart factories. Autonomous Systems are based on a continuous interaction with the environment in which they are deployed, and more often than not this environment can be dynamic and partially unknown. AS must be able to take decisions autonomously at run-time, also in the presence of uncertainty. Software is the main enabler of AS, and it allows the AS to self-adapt in response to changes in the environment and to evolve via the deployment of new features. Traditionally, software development techniques are based on a complete description at design time of how the system must behave in different environmental conditions. This is no longer effective, since the system has to be able to explore and learn from the environment in which it is operating also after its deployment. Reinforcement learning (RL) algorithms discover policies that can lead AS to achieve their goals in a dynamic and unknown environment. The developer no longer specifies how the system should act in each possible situation; rather, the RL algorithm can achieve an optimal behaviour by trial and error. Once trained, the AS will be capable of taking decisions and performing actions autonomously while still learning from the environment. These systems are becoming increasingly powerful, yet this flexibility comes at a cost: the learned policy does not necessarily guarantee safety or the achievement of the goals. This thesis explores the problem of building trustworthy autonomous systems from different angles. Firstly, we have identified the state of the art and challenges of building autonomous systems, with a particular focus on autonomous vehicles. Then, we have analysed how current approaches to formal verification can provide assurances in a System of Systems scenario. Finally, we have proposed methods that combine formal verification with reinforcement learning agents to address two major challenges: how to trust that an autonomous system will be able to achieve its goals, and how to ensure that the behaviour of AS is safe.
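
    One concrete way to picture the combination of formal verification with reinforcement learning mentioned above is a run-time "shield" that filters the learned policy's actions through a verified safety check. The sketch below is a simplified assumption of that idea, not the thesis's actual method; the state, actions, and safety predicate are invented for illustration.

        def shielded_action(q_values, state, is_safe):
            """Pick the highest-value action whose execution in `state` is verified safe.

            q_values: dict mapping action -> learned value estimate for `state`
            is_safe:  callable (state, action) -> bool, standing in for a formally
                      verified safety check (e.g. derived from a model-checked automaton)
            """
            safe_actions = [a for a in q_values if is_safe(state, a)]
            if not safe_actions:
                raise RuntimeError("no verified-safe action available in this state")
            return max(safe_actions, key=q_values.get)

        # Toy usage: the high-value but unsafe action is filtered out by the shield.
        q = {"accelerate": 1.0, "brake": 0.2, "swerve": 0.6}
        chosen = shielded_action(
            q, state="near_obstacle",
            is_safe=lambda s, a: not (s == "near_obstacle" and a == "accelerate"))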

    A Framework for Understanding, Prioritizing, and Applying Systems Security Engineering Processes, Activities, and Tasks

    Current systems security practices lack an effective approach to prioritize and tailor systems security efforts to develop and field secure systems in challenging operational environments, which results in business and mission stakeholders becoming more susceptible to an array of disruptive events. This work informs systems engineers about recent developments in the field of systems security engineering and provides a framework for more fully understanding the application of Systems Security Engineering (SSE) processes, activities, and tasks as described in the recently released National Institute of Standards and Technology (NIST) Special Publication 800-160. This SSE framework uniquely offers a repeatable and tailorable methodology that allows system developers to focus on high Return-on-Investment (RoI) SSE processes, activities, and tasks to more efficiently meet stakeholder protection needs and deliver trustworthy secure systems.
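
    To make the idea of focusing on high Return-on-Investment activities concrete, the short sketch below ranks candidate SSE tasks by a simple benefit-to-cost score. The task names, scores, and scoring rule are illustrative assumptions only and are not drawn from the framework or from NIST SP 800-160.

        # Hypothetical RoI-style prioritization of SSE tasks (illustrative values).
        tasks = [
            {"task": "Stakeholder protection needs analysis", "benefit": 9, "cost": 3},
            {"task": "Threat modelling",                      "benefit": 8, "cost": 4},
            {"task": "Formal design verification",            "benefit": 7, "cost": 8},
        ]

        def roi(task):
            return task["benefit"] / task["cost"]

        for t in sorted(tasks, key=roi, reverse=True):
            print(f"{t['task']}: RoI score {roi(t):.2f}")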

    RICIS Software Engineering 90 Symposium: Aerospace Applications and Research Directions Proceedings Appendices

    Papers presented at the RICIS Software Engineering Symposium are compiled. The following subject areas are covered: flight-critical software; management of real-time Ada; software reuse; megaprogramming software; Ada net; POSIX and Ada integration in the Space Station Freedom Program; and assessment of formal methods for trustworthy computer systems.

    A formal component-based software engineering approach for developing trustworthy systems

    Software systems are increasingly becoming ubiquitous, affecting the way we experience the world. Embedded software systems, especially those used in smart devices, have become an essential constituent of the technological infrastructure of modern societies. Such systems, in order to be trusted in society, must be proved to be trustworthy. Trustworthiness is a composite non-functional property that implies safety, timeliness, security, availability, and reliability. This thesis is a contribution to a rigorous development of systems in which the trustworthiness property can be specified and formally verified. Developing trustworthy software systems that are complex and used by a large heterogeneous population of users is a challenging task. The component-based software engineering (CBSE) paradigm can provide an effective solution to address these challenges. However, none of the current component-based approaches can be used as is, because all of them lack the essential requirements for constructing trustworthy systems. The three contributions made in this thesis are intended to add the expressive power needed to raise CBSE practices to a rigorous level for constructing formally verifiable trustworthy systems. The first contribution of the thesis is a formal definition of the trustworthy component model. The trustworthiness quality attributes are introduced as first-class structural elements. The behavior of a component is automatically generated as an extended timed automaton. A model checking technique is used to verify the properties of trustworthiness. A composition theory that preserves the properties of trustworthiness in a composition is presented. Conventional software engineering development processes are suitable neither for developing component-based systems nor for developing trustworthy systems. In order to develop a component-based trustworthy system, the development process must be reuse-oriented, component-oriented, and must integrate formal languages and rigorous methods in all phases of the system life-cycle. The second contribution of the thesis is a software engineering process model that consists of several parallel tracks of activities, including component development, component assessment, component reuse, and component-based system development. The central concern in all activities of this process is ensuring trustworthiness. The third and final contribution of the thesis is a development framework with a comprehensive set of tools supporting the spectrum of formal development activity from modeling to deployment. The proposed approach has been applied to several case studies in the domains of component-based development and safety-critical systems. The experience from the case studies confirms that the approach is suitable for developing large and complex trustworthy systems.
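
    As a loose sketch of the flavour of such a component model, the fragment below treats trustworthiness attributes as first-class data on a component and accepts a composition only if the combined attributes still meet the stated requirements. The attribute set, the pessimistic-minimum rule, and the thresholds are simplifying assumptions for illustration and do not reproduce the thesis's formal composition theory.

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            safety: float        # 0.0 .. 1.0, illustrative trustworthiness attribute
            reliability: float   # 0.0 .. 1.0
            max_latency_ms: int  # timeliness bound

        def compose(a, b, required_safety=0.99, required_latency_ms=100):
            """Compose two components and check that trustworthiness is preserved."""
            composed = Component(
                name=f"{a.name}+{b.name}",
                safety=min(a.safety, b.safety),
                reliability=min(a.reliability, b.reliability),
                max_latency_ms=a.max_latency_ms + b.max_latency_ms,
            )
            if composed.safety < required_safety or composed.max_latency_ms > required_latency_ms:
                raise ValueError(f"composition {composed.name} violates trustworthiness requirements")
            return composed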