4 research outputs found

    Engineering Trustworthy Self-Adaptive Autonomous Systems

    Autonomous Systems (AS) are becoming ubiquitous in our society. Examples include autonomous vehicles, unmanned aerial vehicles (UAVs), autonomous trading systems, self-managing telecom networks and smart factories. Autonomous systems rely on continuous interaction with the environment in which they are deployed, and more often than not this environment is dynamic and partially unknown. AS must be able to make decisions autonomously at run time, even in the presence of uncertainty. Software is the main enabler of AS: it allows an AS to self-adapt in response to changes in the environment and to evolve through the deployment of new features.

    Traditionally, software development techniques are based on a complete description, at design time, of how the system must behave under different environmental conditions. This is no longer effective, since the system has to be able to explore and learn from the environment in which it operates even after deployment. Reinforcement learning (RL) algorithms discover policies that can lead an AS to achieve its goals in a dynamic and unknown environment. The developer no longer specifies how the system should act in each possible situation; instead, the RL algorithm reaches optimal behaviour by trial and error. Once trained, the AS is capable of making decisions and performing actions autonomously while still learning from the environment. These systems are becoming increasingly powerful, yet this flexibility comes at a cost: the learned policy does not necessarily guarantee safety or the achievement of the goals.

    This thesis explores the problem of building trustworthy autonomous systems from different angles. Firstly, we have identified the state of the art and the challenges of building autonomous systems, with a particular focus on autonomous vehicles. Then, we have analysed how current approaches to formal verification can provide assurances in a System of Systems scenario. Finally, we have proposed methods that combine formal verification with reinforcement learning agents to address two major challenges: how to trust that an autonomous system will be able to achieve its goals, and how to ensure that the behaviour of the AS is safe.
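    To make the trial-and-error idea concrete, the following sketch shows tabular Q-learning on a toy grid world: the developer never specifies how to act in each state, yet repeated value updates converge to a goal-reaching policy. The environment, reward values and hyperparameters are illustrative assumptions, not taken from the thesis.

    # Minimal tabular Q-learning sketch: an agent discovers a policy for a
    # 4x4 grid by trial and error. Everything here is illustrative.
    import random
    from collections import defaultdict

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
    GOAL, SIZE = (3, 3), 4

    def step(state, action):
        """Clamp moves to the grid; reward 1 only on reaching the goal."""
        x = min(max(state[0] + action[0], 0), SIZE - 1)
        y = min(max(state[1] + action[1], 0), SIZE - 1)
        nxt = (x, y)
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    Q = defaultdict(float)                          # state-action values
    alpha, gamma, eps = 0.1, 0.9, 0.2
    for _ in range(2000):                           # trial-and-error episodes
        s, done = (0, 0), False
        while not done:
            # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda act: Q[(s, act)]))
            s2, r, done = step(s, a)
            # One-step temporal-difference update toward the observed return.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

    The greedy policy read off from Q is exactly the kind of learned artefact that, as the thesis argues, still needs separate assurance that it is safe and actually achieves its goals.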

    Designing Trustworthy Autonomous Systems

    The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as: i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations, so that the system's behavior never violates its requirements; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment.

    Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, leveraging existing tools and methodologies, practically supports the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We have analyzed how current approaches to formal verification can provide assurances: 1) to the requirement corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements are not misinterpreted; 3) to the execution of the system, by run-time monitoring and by enforcing certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a correct policy.
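    As a concrete reading of point 3 above (run-time monitoring and enforcement of invariants), the sketch below wraps a learned policy in a monitor that overrides any proposed action violating a safety invariant. The policy interface, the invariant and the fallback action are hypothetical; the enforcement mechanism developed in the thesis may differ.

    def shielded_policy(policy, is_safe, actions, fallback):
        """Wrap `policy` so it can never emit an action violating `is_safe`."""
        def act(state):
            proposed = policy(state)
            if is_safe(state, proposed):
                return proposed
            # Otherwise substitute any action the monitor accepts in this state.
            for a in actions:
                if is_safe(state, a):
                    return a
            return fallback                    # last resort, e.g. emergency brake
        return act

    # Hypothetical usage: never command acceleration at or above 110 km/h.
    actions = ["accelerate", "keep", "brake"]
    is_safe = lambda speed, a: not (a == "accelerate" and speed >= 110)
    learned = lambda speed: "accelerate"       # stand-in for a trained agent
    safe = shielded_policy(learned, is_safe, actions, "brake")
    assert safe(120) == "keep"                 # the unsafe proposal is overridden

    The monitor only needs the invariant to be checkable per state-action pair; the learned policy itself stays a black box, which is what makes this kind of enforcement attractive for trained agents.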

    Formal verification of the on-the-fly vehicle platooning protocol

    Future transportation systems are expected to be Systems of Systems (SoSs) composed of vehicles, pedestrians, roads, signs and other parts of the infrastructure. The boundaries of such systems change frequently and unpredictably, and the systems have to cope with different degrees of uncertainty. At the same time, they are expected to function correctly and reliably, which is why designing for resilience is becoming extremely important for these systems. One example of SoS collaboration is vehicle platooning, a promising concept that will help us deal with traffic congestion in the near future. Before such scenarios are deployed on real roads, vehicles must be guaranteed to act safely, hence their behaviour must be verified. In this paper, we describe a vehicle platooning protocol, focusing in particular on dynamic leader negotiation and message propagation. We represent the vehicles' behaviour with timed automata, so that we can formally verify the protocol's correctness through model checking.
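    The paper models the protocol with timed automata and verifies it with a model checker; purely as an illustration of that verification idea, the untimed sketch below exhaustively explores the reachable states of a drastically simplified two-vehicle leader-negotiation model and checks the invariant that at most one leader ever exists. The roles and the negotiation rule are invented for this example.

    def successors(state):
        """A follower may volunteer as candidate; a candidate becomes leader
        only if every other vehicle is still a follower (the negotiation rule)."""
        for i, role in enumerate(state):
            others = state[:i] + state[i + 1:]
            if role == "follower":
                yield state[:i] + ("candidate",) + state[i + 1:]
            elif role == "candidate" and all(r == "follower" for r in others):
                yield state[:i] + ("leader",) + state[i + 1:]

    init = ("follower", "follower")
    seen, frontier = {init}, [init]
    while frontier:                            # exhaustive reachability search
        s = frontier.pop()
        assert sum(r == "leader" for r in s) <= 1, f"invariant violated in {s}"
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print(f"{len(seen)} reachable states, invariant holds")

    A timed-automata model checker additionally handles clocks and timing constraints, which this toy search ignores entirely; what carries over is the principle of exhaustively checking an invariant over every reachable state.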