
    Stochastic modeling, analysis and verification of mission-critical systems and processes

    Software and business processes used in mission-critical defence applications are often characterised by stochastic behaviour. The causes of this behaviour range from unanticipated environmental changes and built-in random delays to component and communication protocol unreliability. This paper overviews the use of a stochastic modelling and analysis technique called quantitative verification to establish whether mission-critical software and business processes meet their reliability, performance and other quality-of-service requirements.
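
    As a rough illustration of the kind of property quantitative verification checks, the sketch below computes a reachability probability on a small, invented discrete-time Markov chain by fixed-point iteration. The states, transition probabilities and the "mission completes" property are assumptions for illustration only, not the paper's models or tooling.

        # Minimal sketch (hypothetical model): checking a reliability property on a
        # small discrete-time Markov chain by value iteration, in the spirit of
        # quantitative verification.
        import numpy as np

        # States: 0 = trying, 1 = retrying after a transient fault, 2 = done, 3 = failed
        # P[i, j] = probability of moving from state i to state j in one step.
        P = np.array([
            [0.00, 0.05, 0.90, 0.05],   # trying: 5% transient fault, 5% hard failure
            [0.00, 0.00, 0.80, 0.20],   # retrying: retry succeeds 80% of the time
            [0.00, 0.00, 1.00, 0.00],   # done (absorbing)
            [0.00, 0.00, 0.00, 1.00],   # failed (absorbing)
        ])

        def prob_reach(P, target, iters=1000):
            """Probability of eventually reaching `target` from each state."""
            x = np.zeros(len(P))
            x[target] = 1.0
            for _ in range(iters):      # fixed-point iteration for P(F target)
                x = P @ x
                x[target] = 1.0
            return x

        reliability = prob_reach(P, target=2)[0]
        print(f"P(mission completes) = {reliability:.4f}")  # compared against, e.g., a P >= 0.98 requirement

    In a real analysis the model and the property would come from the system's specification and a probabilistic model checker; the iteration above only shows what "quantifying" a reliability requirement means.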

    On Managing Knowledge for MAPE-K Loops in Self-Adaptive Robotics Using a Graph-Based Runtime Model

    Service robotics involves the design of robots that work in a dynamic and very open environment, usually shared with people. In this scenario, it is very difficult for decision-making processes to be completely closed at design time, and it is necessary to define a certain variability that will be closed at runtime. MAPE-K (Monitor–Analyze–Plan–Execute over a shared Knowledge) loops are a very popular scheme to address this real-time self-adaptation. As stated in their own definition, they include monitoring, analysis, planning, and execution modules, which interact through a knowledge model. As the problems to be solved by the robot can be very complex, it may be necessary for several MAPE loops to coexist simultaneously in the robotic software architecture deployed on the robot. The loops will then need to be coordinated, for which they can use the knowledge model, a representation that will include information about the environment and the robot, but also about the actions being executed. This paper describes the use of a graph-based representation, the Deep State Representation (DSR), as the knowledge component of the MAPE-K scheme applied in robotics. The DSR manages perceptions and actions, and allows for inter- and intra-coordination of MAPE-K loops. The graph is updated at runtime, representing symbolic and geometric information. The scheme has been successfully applied in a retail intralogistics scenario, where a pallet truck robot has to manage roll containers to satisfy requests from human pickers working in the warehouse. Partial funding for open access charge: Universidad de Málaga. This work has been partially developed within SA3IR (an experiment funded by the EU H2020 ESMERA Project under Grant Agreement 780265), the project RTI2018-099522-B-C4X, funded by the Gobierno de España and FEDER funds, and the B1-2021_26 project, funded by the University of Málaga.
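
    A minimal sketch of the idea of coordinating MAPE-K loops through a shared graph, assuming a toy networkx graph in place of the DSR and hypothetical node and edge attributes; this is not the paper's API, only an illustration of loops communicating through a common runtime model.

        # Minimal sketch (assumed names, not the DSR API): two MAPE-K loops
        # coordinating through a shared graph that holds symbolic facts about
        # the robot and its current tasks.
        import networkx as nx

        G = nx.DiGraph()                      # shared knowledge graph (stand-in for the DSR)
        G.add_node("robot", battery=0.42, pose=(1.0, 2.0))
        G.add_node("roll_container_7", location="aisle_3")
        G.add_edge("robot", "roll_container_7", action="transport", status="pending")

        def navigation_loop(G):
            """Monitor battery, plan a charging detour, record the decision in the graph."""
            if G.nodes["robot"]["battery"] < 0.5:                       # Monitor + Analyze
                G.nodes["robot"]["next_waypoint"] = "charging_station"  # Plan
                print("navigation: detour to charger")                  # Execute (stubbed)

        def task_loop(G):
            """React to the navigation loop's decision via the shared graph."""
            if G.nodes["robot"].get("next_waypoint") == "charging_station":
                G.edges["robot", "roll_container_7"]["status"] = "postponed"
                print("task: transport postponed")

        navigation_loop(G)   # both loops see the same runtime model, so coordination
        task_loop(G)         # happens through the graph rather than direct calls

    Because both loops read and write the same model, adding a further loop would not require new point-to-point interfaces between modules.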

    Tree Memory Networks for Modelling Long-term Temporal Dependencies

    In the domain of sequence modelling, Recurrent Neural Networks (RNNs) have achieved impressive results in a variety of application areas, including visual question answering, part-of-speech tagging and machine translation. However, this success in modelling short-term dependencies has not transitioned to application areas such as trajectory prediction, which require capturing both short-term and long-term relationships. In this paper, we propose a Tree Memory Network (TMN) for modelling long-term and short-term relationships in sequence-to-sequence mapping problems. The proposed network architecture is composed of an input module, a controller and a memory module. In contrast to related literature, which models the memory as a sequence of historical states, we model the memory as a recursive tree structure. This structure more effectively captures temporal dependencies across both short-term and long-term sequences through its hierarchical organisation. We demonstrate the effectiveness and flexibility of the proposed TMN in two practical problems, aircraft trajectory modelling and pedestrian trajectory modelling in a surveillance setting, and in both cases we outperform the current state-of-the-art. Furthermore, we perform an in-depth analysis of the evolution of the memory module content over time and provide visual evidence of how the proposed TMN is able to map both long-term and short-term relationships efficiently via a hierarchical structure.
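
    The hedged sketch below illustrates the general idea of a tree-structured memory: past states are pooled pairwise into a binary tree, so the root mixes recent leaves (short-term) with deep subtrees (long-term). The combiner, dimensions and data are assumptions; the paper's actual TMN architecture and training differ.

        # Minimal sketch (illustration only, not the paper's architecture): a memory
        # organised as a binary tree over past hidden states.
        import numpy as np

        rng = np.random.default_rng(0)
        D = 8                                                   # hidden-state dimensionality (assumed)
        W = rng.standard_normal((D, 2 * D)) / np.sqrt(2 * D)    # stand-in for a learned combiner

        def combine(left, right):
            """Merge two child memories into one parent cell."""
            return np.tanh(W @ np.concatenate([left, right]))

        def tree_memory(history):
            """Recursively pool a sequence of past states into a single root vector."""
            nodes = list(history)
            while len(nodes) > 1:
                if len(nodes) % 2 == 1:          # carry an odd node up by duplicating it
                    nodes.append(nodes[-1])
                nodes = [combine(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
            return nodes[0]

        history = [rng.standard_normal(D) for _ in range(16)]   # e.g. 16 past trajectory encodings
        root = tree_memory(history)
        print(root.shape)    # (8,) -- a fixed-size summary a controller could query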

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    ACon: A learning-based approach to deal with uncertainty in contextual requirements at runtime

    Context: Runtime uncertainty, such as an unpredictable operational environment and failure of the sensors that gather environmental data, is a well-known challenge for adaptive systems. Objective: To execute requirements that depend on context correctly, the system needs up-to-date knowledge about the context relevant to such requirements. Techniques to cope with uncertainty in contextual requirements are currently underrepresented. In this paper we present ACon (Adaptation of Contextual requirements), a data-mining approach to deal with runtime uncertainty affecting contextual requirements. Method: ACon uses feedback loops to maintain up-to-date knowledge about contextual requirements based on the current context information in which contextual requirements are valid at runtime. Upon detecting that contextual requirements are affected by runtime uncertainty, ACon analyses and mines contextual data to (re-)operationalise context and thereby update the information about contextual requirements. Results: We evaluate ACon in an empirical study of an activity scheduling system used by a crew of four rowers in a wild and unpredictable environment, using a complex monitoring infrastructure. Our study focused on evaluating the data-mining part of ACon and analysed the sensor data collected on board from 46 sensors, with 90,748 measurements per sensor. Conclusion: ACon is an important step in dealing with uncertainty affecting contextual requirements at runtime while considering end-user interaction. ACon supports systems in analysing the environment to adapt contextual requirements and complements existing requirements monitoring approaches by keeping the requirements monitoring specification up-to-date. Consequently, it avoids manual analysis that is usually costly in today's complex system environments.
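
    To make the "(re-)operationalise context" step concrete, here is a small hedged sketch that mines synthetic sensor data with a decision tree to recover the context condition under which a requirement held. The sensors, thresholds and the use of scikit-learn are assumptions for illustration, not ACon's implementation.

        # Minimal sketch (hypothetical data): mining sensor readings to recover the
        # context under which a contextual requirement was satisfied.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(1)
        wind = rng.uniform(0, 30, 500)            # m/s, from an anemometer (assumed sensor)
        boat_speed = rng.uniform(0, 6, 500)       # knots, from a GPS log (assumed sensor)
        X = np.column_stack([wind, boat_speed])

        # 1 = the "schedule next activity" requirement was satisfied in that context sample.
        y = ((wind < 18) & (boat_speed > 1.5)).astype(int)

        tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
        print(export_text(tree, feature_names=["wind", "boat_speed"]))
        # The printed thresholds become the updated context condition that a
        # requirements-monitoring specification could check at runtime.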

    Towards the Development of a Simulator for Investigating the Impact of People Management Practices on Retail Performance

    Often, models for understanding the impact of management practices on retail performance are developed under the assumption of stability, equilibrium and linearity, whereas retail operations are in reality dynamic, non-linear and complex. Alternatively, discrete-event and agent-based modelling are approaches that allow the development of simulation models of heterogeneous non-equilibrium systems for testing out different scenarios. When developing simulation models, one has to abstract and simplify from the real world, which means that one has to try to capture the 'essence' of the system required for developing a representation of the mechanisms that drive the progression in the real system. Simulation models can be developed at different levels of abstraction. Knowing the appropriate level of abstraction for a specific application is often more of an art than a science. We have developed a retail branch simulation model to investigate which level of model accuracy is required for such a model to obtain meaningful results for practitioners.
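
    As a hedged illustration of the kind of abstraction such a simulation makes, the sketch below is a tiny time-stepped branch model in which one people-management lever (staff training) changes service times; all numbers and rules are invented and are not the authors' model.

        # Minimal sketch (illustrative assumptions only): a time-stepped retail-branch
        # simulation where staff training changes service time, and we observe the
        # effect on served and lost customers.
        import random

        def simulate(n_staff, trained, minutes=480, arrival_p=0.3, seed=42):
            random.seed(seed)
            service_time = 4 if trained else 7      # minutes per customer (assumed effect)
            busy_until = [0] * n_staff              # when each staff member becomes free
            served = lost = 0
            queue = []                              # arrival times of waiting customers
            for t in range(minutes):
                if random.random() < arrival_p:     # stochastic customer arrivals
                    queue.append(t)
                if queue and t - queue[0] > 10:     # customers renege after 10 minutes
                    queue.pop(0)
                    lost += 1
                for i in range(n_staff):
                    if busy_until[i] <= t and queue:
                        queue.pop(0)
                        busy_until[i] = t + service_time
                        served += 1
            return served, lost

        for trained in (False, True):
            print("trained =", trained, "->", simulate(n_staff=2, trained=trained))

    Even this toy model shows why the abstraction level matters: adding or removing a single behavioural rule (here, reneging) changes which performance effects the model can reveal.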

    Designing Trustworthy Autonomous Systems

    The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that the system's behaviors never violate its requirements; iii) maximizing the reuse of available components and subsystems in order to cope with the design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, leveraging existing tools and methodologies, practically helps the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We have analyzed how current approaches to formal verification can provide assurances: 1) to the requirements corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements do not get misinterpreted; 3) to the execution of the system, by run-time monitoring and enforcing certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a policy which is correct.
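
    As a small illustration of point 3), run-time monitoring and enforcement of an invariant, the hedged sketch below wraps an untrusted controller so that a speed-limit invariant is never violated; the controller, limit and names are assumptions for illustration, not the thesis' framework.

        # Minimal sketch (assumed names): a run-time enforcer that guarantees the
        # invariant "commanded speed <= SPEED_LIMIT" by overriding unsafe commands.
        SPEED_LIMIT = 2.0  # m/s, a safety invariant assumed for illustration

        def untrusted_controller(distance_to_goal):
            """Some learned or hand-written policy; may propose unsafe commands."""
            return 0.8 * distance_to_goal          # grows without bound with distance

        def enforce(cmd):
            """Pass safe commands through, clamp unsafe ones."""
            if cmd > SPEED_LIMIT:
                print(f"monitor: overriding {cmd:.2f} -> {SPEED_LIMIT}")
                return SPEED_LIMIT
            return cmd

        for d in (0.5, 1.0, 5.0):
            safe_cmd = enforce(untrusted_controller(d))
            print(f"distance={d}: commanded speed={safe_cmd}")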

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance Overview and Issues; Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.

    Event-driven Temporal Models for Explanations - ETeMoX: Explaining Reinforcement Learning

    Modern software systems are increasingly expected to show higher degrees of autonomy and self-management to cope with uncertain and diverse situations. As a consequence, autonomous systems can exhibit unexpected and surprising behaviours. This is exacerbated by the ubiquity and complexity of Artificial Intelligence (AI)-based systems. This is the case for Reinforcement Learning (RL), where autonomous agents learn through trial and error how to find good solutions to a problem. Thus, the underlying decision-making criteria may become opaque to users who interact with the system and who may require explanations about the system's reasoning. Available work on eXplainable Reinforcement Learning (XRL) offers different trade-offs: e.g. for runtime explanations, the approaches are model-specific or can only analyse results after the fact. Different from these approaches, this paper aims to provide an online, model-agnostic approach for XRL towards trustworthy and understandable AI. We present ETeMoX, an architecture based on temporal models to keep track of the decision-making processes of RL systems. In cases where resources are limited (e.g. storage capacity or response time), the architecture also integrates complex event processing, an event-driven approach, to detect matches to event patterns that need to be stored, instead of keeping the entire history. The approach is applied to a mobile communications case study that uses RL for its decision-making. In order to test the generalisability of our approach, three variants of the underlying RL algorithm are used: Q-Learning, SARSA and DQN. The encouraging results show that, using the proposed configurable architecture, RL developers are able to obtain explanations about the evolution of a metric and the relationships between metrics, and to track situations of interest happening over time windows.
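
    A hedged sketch of the event-driven idea: instead of persisting the full decision history, keep a sliding window of RL decision events and archive it only when a pattern of interest (here, a reward drop above a threshold) matches. The event schema, window size and threshold are assumptions for illustration, not the ETeMoX implementation.

        # Minimal sketch (assumed event schema): event-driven filtering of RL
        # decision events, storing only windows that match a pattern of interest.
        from collections import deque
        import random

        WINDOW, DROP = 5, 0.5                      # window length and drop threshold (assumed)
        window = deque(maxlen=WINDOW)
        stored = []                                # what a temporal model would persist

        def on_decision(step, action, reward):
            """Called for every RL decision; archives only pattern matches."""
            window.append((step, action, reward))
            rewards = [r for _, _, r in window]
            if len(rewards) == WINDOW and max(rewards) - rewards[-1] > DROP:
                stored.append(list(window))        # a "situation of interest" to explain later
                print(f"step {step}: reward drop detected, window archived")

        random.seed(0)
        for step in range(50):
            reward = 1.0 if step < 30 else random.uniform(0.0, 0.4)   # simulated degradation
            on_decision(step, action=random.choice(["up", "down"]), reward=reward)

    The archived windows, rather than the full history, are then what a developer would query when asking why the learned behaviour changed.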