89 research outputs found

    Deriving behavioral specifications of industrial software components

    Learning and testing the bounded retransmission protocol

    Using a well-known industrial case study from the verification literature, the bounded retransmission protocol, we show how active learning can be used to establish the correctness of a protocol implementation I relative to a given reference implementation R. Using active learning, we learn a model M_R of reference implementation R, which serves as input for a model-based testing tool that checks conformance of implementation I to M_R. In addition, we explore an alternative approach in which we learn a model M_I of implementation I, which is compared to model M_R using an equivalence checker. Our work uses a unique combination of software tools for model construction (Uppaal), active learning (LearnLib, Tomte), model-based testing (JTorX, TorXakis) and verification (CADP, MRMC). We show how these tools can be used for learning these models, analyzing the obtained results, and improving the learning performance.
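    To make the workflow concrete, below is a minimal sketch of the conformance-checking step described above: random input sequences are run through the learned reference model M_R and the implementation I, and the first output discrepancy is reported as a counterexample. The `reset`/`step` interface, the input alphabet, and the random test strategy are illustrative assumptions; they are not the JTorX or TorXakis APIs.

```python
import random

INPUTS = ["req", "ack", "timeout"]          # illustrative input alphabet, not the BRP's actual one

def run_sequence(system, inputs):
    """Reset the system, feed it the input sequence, and collect the outputs."""
    system.reset()
    return [system.step(i) for i in inputs]

def conformance_test(model_MR, impl_I, n_tests=1000, max_len=20, seed=0):
    """Compare impl_I against the learned model M_R on random input sequences."""
    rng = random.Random(seed)
    for _ in range(n_tests):
        seq = [rng.choice(INPUTS) for _ in range(rng.randint(1, max_len))]
        expected = run_sequence(model_MR, seq)   # outputs predicted by the learned model
        observed = run_sequence(impl_I, seq)     # outputs of the implementation under test
        if expected != observed:
            return seq, expected, observed       # counterexample: I deviates from M_R
    return None                                  # no discrepancy found within the test budget
```

    In the setting of the paper, this random-testing role is played by a dedicated model-based testing tool rather than a hand-rolled loop; the sketch only shows the shape of the check.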

    On specification-based cyber-attack detection in smart grids

    The transformation of power grids into intelligent cyber-physical systems brings numerous benefits, but also significantly increases the attack surface for cyber-attacks, demanding appropriate countermeasures. However, the development, validation, and testing of data-driven countermeasures against cyber-attacks, such as machine learning-based detection approaches, lack important data from real-world cyber incidents. Unlike attack data from real-world cyber incidents, infrastructure knowledge and standards are accessible through expert and domain knowledge. Our proposed approach uses domain knowledge to define the behavior of a smart grid under non-attack conditions and detect attack patterns and anomalies. Using a graph-based specification formalism, we combine cross-domain knowledge that enables the generation of whitelisting rules not only for statically defined protocol fields but also for communication flows and technical operation boundaries. Finally, we evaluate our specification-based intrusion detection system against various attack scenarios and assess detection quality and performance. In particular, we investigate a data manipulation attack in a future-oriented use case of an IEC 60870-based SCADA system that controls distributed energy resources in the distribution grid. Our approach can detect severe data manipulation attacks with high accuracy in a timely and reliable manner.
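    As an illustration of the whitelisting idea, the sketch below accepts a message only if its communication flow, protocol type, and set-point value all fall within a specification derived from domain knowledge. The rule structure, field names, and bounds are simplified assumptions; they do not reproduce the paper's graph-based formalism or the IEC 60870-5-104 message encoding.

```python
# Hedged sketch of specification-based whitelisting: a message must satisfy
# flow, protocol-field, and operational-boundary rules to be accepted.

ALLOWED_FLOWS = {("scada", "der_1"), ("der_1", "scada")}   # permitted (src, dst) pairs
ALLOWED_TYPE_IDS = {"C_SE_NA_1", "M_ME_NA_1"}              # example ASDU type identifiers
SETPOINT_BOUNDS = {"der_1": (0.0, 50.0)}                   # illustrative kW limits per device

def is_whitelisted(msg):
    """Return (ok, reason); msg is a dict with src, dst, type_id, value."""
    if (msg["src"], msg["dst"]) not in ALLOWED_FLOWS:
        return False, "unknown communication flow"
    if msg["type_id"] not in ALLOWED_TYPE_IDS:
        return False, "protocol field outside whitelist"
    lo, hi = SETPOINT_BOUNDS.get(msg["dst"], (float("-inf"), float("inf")))
    if not lo <= msg["value"] <= hi:
        return False, "set-point outside technical operation boundary"
    return True, "accepted"

# A manipulated set-point is flagged even though flow and type are legitimate.
print(is_whitelisted({"src": "scada", "dst": "der_1",
                      "type_id": "C_SE_NA_1", "value": 500.0}))
```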

    Conflict-Aware Active Automata Learning

    Active automata learning algorithms cannot easily handle conflicts in the observation data (different outputs observed for the same inputs). This inherent inability to recover after a conflict impairs their effective applicability in scenarios where noise is present or the system under learning is mutating. We propose the Conflict-Aware Active Automata Learning (C3AL) framework to enable handling conflicting information during the learning process. The core idea is to consider the so-called observation tree as a first-class citizen in the learning process. Though this idea has been explored in recent work, we take it to its full effect by enabling its use with any existing learner and by minimizing the number of tests performed on the system under learning, especially in the face of conflicts. We evaluate C3AL on a large set of benchmarks, covering over 30 different realistic targets and over 18,000 different scenarios. The results of the evaluation show that C3AL is a suitable alternative framework for closed-box learning that can better handle noise and mutations.
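    The sketch below illustrates the kind of observation tree the framework builds on: traces are stored along a prefix tree, and a conflict is reported when a previously recorded output disagrees with a newly observed one. The data structure and method names are assumptions for illustration; the actual C3AL machinery around it (re-testing, learner integration) is not shown.

```python
# Hedged sketch of an observation tree that detects conflicts, i.e. different
# outputs observed for the same input sequence.

class ObsNode:
    def __init__(self):
        self.children = {}   # input -> (recorded output, child ObsNode)

class ObservationTree:
    def __init__(self):
        self.root = ObsNode()

    def record(self, inputs, outputs):
        """Add a trace; return the index of the first conflicting step, or None."""
        node = self.root
        for k, (i, o) in enumerate(zip(inputs, outputs)):
            if i in node.children:
                stored_o, child = node.children[i]
                if stored_o != o:
                    return k          # conflict: same input prefix, different output
                node = child
            else:
                child = ObsNode()
                node.children[i] = (o, child)
                node = child
        return None

tree = ObservationTree()
tree.record(["a", "b"], ["0", "1"])
print(tree.record(["a", "b"], ["0", "2"]))   # -> 1: conflict observed at the second step
```

    A conflict-aware learner would react to such a report by re-testing the affected trace or revising the tree, rather than aborting as a classical learner would.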

    Logs and Models in Engineering Complex Embedded Production Software Systems

    On Supervisor Synthesis via Active Automata Learning

    Our society's reliance on computer-controlled systems is rapidly growing. Such systems are found in various devices, ranging from simple light switches to safety-critical systems like autonomous vehicles. In the context of safety-critical systems, safety and correctness are of utmost importance. Faults and errors could have catastrophic consequences. Thus, there is a need for rigorous methodologies that help provide guarantees of safety and correctness. Supervisor synthesis, the concept of being able to mathematically synthesize a supervisor that ensures that the closed-loop system behaves in accordance with known requirements, can indeed help.

    This thesis introduces supervisor learning, an approach to help automate the learning of supervisors in the absence of plant models. Traditionally, supervisor synthesis makes use of plant models and specification models to obtain a supervisor. Industrial adoption of this method is limited due to, among other things, the difficulty in obtaining usable plant models. Manually creating these plant models is an error-prone and time-consuming process. Thus, supervisor learning intends to improve the industrial adoption of supervisory control by automating the process of generating supervisors in the absence of plant models.

    The idea here is to learn a supervisor for the system under learning (SUL) by active interaction and experimentation. To this end, we present two algorithms, SupL* and MSL, that directly learn supervisors when provided with a simulator of the SUL and its corresponding specifications. SupL* is a language-based learner that learns one supervisor for the entire system. MSL, on the other hand, learns a modular supervisor, that is, several smaller supervisors, one for each specification. Additionally, a third algorithm, MPL, is introduced for learning a modular plant model.

    The approach is realized in the tool MIDES and has been used to learn supervisors in a virtual manufacturing setting for the Machine Buffer Machine example, as well as to learn a model of the Lateral State Manager, a sub-component of a self-driving car. These case studies show the feasibility and applicability of the proposed approach, in addition to helping identify future directions for research.
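    The sketch below illustrates the active-interaction idea in its simplest form: explore a simulator of the SUL breadth-first and record which controllable events must be disabled because they lead directly to specification violations. The simulator interface and helper names are assumptions, and the sketch omits the propagation needed when only uncontrollable events reach a violation; it is not the SupL*, MSL, or MPL algorithms, nor the MIDES tool interface.

```python
# Hedged sketch of learning disablement decisions by experimenting on a simulator.

from collections import deque

def learn_disablements(simulator, spec_ok, controllable, max_states=10_000):
    """Breadth-first exploration; returns the set of (state, event) pairs to disable.

    Assumed simulator interface (illustrative only):
      simulator.initial() -> hashable state
      simulator.enabled(state) -> iterable of events
      simulator.step(state, event) -> next state
    spec_ok(state) -> bool encodes the specification check.
    """
    disabled = set()
    seen = {simulator.initial()}
    queue = deque(seen)
    while queue and len(seen) < max_states:
        state = queue.popleft()
        for event in simulator.enabled(state):
            nxt = simulator.step(state, event)
            if not spec_ok(nxt) and event in controllable:
                disabled.add((state, event))   # the learned supervisor must disable this event
                continue
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return disabled
```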