
    Relational specification as a testing oracle

    The software engineering community is well aware of the usefulness of formal methods for specifying, designing and testing software. Despite this, the testing literature rarely deals with specification-based testing. Testing from formal specifications offers a simpler, more structured and more rigorous approach to functional testing than informal techniques. An important application of specifications in testing is providing test oracles. The increasing use of computers in safety-critical control systems, e.g., flight control systems, necessitates rigorous system testing before deployment. In flight control systems, requirements are mostly concerned with the safety and maneuverability of an aircraft. In this domain, the use of formal approaches to requirements specification and system verification is strongly encouraged. In our study, relational notation was used to model the requirements of a generic flight control system. The advantage of the relational approach is that the requirements can be partitioned into less complex components, each specified separately with a set of relations. The formal nature of the relational notation is exploited in a verification framework where the specifications are used as an oracle to test a system implementation.
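
    As a rough illustration of the oracle idea (a sketch only, not taken from the paper), each requirement component can be expressed as a relation over observed inputs and outputs, and the conjunction of those relations judges every observed step of the implementation; the relation and signal names below are hypothetical.

```python
# Minimal sketch: requirement components expressed as relations between
# observed inputs and outputs, used together as a pass/fail test oracle.
# The concrete relations and signal names are invented, not from the paper.

def pitch_limit_relation(inp, out):
    """Relation R1: commanded pitch must stay within the safe envelope."""
    return -15.0 <= out["pitch_cmd"] <= 25.0

def mode_consistency_relation(inp, out):
    """Relation R2: autopilot output is only valid when autopilot is engaged."""
    return (not out["ap_active"]) or inp["ap_engaged"]

# The overall specification is the conjunction of the component relations.
SPEC = [pitch_limit_relation, mode_consistency_relation]

def oracle(inp, out):
    """Return the violated relations for one observed (input, output) pair."""
    return [rel.__name__ for rel in SPEC if not rel(inp, out)]

# Example verdict for one observed step of the system under test.
violations = oracle({"ap_engaged": False}, {"pitch_cmd": 30.0, "ap_active": True})
print(violations)  # ['pitch_limit_relation', 'mode_consistency_relation']
```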

    Extended Finite State Machine based test generation for an OpenFlow switch

    Implementations of an OpenFlow (OF) switch, a crucial Software Defined Networking (SDN) component, are prone to errors caused by developer mistakes and/or ambiguous requirements in the OF documents. The paper is devoted to test derivation for such OF switch implementations. A model-based test generation strategy is proposed. It relies on an Extended Finite State Machine (EFSM) specification that describes the functional behaviour of the switch-to-controller communication, while potential faults/misconfigurations are expressed via corresponding mutation operators. We propose a method for deriving a test suite that contains distinguishing sequences for the specification EFSM and the corresponding mutants. The proposed approach is implemented as a testbed to automatically derive and execute the test suites against different versions of an OF implementation. Preliminary experimental evaluation has shown the effectiveness of the proposed approach. Furthermore, the derived test suites have been able to detect a number of functional inconsistencies, such as erroneous responses to Flow Mod messages adding rules with specific 'action' values in an available Open vSwitch implementation.
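
    The core of the strategy can be pictured as follows (an illustrative sketch with invented states, inputs and mutation, not the actual OpenFlow models from the paper): run a candidate input sequence on the specification EFSM and on a mutant, and keep the sequence for the test suite if their outputs differ.

```python
# Sketch of mutation-based test selection for an EFSM specification.
class EFSM:
    def __init__(self, transitions, initial="INIT"):
        # transitions: (state, input) -> (next_state, output_fn(context))
        self.transitions = transitions
        self.initial = initial

    def run(self, inputs):
        state, context, outputs = self.initial, {"flows": 0}, []
        for inp in inputs:
            state, out_fn = self.transitions[(state, inp)]
            outputs.append(out_fn(context))
        return outputs

def spec_transitions():
    return {
        ("INIT", "hello"): ("UP", lambda c: "hello_reply"),
        ("UP", "flow_mod_add"): ("UP", lambda c: c.update(flows=c["flows"] + 1) or "ok"),
        ("UP", "barrier"): ("UP", lambda c: f"barrier_reply({c['flows']} flows)"),
    }

def mutant_transitions():
    t = spec_transitions()
    # Mutation operator: the mutant silently drops the barrier reply.
    t[("UP", "barrier")] = ("UP", lambda c: None)
    return t

def distinguishes(seq, spec, mutant):
    """A sequence is kept for the test suite if it tells spec and mutant apart."""
    return spec.run(seq) != mutant.run(seq)

spec, mut = EFSM(spec_transitions()), EFSM(mutant_transitions())
seq = ["hello", "flow_mod_add", "barrier"]
print(distinguishes(seq, spec, mut))  # True -> include seq in the test suite
```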

    Augmenting High-Level Petri Nets to Support GALS Distributed Embedded Systems Specification

    High-level Petri net classes are suited to specifying concurrent processes with emphasis on both control and data processing, making them appropriate for specifying distributed embedded systems (DES). Embedded system components are usually synchronous, which means that a DES can be seen as a Globally-Asynchronous Locally-Synchronous (GALS) system. This paper proposes to include in high-level Petri nets a set of concepts already introduced for low-level Petri nets that allow the specification of GALS systems, namely time domains, test arcs and priorities. Additionally, this paper proposes external messages and three types of (high-level) asynchronous communication channels to specify the interaction between distributed components based on message exchange. With these extensions, GALS-DES can be specified using high-level Petri nets. The resulting models include the specification of each component with well-defined boundaries and interfaces, as well as the explicit specification of the asynchronous interaction between components. These models will be used not only to specify system behavior, but also as the input for model-checking tools (supporting verification) and automatic code generation tools (supporting implementation on software and hardware platforms), contributing to the model-based development and hardware-software co-design of DES based on high-level Petri nets.
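
    The GALS decomposition can be illustrated very roughly as follows (a toy sketch in ordinary code, not the paper's Petri net notation): each component advances in its own time domain and interacts with the others only through buffered, asynchronous message channels.

```python
# Two locally synchronous components coupled only through asynchronous channels.
from collections import deque

class Component:
    """A locally synchronous component: it reacts only at its own clock ticks."""
    def __init__(self, name):
        self.name = name
        self.busy = False

    def tick(self, incoming: deque, outgoing: deque, log: list):
        # One local synchronous step; all cross-component interaction goes
        # through the buffered channels, never through a shared clock.
        if not self.busy and incoming:
            log.append(f"{self.name} consumes {incoming.popleft()!r}")
            self.busy = True
        elif self.busy:
            outgoing.append(f"{self.name}-done")
            log.append(f"{self.name} sends {self.name}-done")
            self.busy = False

# Asynchronous channels: FIFO message buffers between the two components.
a_to_b, b_to_a, log = deque(["start"]), deque(), []
comp_a, comp_b = Component("A"), Component("B")

for _ in range(4):            # interleave ticks of the two independent time domains
    comp_b.tick(a_to_b, b_to_a, log)
    comp_a.tick(b_to_a, a_to_b, log)
print("\n".join(log))
```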

    Model-Based Testing for Composite Web Services in Cloud Brokerage Scenarios

    Cloud brokerage is an enabling technology allowing various services to be merged together to provide optimum quality of service for end-users. Within this collection of composed services, testing is a challenging task which brokers have to take on to ensure quality of service. Most Software-as-a-Service (SaaS) testing has focused on high-level test generation from the functional specification of individual services, with little research into how to achieve sufficient test coverage of composite services. This paper explores the use of model-based testing to achieve testing of composite services, when two individual web services are tested and combined. Two example web services – a login service and a simple shopping service – are combined to give a more realistic shopping cart service. This paper focuses on the test coverage required for testing the component services individually and their composition. The paper highlights the problems of service composition testing, which requires a reworking of the combined specification and regeneration of the tests, rather than a simple composition of the test suites; it concludes by arguing that more work needs to be done in this area.
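
    The composition problem can be sketched in miniature (hypothetical models, not the paper's): when the two service models are combined, cross-service constraints appear that exist in neither individual model, so tests must be regenerated from the composed specification rather than concatenated from the individual suites.

```python
# Composing two simple service models into a shopping-cart model.
from itertools import product

login_model = {            # state -> {input: next_state}
    "LoggedOut": {"login_ok": "LoggedIn"},
    "LoggedIn": {"logout": "LoggedOut"},
}
shop_model = {
    "Empty": {"add_item": "HasItems"},
    "HasItems": {"checkout": "Empty"},
}

def compose(m1, m2):
    """Product of the two models plus a cross-service constraint added by hand:
    checkout is only enabled while the user is logged in."""
    composed = {}
    for s1, s2 in product(m1, m2):
        moves = {i: (t, s2) for i, t in m1[s1].items()}
        for i, t in m2[s2].items():
            if i == "checkout" and s1 != "LoggedIn":
                continue          # this guard exists in neither individual model
            moves[i] = (s1, t)
        composed[(s1, s2)] = moves
    return composed

cart_model = compose(login_model, shop_model)

# A test path that only makes sense for the composed shopping cart service.
state = ("LoggedOut", "Empty")
for step in ["login_ok", "add_item", "checkout", "logout"]:
    state = cart_model[state][step]
print(state)   # ('LoggedOut', 'Empty') -- derived from the composed model,
               # not from a concatenation of the two individual test suites
```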

    Contract Testing for Reliable Embedded Systems

    Embedded systems comprise diverse technologies, which complicates their design. By creating virtual prototypes of the target system, Electronic System Level design makes early analysis of a system composed of electronics and software possible. However, the concrete interaction between hardware modules, and between hardware and software, is left for late development stages and real prototyping. Generally, interaction between components is assumed to be correct. However, this assumption remains implicit during development because the interaction between components is not considered in the functionality design. While individual components are mostly tested thoroughly and guarantee certain reliability levels, their interaction is based on often underspecified interfaces. Although component usage is mostly specified, operational constraints are often left out. Finally, neither the interaction between components nor the interaction with the environment and the user is ensured. Generally, only functional integration tests are executed and corner cases are left out, leaving uncovered faults that only manifest as failures later, when their cost is higher. Therefore, this work addresses component interaction through the specification of interfaces, test generation and real-time test execution. The specification is based on the design-by-contract approach from software, which specifies the semantics of component interaction in addition to the syntactic definition through functions. In the first part of this work, a specification for the interaction between hardware modules is given. With automatic real-time test execution, fulfillment of the specified preconditions for correct component operation can be checked. In component-based design, the component is trusted and its functionality is therefore assumed to be correct when certain postconditions are specified. In a correct component assembly, component postconditions fulfill the preconditions of other components, resulting in an operational system. The specification of preconditions covers environmental properties and acceptable input sequences for interfacing pins, as well as acceptable signal parameters such as voltage levels, slope times, delays and glitches. Postconditions are defined by the description of a functionality together with accompanying constraints, such as timing. These parameters are automatically determined during operation by a testing circuit. Parameters that violate the specification are signaled by the testing circuit and a failure is detected. The chosen parameters can hint at the reason for the failure, providing evidence of a circuit fault. In the example of an Inter-Integrated Circuit (I2C) communication system, we define contracts and show comparisons between contract violations, fault categorization and failure occurrence under signal fault injection. To complete this work, support for fault analysis at the electronic system level is given. For this, the data transfers between the high-level models used in the design are augmented with the defined contract parameters. Through a specific interface, digital faults are generated for transactions whose signal parameters violate the contract, so that they can be tracked by the system. In this way, recovery mechanisms for synchronous communication are proposed and tested. In the second part, the interaction between hardware and software is tackled by providing dedicated methods for developing device drivers.
    For this, we not only specify the interface between hardware and software but also map the hardware control elements to software, partially generating the software interface for a device. This is necessary because drivers handle devices with internal control elements, such as registers, data streams and interrupts, that cannot be represented directly in software. This systematic composition of drivers facilitates the development of a device interface called the device mechanism. It is the lowest layer of a two-layer architecture for driver development. The device mechanism carries out access to the device, exporting a pure software interface. This interface is based on the device implementation and is thus fully specified. Further data processing required for compliance with the operating system or application is carried out in the driver policy, the layer on top of it. With the definition of a software layer for device control, contracts specifying constraints of this interface are proposed. These contracts are based on implementation constraints of the device and on its dynamic behavior. Therefore, an extended finite state machine models the dynamic behavior of the device. Based on it, functions of the device mechanism can be augmented with preconditions on the state or on state machine variables. These conditions are then checked at runtime. After execution of a function, its postconditions, such as timing, are ensured. This guarantees that different driver policies, whether operating systems or firmware, use the same device mechanism while fulfilling its constraints. Using the example of a Philips webcam, we develop the complete Linux driver based on our architecture, creating contracts for its device mechanism. Following the systematic composition and the contract approach, driver bugs that would otherwise violate allowed values for device data and the execution order of device protocols are avoided.
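
    A hedged sketch of what such a contract-guarded device-mechanism function might look like (the register addresses, device states and timing bound below are invented for illustration, not taken from the webcam driver):

```python
# Device-mechanism function with precondition and postcondition checks.
import time

class FakeBus:
    """Stand-in for the real register access so the sketch is runnable."""
    def __init__(self):
        self.regs = {}
    def write_register(self, addr, value):
        self.regs[addr] = value
        if addr == 0x10 and value == 0x01:
            self.regs[0x11] = 0x01      # the stub device acknowledges immediately
    def read_register(self, addr):
        return self.regs.get(addr, 0)

class CameraMechanism:
    """Lowest driver layer: pure software interface to the device, guarded by contracts."""
    def __init__(self, bus):
        self.bus = bus
        self.state = "IDLE"             # state of the device's extended finite state machine

    def start_stream(self):
        # Precondition from the device model: streaming may only start when idle.
        assert self.state == "IDLE", "contract violated: start_stream requires IDLE"
        t0 = time.monotonic()
        self.bus.write_register(0x10, 0x01)             # hypothetical 'stream enable' register
        while self.bus.read_register(0x11) != 0x01:     # hypothetical status register
            pass
        self.state = "STREAMING"
        # Postcondition: the device must acknowledge within the assumed 5 ms bound.
        assert time.monotonic() - t0 < 0.005, "contract violated: acknowledgement too slow"

cam = CameraMechanism(FakeBus())
cam.start_stream()
print(cam.state)    # STREAMING; any driver policy built on top inherits these checks
```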

    Designing a Magnetic Measurement Data Acquisition and Control System with Reuse in Mind: A Rotating Coil System Example

    Accelerator magnet test facilities frequently need to measure different magnets on differently equipped test stands and with different instrumentation. Designing a modular and highly reusable system that combines flexibility built in at the architectural level as well as at the component level addresses this need. Specification of the backbone of the system, with the interfaces and dataflow for software components and core hardware modules, serves as a basis for building such a system. The design process and implementation of an extensible magnetic measurement data acquisition and control system are described, including techniques for maximizing the reuse of software. The discussion is supported by showing the application of this methodology to constructing two dissimilar systems for rotating coil measurements, both based on the same architecture and sharing core hardware modules and many software components. The first system is for production testing of 10 m long cryo-assemblies containing two MQXFA quadrupole magnets for the high-luminosity upgrade of the Large Hadron Collider, and the second is for testing IQC conventional quadrupole magnets in support of the accelerator system at Fermilab.
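
    As a loose illustration of the backbone idea (hypothetical interface and class names, not the actual system's), a shared acquisition interface lets the same analysis components be reused across two differently instrumented test stands:

```python
# Common acquisition interface reused by two rotating-coil measurement systems.
from abc import ABC, abstractmethod

class CoilSignalSource(ABC):
    """Backbone interface that every test-stand acquisition module implements."""
    @abstractmethod
    def acquire_turn(self) -> list[float]:
        """Return one coil turn of integrated voltage samples."""

class MQXFAStandSource(CoilSignalSource):
    def acquire_turn(self):
        return [0.0] * 512      # placeholder for the cryo-assembly stand readout

class IQCStandSource(CoilSignalSource):
    def acquire_turn(self):
        return [0.0] * 128      # placeholder for the conventional-magnet stand readout

def analyse(source: CoilSignalSource):
    """Shared analysis component: reusable with any stand honouring the interface."""
    samples = source.acquire_turn()
    return len(samples)

print(analyse(MQXFAStandSource()), analyse(IQCStandSource()))   # 512 128
```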

    Formal and Fault Tolerant Design

    Software quality and reliability were long verified only at the post-implementation level (tests, fault scenarios, ...). The design of embedded systems and digital circuits is more and more complex because of integration density and heterogeneity. Today, almost three quarters of digital circuits contain at least one processor, that is, they can execute software code. In other words, co-design is the most usual case and traditional verification by simulation is no longer practical. Moreover, the increase in integration density comes with a decrease in the reliability of the components. So fault detection, diagnostic techniques and introspection are essential for the defect tolerance, fault tolerance and self-repair of safety-critical systems. The use of a formal specification language is considered the foundation of a real validation. What we would like to emphasize is that refinement (from an abstract model down to the point where the system is implemented) could and should be formal too, in order to ensure the traceability of requirements, to manage such development projects and thus to design fault-tolerant systems that are correct by proven construction. Such a thorough approach can be achieved by the automation or semi-automation of the refinement process. We have studied how to ensure the traceability of these requirements in a component-based approach. Reliability and fault tolerance can be seen here as particular refinement steps. For instance, a given formal specification of a system/component may be refined by adding redundancy (data, computation, component) and verified to be fault-tolerant w.r.t. some given fault scenarios. A self-repairing component can be defined as the refinement of its original form enhanced with error detection. We describe in this paper the PCSI project (Zero Defect Systems), based on the B Method, VHDL and PSL. The three modeling approaches can collaborate and guarantee the co-design of embedded systems for which the requirements and fault-tolerance aspects are taken into account from the beginning and formally verified throughout the implementation process.
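
    The idea of refinement by adding redundancy can be pictured with a toy example (a sketch in ordinary code, not the B Method): a triplicated component with a majority voter still satisfies the original specification under any single-replica fault scenario.

```python
# Toy "refinement by redundancy" checked against the abstract specification.
def spec(x):
    return x + 1                       # abstract specification of the computation

def refined(x, fault_at=None):
    # Component redundancy: three replicas plus a majority voter.
    replicas = [spec(x) for _ in range(3)]
    if fault_at is not None:
        replicas[fault_at] = 999       # injected fault scenario: one replica misbehaves
    return sorted(replicas)[1]         # majority value

# Check the refined model against the specification for each single-fault scenario.
for x in range(5):
    assert all(refined(x, fault_at=i) == spec(x) for i in (None, 0, 1, 2))
print("refined model is fault-tolerant w.r.t. single replica faults")
```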

    REMM-Studio: an Integrated Model-Driven Environment for Requirements Specification, Validation and Formatting

    In order to integrate requirements into the current Model-Driven Engineering (MDE) approach, the traditional document-based requirements specification process should be changed into a requirements modelling process. To achieve this we propose a requirements metamodel called REMM (Requirements Engineering MetaModel), which includes the elements that should appear in a requirements model (requirements, stakeholders, test cases, etc.) together with the relationships that may hold between them. This metamodel is the basis of the REMM-Studio environment, which enables (1) building graphical requirements models, (2) validating them against the metamodel and against a set of additional OCL constraints, and (3) automatically generating a navigable Software Requirements Specification (SRS) document as the main deliverable of the Requirements Engineering process. REMM-Studio is expected to ease the integration of requirements with other development models (e.g. component models) and to facilitate the validation of the SRS thanks to its navigability.
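
    To give a feel for the kind of rule an additional OCL constraint can express (a hypothetical sketch, not the actual REMM elements or constraints), the check below reports requirements that are not yet covered by any test case before the SRS document is generated:

```python
# Validation of a small requirements model against an invariant-style rule.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str

@dataclass
class Requirement:
    rid: str
    text: str
    test_cases: list[TestCase] = field(default_factory=list)

@dataclass
class RequirementsModel:
    requirements: list[Requirement]

    def validate(self):
        """Return violations analogous to failed OCL invariants."""
        return [r.rid for r in self.requirements if not r.test_cases]

model = RequirementsModel([
    Requirement("R1", "The user shall log in", [TestCase("TC1")]),
    Requirement("R2", "The cart shall persist items"),      # no test case yet
])
print(model.validate())   # ['R2'] -> reported before generating the SRS document
```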