
    Test Case Generation for a Level Crossing Controller

    Formal methods (FM) can be used for the precise specification, property-ensuring development and exhaustive property verification of systems. Thus they are especially suited for highly safety- or mission-critical applications. Railway signaling systems clearly belong to these applications, and there are indeed several industrial projects where FM have been applied successfully, especially to core interlocking and communication-based train control (CBTC) systems. But despite their potential, FM are not very widespread in the sector. Work Package 5 of the X2Rail-2 project seeks to foster the use of FM in railway signaling by providing an introduction and overview of formal methods and demonstrating their use and benefit. For the latter, four different formal development methods and one classical method are applied by different project partners to a level crossing (LX) controller specified by the Swedish railway infrastructure manager Trafikverket. For all of these developments, the safety properties from the LX specification are planned to be formally verified afterwards using the High Level Language (HLL). Since that means proving them exhaustively, they are of less interest for testing. However, there are further non-safety functional requirements in the specification which remain for testing. The extended abstract at hand reports on an automatic test case generation (TCG) approach for a test suite testing these requirements. In fact, this approach is based on formal methods as well, since the test case generator applies symbolic execution and theorem-solving techniques: given a behavioral model of the system under test (SUT), the former method finds feasible paths through the model, while the latter completes the test case by determining suitable test data. This way, the test design task is partly automated, structural coverage of the model is ensured, and the modeling process usually yields a high-quality test suite. The different LX controller implementations are tested as black-box systems, each one with the same generated test cases. In order to simplify the integration of the different implementations with the test environment, a common test interface has been drawn up.
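The two-step approach described in this abstract can be illustrated with a small sketch. The state machine, transition names, and brute-force search below are hypothetical and greatly simplified: a real TCG tool would use genuine symbolic execution and a theorem solver rather than exhaustive enumeration over a toy boolean input domain.

```python
# Illustrative sketch (not from the paper): enumerate feasible paths through
# a tiny, invented behavioral model of a level crossing controller, then
# search for input data satisfying each path's guards (a brute-force
# stand-in for the theorem-solving step).
from itertools import product

# Each transition: (target state, guard over the step's inputs).
MODEL = {
    "IDLE":    [("CLOSING", lambda v: v["train_approaching"])],
    "CLOSING": [("CLOSED",  lambda v: v["barrier_down"]),
                ("FAILURE", lambda v: not v["barrier_down"])],
    "CLOSED":  [("OPENING", lambda v: v["train_passed"])],
}

def feasible_paths(state, depth):
    """Enumerate transition paths up to `depth` (symbolic-execution analogue)."""
    if depth == 0 or state not in MODEL:
        return [[]]
    paths = [[]]
    for target, guard in MODEL[state]:
        for rest in feasible_paths(target, depth - 1):
            paths.append([(state, target, guard)] + rest)
    return paths

def solve(path):
    """Find concrete inputs per step satisfying every guard, or None."""
    names = ("train_approaching", "barrier_down", "train_passed")
    domain = [dict(zip(names, bits)) for bits in product([False, True], repeat=3)]
    data = []
    for _, _, guard in path:
        candidates = [v for v in domain if guard(v)]
        if not candidates:
            return None  # infeasible path: no input satisfies this guard
        data.append(candidates[0])
    return data

# Test suite: one test case (path + concrete data) per feasible path.
suite = [(p, solve(p)) for p in feasible_paths("IDLE", 3) if p and solve(p)]
for path, data in suite:
    print([f"{s}->{t}" for s, t, _ in path], data)
```

Because each test case pairs a covered model path with concrete input data, the suite achieves structural (transition) coverage of the model by construction, which is the quality argument made in the abstract.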

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Automatic instantiation of abstract tests on specific configurations for large critical control systems

    Computer-based control systems have grown in size, complexity, distribution and criticality. In this paper, a methodology is presented to perform abstract testing of such large control systems in an efficient way: an abstract test is specified directly from the system functional requirements and must be instantiated into multiple test runs to cover a specific configuration, comprising any number of control entities (sensors, actuators and logic processes). Such a process is usually performed by hand for each installation of the control system, requiring considerable time and effort and being an error-prone verification activity. To automate a safe passage from abstract tests, related to the so-called generic software application, to any specific installation, an algorithm is provided, starting from a reference architecture and a state-based behavioural model of the control software. The presented approach has been applied to a railway interlocking system, demonstrating its feasibility and effectiveness in several years of testing experience.
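The core idea of instantiating one abstract test over a specific configuration can be sketched as follows. The role names, entity identifiers, and step templates are invented for illustration; the paper's actual algorithm works from a reference architecture and a state-based behavioural model rather than simple string templates.

```python
# Hypothetical sketch: an abstract test written against generic roles is
# automatically expanded into one concrete run per combination of matching
# entities in a specific installation.
from itertools import product

# Abstract test: refers to roles only, never to concrete entities.
abstract_test = {
    "name": "occupied_section_sets_signal_red",
    "roles": ["track_sensor", "signal_actuator"],
    "steps": ["stimulate {track_sensor} with 'occupied'",
              "expect {signal_actuator} to command 'red'"],
}

# A specific configuration of one installation (assumed entity names).
configuration = {
    "track_sensor":    ["TS-01", "TS-02"],
    "signal_actuator": ["SIG-A"],
}

def instantiate(test, config):
    """Expand one abstract test into concrete runs, one per role binding."""
    role_values = [config[role] for role in test["roles"]]
    runs = []
    for combo in product(*role_values):
        binding = dict(zip(test["roles"], combo))
        runs.append([step.format(**binding) for step in test["steps"]])
    return runs

runs = instantiate(abstract_test, configuration)
# 2 track sensors x 1 signal actuator -> 2 concrete test runs
```

Automating this expansion removes the hand-written, per-installation test derivation that the abstract identifies as time-consuming and error-prone.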

    Validation of Ultrahigh Dependability for Software-Based Systems

    Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond it, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of these means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.

    PEER Testbed Study on a Laboratory Building: Exercising Seismic Performance Assessment

    From 2002 to 2004 (years five and six of a ten-year funding cycle), the PEER Center organized the majority of its research around six testbeds. Two buildings and two bridges, a campus, and a transportation network were selected as case studies to “exercise” the PEER performance-based earthquake engineering methodology. All projects involved interdisciplinary teams of researchers, each producing data to be used by other colleagues in their research. The testbeds demonstrated that it is possible to create the data necessary to populate the PEER performance-based framing equation, linking the hazard analysis, the structural analysis, the development of damage measures, loss analysis, and decision variables. This report describes one of the building testbeds, the UC Science Building. The project was chosen to focus attention on the consequences of losses of laboratory contents, particularly downtime. The UC Science testbed evaluated the earthquake hazard and the structural performance of a well-designed, recently built reinforced concrete laboratory building using the OpenSees platform. Researchers conducted shake table tests on samples of critical laboratory contents in order to develop fragility curves used to analyze the probability of losses based on equipment failure. The UC Science testbed undertook an extreme case in performance assessment: linking performance of contents to operational failure. The research shows the interdependence of building structure, systems, and contents in performance assessment, and highlights where further research is needed. The Executive Summary provides a short description of the overall testbed research program, while the main body of the report includes summary chapters from individual researchers. More extensive research reports are cited in the reference section of each chapter.

    Aging concrete structures: a review of mechanics and concepts

    The safe and cost-efficient management of our built infrastructure is a challenging task considering the expected service life of at least 50 years. In spite of time-dependent changes in material properties, deterioration processes and changing demand by society, the structures need to satisfy many technical requirements related to serviceability, durability, sustainability and bearing capacity. This review paper summarizes the challenges associated with the safe design and maintenance of aging concrete structures and gives an overview of some concepts and approaches that are being developed to address these challenges.

    Guidelines for data collection and monitoring for asset management of New Zealand road bridges
