8,643 research outputs found

    The natural history of bugs: using formal methods to analyse software related failures in space missions

    Space missions force engineers to make complex trade-offs between many different constraints including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $18 million. Payloads range from $4 million up to $1 billion for security related satellites. These costs do not include consequent business losses. In 2005, Intelsat wrote off $73 million from the failure of a single uninsured satellite. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions.

    Proceedings of the 17th Dutch Testing Day: Testing Evolvability, November 29, 2011, University of Twente, Enschede


    Specification and use of component failure patterns

    Safety-critical systems are typically assessed for their adherence to specified safety properties. They are studied down to the component level to identify root causes of any hazardous failures. Most recent work on model-based safety analysis has focused on improving system modelling techniques and the algorithms used for automatic analyses of failure models. However, few developments have been made to improve the scope of reusable analysis elements within these techniques. The failure behaviour of components is typically specified in a way that limits the applicability of such specifications across applications. The thesis argues that, by allowing more general expressions of failure behaviour, identifiable patterns of failure behaviour could be specified for use within safety analyses and reused across systems and applications wherever the conditions that allow such reuse are present. This thesis presents a novel Generalised Failure Language (GFL) for the specification and use of component failure patterns. Current model-based safety analysis methods are investigated to examine the scope and limits of achievable reuse within their analyses. One method, HiP-HOPS, is extended to demonstrate the application of GFL and the use of component failure patterns in the context of automated safety analysis. A managed approach to performing reuse is developed alongside the GFL to create a method for more concise and efficient safety analysis. The method is then applied to a simplified fuel supply system and a vehicle braking system, as well as to a set of legacy models that have previously been analysed using classical HiP-HOPS. The proposed GFL method is finally compared against classical HiP-HOPS, and in the light of this study the benefits and limitations of the approach are discussed in the conclusions.
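    The abstract does not reproduce the GFL notation itself, but the underlying idea of a reusable component failure pattern can be sketched. The Python snippet below is an illustrative analogy under my own assumptions, not the actual GFL or HiP-HOPS syntax: a failure expression is parameterised over a component's ports so that the same pattern can be instantiated for different components.

```python
# Illustrative analogy only (NOT GFL/HiP-HOPS syntax): a generic pattern
# saying "omission at the output is caused by an internal omission OR
# omission of any input", parameterised over port names.

def omission_pattern(inputs, output):
    """Instantiate the generic omission-propagation pattern for a component."""
    causes = ["InternalOmission"] + [f"Omission-{p}" for p in inputs]
    return {f"Omission-{output}": " OR ".join(causes)}

# The same pattern reused for two different (hypothetical) components.
pump = omission_pattern(inputs=["fuel_in", "power"], output="fuel_out")
valve = omission_pattern(inputs=["cmd"], output="flow_out")

print(pump)   # {'Omission-fuel_out': 'InternalOmission OR Omission-fuel_in OR Omission-power'}
print(valve)  # {'Omission-flow_out': 'InternalOmission OR Omission-cmd'}
```

    The point of the pattern is the one stressed in the abstract: a single parameterised definition of failure behaviour can be reused across components and systems wherever the conditions that permit such reuse hold.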

    Specification: The Biggest Bottleneck in Formal Methods and Autonomy

    Advancement of AI-enhanced control in autonomous systems stands on the shoulders of formal methods, which make possible the rigorous safety analysis that autonomous systems require. An aircraft cannot operate autonomously unless it has design-time reasoning to ensure correct operation of the autopilot and runtime reasoning to ensure system health management, or the ability to detect and respond to off-nominal situations. Formal methods are highly dependent on the specifications over which they reason; there is no escaping the “garbage in, garbage out” reality. Specification is difficult, unglamorous, and arguably the biggest bottleneck facing verification and validation of aerospace, and other, autonomous systems. This VSTTE invited talk and paper examines the outlook for the practice of formal specification and highlights the ongoing challenges of specification, from design time to runtime system health management. We exemplify these challenges for specifications in Linear Temporal Logic (LTL), though the focus is not limited to that specification language. We pose challenge questions for specification that will shape both the future of formal methods and our ability to more automatically verify and validate autonomous systems of greater variety and scale. We call for further research into LTL Genesis.
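    As a concrete illustration (my own example, not one from the paper), the English requirement "every detected fault is eventually followed by a recovery action" corresponds roughly to the LTL formula G(fault -> F recovery). The sketch below checks that property over a finite trace using naive finite-trace semantics; real runtime monitors treat finite-trace subtleties far more carefully.

```python
# Minimal sketch: check G(trigger -> F response) over a finite trace.
# Hypothetical trace format; finite-trace semantics are simplified.

def globally_eventually(trace, trigger, response):
    """True if every state satisfying `trigger` is followed, at the same
    or a later step, by a state satisfying `response`."""
    for i, _ in enumerate(trace):
        if trigger(trace[i]) and not any(response(s) for s in trace[i:]):
            return False
    return True

trace = [
    {"fault": False, "recovered": False},
    {"fault": True,  "recovered": False},
    {"fault": False, "recovered": True},
]

# Roughly G(fault -> F recovered) on this trace.
print(globally_eventually(trace,
                          lambda s: s["fault"],
                          lambda s: s["recovered"]))  # True
```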

    SoC regression strategy development

    The objective of the hardware verification process is to ensure that the design does not contain any functional errors. Verifying the correct functionality of a large System-on-Chip (SoC) is a co-design process that is performed by running immature software on immature hardware. Among the key objectives is to ensure the completion of the design before proceeding to fabrication. Verification is performed using a mix of software simulations that imitate the hardware functions and emulations executed on reconfigurable hardware. Both techniques are time-consuming: software simulation runs at perhaps a billionth of the speed of the targeted system, and emulation runs thousands of times slower. A good verification strategy reduces the time to market without compromising the testing coverage. This thesis compares regression verification strategies for a large SoC project. These include different techniques of test case selection and test case prioritization that have been researched in software projects. No single strategy performs well throughout the whole SoC development cycle. In the early stages of development, time-based test case prioritization provides the fastest convergence. Later, history-based test case prioritization and risk-based test case selection gave a good balance between coverage, error detection, and execution time, as well as a foundation for predicting the time to completion.
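    As one hedged illustration of what such a strategy can look like in code (my own sketch, not the thesis's implementation), history-based test case prioritization can be approximated by ordering tests by their recent failure count, breaking ties in favour of shorter runtimes.

```python
# Illustrative sketch of history-based test case prioritization:
# run the tests that failed most often recently first; among equals,
# run the cheaper (shorter) tests earlier. Test data is hypothetical.

def prioritize(tests):
    """tests: list of dicts with 'name', 'recent_failures', 'runtime_s'."""
    return sorted(tests, key=lambda t: (-t["recent_failures"], t["runtime_s"]))

regression_suite = [
    {"name": "uart_smoke",   "recent_failures": 0, "runtime_s": 120},
    {"name": "dma_stress",   "recent_failures": 3, "runtime_s": 900},
    {"name": "boot_minimal", "recent_failures": 1, "runtime_s": 60},
]

for t in prioritize(regression_suite):
    print(t["name"])   # dma_stress, boot_minimal, uart_smoke
```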