    A NASA initiative: Software engineering for reliable complex systems

    The objective is the development of methods, technology, and skills that will enable NASA to cost-effectively specify, build, and manage reliable software which can evolve and be maintained over an extended period. The need for such software is rooted in the increasing integration of software and computing components into NASA systems. Current NASA software engineering expertise has been applied to some of the largest reliable systems, including shuttle launch, ground support, shuttle simulation, mission control, satellite tracking, and scientific data systems. Unfortunately, no theory exists for reliable complex software systems, and NASA is seeking to fill this theoretical gap through a number of approaches. One approach is research on theoretical foundations for managing complex software systems, including communication models, new and modified paradigms, and life-cycle models. Another is research on theoretical foundations for reliable software development and validation, focusing on formal specifications, programming languages, software engineering systems, software reuse, formal verification, and software safety. Further approaches involve benchmarking a NASA software environment, experimentation within the NASA context, evolution of present NASA methodology, and transfer of technology to the space station software support environment.

    Model Checking: Verification or Debugging?

    Giving You back Control of Your Data: Digital Signing Practical Issues and the eCert Solution

    As technologies develop rapidly, digital signing is commonly used in eDocument security; however, unaddressed issues remain. An eCertificate system exemplifies the problem situation and is therefore used as a case study in a project called eCert to research a solution. This paper addresses these issues and explores the gap between current tools and the desired system through analysis of existing services, eCertificate use cases, and the identified requirements, thereby presenting an approach that solves the above problems. Preliminary results indicate that the recommendation from this research meets the design requirements and could form the foundation of future work on solving digital signing issues.
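
    The eCert design itself is not described in detail here, but the signing primitive the paper builds on can be sketched as follows. This is a minimal illustration assuming the Python cryptography package and an Ed25519 key pair; both choices, and the sample document content, are assumptions made for illustration rather than details taken from the paper.

    ```python
    # Minimal sketch of signing and verifying an eDocument, assuming the
    # Python 'cryptography' package and Ed25519 keys. This shows only the
    # basic primitive discussed in the paper, not the eCert system itself.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Issuer generates a key pair and signs the document bytes.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    document = b"eCertificate: example award record (hypothetical content)"
    signature = private_key.sign(document)

    # A verifier holding the issuer's public key checks the signature;
    # any change to the document bytes makes verification fail.
    try:
        public_key.verify(signature, document)
        print("signature valid: document unchanged since signing")
    except InvalidSignature:
        print("signature invalid: document altered or wrong key")
    ```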

    Assessment of sensor performance

    There is an international commitment to develop a comprehensive, coordinated and sustained ocean observation system. However, a foundation for any observing, monitoring or research effort is effective and reliable in situ sensor technologies that accurately measure key environmental parameters. Ultimately, the data used for modelling efforts, management decisions and rapid responses to ocean hazards are only as good as the instruments that collect them. There is also a compelling need to develop and incorporate new or novel technologies to improve all aspects of existing observing systems and meet various emerging challenges. Assessment of Sensor Performance was a cross-cutting issues session at the international OceanSensors08 workshop in Warnemünde, Germany, a theme that also permeates some of the papers published as a result of the workshop (Denuault, 2009; Kröger et al., 2009; Zielinski et al., 2009). The discussions focused on how best to classify and validate the instruments required for effective and reliable ocean observations and research. The following is a summary of the discussions and conclusions drawn from this workshop, which specifically addresses the characterisation of sensor systems, technology readiness levels, verification of sensor performance and quality management of sensor systems.
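
    As a purely illustrative example of one element of verifying sensor performance, the sketch below compares readings from a sensor under assessment against a co-located reference instrument and reports bias and root-mean-square error. The readings, the measured parameter, and the acceptance threshold are invented for illustration and are not taken from the workshop summary.

    ```python
    # Illustrative sketch of one step in verifying sensor performance:
    # comparing a sensor under assessment against a co-located reference
    # instrument. Readings and the acceptance threshold are invented.
    import math

    reference = [8.10, 8.12, 8.15, 8.11, 8.09, 8.14]  # reference instrument
    sensor    = [8.13, 8.15, 8.19, 8.14, 8.11, 8.18]  # sensor under assessment

    errors = [s - r for s, r in zip(sensor, reference)]
    n = len(errors)

    bias = sum(errors) / n                            # systematic offset
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # overall error magnitude

    print(f"bias = {bias:+.3f}, RMSE = {rmse:.3f}")
    print("within hypothetical 0.05 acceptance limit:", rmse <= 0.05)
    ```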

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
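
    To make the flavour of such an analysis concrete, the sketch below builds a toy discrete-time Markov chain for a hypothetical two-sensor system and computes the probability that both sensors have failed within a bounded number of steps. The states, transition probabilities and threshold are invented; real quantitative verification tools (e.g. probabilistic model checkers) automate this kind of computation for far richer models and properties.

    ```python
    # Minimal sketch: bounded reachability on a toy discrete-time Markov chain
    # (DTMC). States and probabilities are hypothetical, chosen only to
    # illustrate checking a property like "P(both sensors fail within k steps)
    # is below some bound".
    import numpy as np

    # States: 0 = both sensors OK, 1 = one sensor failed, 2 = both failed.
    # P[i][j] is the probability of moving from state i to state j in one step.
    P = np.array([
        [0.98, 0.02, 0.00],   # from "both OK": small chance one sensor fails
        [0.00, 0.97, 0.03],   # from "one failed": small chance the second fails
        [0.00, 0.00, 1.00],   # "both failed" is absorbing
    ])

    def prob_both_failed(steps: int) -> float:
        """Probability of having reached 'both failed' within `steps` steps,
        starting from 'both OK' (state 2 is absorbing, so its occupancy
        probability equals the reachability probability)."""
        dist = np.array([1.0, 0.0, 0.0])   # initial distribution: both OK
        for _ in range(steps):
            dist = dist @ P                # one step of the Markov chain
        return float(dist[2])

    p = prob_both_failed(20)
    print(f"P(both sensors failed within 20 steps) = {p:.4f}")
    print("hypothetical property 'P < 0.05' holds:", p < 0.05)
    ```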

    Assessment team report on flight-critical systems research at NASA Langley Research Center

    The quality, coverage, and distribution of effort of the flight-critical systems research program at NASA Langley Research Center were assessed. Within the scope of the Assessment Team's review, the research program was found to be very sound. All tasks under the current research program were at least partially addressing industry needs. General recommendations were to expand the program resources to provide additional coverage of high-priority industry needs, including operations and maintenance, and to focus the program on an actual hardware and software system that is under development.