Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control
Modern aircraft, both piloted fly-by-wire commercial aircraft and UAVs, increasingly depend on highly complex safety-critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We focus on the approach to developing reliable and robust health models for the combined software and sensor systems.
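The on-board diagnosis the abstract describes amounts to posterior inference in a Bayesian network. As a minimal sketch (with entirely hypothetical probabilities, not the paper's actual health models), a two-node network with a hidden "software healthy" state and an observed "telemetry anomaly" flag can be updated by Bayes' rule:

```python
# Minimal sketch of BN-style health inference. All numbers are hypothetical
# illustrations, not values from the paper.
# Hidden node H = "software healthy"; observed node A = "telemetry anomaly".

def posterior_healthy(p_healthy=0.99,
                      p_anom_if_healthy=0.02,
                      p_anom_if_faulty=0.90):
    """P(H = healthy | A = anomaly) by Bayes' rule."""
    num = p_anom_if_healthy * p_healthy
    den = num + p_anom_if_faulty * (1.0 - p_healthy)
    return num / den

if __name__ == "__main__":
    print(f"P(healthy | anomaly) = {posterior_healthy():.3f}")  # ~0.688
```

With these numbers a single observed anomaly drops the belief that the software is healthy from 0.99 to roughly 0.69; a real health model would chain many such sensor and software nodes and propagate evidence through the whole network.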
Using qualitative models for safety analysis of industrial automation systems
Nowadays, software makes it possible to control ever more complex processes, but at the same time it is also responsible for the welfare of humans and the environment. A failure in a software program can influence the technical process with unforeseeable consequences. In general, the safety of a computer-controlled system depends on a complex interaction between the technical process, the controller software and the human task. Classic methods for safety analysis are mostly specialized to consider one part of the system, and the analysis is a brainstorming procedure. In this paper a model-based approach to safety analysis is discussed. All parts of the computer-controlled system are first described with the help of qualitative modeling. The different qualitative models are then combined into a single model of the computer-controlled system. Based on this model, a computer-supported safety analysis can be realized. The model enables the analysis of the interaction between the system parts, even when considering multiple failures.
Computer-aided safety analysis of computer-controlled systems: a case example
Computer-controlled systems consist of a complex interaction between technical process, human task and software. For the development of safety-critical systems, new methods are required that do not consider only one of these parts of a computer-controlled system. In this paper a qualitative modeling method is presented. The method is called SQMA, Situation-based Qualitative Modeling and Analysis, and its origin goes back to Qualitative Reasoning. First, all parts of a system are modeled separately and then combined into a single model of the computer-controlled system. With this qualitative model a computer-supported hazard analysis can be realised.
Architecture framework for software safety
Currently, an increasing number of systems are controlled by software and rely on the correct operation of software. In this context, a safety-critical system is defined as a system in which malfunctioning software could result in death, injury or damage to the environment. To mitigate these serious risks, the architecture of safety-critical systems needs to be carefully designed and analyzed. A common practice for modeling software architecture is the adoption of software architecture viewpoints to model the architecture for particular stakeholders and concerns. Existing architecture viewpoints tend to be general purpose and do not explicitly focus on safety concerns in particular. To provide complementary and dedicated support for designing safety-critical systems, we propose an architecture framework for software safety. The architecture framework is based on a metamodel that has been developed after a thorough domain analysis. The framework includes three coherent viewpoints, each of which addresses an important concern. The application of the viewpoints is illustrated for an industrial case of a safety-critical avionics control computer system. © Springer International Publishing Switzerland 2014
Simulation and Flight Test Capability for Testing Prototype Sense and Avoid System Elements
NASA Langley Research Center (LaRC) and The MITRE Corporation (MITRE) have developed, and successfully demonstrated, an integrated simulation-to-flight capability for evaluating sense and avoid (SAA) system elements. This integrated capability consists of a MITRE-developed fast-time computer simulation for evaluating SAA algorithms, and a NASA LaRC surrogate unmanned aircraft system (UAS) equipped to support hardware- and software-in-the-loop evaluation of SAA system elements (e.g., algorithms, sensors, architecture, communications, autonomous systems), concepts, and procedures. The fast-time computer simulation subjects algorithms to simulated flight encounters/conditions and generates a fitness report that records strengths, weaknesses, and overall performance. Reviewed algorithms (and their fitness report) are then transferred to NASA LaRC, where additional (joint) airworthiness evaluations are performed on the candidate SAA system-element configurations, concepts, and/or procedures of interest; software and hardware components are integrated into the Surrogate UAS research systems; and flight safety and mission planning activities are completed. Onboard the Surrogate UAS, candidate SAA system-element configurations, concepts, and/or procedures are subjected to flight evaluations and in-flight performance is monitored. The Surrogate UAS, which can be controlled remotely via generic Ground Station uplink or automatically via onboard systems, operates with a NASA Safety Pilot/Pilot in Command onboard to permit safe operations in mixed airspace with manned aircraft. An end-to-end demonstration of a typical application of the capability was performed in non-exclusionary airspace in October 2011; additional research, development, flight testing, and evaluation efforts using this integrated capability are planned throughout fiscal years 2012 and 2013.
Redundant Flight Control System for BVLOS UAV Operations
The Redundant Flight Computer (RFC) project focuses on enhancing the reliability and safety of small Unmanned Aircraft Systems (sUAS) by creating a redundant flight control system. The proposed system would serve as a "back-up" to the primary flight computer in the case of an in-flight loss of communications or control. The RFC project is part of a NASA-supported research initiative to enhance the safety of sUAS flying in the national airspace system, and to allow the FAA to reconsider beyond visual line of sight (BVLOS) sUAS operations.
A secondary goal of this project is the development of an efficient, low-cost variable-pitch propeller for sUAS integration. The use of variable-pitch propellers in larger aircraft has proven to be an effective tool for increasing endurance, range and efficiency.
Ground and flight testing of the RFC will verify system reliability, simulate hardware and software failures to test the system's resiliency, evaluate the telemetry feedback that notifies the operator of a failure, and verify efficiency gains with the Pixhawk-controlled variable-pitch propulsion system.
Our current results show that, during ground testing, the backup Pixhawk can take over from the primary Pixhawk via a kill-switch controlled by the backup Pixhawk. The test airframe is currently being built, and flight testing is slated for October 31st to prove that the system works in flight.
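The kill-switch takeover described above can be sketched as a heartbeat watchdog running on the backup flight computer. This is an illustrative reconstruction, not the project's actual code: the class, method names, and the 0.5 s timeout are all assumptions.

```python
# Hypothetical sketch of backup-flight-computer failover logic: the backup
# monitors heartbeats from the primary and latches a kill-switch (taking
# over control) once heartbeats stop. Timeout and names are illustrative.

class FailoverMonitor:
    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_heartbeat = None
        self.in_control = False   # True once the backup has taken over

    def heartbeat(self, now_s):
        """Called whenever the primary reports that it is alive."""
        self.last_heartbeat = now_s

    def tick(self, now_s):
        """Periodic check; returns True if the kill-switch should be asserted."""
        if self.in_control:
            return True           # latched: once failed over, stay in control
        if self.last_heartbeat is None or now_s - self.last_heartbeat > self.timeout_s:
            self.in_control = True
        return self.in_control
```

Latching the takeover (rather than handing control back when heartbeats resume) mirrors the ground-test behavior described above, where the backup holds the kill-switch after a detected failure.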
A research program in empirical computer science
During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of the underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem, which was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance. The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such, it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.
Safety and security issues in developing and operating in intelligent transportation systems
The purpose of this panel is to introduce the safety and security issues related to the development and operation of Intelligent Transportation Systems (ITS) to Compass participants. Many of these issues need to be addressed by the system safety and computer security communities prior to the development and deployment of ITS. For example, how can information technology be applied in the context of a fully automated highway system (AHS) such that the safety, security, and performance of the system are not compromised? At present, the US and other countries are funding academia and industry to build prototype automated highway systems in which vehicles are controlled via drive-by-wire technology, with vehicles traveling at high speeds (in excess of 30 m/s) at close spacing (1 to 4 m). The potential impact of software or hardware errors on system safety and security is great.
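A quick back-of-the-envelope calculation makes the panel's concern concrete: at the quoted speeds and spacings, the window in which a software or hardware fault must be detected and handled is tiny. The helper below is a hypothetical illustration that ignores braking dynamics entirely.

```python
# Illustrative calculation only: time to close the gap to a suddenly
# stopped lead vehicle, ignoring braking and communication delays.

def reaction_window_s(spacing_m, speed_mps):
    """Seconds before the gap to a stopped lead vehicle closes."""
    return spacing_m / speed_mps

if __name__ == "__main__":
    # At 30 m/s and 1 m spacing: about 0.033 s to react.
    print(reaction_window_s(1.0, 30.0))
```

Even at the widest quoted spacing (4 m), the window is roughly 0.13 s, well below human reaction times, which is why fault handling in an AHS must be fully automated.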
Automation by PC interface of a multicusp volume ion source, Denise
DENISE is an acronym for Deuterium Negative Ion Source Experiment. The source originated at FOM, the Institute for Atomic and Molecular Physics, and is now at DCU, where it is being recommissioned as a test bed for the production and extraction of negative hydrogen ions for use in proposed nuclear fusion reactors. These reactors require the neutralisation of particle beams of up to hundreds of amps and energies of about 1 MeV for use in Neutral Beam Injection (N.B.I.).
The objective of this project was to automate the multicusp volume ion source called DENISE.
Automation is the technology concerned with the application of mechanical, electronic, and computer-based systems in the operation and control of production. This technology brings real-time monitoring and quick error detection and correction to the parameters and components that need to be controlled. Further attractions are its ease of use and the safety features inherent in this technology.
The parameters and components to be controlled are decided upon, and the methods by which this is achieved are discussed. The control system consists of a software/hardware interface to the pressure system, the cooling system and the pumping system. The Windows software monitors and displays the status of the physical system. An optoisolated electronic interface circuit allows control of the physical processes.
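The kind of monitoring such a PC-based control system implements can be sketched as an interlock check over the monitored subsystems. The subsystems below match the abstract (pressure, cooling, pumping), but every threshold, unit and name is invented for illustration, not taken from the DENISE project.

```python
# Hypothetical interlock sketch for a PC-monitored ion source: poll each
# subsystem reading and trip a safe shutdown when any value leaves its
# allowed band. All limits below are made-up placeholders.

SAFE_LIMITS = {
    "pressure_mbar": (1e-6, 1e-2),    # vacuum operating band (hypothetical)
    "coolant_flow_lpm": (2.0, 20.0),  # cooling-water flow (hypothetical)
    "pump_speed_hz": (800.0, 1200.0), # turbopump speed (hypothetical)
}

def interlock_ok(readings):
    """Return True only if every monitored value is within its band."""
    for name, (lo, hi) in SAFE_LIMITS.items():
        value = readings.get(name, float("nan"))  # missing reading fails safe
        if not (lo <= value <= hi):
            return False
    return True
```

Note the fail-safe default: a missing reading compares false against any band, so a dead sensor trips the interlock rather than being silently ignored.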
A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software
Context: Today's safety-critical systems are increasingly reliant on software, which has become responsible for most of their critical functions. Many different safety analysis techniques have been developed to identify hazards of systems; FTA and FMEA are the most commonly used by safety analysts. Recently, STPA has been proposed with the goal of coping better with complex systems, including software. Objective: This research aimed at comparing these three safety analysis techniques quantitatively with regard to their effectiveness, applicability, understandability, ease of use and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master and bachelor students applying these three techniques to three safety-critical systems: train door control, anti-lock braking and traffic collision avoidance. Results: The results showed no statistically significant difference between these techniques in terms of applicability, understandability and ease of use, but a significant difference in terms of effectiveness and efficiency. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA addresses more different software safety requirements than the traditional techniques FTA and FMEA, but STPA needs more time to carry out for safety analysts with little or no prior experience.
Comment: 10 pages, 1 figure, in Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE '15). ACM, 201