Journal of System Safety
From the Editor's Desk: AI Tools for System Safety Analysis?
It was not long ago that there were reports of students using AI applications to write assignments for school. Now it is common for engineers to use AI applications to help with engineering analyses and even to generate software. Some people are concerned because there are cases where AI makes up answers to questions (referred to as hallucinations). There are also situations in which it is not clear how the model arrived at a particular answer. To be fair, I seem to remember some analyses generated by full-blooded human beings that also appeared to have made-up answers and difficult-to-understand conclusions.
Reduction of Normalization of Deviation (NoD) Using a Socio-Technical Systems Approach
Normalization of deviation (NoD), also known as normalization of deviance, is the process in which deviations from correct or proper decisions, behaviors, or conditions important for safety insidiously become the accepted norm over time. NoD is a common, risky, yet elusive issue that has caused or contributed to numerous accidents in multiple industries. Effective reduction of NoD is therefore a major opportunity. Approximately 10 years ago, Boeing developed a general systemic model of NoD based on a socio-technical systems approach. It is a representation of how multiple internal and external factors inherent to socio-technical systems interact in a dynamic fashion, leading to NoD, and it holistically captures the essence and complexity of the problem. The model has been shared across Boeing and with three of Boeing's customer airlines. Specific systemic models of NoD associated with specific problems were developed from the general systemic model. Subsequently, NoD awareness training, methods, tools, processes, and solutions based on those models were developed, and they were provided and/or used to improve workplace safety at Boeing and aviation safety at one of the three airlines. All of these efforts have produced unprecedented insights, and some have yielded significant reductions in NoD, NoD-related incidents, and NoD-related safety risks.
In Memoriam: Dr. Loan (Joan) Pham
Our distinguished colleague, Dr. Loan (Joan) Pham, passed away suddenly on August 16, 2023. This is a tragic and overwhelming loss to all who knew Joan, but she and her work will be remembered throughout the fields of system safety and aviation safety.
Proposing the Use of Hazard Analysis for Machine Learning Data Sets
There is no debating the importance of data for artificial intelligence. The behavior of data-driven machine learning models is determined by the data set, or as the old adage states: “garbage in, garbage out (GIGO).” While the machine learning community is still debating which techniques are necessary and sufficient to assess the adequacy of data sets, it agrees that some techniques are necessary. In general, most of the techniques under consideration focus on evaluating attribute volumes: attributes are checked against anticipated counts without considering the safety concerns associated with those attributes. This paper explores those techniques to identify instances of too little data and incorrect attributes. Those techniques are important; however, for safety-critical applications, the assurance analyst also needs to understand the safety impact of specific attributes not being present in the machine learning data sets. To provide that information, this paper proposes a new technique the authors call data hazard analysis. Data hazard analysis provides an approach to qualitatively analyze the training data set and thereby reduce the risk associated with GIGO.
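The abstract does not include the authors' actual worksheet or method, but the core idea (qualitatively tying under-represented safety-relevant attributes in a training set to the hazards they imply) can be sketched as follows. The attribute names, hazard descriptions, and threshold below are illustrative assumptions, not content from the paper:

```python
# Illustrative sketch only: flag safety-relevant attribute values that are
# absent or under-represented in a training data set, and report the hazard
# each gap implies. Names, hazards, and the threshold are hypothetical.

from collections import Counter

# Hypothetical hazard register: attribute value -> hazard if under-represented
HAZARDS = {
    "night": "Model may misdetect obstacles in low light",
    "rain": "Model may misjudge stopping distance on wet roads",
}

MIN_FRACTION = 0.05  # assumed minimum share of data per safety-relevant value


def data_hazard_report(samples, attribute):
    """Return (value, fraction, hazard) for each hazard-register entry whose
    attribute value falls below MIN_FRACTION of the data set."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    findings = []
    for value, hazard in HAZARDS.items():
        fraction = counts.get(value, 0) / total
        if fraction < MIN_FRACTION:
            findings.append((value, fraction, hazard))
    return findings


# Hypothetical data set: 96% daytime samples, 4% night, no rain at all
samples = [{"condition": "day"}] * 96 + [{"condition": "night"}] * 4
for value, frac, hazard in data_hazard_report(samples, "condition"):
    print(f"{value}: {frac:.0%} of data -> {hazard}")
```

The point of the sketch is the qualitative link from a data gap to a hazard, which a purely count-based adequacy check would not surface.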
TBD
I want to tell you a story about an encounter I had at a hotel bar in Lancaster, California. I appreciate that, at first, it doesn’t appear to have anything to do with system safety. Trust me: I think you will agree that there is perhaps an important lesson in it for us and for the Society.
The Difficulties with Replacing Crew Launch Abort Systems with Designed Reliability
As the space industry continues to innovate and new paradigms arise to challenge the status quo, human spaceflight is now perceived as safer and more accessible than ever before. This has led to a new line of thinking in which crewed launch vehicles should be reusable and reliable like commercial airplanes, forgoing the need for an abort system. This paper counters that line of thought with an analysis of the spectrum of coverage that historical crew abort systems provided during launch, and it uses historical launch success and failure data to glean insight into the reliability the human spaceflight industry can expect when designing the vehicles of the future. This historical launch vehicle reliability is then compared to system safety standards used in commercial aviation to determine whether future designs truly need a crew abort system. Through this analysis, the rationale for why crew abort systems have historically been used can be better understood.
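The abstract's comparison of demonstrated launch reliability against aviation safety targets can be illustrated with standard statistics on Bernoulli trials. The launch counts below are hypothetical, and the aviation figure is the widely cited 1e-9 per-flight-hour catastrophic failure condition target from FAA AC 25.1309-style guidance; this is a sketch of the kind of comparison the paper describes, not its actual data or method:

```python
# Illustrative sketch: lower confidence bound on launch reliability from a
# (hypothetical) flight history, set against an aviation-style target.

import math


def demonstrated_reliability(successes, failures, confidence=0.95):
    """One-sided lower bound on per-launch success probability.
    Exact for zero failures (the 'rule of three' form); otherwise a crude
    normal approximation, for illustration only."""
    n = successes + failures
    if failures == 0:
        # P(all n succeed) = p**n = 1 - confidence  =>  p = alpha**(1/n)
        return (1.0 - confidence) ** (1.0 / n)
    p = successes / n
    z = 1.645  # one-sided 95% quantile of the standard normal
    return p - z * math.sqrt(p * (1.0 - p) / n)


# Hypothetical record: 100 launches, 2 failures
lower = demonstrated_reliability(98, 2)
aviation_target = 1.0 - 1e-9  # per-hour catastrophic target, AC 25.1309 style
print(f"Demonstrated lower bound (per launch): {lower:.3f}")
print(f"Aviation-style target (per hour):      {aviation_target}")
```

Even a flawless record of a few hundred launches can only demonstrate roughly "three nines" of reliability, many orders of magnitude short of aviation-style targets, which is the statistical gap an abort system is meant to cover.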
Incremental Assurance Through Eliminative Argumentation
An assurance case for a critical system is valid for that system at a particular point in time, such as when the system is delivered to a certification authority for review. The argument is structured around the evidence that exists at that point in time. However, modern assurance cases are rarely one-off exercises. More information might become available (e.g., field data) that could strengthen (or weaken) the validity of the case. This paper proposes the notion of incremental assurance, wherein the assurance case structure includes both the currently available evidence and a plan for incrementally increasing confidence in the system as additional or higher-quality evidence becomes available. Such evidence is needed to further reduce doubts engineers or reviewers might have. This paper formalizes the idea of incremental assurance through an argumentation pattern. The concept is demonstrated by applying the pattern to part of a safety assurance case for an air traffic control system.
Review of the Latest Developments in Automotive Safety Standardization for Driving Automation Systems
The ISO 26262 (Road Vehicles – Functional Safety) standard has been the de facto automotive functional safety standard since it was first released in 2011. With the introduction of complex driving automation systems, new standardization efforts have been initiated to address emerging gaps such as human/automation roles and responsibilities in the presence or absence of a driver/user, the impact of technological limitations, and the verification and validation needs of automation systems, to name a few. This paper highlights some of these gaps and introduces some of the latest developments in automotive safety standardization for driving automation systems.
System Safety Bookshelf: System Safety for the 21st Century, 2nd Edition
Over many decades, System Safety has evolved from a more reactive practice (learning from failures and improving, which is not really suitable for high-consequence enterprises) to today's more proactive form. This form rests on better fundamental understanding, better assessment processes, better standards, and more comprehensive analysis tools, supported by better audit and regulation procedures. However, unlike 'set educational subjects' such as engineering, science, technology, and mathematics, there are fewer opportunities for formal System Safety education and training in academia and elsewhere, even though system safety affects all aspects of life. One hopes that this will continue to be rectified.
This leads us directly to the importance and value of this book, which gives a complete insight into what System Safety is all about, including its approaches, methodologies, and tools, and which provides guidance on the successful application of a comprehensive, proactive approach to ensuring safe system design.