
    A context for error: using conversation analysis to represent and analyse recorded voice data

    Recorded voice data, such as from cockpit voice recorders (CVRs) or air traffic control tapes, can be an important source of evidence for accident investigation, as well as for human factors research. During accident investigations, the extent of analysis of these recordings depends on the nature and severity of the accident. However, most of the analysis has been based on subjective interpretation rather than the use of systematic methods, particularly when dealing with the analysis of crew interactions. This paper presents a methodology, called conversation analysis, which involves the detailed examination of interaction as it develops moment-to-moment between the participants, in context. Conversation analysis uses highly detailed and revealing transcriptions of recorded voice (or video) data that can allow deeper analyses of how people interact. The paper uses conversation analysis as a technique to examine CVR data from an accident flight. The focus accident was a controlled flight into terrain event involving an Israel Aircraft Industries Westwind 1124 jet aircraft, which impacted terrain near Alice Springs on 27 April 1995. The conversation analysis methodology provided a structured means for analysing the crew’s interaction. The error that contributed directly to the accident, an incorrectly set minimum descent altitude, can be seen as not the responsibility of one pilot, but at least in part as the outcome of the way the two pilots communicated with one another. The analysis considered the following aspects in particular: the significance of overlapping talk (when both pilots spoke at the same time); the copilot’s silence after talk from the pilot in command; instances when the pilot in command corrected (repaired) the copilot’s talk or conduct; and lastly, a range of aspects for how the two pilots communicated to perform routine tasks. 
In summary, the conversation analysis methodology showed how specific processes of interaction between crew members helped to create a working environment conducive to making, and not detecting, an error. By not interacting to work together as a team, pilots can create a context for error. When analysing recorded voice data, and especially for understanding instances of human error, a great deal often rests on investigators’ or analysts’ interpretations of what a pilot said, what was meant by what was said, how talk was understood, or how the mood in the cockpit or the pilots’ working relationship could best be described. Conversation analysis can be a tool for making such interpretations. This report was commissioned by the Australian Transport Safety Bureau

    Improving practice : child protection as a systems problem

    This paper argues for treating the task of improving child protection services as a systems problem, and for adopting the system-focused approach to investigating errors that has been developed in areas of medicine and engineering where safety is a high priority. It outlines how this approach differs from the traditional way of examining errors and how it leads to different types of solutions. Traditional inquiries tend to stop once human error has been found, whereas a systems approach treats human error as the starting point and examines the whole context in which the operator was working to see how this impacted on their ability to perform well. The article outlines some factors that seem particularly problematic and worthy of closer analysis in current child protection services. A better understanding of the factors that are adversely affecting practitioners’ level of performance offers the potential for identifying more effective solutions. These typically take the form of modifying the tasks so that they make more realistic and feasible demands on human cognitive and emotional abilities

    The natural history of bugs: using formal methods to analyse software related failures in space missions

    Space missions force engineers to make complex trade-offs between many different constraints including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $18 million. Payloads range from $4 million up to $1 billion for security-related satellites. These costs do not include consequent business losses. In 2005, Intelsat wrote off $73 million from the failure of a single uninsured satellite. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions

    Computerized Aircraft Accident Investigation: Federal Aviation Administration Aviation Safety Inspectors' Perceptions

    The purpose of this study was to solicit the perceptions of Federal Aviation Administration (FAA) Aviation Safety Inspectors (ASIs) on the use of a personal computer in the aircraft accident investigation process. A descriptive study survey questionnaire was used to collect the data for the study, which was sent to 150 FAA ASIs. The data collected supported the hypothesis that aircraft accident investigators think the use of a computer will help them with accident report form completion, managing the accident data collected, and in determining the factors contributing to an accident. Furthermore, the data supported the hypothesis that the use of a computer would make the overall process of aircraft accident investigation more efficient

    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter proposes a novel design methodology called Value-Sensitive Design and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should begin to take away the importance of a proactive design approach to intelligent agents

    The importance of understanding computer analyses in civil engineering

    Sophisticated computer modelling systems are widely used in civil engineering analysis. This paper takes examples from structural engineering, environmental engineering, flood management and geotechnical engineering to illustrate the need for civil engineers to be competent in the use of computer tools. An understanding of a model's scientific basis, appropriateness, numerical limitations, validation, verification and propagation of uncertainty is required before applying its results. A review of education and training is also suggested to ensure engineers are competent at using computer modelling systems, particularly in the context of risk management.

    Cognitive modeling of social behaviors

    To understand both individual cognition and collective activity, perhaps the greatest opportunity today is to integrate the cognitive modeling approach (which stresses how beliefs are formed and drive behavior) with social studies (which stress how relationships and informal practices drive behavior). The crucial insight is that norms are conceptualized in the individual mind as ways of carrying out activities. This requires for the psychologist a shift from only modeling goals and tasks —why people do what they do—to modeling behavioral patterns—what people do—as they are engaged in purposeful activities. Instead of a model that exclusively deduces actions from goals, behaviors are also, if not primarily, driven by broader patterns of chronological and located activities (akin to scripts). To illustrate these ideas, this article presents an extract from a Brahms simulation of the Flashline Mars Arctic Research Station (FMARS), in which a crew of six people are living and working for a week, physically simulating a Mars surface mission. The example focuses on the simulation of a planning meeting, showing how physiological constraints (e.g., hunger, fatigue), facilities (e.g., the habitat’s layout) and group decision making interact. Methods are described for constructing such a model of practice, from video and first-hand observation, and how this modeling approach changes how one relates goals, knowledge, and cognitive architecture. The resulting simulation model is a powerful complement to task analysis and knowledge-based simulations of reasoning, with many practical applications for work system design, operations management, and training

    Aerospace Medicine and Biology. A continuing bibliography (Supplement 226)

    This bibliography lists 129 reports, articles, and other documents introduced into the NASA scientific and technical information system in November 1981

    The psychology of driving automation: A discussion with Professor Don Norman

    Introducing automation into automobiles has had inevitable consequences for the driver and for driving. Systems that automate longitudinal and lateral vehicle control may reduce the workload of the driver. This raises questions of what the driver is able to do with this 'spare' attentional capacity. Research in our laboratory suggests that there is unlikely to be any spare capacity, because attentional resources are not 'fixed'. Rather, the resources are inextricably linked to task demand. This paper presents some of the arguments for considering the psychological aspects of the driver when designing automation into automobiles. The arguments are presented in a conversation format, based on discussions with Professor Don Norman. Extracts from relevant papers to support the arguments are presented