
    In loco intellegentia: Human factors for the future European train driver

    The European Rail Traffic Management System (ERTMS) represents a step change in technology for rail operations in Europe. It comprises track-to-train communications and intelligent on-board systems providing an unprecedented degree of support to the train driver. ERTMS is designed to improve safety, capacity and performance, as well as to facilitate interoperability across the European rail network. In many ways, particularly from the human factors perspective, ERTMS has parallels with automation concepts in the aviation and automotive industries. The lesson from both industries is that such technology raises a number of human factors issues associated with train driving and operations. The interaction amongst intelligent agents throughout the system must be effectively coordinated to ensure that the strategic benefits of ERTMS are realised. This paper discusses the psychology behind some of these key issues, such as Mental Workload (MWL), interface design, user information requirements, transitions and migration, and communications. Relevant experience in aviation and vehicle automation is drawn upon to give an overview of the human factors challenges facing the UK rail industry in implementing ERTMS technology. By anticipating and defining these challenges before the technology is implemented, it is hoped that a proactive and structured programme of research can be planned to meet them.

    Analysing and modelling train driver performance

    Persuasive arguments for the importance of contextual factors in understanding human performance have been made in the process control industries. This paper puts these arguments into the context of the train driving task, drawing on an extensive analysis of driver performance with the Automatic Warning System (AWS). The paper summarises a number of constructs from applied psychological research which are thought to be important in understanding train driver performance. A “Situational Model” is offered as a framework for investigating driver performance. The model emphasises the importance of understanding the state of driver cognition at a specific time (“Now”), in a specific situation and a specific context.
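
    As a loose illustration only (not part of the paper), the “Situational Model” idea of capturing the state of driver cognition at a specific moment, in a specific situation and context, could be recorded as a simple data structure; every field name below is a hypothetical choice.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SituationalSnapshot:
    """Hypothetical record of driver cognition 'Now', in a specific
    situation and context (illustrative sketch, not the paper's model)."""
    timestamp: datetime       # the specific time ("Now")
    situation: str            # e.g. "AWS horn sounded at caution aspect"
    context: str              # e.g. "night, degraded visibility, familiar route"
    workload: float           # assumed 0-1 mental workload estimate
    active_goals: list[str] = field(default_factory=list)

# Example: a snapshot an analyst might log while studying AWS responses
snapshot = SituationalSnapshot(
    timestamp=datetime.now(),
    situation="AWS horn sounded at caution aspect",
    context="night, degraded visibility",
    workload=0.7,
    active_goals=["acknowledge AWS", "prepare to brake"],
)
```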

    Driving automation: Learning from aviation about design philosophies

    Full vehicle automation is predicted to be on British roads by 2030 (Walker et al., 2001). However, experience in aviation gives us some cause for concern about the 'drive-by-wire' car (Stanton and Marsden, 1996). Two different philosophies have emerged in aviation for dealing with the human factor: hard vs. soft automation, depending on whether the computer or the pilot has ultimate authority (Hughes and Dornheim, 1995). This paper speculates on whether hard or soft automation provides the better solution for road vehicles, and considers an alternative design philosophy for vehicles of the future based on coordination and cooperation.
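
    To make the hard/soft distinction concrete, here is a minimal, hypothetical sketch (not from the paper): the two philosophies differ only in which agent wins when human and computer commands conflict.

```python
from enum import Enum

class Philosophy(Enum):
    HARD = "computer has ultimate authority"  # envelope protection cannot be overridden
    SOFT = "human has ultimate authority"     # automation can always be overridden

def arbitrate(human_cmd: float, computer_cmd: float,
              within_envelope: bool, philosophy: Philosophy) -> float:
    """Resolve a conflicting control command (illustrative only)."""
    if philosophy is Philosophy.HARD and not within_envelope:
        return computer_cmd   # hard automation: the computer clips unsafe input
    return human_cmd          # soft automation, or the input was already safe

# A command outside the safe envelope is clipped under hard automation...
assert arbitrate(1.0, 0.6, within_envelope=False, philosophy=Philosophy.HARD) == 0.6
# ...but honoured under soft automation.
assert arbitrate(1.0, 0.6, within_envelope=False, philosophy=Philosophy.SOFT) == 1.0
```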

    Vision of a Visipedia

    The web is not perfect: while text is easily searched and organized, pictures (the vast majority of the bits one can find online) are not. To see how one could improve the web and make pictures first-class citizens of it, I explore the idea of Visipedia, a visual interface for Wikipedia that is able to answer visual queries and enables experts to contribute and organize visual knowledge. Five distinct groups of humans would interact through Visipedia: users, experts, editors, visual workers, and machine vision scientists. The latter would gradually build automata able to interpret images. I explore some of the technical challenges involved in making Visipedia happen, and argue that Visipedia will likely grow organically, combining state-of-the-art machine vision with human labor.
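
    The abstract implies a routing of visual queries between machine-built automata and human contributors; the sketch below is a hedged guess at that flow, with every name and threshold invented for illustration.

```python
from typing import Callable

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cut-off for trusting the automaton

def answer_visual_query(image: bytes,
                        vision_automaton: Callable[[bytes], tuple[str, float]],
                        ask_human_worker: Callable[[bytes], str]) -> str:
    """Answer a visual query (illustrative only): machine vision first,
    human visual workers as a fallback when the automaton is unsure."""
    label, confidence = vision_automaton(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                  # automaton answers directly
    return ask_human_worker(image)    # escalate to a human; the answer could
                                      # later become training data for automata

# Toy stand-ins for the two kinds of agents
demo_automaton = lambda img: ("barn swallow", 0.55)  # low-confidence guess
demo_worker = lambda img: "cliff swallow"            # human worker's answer

print(answer_visual_query(b"...", demo_automaton, demo_worker))  # cliff swallow
```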

    Learning from Automation Surprises and "Going Sour" Accidents: Progress on Human-Centered Automation

    Advances in technology and new levels of automation on commercial jet transports have had many effects. There have been positive effects from both an economic and a safety point of view. The technology changes on the flight deck also have had reverberating effects on many other aspects of the aviation system and different aspects of human performance. Operational experience, research investigations, incidents, and occasionally accidents have shown that new and sometimes surprising problems have arisen as well. What are these problems with cockpit automation, and what should we learn from them? Do they represent over-automation or human error? Or is there perhaps a third possibility: that they represent coordination breakdowns between operators and the automation? Are the problems just a series of small independent glitches revealed by specific accidents or near misses? Do these glitches represent a few small areas where there are cracks to be patched in what is otherwise a record of outstanding designs and systems? Or do these problems provide us with evidence about deeper factors that we need to address if we are to maintain and improve aviation safety in a changing world? How do the reverberations of technology change on the flight deck provide insight into generic issues about developing human-centered technologies and systems (Winograd and Woods, 1997)? Based on a series of investigations of pilot interaction with cockpit automation (Sarter and Woods, 1992; 1994; 1995; 1997a; 1997b), supplemented by surveys, operational experience and incident data from other studies (e.g., Degani et al., 1995; Eldredge et al., 1991; Tenney et al., 1995; Wiener, 1989), we too have found that the problems that surround crew interaction with automation are more than a series of individual glitches. These difficulties are symptoms that indicate deeper patterns and phenomena concerning human-machine cooperation and paths towards disaster. In addition, we find the same kinds of patterns behind results from studies of physician interaction with computer-based systems in critical care medicine (e.g., Moll van Charante et al., 1993; Obradovich and Woods, 1996; Cook and Woods, 1996). Many of the results and implications of this kind of research are synthesized and discussed in two comprehensive volumes, Billings (1996) and Woods et al. (1994). This paper summarizes the pattern that has emerged from our research, related research, incident reports, and accident investigations. It uses this new understanding of why problems arise to point to new investment strategies that can help us deal with the perceived "human error" problem, make automation more of a team player, and maintain and improve safety.

    Understanding Automation Surprise: Analysis of ASRS Reports

    Pilots are frequently surprised by aircraft automation. Such surprises include cases in which the automation: 1) produces alerts to anomalies, 2) commands unexpected control manipulations (that may result in flight path deviations), or 3) simply disconnects. Aviation Safety Reporting System (ASRS) reports in which pilots indicated that automation produced unexpected actions were analyzed. Three general conclusions were drawn. First, many factors precipitate automation surprises, including problems in the auto-flight system and associated displays and interfaces, in other aircraft sensors and systems, and in interactions with weather and ATC. Second, inappropriate pilot actions are involved in a large proportion of these events. Third, recovery need not require reversion to manual control. There is no single general intervention that can prevent automation surprise or completely mitigate its effects. However, several different tacks (including improved training, displays, and coordination with ATC), taken together, may be effective.
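
    Purely to illustrate the taxonomy above (this is not the authors' coding method), the three surprise types could be tagged in report narratives with a trivial keyword pass; all keywords below are hypothetical.

```python
from enum import Enum, auto

class SurpriseType(Enum):
    UNEXPECTED_ALERT = auto()         # automation alerts to an anomaly
    UNEXPECTED_MANIPULATION = auto()  # uncommanded control action / path deviation
    DISCONNECT = auto()               # automation simply disconnects

# Hypothetical keyword lists for tagging a report narrative
KEYWORDS = {
    SurpriseType.UNEXPECTED_ALERT: ["alert", "caution message", "aural warning"],
    SurpriseType.UNEXPECTED_MANIPULATION: ["pitched", "rolled", "deviation"],
    SurpriseType.DISCONNECT: ["disconnected", "dropped out", "kicked off"],
}

def tag_report(narrative: str) -> list[SurpriseType]:
    """Return every surprise category whose keywords appear in the text."""
    text = narrative.lower()
    return [t for t, words in KEYWORDS.items() if any(w in text for w in words)]

print(tag_report("The autothrottle disconnected without annunciation"))
# [<SurpriseType.DISCONNECT: 3>]
```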