SARSCEST (human factors)
People interact with the processes and products of contemporary technology. Individuals are affected by these in various ways, and individuals shape them. Such interactions come under the label 'human factors'. For those to whom the term is relatively unfamiliar, its domain includes both an applied science and applications of knowledge: it means both research and development, with implications of research both for basic science and for development. It encompasses not only design and testing but also training and personnel requirements, even though some unwisely try to split these apart, both by name and institutionally. The territory includes more than performance at work, though concentration on that aspect, epitomized in the derivation of the term ergonomics, has overshadowed human factors interest in interactions between technology and the home, health, safety, consumers, children and later life, the handicapped, sports and recreation, education, and travel. Two aspects of technology considered most significant for work performance, systems and automation, and several approaches to these, are discussed.
Driving automation: Learning from aviation about design philosophies
Full vehicle automation is predicted to be on British roads by 2030 (Walker et al., 2001). However, experience in aviation gives us some cause for concern about the 'drive-by-wire' car (Stanton and Marsden, 1996). Two different philosophies have emerged in aviation for dealing with the human factor: hard versus soft automation, depending on whether the computer or the pilot has ultimate authority (Hughes and Dornheim, 1995). This paper speculates on whether hard or soft automation provides the better solution for road vehicles, and considers an alternative design philosophy for vehicles of the future based on coordination and cooperation.
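The hard/soft distinction can be made concrete with a toy sketch (my own illustration, not taken from the paper): under hard automation the computer's protection limits have final authority, while under soft automation the pilot or driver can override them. The function name, parameters, and warning behaviour are all hypothetical.

```python
# Toy illustration of hard vs. soft automation authority (hypothetical names).
def resolve_command(operator_cmd, envelope_limit, philosophy):
    """Return the command actually executed by the vehicle.

    operator_cmd:   the control input requested by the human (e.g. pitch demand)
    envelope_limit: the maximum the automation considers safe
    philosophy:     "hard" (computer has final authority) or "soft" (human does)
    """
    if philosophy == "hard":
        # Hard automation: the computer clamps the input to the safe envelope.
        return min(operator_cmd, envelope_limit)
    # Soft automation: warn about the exceedance, but let the human override.
    if operator_cmd > envelope_limit:
        print("warning: envelope exceeded")
    return operator_cmd
```

The entire design debate is compressed into that one branch: who wins when human intent and machine limits disagree.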
In loco intellegentia: Human factors for the future European train driver
The European Rail Traffic Management System (ERTMS) represents a step change in technology for rail operations in Europe. It comprises track-to-train communications and intelligent on-board systems providing an unprecedented degree of support to the train driver. ERTMS is designed to improve safety, capacity and performance, as well as facilitating interoperability across the European rail network. In many ways, particularly from the human factors perspective, ERTMS has parallels with automation concepts in the aviation and automotive industries. A lesson learned from both these industries is that such technology raises a number of human factors issues associated with train driving and operations. The interaction amongst intelligent agents throughout the system must be effectively coordinated to ensure that the strategic benefits of ERTMS are realised. This paper discusses the psychology behind some of these key issues, such as mental workload (MWL), interface design, user information requirements, transitions and migration, and communications. Relevant experience in aviation and vehicle automation is drawn upon to give an overview of the human factors challenges facing the UK rail industry in implementing ERTMS technology. By anticipating and defining these challenges before the technology is implemented, it is hoped that a proactive and structured programme of research can be planned to meet them.
Current Concepts and Trends in Human-Automation Interaction
This publication is freely accessible with the permission of the rights owner, under an Alliance licence or a national licence funded by the DFG (German Research Foundation). The purpose of this panel was to provide a general overview and discussion of some of the most current and controversial concepts and trends in human-automation interaction. The panel was composed of eight researchers and practitioners. The panelists are well-known experts in the area and offered differing views on a variety of human-automation topics. The concepts and trends discussed in this panel include: general taxonomies of stages and levels of automation and function allocation, individualized adaptive automation, automation-induced complacency, economic rationality and the use of automation, the potential utility of false alarms, the influence of different types of false alarms on trust and reliance, and a system-wide theory of trust in multiple automated aids.
Automotive automation: Investigating the impact on drivers' mental workload
Recent advances in technology mean that an increasing number of vehicle driving tasks are becoming automated. Such automation poses new problems for the ergonomist. Of particular concern in this paper are the twofold effects of automation on mental workload: novel technologies could increase attentional demand and workload; alternatively, fewer driving tasks could lead to the problem of reduced attentional demand and driver underload. A brief review of previous research is presented, followed by an overview of current research taking place in the Southampton Driving Simulator. Early results suggest that automation does reduce workload, and that underload is indeed a problem, with a significant proportion of drivers unable to effectively reclaim control of the vehicle in an automation failure scenario. Ultimately, this research and a subsequent programme of studies will be interpreted within the framework of a recently proposed theory of action, with a view to maximizing both the theoretical and applied benefits of this domain.
Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges
Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims to trade off these effects by changing the level of autonomy within the interaction when required, with mixed initiatives combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions: how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive state of the human are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
Comment: author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
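The question of how to combine human preferences with automation recommendations is left open by the abstract; as a hedged sketch (the scheme, names, and weights below are my own hypothetical illustration, not the paper's method), one simple mixed-initiative rule blends the two sets of scores and selects the highest-scoring level of autonomy:

```python
# Hypothetical mixed-initiative sketch: blend human preference scores with
# automation recommendation scores to pick a level of autonomy (LOA).
def select_loa(human_prefs, auto_recs, human_weight=0.5):
    """Return the LOA with the highest weighted average of the two scores.

    human_prefs / auto_recs: dicts mapping LOA name -> score in [0, 1].
    human_weight: how much initiative is ceded to the operator (0 to 1).
    """
    levels = human_prefs.keys() & auto_recs.keys()
    blended = {
        loa: human_weight * human_prefs[loa] + (1 - human_weight) * auto_recs[loa]
        for loa in levels
    }
    return max(blended, key=blended.get)

# Example: the operator favours manual control; the automation favours autonomy.
prefs = {"manual": 0.8, "shared": 0.5, "full_auto": 0.2}
recs  = {"manual": 0.1, "shared": 0.6, "full_auto": 0.8}
print(select_loa(prefs, recs))  # the blended scores favour "shared" control
```

The `human_weight` knob is where the flexible-autonomy trade-off lives: raising it preserves situational awareness at the cost of workload, and lowering it does the reverse.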
Driver behaviour with adaptive cruise control
This paper reports on the evaluation of adaptive cruise control (ACC) from a psychological perspective. It was anticipated that ACC would affect the psychology of driving: making drivers feel they have less control, reducing their trust in the vehicle, and making them less situationally aware, while workload might be reduced and driving might be less stressful. Drivers were asked to drive in a driving simulator under manual and ACC conditions. Analysis of variance techniques were used to determine the effects of workload (i.e. amount of traffic) and feedback (i.e. degree of information from the ACC system) on the psychological variables measured (i.e. locus of control, trust, workload, stress, mental models and situation awareness). The results showed that locus of control and trust were unaffected by ACC, whereas situation awareness, workload and stress were reduced by ACC. Ways of improving situation awareness could include cues to help the driver predict vehicle trajectory and identify conflicts.
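The kind of comparison described can be sketched in a few lines. The study used multi-factor ANOVA on its simulator data; the minimal one-way version below, with made-up illustrative workload ratings (not the study's data), shows the mechanics of the F statistic it rests on.

```python
# One-way ANOVA F statistic in pure Python (illustrative sketch only).
def one_way_anova_f(*groups):
    """Return the F statistic comparing the means of the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-groups sum of squares (df = number of groups - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares (df = total observations - number of groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

manual = [62, 70, 65, 68, 71, 66]  # hypothetical workload ratings, manual driving
acc    = [48, 52, 50, 47, 55, 49]  # hypothetical workload ratings under ACC
print(round(one_way_anova_f(manual, acc), 2))
```

A large F relative to its degrees of freedom indicates that the between-condition difference in mean workload is unlikely to be noise, which is the inference pattern behind the reported ACC effects.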
The ergonomics of command and control
Since its inception, just after the Second World War, ergonomics research has paid special attention to the issues surrounding human control of systems. Command and Control environments continue to represent a challenging domain for Ergonomics research. We take a broad view of Command and Control research, to include C2 (Command and Control), C3 (Command, Control and Communication), and C4 (Command, Control, Communication and Computers) as well as human supervisory control paradigms. This special issue of ERGONOMICS aims to present state-of-the-art research into models of team performance, evaluation of novel interaction technologies, case studies, methodologies and theoretical review papers. We are pleased to present papers that detail research on these topics in domains as diverse as the emergency services (e.g., police, fire, and ambulance), civilian applications (e.g., air traffic control, rail networks, and nuclear power) and military applications (e.g., land, sea and air) of command and control. While the domains of application are very diverse, many of the challenges they face share interesting similarities
Why Are People's Decisions Sometimes Worse with Computer Support?
In many applications of computerised decision support, a recognised source of undesired outcomes is operators' apparent over-reliance on automation. For instance, an operator may fail to react to a potentially dangerous situation because a computer fails to generate an alarm. However, the very use of terms like "over-reliance" betrays possible misunderstandings of these phenomena and their causes, which may lead to ineffective corrective action (e.g. training or procedures that do not counteract all the causes of the apparently "over-reliant" behaviour). We review relevant literature in the area of "automation bias" and describe the diverse mechanisms that may be involved in human errors when using computer support. We discuss these mechanisms, with reference to errors of omission when using "alerting systems", with the help of examples of novel counterintuitive findings we obtained from a case study in a health care application, as well as other examples from the literature
