A proposed psychological model of driving automation
This paper considers psychological variables pertinent to driving automation. Driving with automated systems is likely to have a major impact on drivers, and a multiplicity of factors needs to be taken into account. A systems analysis of the driver, vehicle and automation served as the basis for eliciting psychological factors. The main variables considered were: feedback, locus of control, mental workload, driver stress, situational awareness and mental representations. It is expected that anticipating the effects of vehicle automation on the driver could lead to improved design strategies. Based on research evidence in the literature, the psychological factors were assembled into a model for further investigation.
Why Are People's Decisions Sometimes Worse with Computer Support?
In many applications of computerised decision support, a recognised source of undesired outcomes is operators' apparent over-reliance on automation. For instance, an operator may fail to react to a potentially dangerous situation because a computer fails to generate an alarm. However, the very use of terms like "over-reliance" betrays possible misunderstandings of these phenomena and their causes, which may lead to ineffective corrective action (e.g. training or procedures that do not counteract all the causes of the apparently "over-reliant" behaviour). We review relevant literature in the area of "automation bias" and describe the diverse mechanisms that may be involved in human errors when using computer support. We discuss these mechanisms, with reference to errors of omission when using "alerting systems", with the help of examples of novel counterintuitive findings we obtained from a case study in a health care application, as well as other examples from the literature.
In loco intellegentia: Human factors for the future European train driver
The European Rail Traffic Management System (ERTMS) represents a step change in technology for rail operations in Europe. It comprises track-to-train communications and intelligent on-board systems providing an unprecedented degree of support to the train driver. ERTMS is designed to improve safety, capacity and performance, as well as facilitating interoperability across the European rail network. In many ways, particularly from the human factors perspective, ERTMS has parallels with automation concepts in the aviation and automotive industries. Lessons learned from both these industries are that such a technology raises a number of human factors issues associated with train driving and operations. The interaction amongst intelligent agents throughout the system must be effectively coordinated to ensure that the strategic benefits of ERTMS are realised. This paper discusses the psychology behind some of these key issues, such as mental workload (MWL), interface design, user information requirements, transitions and migration, and communications. Relevant experience in aviation and vehicle automation is drawn upon to give an overview of the human factors challenges facing the UK rail industry in implementing ERTMS technology. By anticipating and defining these challenges before the technology is implemented, it is hoped that a proactive and structured programme of research can be planned to meet them.
Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges
Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet, the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed initiatives combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions on how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive states of a human are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia.
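The arbitration the abstract leaves open (how to combine human preference with the automation's recommendation) can be illustrated with a deliberately simple sketch. This is not the authors' method; the weighting-by-workload scheme and all names here are illustrative assumptions.

```python
def select_autonomy_level(human_pref: float, automation_rec: float,
                          operator_workload: float) -> float:
    """Toy mixed-initiative arbitration (illustrative assumption only).

    human_pref, automation_rec: desired autonomy level in [0, 1].
    operator_workload: estimated workload in [0, 1]; higher workload
    shifts weight toward the automation's recommendation, on the
    rationale that an overloaded operator benefits from more autonomy.
    """
    w = min(max(operator_workload, 0.0), 1.0)  # weight on automation
    return (1.0 - w) * human_pref + w * automation_rec

# An overloaded operator (workload 0.8) preferring low autonomy (0.2)
# while the automation recommends high autonomy (0.9) lands at 0.76:
level = select_autonomy_level(0.2, 0.9, 0.8)
```

A real mixed-initiative system would also have to address the abstract's other two questions: how the chosen level is realised by the swarm, and how the choice feeds back into the operator's cognitive state.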
Response Criterion Placement Modulates the Effects of Graded Alerting Systems on Human Performance and Learning in a Target Detection Task
Human operators can perform better in a signal detection task with an automated diagnostic aid than without one. This experiment aimed to determine whether any differences existed among graded aids (automated diagnostic aids that use a scale of confidence levels reflecting a spectrum of probabilistic information or uncertainty when making a judgment) in enabling better human detection performance, and whether the binary or a graded aid produced better learning. Participants performed a visual search framed as a medical decision-making task. Stimuli were arrays of random polygons ("cells") generated by distorting a prototype shape; the target was a shape more strongly distorted than the accompanying distracters, and a target was present on half of the trials. Each participant performed the task with the assistance of a binary aid, one of three graded aids, or no aid. The aids' sensitivities were the same (d′ = 2); the difference between them lay in the placement of their decision criteria, which determines a trade-off between an aid's predictive value and the frequency with which it makes a diagnosis. The graded aid with 90% reliability provided a judgment on the greatest number of trials, the graded aid with 94% reliability on fewer trials, and the graded aid with 96% reliability on the fewest; the binary aid, with 84% reliability, gave a judgment on every trial. All aids improved human detection performance, with the graded aids trending towards improving performance more than the binary aid. Neither the binary nor the graded aids produced significantly better or worse learning than unaided performance, and none worsened detection performance relative to the unaided condition.
These results imply that the decision boundaries of a graded alert might be set to encourage appropriate reliance on the aid and improve human detection performance, and indicate that employing either a graded or a binary automated aid may be beneficial to learning in a detection task.
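The sensitivity and criterion-placement quantities this abstract manipulates come from classical equal-variance signal detection theory, where d′ = z(H) − z(FA) and the criterion c = −(z(H) + z(FA)) / 2. A minimal stdlib sketch (the 0.84/0.16 rates below are an illustrative example, not the study's data):

```python
from statistics import NormalDist

_Z = NormalDist().inv_cdf  # inverse of the standard normal CDF, i.e. z(p)

def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance signal-detection indices.

    d' = z(H) - z(FA) measures sensitivity, while
    c  = -(z(H) + z(FA)) / 2 measures criterion placement: shifting c
    trades hits against false alarms at fixed sensitivity, which is the
    manipulation behind the aids' differing reliabilities in the study.
    """
    return _Z(hit_rate) - _Z(fa_rate), -(_Z(hit_rate) + _Z(fa_rate)) / 2

# Illustrative rates: a 0.84 hit rate with a 0.16 false-alarm rate gives
# d' close to 2 (the study's fixed aid sensitivity) with a neutral criterion.
d, c = dprime_and_criterion(0.84, 0.16)
```

Holding d′ constant while varying c, as the study did across its four aids, changes how often and how confidently an aid commits to a diagnosis without changing its underlying discrimination ability.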
Human mismatches and preferences for automation
The research reported in this paper is concerned with gaining a better understanding of human factors issues in machining and the automation of manufacturing tasks. Mismatches between operators' performance and the requirements of machining tasks were experimentally studied with respect to their relationships with various human characteristics, including skill, age, work experience, self-confidence and trust. Twelve hypotheses concerning interrelationships between these characteristics were evaluated and important relationships established. It is considered that this increased knowledge of the rate of mismatches and an understanding of their causes is essential for the successful design of new working environments, machines and tasks. Much of this change to the working environment is likely to involve some degree of automation of the operators' tasks, and so a second and important aspect of the study was designed to establish the extent to which preferred levels of automation were related to the same human characteristics. Four further hypotheses relating preferred levels of automation to skill, age, work experience, self-confidence and trust were tested, with results that in some cases were unexpected and in others contradicted the findings of previous research. © 2000 Taylor & Francis Ltd
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394, "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.
Occupational Cultures as a Challenge to Technological Innovation
This paper explains conflict over technological process innovation in cultural terms, drawing primarily on a case study of electric power distribution and strategies to automate its operation.
Attention and automation: New perspectives on mental underload and performance
There is considerable evidence in the ergonomics literature that automation can significantly reduce operator mental workload. However, reducing mental workload is not necessarily a good thing, particularly in cases where the level is already manageable. This raises the issue of mental underload, which can be at least as detrimental to performance as overload. Yet although it is widely recognized that mental underload is detrimental to performance, there are very few attempts to explain why this may be the case. It is argued in this paper that, until the need for a human operator is completely eliminated, automation has psychological implications relevant in both theoretical and applied domains. The present paper reviews theories of attention, as well as the literature on mental workload and automation, to synthesize a new explanation for the effects of mental underload on performance. Malleable attentional resources theory proposes that attentional capacity shrinks to accommodate reductions in mental workload, and that this shrinkage is responsible for the underload effect. The theory is discussed with respect to the applied implications for ergonomics research.