Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: A randomized, controlled study
Authors
Mark R. Wilson
Samuel J. Vine
Elizabeth Bright
Rich S. W. Masters
David Defriend
John S. McGrath
Publication date
1 January 2011
Publisher
Springer Science and Business Media LLC
DOI
View on PubMed
Abstract
Background: The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. The current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Methods: Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Results: Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. Conclusions: These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed up attentional resources that could be applied to concurrent cognitive tasks. © 2011 The Author(s).
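The abstract reports gaze sampled at 25 Hz and a key dependent measure, "target locking fixations." As a minimal illustration of how such a metric can be computed (this is not the paper's analysis code), the sketch below applies a standard dispersion-threshold fixation detector (I-DT) and then measures the fraction of fixation time spent on a target region. The thresholds, target box, and function names are assumptions chosen for the example.

# Illustrative sketch only: detect fixations in 25 Hz gaze data with a
# dispersion-threshold (I-DT) approach, then compute how much fixation time
# falls inside a target region. All thresholds are hypothetical.
from typing import List, Tuple

SAMPLE_HZ = 25        # Mobile Eye sampling rate reported in the abstract
MIN_FIX_SAMPLES = 3   # >= 120 ms at 25 Hz (assumed minimum fixation duration)
MAX_DISPERSION = 1.0  # degrees of visual angle (assumed dispersion threshold)

def detect_fixations(gaze: List[Tuple[float, float]]) -> List[Tuple[int, int]]:
    """Return (start, end) sample-index spans of fixations (end exclusive)."""
    fixations = []
    start = 0
    while start + MIN_FIX_SAMPLES <= len(gaze):
        end = start + MIN_FIX_SAMPLES
        xs = [p[0] for p in gaze[start:end]]
        ys = [p[1] for p in gaze[start:end]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= MAX_DISPERSION:
            # Grow the window while dispersion stays under the threshold.
            while end < len(gaze):
                xs.append(gaze[end][0])
                ys.append(gaze[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > MAX_DISPERSION:
                    break
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

def target_locking_ratio(gaze, fixations, target_box) -> float:
    """Fraction of fixation samples inside an (x0, y0, x1, y1) target region."""
    on_target = total = 0
    x0, y0, x1, y1 = target_box
    for s, e in fixations:
        for x, y in gaze[s:e]:
            total += 1
            on_target += int(x0 <= x <= x1 and y0 <= y <= y1)
    return on_target / total if total else 0.0

# Example: a synthetic gaze trace that dwells on a target centered at (10, 10).
if __name__ == "__main__":
    trace = [(10.0 + 0.1 * (i % 3), 10.0) for i in range(50)]
    fixes = detect_fixations(trace)
    print(target_locking_ratio(trace, fixes, (9.0, 9.0, 11.0, 11.0)))  # 1.0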
Available Versions

Open Research Exeter (supporting member)
oai:ore.exeter.ac.uk:10871/954...
Last updated on 06/08/2013

Springer - Publisher Connector
Last updated on 04/06/2019

Springer - Publisher Connector
Last updated on 01/05/2017

Crossref
info:doi/10.1007%2Fs00464-011-...
Last updated on 01/04/2019

HKU Scholars Hub
oai:hub.hku.hk:10722/143393
Last updated on 01/06/2016