Cognitive load in intralingual and interlingual respeaking: a preliminary study
In this paper we present preliminary results of a study on cognitive load in intralingual and interlingual respeaking. We tested 57 subjects from three groups (interpreters, translators and controls) while they respoke 5-minute videos in two language combinations: Polish to Polish (intralingual) and English to Polish (interlingual). Using two measures of cognitive load, self-report and EEG (Emotiv), we found that in most cases cognitive load was higher in interlingual respeaking. Self-reported mental effort expended to complete the respeaking tasks was lower in the group of interpreters, suggesting some parallels between interpreting and respeaking competences. EEG measures showed significant differences in cognitive load over time between respeaking tasks and between experimental groups.
Are interpreters better respeakers?
In this study, we examined whether interpreters and interpreting trainees are better predisposed to respeaking than people with no interpreting skills. We tested 57 participants (22 interpreters, 23 translators and 12 controls) while they respoke 5-minute videos varying along two parameters: speech rate (fast/slow) and number of speakers (one/many). Having measured the quality of the respeaking performance using two independent methods, the NER model and rating, we found that interpreters consistently achieved higher scores than the other two groups. The findings are discussed in the context of transfer of skills, expert performance and respeaking training.
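For readers unfamiliar with the NER model mentioned above: it scores respeaking accuracy as (N − E − R) / N × 100, where N is the number of words in the respoken text, E the summed edition-error penalties and R the summed recognition-error penalties, with 98% commonly cited as the quality threshold. A minimal sketch of the calculation (the function name is our own):

```python
def ner_accuracy(n_words, edition_errors, recognition_errors):
    """NER accuracy = (N - E - R) / N * 100.

    n_words: N, number of words in the respoken text.
    edition_errors: E, summed edition-error penalties.
    recognition_errors: R, summed recognition-error penalties.
    (In the full model individual errors are weighted, e.g. 0.25 for
    minor, 0.5 for standard and 1 for serious errors.)
    """
    return (n_words - edition_errors - recognition_errors) / n_words * 100
```

For example, `ner_accuracy(1000, 5.0, 10.0)` yields 98.5, just above the commonly cited 98% threshold.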
Respeaking crisis points. An exploratory study into critical moments in the respeaking process
In this paper we introduce respeaking crisis points (RCPs), understood as potentially problematic moments in the respeaking process, resulting from the difficulty of the source material and/or cognitive overload on the part of the respeakers. We present results of a respeaking study with Polish participants who respoke four videos intralingually (Polish to Polish) and one interlingually (English to Polish). We measured the participants' cognitive load with EEG (Emotiv) using two measures: concentration and frustration. By analysing peaks in both EEG measures, we show where respeaking crisis points occurred. Features that triggered RCPs include very slow and very fast speech rate, visual complexity of the material, overlapping speech, numbers and proper names, speaker changes, word play, syntactic complexity, and implied meaning. The results of this study are directly applicable to respeaker training and provide valuable insights into the respeaking process from the cognitive perspective.
Driver glance behaviors and scanning patterns: Applying static and dynamic glance measures to the analysis of curve driving with secondary tasks
Performing secondary tasks (or non-driving-related tasks) while driving on curved roads may be risky and unsafe. The purpose of this study was to explore whether driving safety in situations involving curved roads and secondary tasks can be evaluated using multiple measures of eye movement. We adopted Markov-based transition algorithms (i.e., transition/stationary probabilities, entropy) to quantify drivers' dynamic eye movement patterns, in addition to typical static visual measures, such as frequency and duration of glances. The algorithms were evaluated with data from an experiment (Jeong & Liu, 2019) involving multiple road curvatures and stimulus-response secondary task types. Drivers were more likely to scan only a few areas of interest with a long duration in sharper curves. Total head-down glance time was longer in less sharp curves in the experiment, but the probability of head-down glances was higher in sharper curves over the long run. The number of reliable transitions between areas of interest varied with the secondary task type. The visual scanning patterns for visually undemanding tasks were as random as those for visually demanding tasks. Markov-based measures of dynamic eye movements provided insights to better understand drivers' underlying mental processes and scanning strategies, compared with typical static measures. The presented methods and results can be useful for in-vehicle systems design and for further analysis of visual scanning patterns in the transportation domain.
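The Markov-based glance measures described above can be illustrated with a short sketch: estimate a first-order transition matrix over areas of interest (AOIs) from a glance sequence, then take the Shannon entropy of each row as a measure of scanning randomness. This is a generic illustration of the technique, not the paper's own implementation; the AOI names are invented:

```python
import math

def transition_matrix(glances):
    """Estimate first-order transition probabilities between AOIs
    from an ordered sequence of glance targets."""
    states = sorted(set(glances))
    idx = {s: i for i, s in enumerate(states)}
    counts = [[0.0] * len(states) for _ in states]
    for a, b in zip(glances, glances[1:]):          # consecutive glance pairs
        counts[idx[a]][idx[b]] += 1
    probs = []
    for row in counts:                              # normalize each row
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return states, probs

def transition_entropy(probs):
    """Shannon entropy (bits) of each row; higher values indicate
    more random transitions out of that AOI."""
    return [-sum(p * math.log2(p) for p in row if p > 0) for row in probs]
```

For a sequence such as `["road", "road", "mirror", "road"]`, transitions out of "road" split evenly between "road" and "mirror", giving that row an entropy of 1 bit, while "mirror" always returns to "road" (entropy 0).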
Enabling Inclusive and Meaningful Tourist Experiences
The advent of Industry 4.0 technologies, encompassing the Internet of Things (IoT), big data analytics, artificial intelligence (AI), blockchain, location-based services, and virtual and augmented reality (VR/AR) systems, has revolutionized the tourism landscape, automating production and service delivery. As the momentum of Industry 4.0 propels us toward the tourism-specific concept of Tourism 4.0, questions arise about the ability of humans to keep pace with rapid technological advancements and to ensure these innovations genuinely benefit society. The ongoing debate prompts a call for humanizing Industry 4.0, echoed in the emerging concept of Industry 5.0, which advocates more responsible and humane technology approaches. Concurrently, voices championing Tourism 5.0 emphasize the need to align technology with diverse human tourism needs and to enhance accessibility for a more inclusive and meaningful travel experience. Through this chapter, we endeavor to establish Tourism 5.0 as a holistic alternative to the prevailing digital accessibility practices within the typically limited and task-focused tourism sector. The chapter critically examines the evolution from Industry 4.0 to Industry 5.0, drawing parallels with Tourism 4.0 and Tourism 5.0. Its central focus is the imperative of technological accessibility, exploring how it takes precedence in the latest technological developments and contributes to the creation of more inclusive and fulfilling tourism experiences.
PACMHCI V7, ETRA, May 2023 Editorial
In 2022, ETRA moved its publication of full papers to a journal-based model, and we are delighted to present the second issue of the Proceedings of the ACM on Human-Computer Interaction to focus on contributions from the Eye Tracking Research and Applications (ETRA) community. ETRA is the premier eye-tracking conference that brings together researchers from across disciplines to present advances and innovations in oculomotor research, eye tracking systems, eye movement data analysis, eye tracking applications, and gaze-based interaction. This issue presents 13 full papers accepted for presentation at ETRA 2023 (May 30 - June 2, 2023, in Tübingen, Germany) selected from 37 submissions (35% acceptance rate). We are grateful to all authors for the exciting contributions they have produced and to the Editorial Board and external reviewers for their effort during the entire rigorous reviewing process, which resulted in high-quality and insightful reviews for all submitted articles.
VR 360º subtitles: Designing a test suite with eye-tracking technology
Subtitle production is an increasingly creative accessibility service. New technologies allow for placing subtitles at any location of the screen in a variety of formats, shapes, typography, font sizes, and colours. The screen now affords accessible creativity, with subtitles able to provide novel experiences beyond those offered by traditional language translation. Immersive environments multiply 2D subtitle features into new creative viewing modalities. Testing subtitles in eXtended Reality (XR) has pushed existing methods to address users' needs and enjoyment of audiovisual content in 360º viewing displays. After an overview of existing subtitle features in XR, the article describes the challenges of generating subtitle stimuli to test meaningful user viewing behaviours, based on eye-tracking technology. The approach for the first experimental setup for implementing creative subtitles in XR using eye tracking is given, in line with novel research questions. The choices made regarding sound, duration and storyboard are described. Conclusions show that testing subtitles in immersive media environments is both a linguistic and an artistic endeavour, which requires an agile framework fostering contrast and comparison of different functionalities. Results of the present, preliminary study shed light on future experimental setups with eye tracking.
Predicting image influence on visual saliency distribution: the focal and ambient dichotomy
The computational modelling of visual attention relies entirely on visual fixations that are collected during eye-tracking experiments. Although all fixations are assumed to follow the same attention paradigm, some studies suggest the existence of two visual processing modes, called ambient and focal. In this paper, we present the high discrepancy between focal and ambient saliency maps and propose an automatic method for inferring the degree of focalness of an image. This method opens new avenues for the computational modelling of saliency models and their benchmarking.
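One common way to operationalize the ambient/focal distinction at the fixation level is the coefficient K of Krejtz et al. (2016), which contrasts standardized fixation durations with the amplitudes of the saccades that follow them: positive values suggest focal processing (long fixations, short saccades), negative values ambient processing. A minimal sketch of that coefficient, offered for context rather than as the paper's own method:

```python
import statistics

def coefficient_k(fix_durations, sacc_amplitudes):
    """Ambient/focal coefficient K (Krejtz et al., 2016):
    K_i = z(d_i) - z(a_i), where d_i is the duration of fixation i and
    a_i is the amplitude of the saccade that follows it.
    Requires at least two fixations with non-zero variance.
    Mean K > 0 suggests focal viewing; mean K < 0 suggests ambient viewing."""
    mu_d, sd_d = statistics.mean(fix_durations), statistics.stdev(fix_durations)
    mu_a, sd_a = statistics.mean(sacc_amplitudes), statistics.stdev(sacc_amplitudes)
    return [(d - mu_d) / sd_d - (a - mu_a) / sd_a
            for d, a in zip(fix_durations, sacc_amplitudes)]
```

A long fixation followed by a short saccade (focal behaviour) yields a positive K, while a short fixation followed by a long saccade (ambient behaviour) yields a negative one.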