1,420 research outputs found

    Opportunities for using eye tracking technology in manufacturing and logistics: Systematic literature review and research agenda

    Workers play essential roles in manufacturing and logistics. Releasing workers from routine tasks and enabling them to focus on creative, value-adding activities can enhance their performance and wellbeing, and it is also key to the successful implementation of Industry 4.0. One technology that can help identify patterns of worker-system interaction is Eye Tracking (ET), a non-intrusive technology for measuring human eye movements. ET can provide moment-by-moment insights into the cognitive state of the subject during task execution, which can improve our understanding of how humans behave and make decisions within complex systems. It also enables exploration of how the subject interacts with the working environment. Earlier research has investigated the use of ET in manufacturing and logistics, but the literature is fragmented and has not yet been consolidated in a literature review. This article therefore conducts a systematic literature review to explore the applications of ET, summarise its benefits, and outline future research opportunities for using ET in manufacturing and logistics. We first propose a conceptual framework to guide our study and then conduct a systematic literature search in scholarly databases, obtaining 71 relevant papers. Building on the proposed framework, we systematically review the use of ET and categorise the identified papers according to their application in manufacturing (product development, production, quality inspection) and logistics. Our results reveal that ET has several use cases in the manufacturing sector, but that its application in logistics has not been studied extensively so far. We summarise the benefits of using ET in terms of process performance, human performance, and work environment and safety, and also discuss the methodological characteristics of the ET literature as well as typical ET measures used. We conclude by illustrating future avenues for ET research in manufacturing and logistics.
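    The abstract mentions "typical ET measures" such as fixations and pupil diameter. As a minimal sketch of how such measures are commonly derived from raw gaze samples, the following uses a simplified dispersion-threshold (I-DT style) fixation detector; the sample format, dispersion threshold, and minimum duration are illustrative assumptions, not values from the reviewed papers.

```python
# Minimal sketch of deriving common eye-tracking (ET) measures from raw gaze
# samples. The sample format and thresholds below are illustrative assumptions,
# not taken from the reviewed papers.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t_ms: float      # timestamp (milliseconds)
    x: float         # horizontal gaze position (pixels)
    y: float         # vertical gaze position (pixels)
    pupil_mm: float  # pupil diameter (millimetres)

def detect_fixations(samples, max_dispersion_px=35.0, min_duration_ms=100.0):
    """Simplified dispersion-threshold (I-DT style) fixation detection."""
    fixations, window = [], []
    for s in samples:
        window.append(s)
        xs, ys = [w.x for w in window], [w.y for w in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_px:
            # Dispersion exceeded: close the window (excluding the new sample).
            if window[-2].t_ms - window[0].t_ms >= min_duration_ms:
                fixations.append(window[:-1])
            window = [s]
    if window and window[-1].t_ms - window[0].t_ms >= min_duration_ms:
        fixations.append(window)
    return fixations

def summarise_measures(samples):
    """Report fixation count, mean fixation duration, and mean pupil size."""
    fixations = detect_fixations(samples)
    durations = [f[-1].t_ms - f[0].t_ms for f in fixations]
    return {
        "fixation_count": len(fixations),
        "mean_fixation_duration_ms": sum(durations) / len(durations) if durations else 0.0,
        "mean_pupil_diameter_mm": sum(s.pupil_mm for s in samples) / len(samples) if samples else 0.0,
    }
```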

    Augmenting Situated Spoken Language Interaction with Listener Gaze

    Collaborative task solving in a shared environment requires referential success. Human speakers follow the listener’s behavior in order to monitor language comprehension (Clark, 1996). Furthermore, a natural language generation (NLG) system can exploit listener gaze to realize an effective interaction strategy by responding to it with verbal feedback in virtual environments (Garoufi, Staudte, Koller, & Crocker, 2016). We augment situated spoken language interaction with listener gaze and investigate its role in human-human and human-machine interactions. Firstly, we evaluate its impact on the prediction of reference resolution using a multimodal corpus collected in virtual environments. Secondly, we explore whether and how a human speaker uses listener gaze in an indoor guidance task, while spontaneously referring to real-world objects in a real environment. Thirdly, we consider an object identification task for assembly under system instruction. We developed a multimodal interactive system and two NLG systems that integrate listener gaze in the generation mechanisms. The NLG system “Feedback” reacts to gaze with verbal feedback, either underspecified or contrastive. The NLG system “Installments” uses gaze to incrementally refer to an object in the form of installments. Our results showed that gaze features improved the accuracy of automatic prediction of reference resolution. Further, we found that human speakers are very good at producing referring expressions, and showing listener gaze did not improve performance, but elicited more negative feedback. In contrast, we showed that an NLG system that exploits listener gaze benefits the listener’s understanding. Specifically, combining a short, ambiguous instruction with contrastive feedback resulted in faster interactions compared to underspecified feedback, and even outperformed following long, unambiguous instructions. Moreover, alternating the underspecified and contrastive responses in an interleaved manner led to better engagement with the system and efficient information uptake, and resulted in equally good performance. Somewhat surprisingly, when gaze was incorporated more indirectly in the generation procedure and used to trigger installments, the non-interactive approach that outputs an instruction all at once was more effective. However, if the spatial expression was mentioned first, referring in gaze-driven installments was as efficient as following an exhaustive instruction. In sum, we provide a proof of concept that listener gaze can effectively be used in situated human-machine interaction. An assistance system using gaze cues is more attentive and adapts to listener behavior to ensure communicative success.
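    The abstract reports that gaze features improved automatic prediction of reference resolution. The sketch below illustrates the general idea with a simple dwell-time heuristic over listener fixations; the actual thesis systems use richer multimodal features and learned models, so the triple format, feature set, and decision rule here are illustrative assumptions.

```python
# Minimal sketch of using listener gaze to predict reference resolution:
# the object the listener dwells on most during an utterance window is taken
# as the likely referent. Object IDs and the dwell-based rule are hypothetical.
from collections import defaultdict

def gaze_features(fixations, window_start_ms, window_end_ms):
    """Aggregate dwell proportion per candidate object within a time window.

    `fixations` is a list of (onset_ms, offset_ms, object_id) triples produced
    by mapping fixation coordinates onto scene objects (an assumed upstream step).
    """
    dwell = defaultdict(float)
    for onset, offset, obj in fixations:
        start, end = max(onset, window_start_ms), min(offset, window_end_ms)
        if end > start:
            dwell[obj] += end - start
    total = sum(dwell.values()) or 1.0
    return {obj: ms / total for obj, ms in dwell.items()}

def predict_referent(fixations, window_start_ms, window_end_ms):
    """Predict the intended referent as the most-dwelt-on object."""
    proportions = gaze_features(fixations, window_start_ms, window_end_ms)
    return max(proportions, key=proportions.get) if proportions else None

# Example: the listener mostly looked at "red_lever" while the instruction was
# spoken, so a system like "Feedback" could respond with contrastive feedback.
fx = [(0, 400, "blue_button"), (450, 1300, "red_lever"), (1350, 1500, "red_lever")]
print(predict_referent(fx, 0, 1500))  # -> "red_lever"
```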

    Eye Tracking: A Perceptual Interface for Content Based Image Retrieval

    In this thesis, visual search experiments are devised to explore the feasibility of an eye gaze driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Eye behaviour was predominantly attracted by salient locations, but appeared to also require frequent reference to non-salient background regions, which indicated that information from scan paths might prove useful for image search. The thesis then specifically investigates the benefits of eye tracking as an image retrieval interface in terms of speed relative to selection by mouse, and in terms of the efficiency of eye tracking mechanisms in the task of retrieving target images. Results are analysed using ANOVA and significant findings are discussed. Results show that eye selection was faster than selection with a computer mouse, and that experience gained during visual tasks carried out using a mouse would benefit users if they were subsequently transferred to an eye tracking system. Results on the image retrieval experiments show that users are able to navigate to a target image within a database, confirming the feasibility of an eye gaze driven search mechanism. Additional histogram analysis of the fixations, saccades and pupil diameters in the human eye movement data revealed a new method of extracting intentions from gaze behaviour for image search, of which the user was not aware and which promises even quicker search performance. The research has two implications for Content Based Image Retrieval: (i) improvements in query formulation for visual search and (ii) new methods for visual search using attentional weighting. Furthermore, it was demonstrated that users are able to find target images at sufficient speeds, indicating that pre-attentive activity plays a role in visual search. A review of eye tracking technology, current applications, visual perception research, and models of visual attention is also presented, along with the potential of the technology for commercial exploitation.
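    To make the "attentional weighting" implication concrete, here is a minimal sketch of gaze-driven image retrieval: images the user dwells on contribute more to an implicit query vector used to re-rank the database. The feature representation, similarity measure, and dwell weighting are illustrative assumptions, not the thesis's exact mechanism.

```python
# Minimal sketch of attentional weighting for gaze-driven image retrieval.
# Feature vectors (e.g. colour histograms) and cosine similarity are assumed
# stand-ins for whatever representation a real CBIR system would use.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def gaze_weighted_query(shown_features, dwell_ms):
    """Build an implicit query vector as the dwell-weighted mean of viewed images.

    `shown_features[i]` is the feature vector of the i-th displayed image;
    `dwell_ms[i]` is the total fixation time the user spent on it.
    """
    total = sum(dwell_ms) or 1.0
    dims = len(shown_features[0])
    return [
        sum(w * f[d] for w, f in zip(dwell_ms, shown_features)) / total
        for d in range(dims)
    ]

def rank_database(database, query_vec):
    """Order candidates by similarity to the gaze-derived query.

    `database` is a list of (image_id, feature_vector) pairs.
    """
    return sorted(database, key=lambda item: cosine(item[1], query_vec), reverse=True)

# Usage: after each display page, re-rank and show the top candidates, so the
# search converges toward the target image the user keeps looking at.
```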

    Designing Attentive Information Dashboards with Eye Tracking Technology


    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Show Me What You See, Tell Me What You Think: Using Eye Tracking for Hospitality Research

    Identifying precisely what consumers are looking at (and, by implication, what they are thinking) when they consider a web page, an image, or a hospitality environment could provide tremendous insights to the hospitality industry. By using eye tracking technology, one can almost literally see through the eyes of the customer to find out what information is examined at various points during the hotel search process or to assess which property design features attract guests’ attention. When eye tracking is immediately followed by interviews that review a graphical representation of the consumer’s eye movements, the thought processes behind consumers’ visual activity can be uncovered and explored. In this paper we explain how eye tracking works and how it could apply to hospitality research. Today’s eye tracking systems are easy for researchers to set up and use and are virtually transparent to the participant during use, making eye tracking a valuable method for examining consumer choice or facility design, or for developing employee training procedures. We argue that eye tracking would provide rich results and deserves to be considered for a wide range of hospitality applications.