4,488 research outputs found

    Gaze Path Stimulation in Retrospective Think-Aloud

    For a long time, eye tracking has been thought of as a promising method for usability testing. During the last couple of years, eye tracking has finally started to live up to these expectations, at least in terms of its use in usability laboratories. We know that the user’s gaze path can reveal usability issues that would otherwise go unnoticed, but a common understanding of how best to make use of eye movement data has not been reached. Many usability practitioners seem to have intuitively started to use gaze path replays to stimulate recall for a retrospective walk-through of the usability test. We review the research on think-aloud protocols in usability testing and the use of eye tracking in the context of usability evaluation. We also report our own experiment, in which we compared the standard, concurrent think-aloud method with the gaze-path-stimulated retrospective think-aloud method. Our results suggest that the gaze-path-stimulated retrospective think-aloud method produces more verbal data, and that the data are more informative and of better quality because the drawbacks of concurrent think-aloud are avoided.

    Comparison of in-sight and handheld navigation devices toward supporting industry 4.0 supply chains: First and last mile deliveries at the human level

    Last- (and first-) mile deliveries are an increasingly important and costly component of supply chains, especially those that require transport within city centres. With reductions in anticipated manufacturing and delivery timescales, logistics personnel are expected to identify the correct location (accurate delivery) and supply the goods in appropriate condition (safe delivery). As supply chains move towards greater environmental sustainability, the last/first mile of a delivery may be completed by a cycle courier, which could result in significant reductions in congestion and emissions in cities. In addition, the last metres of an increasing number of deliveries are completed on foot, i.e. as a pedestrian. Although research into new technologies to support enhanced navigation capabilities is ongoing, the focus to date has been on technical implementations, with limited studies addressing how information is perceived and acted upon by a human courier. In the research reported in this paper, a comparison study was conducted with 24 participants evaluating two examples of state-of-the-art navigation aids supporting accurate (right time and place) and safe (right condition) navigation. Participants completed four navigation tasks, two whilst cycling and two whilst walking. The navigation devices under investigation were a handheld display presenting a map and instructions, and an in-sight monocular display presenting text and arrow instructions. Navigation was conducted in a real-world environment in which eye movements and device interaction were recorded using Tobii Pro Glasses 2 eye trackers. The results indicate that the handheld device provided better support for accurate navigation (right time and place), with longer but less frequent gaze interactions and higher perceived usability. The in-sight display supported improved situation awareness, with a greater number of hazards acknowledged. The benefits and drawbacks of each device and the use of visual navigation support tools are discussed.
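The gaze interaction measures reported above (glance frequency and duration on a device) are typically aggregated from an AOI-labelled gaze stream. As an illustration only — not the authors' actual analysis pipeline and not the Tobii export format — glance count and mean glance duration can be sketched as:

```python
def glance_metrics(samples, period):
    """Count glances at a device AOI and compute their mean duration.

    samples: list of (timestamp_s, on_device) tuples, sorted by time,
             where on_device is True while gaze falls on the device.
    period:  sampling period in seconds.
    Returns (glance_count, mean_glance_duration_s).
    Note: this is a hypothetical, simplified data layout for illustration.
    """
    glances = []       # durations of completed glances
    current = 0.0      # duration of the glance in progress
    prev_on = False
    for _, on_device in samples:
        if on_device:
            current += period
        elif prev_on:
            glances.append(current)   # glance just ended
            current = 0.0
        prev_on = on_device
    if current > 0:
        glances.append(current)       # glance still open at end of trial
    if not glances:
        return 0, 0.0
    return len(glances), sum(glances) / len(glances)
```

With real eye-tracking data the hard part is the AOI labelling itself (mapping gaze points onto a handheld or head-worn display in a moving scene); the aggregation step shown here is the same regardless of device.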

    User Experience in Virtual Reality, conducting an evaluation on multiple characteristics of a Virtual Reality Experience

    Virtual Reality applications are today numerous and cover a wide range of interests and tastes. As the popularity of Virtual Reality increases, developers in industry are trying to create engrossing and exciting experiences that captivate the interest of users. User Experience, a term used in the fields of Human-Computer Interaction and Interaction Design, describes multiple characteristics of the experience of a person interacting with a product or a system. Evaluating User Experience can provide developers and researchers with valuable insight into the thoughts and impressions of end users in relation to a system. However, little information exists regarding how to conduct User Experience evaluations in the context of Virtual Reality. Consequently, due to the numerous parameters that influence User Experience in Virtual Reality, conducting and organizing evaluations can be overwhelming and challenging. The author of this thesis investigated how to conduct a User Experience evaluation on multiple aspects of a Virtual Reality headset by identifying characteristics of the experience and the methods that can be used to measure and evaluate them. The data collected were both qualitative and quantitative, to cover a wide range of characteristics of the experience. Furthermore, the author applied usability testing, the think-aloud protocol, questionnaires and a semi-structured interview as methods to observe user behavior and collect information regarding the aspects of the Virtual Reality headset. The testing session described in this study included 14 participants. Data from this study showed that the combination of chosen methods was able to provide adequate information regarding the experience of the users despite the difficulties encountered. Additionally, this thesis showcases which methods were used to evaluate specific aspects of the experience, and reports the performance of each method as findings of the study.

    Interaction and Learning in an Extensive Reading Book Club.

    Ph.D. thesis, University of Hawaiʻi at Mānoa, 2017.

    Evaluating the user experience of an augmented reality application using gaze tracking and retrospective think-aloud

    Gaze tracking has previously been used to evaluate usability, but research using gaze tracking to evaluate user experience remains very limited. The objective of this thesis is to examine the possibility of using gaze tracking in user experience evaluation and of providing results comparable with other forms of user experience evaluation. A convenience sample of ten participants took part in an experiment to evaluate the user experience of an augmented reality application. Gaze tracking was used as a cue to help participants recall their user experience in a retrospective think-aloud. Participants also filled in a user experience questionnaire and were interviewed about their experience of using the application. The results of the experiment suggest that gaze tracking can be used to measure user experience when combined with the retrospective think-aloud method. The quotes generated can be used to establish which features or qualities of the application affected the user experience of participants. The method establishes a basis for further research into using gaze tracking to evaluate user experience.

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavioral analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents’ fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process, and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments. The naturalistic context of the video probes along with the dependence on ubiquitous recording devices creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important. The clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures.
    Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can aid in addressing the uncertainty in the classification models by providing additional labeled samples. This will allow the system to improve classification and provide a person-centered approach to extracting multimodal data from PRT video probes.
    Doctoral Dissertation, Computer Science, 201
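The human-in-the-loop idea described above is commonly realized as uncertainty sampling: the system ranks unlabeled segments by the model's predictive uncertainty and asks the clinician to label the most uncertain ones first. A minimal sketch using predictive entropy — the segment ids and probabilities here are illustrative, not the dissertation's actual pipeline:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a class-probability vector (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_clinician(segment_probs, k):
    """Pick the k video segments whose model predictions are most uncertain.

    segment_probs: dict mapping segment id -> list of class probabilities
                   (hypothetical layout, for illustration only).
    Returns up to k segment ids, most uncertain first.
    """
    ranked = sorted(segment_probs,
                    key=lambda s: predictive_entropy(segment_probs[s]),
                    reverse=True)
    return ranked[:k]
```

The clinician's labels for the selected segments are then added to the training set and the model is retrained, closing the loop; the ranking criterion (entropy here) is interchangeable with other uncertainty measures such as margin or least-confidence.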

    Mixed Reality Interfaces for Augmented Text and Speech

    While technology plays a vital role in human communication, many significant challenges remain when using it in everyday life. Modern computing technologies, such as smartphones, offer convenient and swift access to information, facilitating tasks like reading documents or communicating with friends. However, these tools frequently lack adaptability, become distracting, consume excessive time, and impede interactions with people and contextual information. Furthermore, they often require numerous steps and significant time investment to gather pertinent information. We want to explore an efficient process of contextual information gathering for mixed reality (MR) interfaces that provide information directly in the user’s view. This approach allows for a seamless and flexible transition between language and subsequent contextual references, without disrupting the flow of communication. 'Augmented language' can be defined as the integration of language and communication with mixed reality to enhance, transform, or manipulate language-related aspects through various forms of linguistic augmentation (such as annotation/referencing, aiding social interactions, translation, localization, etc.). In this thesis, our broad objective is to explore mixed reality interfaces and their potential to enhance augmented language, particularly in the domains of speech and text. Our aim is to create interfaces that offer a more natural, generalizable, on-demand, and real-time experience of accessing contextually relevant information, and that provide adaptive interactions. To better address this broader objective, we systematically break it down into two instances of augmented language: first, enhancing augmented conversation to support on-the-fly, co-located, in-person conversations using embedded references; and second, enhancing digital and physical documents using MR to provide on-demand reading support in the form of different summarization techniques.
    To examine the effectiveness of these speech and text interfaces, we conducted two studies in which we asked participants to evaluate our system prototypes in different use cases. The exploratory usability study for the first exploration confirms that our system decreases distraction and friction in conversation compared to smartphone search, while providing highly useful and relevant information. For the second project, we conducted an exploratory design workshop to identify categories of document enhancements. We later conducted a user study with a mixed-reality prototype and identified five broad themes to discuss the benefits of MR document enhancement.

    Augmented Reality to Facilitate a Conceptual Understanding of Statics in Vocational Education

    At the core of this dissertation's contribution is an augmented reality (AR) environment, StaticAR, that supports the process of learning the fundamentals of statics in vocational classrooms, particularly in carpentry ones. Vocational apprentices are expected to develop an intuition for these topics rather than a formal comprehension. We have explored the potential of AR technology for this pedagogical challenge. Furthermore, we have investigated the role of physical objects in mixed-reality systems when they are implemented as tangible user interfaces (TUIs) or when they serve as a background for the augmentation in handheld AR. This thesis includes four studies. In the first study, we used eye-tracking methods to look for evidence of the benefits associated with TUIs in a learning context. We designed a 3D modelling task and compared users' performance when they completed it using a TUI or a GUI. The gaze measures that we analysed further confirmed the positive impact that TUIs can have on the learners' experience and strengthened the empirical basis for their adoption in learning applications. The second study evaluated whether physical interaction with models of carpentry structures could lead to a better understanding of statics principles. Apprentices engaged in a learning activity in which they could manipulate physical models that were mechanically augmented, allowing them to explore how structures react to external loads. The analysis of apprentices' performance and their gaze behaviors highlighted the absence of clear advantages in exploring statics through manipulation. This study also showed that manipulation might prevent students from noticing aspects relevant to solving statics problems. From the second study we obtained guidelines for designing StaticAR, which implements the magic-lens metaphor: a tablet augments a small-scale structure with information about its structural behavior.
    Since the structure is only a background for the augmentation and its manipulation does not trigger any function, in the third study we asked to what extent it was important to have it at all. We rephrased this question as whether users would look directly at the structure instead of seeing it only through the tablet. Our findings suggested that a shift of attention from the screen to the physical object (a structure in our case) might occur in order to sustain users' spatial orientation when they change positions. In addition, the properties of the gaze shift (e.g. duration) could depend on the features of the task (e.g. difficulty) and of the setup (e.g. stability of the augmentation). The focus of our last study was the digital representation of the forces that act in a loaded structure. In the second study we had observed that physical manipulation failed to help apprentices understand the way the forces interact with each other. To overcome this issue, our solution was to combine an intuitive representation (springs) with a slightly more formal one (arrows), showing both the nature of the forces and the interaction between them. In this study apprentices used the two representations to collaboratively solve statics problems. Even though apprentices had difficulties in interpreting the two representations, there were cases in which they gained a correct intuition of statics principles from them. In this thesis, besides describing the designed system and the studies, implications for future directions are discussed.
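Gaze shifts of the kind analysed in the third study can be counted from an AOI-labelled gaze stream. A minimal sketch, assuming each gaze sample has already been classified into areas of interest such as "screen", "structure", or "other" (the labels and data layout are illustrative, not StaticAR's actual analysis):

```python
def count_aoi_shifts(labels, src="screen", dst="structure"):
    """Count gaze transitions from the src AOI to the dst AOI.

    labels: sequence of per-sample AOI labels, in time order.
    Samples outside both AOIs (e.g. blinks, off-target gaze) are
    ignored, so a shift via "other" still counts as src -> dst.
    """
    shifts = 0
    last_aoi = None
    for label in labels:
        if label in (src, dst):
            if last_aoi == src and label == dst:
                shifts += 1
            last_aoi = label
    return shifts
```

Properties such as shift duration, mentioned above, would additionally require timestamps per sample; the transition-counting logic stays the same.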