4 research outputs found

    Overcoming the limitations of commodity augmented reality head mounted displays for use in product assembly

    Numerous studies have shown the effectiveness of utilizing Augmented Reality (AR) to deliver work instructions for complex assemblies. Traditionally, this research has been performed using hand-held displays, such as smartphones and tablets, or custom-built Head Mounted Displays (HMDs). AR HMDs have been shown to be especially effective for assembly tasks because they allow the user to remain hands-free while receiving work instructions. Furthermore, in recent years a wave of commodity AR HMDs has come to market, including the Microsoft HoloLens, Magic Leap One, Meta 2, and DAQRI Smart Glasses. These devices present a unique opportunity for delivering assembly instructions due to their relatively low cost and accessibility compared to the custom-built AR HMD solutions of the past. Despite these benefits, the technology behind these HMDs still has significant limitations in input, user interface, spatial registration, navigation, and occlusion. To accurately deliver work instructions for complex assemblies, these hardware limitations must be overcome. For this research, an AR assembly application was developed for the Microsoft HoloLens using methods specifically designed to address the aforementioned issues. Input and user interface methods were implemented and analyzed to maximize the usability of the application. An intuitive navigation system was developed to guide users through a large training environment, leading them to the current point of interest. The native tracking system of the HoloLens was augmented with image-target tracking to stabilize virtual content, enhance accuracy, and account for spatial drift. This fusion of marker-based and marker-less tracking techniques provides a novel approach to displaying robust AR assembly instructions on a commodity AR HMD. Furthermore, using this spatial registration approach, the positions of real-world objects were accurately registered so that they properly occlude virtual work instructions; specialized computer graphics methods and custom shaders were developed and implemented to render the desired effect. After developing these methods, it was necessary to validate that the work instructions were being accurately delivered. Using the sensors on the HoloLens, data was collected during the assembly process on head position, orientation, assembly step times, and an estimate of spatial drift. This data was fused with wearable physiological sensor data in a visualization application to validate that instructions were properly delivered and to give an analyst the opportunity to examine trends within an assembly session. The spatial drift data was then analyzed to better understand how spatial drift accumulates over time and to ensure that the spatial registration mitigation techniques were effective. Academic research has shown that AR may substantially reduce costs for assembly operations through reductions in errors, time, and cognitive workload. This research provides novel solutions to overcome the limitations of commodity AR HMDs, validates their use for product assembly, and demonstrates how the limitations of these devices can be mitigated for product assembly tasks.
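
    To illustrate the marker-aided drift correction described above, the corrective transform can be derived from the discrepancy between where the drifting marker-less tracker expects an image target and where the target is actually observed. This is a minimal sketch under assumed conventions, not the thesis's actual HoloLens implementation: poses are taken to be 4x4 homogeneous matrices in world coordinates, and all function names are hypothetical.

        import numpy as np

        def correction_transform(expected_marker_pose, observed_marker_pose):
            # Rigid transform that maps the drifted tracker frame back onto
            # the frame defined by the image target: T_corr = T_obs @ inv(T_exp).
            return observed_marker_pose @ np.linalg.inv(expected_marker_pose)

        def reanchor(t_corr, hologram_pose):
            # Re-express a hologram's world pose in the corrected frame.
            return t_corr @ hologram_pose

        def drift_estimate_meters(t_corr):
            # Translational drift estimate: norm of the correction's translation
            # component, taken at the moment the image target is detected.
            return float(np.linalg.norm(t_corr[:3, 3]))

    Logging drift_estimate_meters at each marker detection is one way to obtain the kind of drift-over-time data the abstract describes analyzing.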

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    This paper presents a review of research on eye-gaze estimation techniques and applications, which have progressed in diverse ways over the past two decades. Several generic eye-gaze use cases are identified: desktop, TV, head-mounted, automotive, and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze-tracking accuracy. A key outcome of this review is the realization of a need to develop standardized methodologies for the performance evaluation of gaze-tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for the practical evaluation of different gaze-tracking systems is proposed.
    Comment: 25 pages, 13 figures; accepted for publication in IEEE Access in July 2017.
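
    One family of techniques covered by such reviews is regression-based gaze estimation, in which a calibration phase fits a polynomial mapping from pupil-glint vectors to screen coordinates. The following is a minimal sketch of that idea, assuming NumPy, a second-order polynomial, and hypothetical function names; the exact model varies across the systems reviewed.

        import numpy as np

        def features(v):
            # Second-order polynomial features of pupil-glint vectors (x, y).
            x, y = v[:, 0], v[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

        def calibrate(pupil_glint_vectors, screen_points):
            # Least-squares fit over the calibration grid, one coefficient
            # column per screen axis.
            A = features(np.asarray(pupil_glint_vectors, dtype=float))
            b = np.asarray(screen_points, dtype=float)
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return coeffs  # shape (6, 2)

        def estimate_gaze(coeffs, pupil_glint_vector):
            # Map a new pupil-glint vector to an (x, y) screen coordinate.
            return (features(np.atleast_2d(pupil_glint_vector)) @ coeffs)[0]

    How well such a fit holds up in practice depends on the platform-specific factors the review identifies, such as head movement, viewing distance, and display geometry.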

    Haptic feedback to gaze events

    Eyes are the window to the world, and most of the input from the surrounding environment is captured through them. In Human-Computer Interaction, too, gaze-based interactions are gaining prominence, with the user's gaze acting as an input to the system. Of late, portable and inexpensive eye-tracking devices have made inroads into the market, opening up wider possibilities for interacting with gaze. However, research on feedback to gaze-based events is limited. This thesis studies vibrotactile feedback to gaze-based interactions. It presents a study conducted to evaluate different types of vibrotactile feedback and their role in response to a gaze-based event. For this study, an experimental setup was designed in which, when the user fixated their gaze on a functional object, vibrotactile feedback was provided either on the wrist or on the glasses. The study seeks to answer questions such as the helpfulness of vibrotactile feedback in identifying functional objects, user preference for the type of vibrotactile feedback, and user preference for the location of the feedback. The results indicate that vibrotactile feedback was an important factor in identifying the functional object. The preference for the type of vibrotactile feedback was somewhat inconclusive, as there were wide variations among users. Personal preference largely influenced the choice of location for receiving the feedback.
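
    A common way to detect the gaze event that would trigger such feedback is dispersion-threshold (I-DT) fixation detection over 2-D gaze samples. The sketch below is illustrative only, since the abstract does not specify the study's detection pipeline; the thresholds, the object list, and the vibrate callback are all hypothetical.

        def dispersion(points):
            xs = [p[0] for p in points]
            ys = [p[1] for p in points]
            return (max(xs) - min(xs)) + (max(ys) - min(ys))

        def idt_fixations(samples, min_samples=6, max_dispersion=25.0):
            # Dispersion-threshold (I-DT) detection: grow a window while its
            # dispersion stays under threshold, then emit the window centroid.
            i = 0
            while i + min_samples <= len(samples):
                j = i + min_samples
                if dispersion(samples[i:j]) <= max_dispersion:
                    while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                        j += 1
                    win = samples[i:j]
                    yield (sum(p[0] for p in win) / len(win),
                           sum(p[1] for p in win) / len(win))
                    i = j
                else:
                    i += 1

        def feedback_on_fixations(samples, objects, vibrate):
            # Fire a vibrotactile pulse when a fixation lands on a functional
            # object; 'objects' and 'vibrate' stand in for application hooks
            # driving the wrist- or glasses-mounted actuator.
            for fixation in idt_fixations(samples):
                for obj in objects:
                    if obj.contains(fixation):
                        vibrate(obj.pattern)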

    Gaze Awareness in Computer-Mediated Collaborative Physical Tasks

    Human eyes play an important role in everyday social interactions. However, the cues provided by eye movements are often missing or difficult to interpret in computer-mediated remote collaboration. Motivated by the increasing availability of gaze-tracking devices in the consumer market and the growing need for improved remote-collaboration systems, this thesis evaluated the value of gaze awareness in a number of video-based remote-collaboration situations. The thesis comprises six publications that enhance our understanding of the everyday use of gaze-tracking technology and the value of shared gaze in remote collaboration in the physical world. The studies covered a variety of collaborative scenarios involving different camera configurations (stationary, handheld, and head-mounted cameras), display setups (screen-based and projection displays), mobility requirements (stationary and mobile tasks), and task characteristics (pointing and procedural tasks). The aim was to understand the costs and benefits of shared gaze in video-based collaborative physical tasks. The findings suggest that gaze awareness is useful in remote collaboration for physical tasks: shared gaze enables efficient communication of spatial information, helps viewers predict task-relevant intentions, and improves situational awareness. However, contextual factors can influence its utility. Shared gaze was more useful when the collaborative task involved communicating pointing information rather than procedural information, when the collaborators were mutually aware of the shared gaze, and when gaze tracking was accurate enough to meet the task requirements. The results also suggest that the collaborators' roles can affect the perceived utility of shared gaze. Methodologically, this thesis sets a precedent in shared-gaze research by reporting the objective gaze data quality achieved in the studies, and it provides tools for other researchers to objectively assess gaze data quality in different research phases. The findings can contribute towards designing future remote-collaboration systems, towards the vision of pervasive gaze-based interaction, and towards improved validity, repeatability, and comparability of research involving gaze trackers.
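
    The gaze data quality the thesis reports is conventionally expressed as accuracy (mean angular offset between measured gaze and a known target) and precision (RMS of successive sample-to-sample angular distances). A minimal sketch of these standard metrics, assuming NumPy and gaze and target directions given as rows of 3-D vectors; the function names are illustrative, not the thesis's tooling.

        import numpy as np

        def angles_deg(a, b):
            # Angular distance between paired 3-D direction vectors, in degrees.
            a = a / np.linalg.norm(a, axis=1, keepdims=True)
            b = b / np.linalg.norm(b, axis=1, keepdims=True)
            cos = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
            return np.degrees(np.arccos(cos))

        def accuracy_deg(gaze_dirs, target_dirs):
            # Accuracy: mean angular offset between measured gaze and target.
            return float(np.mean(angles_deg(gaze_dirs, target_dirs)))

        def precision_rms_deg(gaze_dirs):
            # Precision: RMS of successive sample-to-sample angular distances.
            theta = angles_deg(gaze_dirs[:-1], gaze_dirs[1:])
            return float(np.sqrt(np.mean(theta ** 2)))

    Reporting these two numbers alongside study results, as the thesis advocates, is what makes shared-gaze findings comparable across devices and setups.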