9,288 research outputs found

    Enhanced Learning of Jazz Chords with a Projector Based Piano Keyboard Augmentation

    Published version: https://www.springer.com/gp/book/9783030353421 (accepted version)

    X-Ray Vision: Application of Augmented Reality in Aviation Maintenance to Simplify Tasks Inhibited by Occlusion

    This thesis examined potential applications of augmented and mixed reality (AR/MR) technology and how to leverage them in the aviation maintenance community. Specifically, we examined whether the 3D mapping and real-time spatial tracking technology of devices like the Microsoft HoloLens 2 can make maintenance tasks easier in environments where the maintainer cannot see into the workspace. Given the complexity of aircraft construction, narrow, tight-fitting spaces blocked by walls or other obstructions are common. In the past, aviation maintainers have had to rely on memorizing 2D diagrams and feeling around dark, cramped spaces to determine where certain parts are located. Previous AR research primarily compares AR methods against traditional methods for different types of tasks in simulated settings; there is a lack of research on AR applications that address the occlusion introduced into these tasks. By conducting trials of simulated maintenance in an occluded area using AR technology, we found that novice maintainers performed more accurately and completed maintenance faster than with traditional methods, while finding the AR instruction subjectively easier to follow.
    Captain, United States Marine Corps. Approved for public release; distribution is unlimited.

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree category: Course doctorate (課程博士); Degree type: Doctor of Engineering; Degree register number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Gaze Guidance, Task-Based Eye Movement Prediction, and Real-World Task Inference using Eye Tracking

    The ability to predict and guide viewer attention has important applications in computer graphics, image understanding, object detection, visual search, and training. Human eye movements provide insight into the cognitive processes involved in task performance, and there has been extensive research on what factors guide viewer attention in a scene. It has been shown, for example, that saliency in the image, scene context, and the task at hand play significant roles in guiding attention. This dissertation presents and discusses research on visual attention, with specific focus on the use of subtle visual cues to guide viewer gaze and the development of algorithms to predict the distribution of gaze about a scene. Specific contributions of this work include: a framework for gaze guidance to enable problem solving and spatial learning, a novel algorithm for task-based eye movement prediction, and a system for real-world task inference using eye tracking. A gaze guidance approach is presented that combines eye tracking with subtle image-space modulations to guide viewer gaze about a scene. Several experiments were conducted using this approach to examine its impact on short-term spatial information recall, task sequencing, training, and password recollection. A model of human visual attention prediction that uses saliency maps, scene feature maps, and task-based eye movements to predict regions of interest was also developed. This model was used to automatically select target regions for active gaze guidance to improve search task performance. Finally, a framework for inferring real-world tasks using image features and eye movement data was developed. Overall, this dissertation leads to an overarching framework that combines all three contributions to provide a continuous feedback system for improving performance on repeated visual search tasks. This research has important applications in data visualization, problem solving, training, and online education.
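
    The prediction model described in this abstract fuses bottom-up saliency with scene features and task-based eye movements. As a rough illustration of that general idea (not the dissertation's actual model), the Python sketch below combines a normalized saliency map with a hypothetical task-prior map and picks the top regions as candidate targets for active gaze guidance; the function names, weights, and map shapes are assumptions made for the example.

    import numpy as np

    def predict_gaze_density(saliency_map, task_prior_map, w_saliency=0.5, w_task=0.5):
        """Combine a bottom-up saliency map with a task-based prior map into a
        single gaze-density prediction over the image. Weights are illustrative,
        not values from the dissertation."""
        def normalize(m):
            # Treat the map as a probability distribution over pixels.
            m = np.clip(np.asarray(m, dtype=float), 0.0, None)
            total = m.sum()
            return m / total if total > 0 else np.full_like(m, 1.0 / m.size)

        combined = w_saliency * normalize(saliency_map) + w_task * normalize(task_prior_map)
        return normalize(combined)

    def candidate_guidance_targets(gaze_density, k=3):
        """Return (row, col) positions of the k most likely fixation locations,
        e.g. as target regions for active gaze guidance in a search task."""
        flat = np.argsort(gaze_density, axis=None)[::-1][:k]
        return [np.unravel_index(i, gaze_density.shape) for i in flat]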

    Photorealistic retrieval of occluded facial information using a performance-driven face model

    Facial occlusions can cause both human observers and computer algorithms to fail in a variety of important tasks such as facial action analysis and expression classification. This is because the missing information is not reconstructed accurately enough for the purpose of the task at hand. Most current computer methods used to tackle this problem implement complex three-dimensional polygonal face models that are generally time-consuming to produce and unsuitable for photorealistic reconstruction of missing facial features and behaviour. In this thesis, an image-based approach is adopted to solve the occlusion problem. A dynamic computer model of the face is used to retrieve the occluded facial information from the driver faces. The model consists of a set of orthogonal basis actions obtained by applying principal component analysis (PCA) to image changes and motion fields extracted from a sequence of natural facial motion (Cowe 2003). Examples of occlusion-affected facial behaviour can then be projected onto the model to compute coefficients of the basis actions and thus produce photorealistic performance-driven animations. Visual inspection shows that the PCA face model recovers aspects of expressions in those areas occluded in the driver sequence, but the expression is generally muted. To further investigate this finding, a database of test sequences affected by a considerable set of artificial and natural occlusions is created. A number of suitable metrics are developed to measure the accuracy of the reconstructions. Regions of the face that are most important for performance-driven mimicry, and that seem to carry the best information about global facial configurations, are revealed using Bubbles, thus in effect identifying the facial areas that are most sensitive to occlusions. Recovery of occluded facial information is enhanced by applying an appropriate scaling factor to the respective coefficients of the basis actions obtained by PCA. This method improves the reconstruction of the facial actions emanating from the occluded areas of the face. However, because PCA produces bases that encode composite, correlated actions, such an enhancement also tends to affect actions in non-occluded areas of the face. To avoid this, more localised controls for facial actions are produced using independent component analysis (ICA). Simple projection of the data onto an ICA model is not viable due to the non-orthogonality of the extracted bases. Thus, occlusion-affected mimicry is first generated using the PCA model and then enhanced by manipulating the independent components that are subsequently extracted from the mimicry. This combination of methods yields significant improvements and results in photorealistic reconstructions of occluded facial actions.
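
    The projection-and-scaling step this abstract describes (projecting an occlusion-affected frame onto an orthogonal PCA basis, then boosting selected basis-action coefficients before reconstruction) can be illustrated with a minimal NumPy sketch. The array shapes, variable names, scaling factor, and random stand-in data below are assumptions for illustration only, not the thesis's implementation or data.

    import numpy as np

    def project_onto_basis(frame, mean_face, basis):
        """Compute basis-action coefficients for one flattened face frame.
        `basis` has shape (n_components, n_pixels) with orthonormal rows."""
        return basis @ (frame - mean_face)

    def reconstruct(coeffs, mean_face, basis, boost=None, scale=1.5):
        """Rebuild the frame from coefficients. `boost` lists component indices
        whose coefficients are scaled up, mimicking the enhancement of actions
        emanating from occluded regions. The scale factor is illustrative."""
        c = coeffs.copy()
        if boost is not None:
            c[boost] *= scale
        return mean_face + basis.T @ c

    # Illustrative usage with random data standing in for real face frames.
    rng = np.random.default_rng(0)
    n_pixels, n_components = 64 * 64, 20
    q, _ = np.linalg.qr(rng.normal(size=(n_pixels, n_components)))
    basis = q.T                                   # orthonormal rows
    mean_face = rng.normal(size=n_pixels)
    occluded_frame = mean_face + basis.T @ rng.normal(size=n_components)

    coeffs = project_onto_basis(occluded_frame, mean_face, basis)
    animated = reconstruct(coeffs, mean_face, basis, boost=[0, 1])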

    An evaluation of the Microsoft HoloLens for a manufacturing-guided assembly task

    Many studies have confirmed the benefits of using Augmented Reality (AR) work instructions over traditional digital or paper instructions, but few have compared the effects of different AR hardware for complex assembly tasks. For this research, previously published data using Desktop Model Based Instructions (MBI), Tablet MBI, and Tablet AR instructions were compared with new assembly data collected using AR instructions on the Microsoft HoloLens Head Mounted Display (HMD). Participants completed a mock wing assembly task, and measures such as completion time, error count, Net Promoter Score, and qualitative feedback were recorded. The HoloLens condition yielded faster completion times than all other conditions, and HoloLens users also had lower error rates than those who used the non-AR conditions. Despite these performance benefits, users of the HoloLens AR instructions reported lower Net Promoter Scores than users of the Tablet AR instructions. The qualitative data showed that some users found the HoloLens device uncomfortable and its tracking not always exact. Although user feedback favored the Tablet AR condition, the HoloLens condition resulted in significantly faster assembly times. As a result, the HoloLens is recommended for complex guided assembly instructions with minor changes, such as allowing the user to toggle the AR instructions on and off at will. The results of this paper can help manufacturing stakeholders better understand the benefits of different AR technologies for manual assembly tasks.

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary for touch to match the detail of their explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers have shown that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation within verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than sets of verbal vocabulary in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account if it is to comprehensively improve designers' capabilities.

    Egocentric Perception of Hands and Its Applications
