
    Interaction Methods for Smart Glasses: A Survey

    Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of becoming an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first surveys the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper focuses mainly on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals. Finally, we discuss several key design challenges and the possibility of multi-modal input for smart glasses.
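
    As a minimal illustrative sketch (not from the survey), the taxonomy above can be encoded as a small Python structure; the example modalities in the comments are our own reading of the categories, not lists from the paper.

        # Input-method taxonomy from the survey, encoded for reference.
        # Example modalities in the comments are illustrative assumptions.
        INTERACTION_TAXONOMY = {
            "hand-held": {},              # e.g. a paired controller or phone
            "touch": {
                "on-device": {},          # e.g. a touchpad on the glasses frame
                "on-body": {},            # e.g. the palm or forearm as a touch surface
            },
            "touchless": {
                "hands-free": {},         # e.g. voice, head movement, eye gaze
                "freehand": {},           # e.g. mid-air hand gestures
            },
        }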

    Voice Interaction for Augmented Reality Navigation Interfaces with Natural Language Understanding

    Voice interaction with natural language understanding (NLU) has been extensively explored on desktop computers, handheld devices, and in human-robot interaction. However, there is limited research into voice interaction with NLU in augmented reality (AR). Voice interaction offers benefits in AR, such as high naturalness and being hands-free. In this project, we introduce VOARLA, an NLU-powered AR voice interface that guides a courier driver through delivering a package. A user study was completed to evaluate VOARLA against an AR voice interface without NLU, to investigate the effectiveness of NLU in a navigation interface in AR. We evaluated three aspects: accuracy, productivity, and command learning curve. Results found that using NLU in AR increases the accuracy of the interface by 15%. However, higher accuracy did not correlate with an increase in productivity. Results suggest that NLU helped users remember the commands on the first run, when they were unfamiliar with the system. This suggests that using NLU in an AR hands-free application can ease the learning curve for new users.
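
    A hedged sketch of the comparison the study describes: a fixed-command interface that accepts only exact phrases versus a toy NLU-style matcher that tolerates paraphrases. All command names and keywords below are invented for illustration; VOARLA's actual grammar is not given in the abstract.

        # Fixed-command matching vs. a toy keyword-overlap "NLU".
        # Commands and keywords are hypothetical placeholders.
        FIXED_COMMANDS = {"next stop", "repeat address", "confirm delivery"}

        INTENT_KEYWORDS = {
            "next_stop": {"next", "stop", "where"},
            "repeat_address": {"repeat", "address", "again"},
            "confirm_delivery": {"confirm", "delivered", "done"},
        }

        def fixed_parse(utterance: str) -> str | None:
            # Exact matching: any paraphrase fails.
            return utterance if utterance in FIXED_COMMANDS else None

        def nlu_parse(utterance: str) -> str | None:
            # Pick the intent sharing the most keywords with the utterance.
            words = set(utterance.lower().split())
            best = max(INTENT_KEYWORDS, key=lambda i: len(words & INTENT_KEYWORDS[i]))
            return best if words & INTENT_KEYWORDS[best] else None

    On this toy model, "where is my next stop" fails fixed_parse but resolves to next_stop under nlu_parse, which mirrors why paraphrase tolerance would help users who have not yet memorized the command set.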

    Evaluation of AI-Supported Input Methods in Augmented Reality Environment

    Augmented Reality (AR) solutions are providing tools that could improve applications in the medical and industrial fields. Augmentation can provide additional information in training, visualization, and work scenarios to increase efficiency, reliability, and safety, while improving communication with other devices and systems on the network. Unfortunately, tasks in these fields often require both hands to execute, reducing the variety of input methods suitable for controlling AR applications. People with certain physical disabilities, who are not able to use their hands, are also negatively impacted when using these devices. The goal of this work is to provide novel hands-free interfacing methods, using AR technology in combination with AI support approaches, to produce an improved human-computer interaction solution.

    Virtual Vouchers: Prototyping a Mobile Augmented Reality User Interface for Botanical Species Identification

    Figure 1: (a) Botanists gathering samples in the field. (b) View through a video see-through display of the first prototype of the tangible augmented reality user interface.

    As biodiversity research increases in importance and complexity, the tools that botanists require for fieldwork must evolve and take on new forms. Of particular importance is the ability to identify existing and new species in the field. Mobile augmented reality systems can make it possible to access, view, and inspect a large database of virtual species examples side by side with physical specimens. In this paper, we present prototypes of a mobile augmented reality electronic field guide and techniques for displaying and inspecting computer vision-based visual search results in the form of virtual vouchers. Our work addresses head-movement controlled augmented reality for hands-free interaction and tangible augmented reality. We describe results from our design and investigation process and discuss observations and feedback from lab trials by botanists.
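
    As a rough sketch of how head-movement controlled browsing of ranked visual-search results might work (our assumption, not the authors' implementation), head yaw within a fixed window can be mapped onto an index into the list of virtual vouchers:

        def select_voucher(head_yaw_deg: float, num_results: int,
                           fov_deg: float = 60.0) -> int:
            # Map head yaw within +/- fov_deg/2 onto a result index, so
            # turning the head sweeps through the candidate species.
            # num_results must be >= 1; the window size is an assumption.
            half = fov_deg / 2.0
            yaw = max(-half, min(half, head_yaw_deg))   # clamp to the window
            t = (yaw + half) / fov_deg                  # normalise to [0, 1]
            return min(int(t * num_results), num_results - 1)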

    FiAAR: An augmented reality firetruck equipment assembly and configuration assistant technology

    Augmented reality (AR) is a technology that expands the physical world by enhancing the objects that reside in the real world with computer-generated perceptual information, providing an interactive experience in a real-world environment. AR is used effectively in many business sectors, such as engineering, education and training, medicine, logistics and transport, and others. Rescue services are one of the challenging areas where the use of AR technology places extremely high demands on robustness and ease of use. In this paper, we introduce two augmented reality versions of the FiAAR project developed by Karlsruhe University of Applied Sciences. The first version is developed for the RealWear HMT-1 device, with hands-free interaction utilizing speech recognition. The second version focuses on hand recognition and is developed for the Realmax Qian device with a Leap Motion sensor. Our intention is to show the potential of current AR technologies in demanding use cases.
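
    A minimal sketch of the kind of speech-command dispatch the HMT-1 version implies; the phrases and assembly steps below are invented placeholders, not FiAAR's actual vocabulary.

        # Hypothetical voice-driven step navigation for an assembly task.
        ASSEMBLY_STEPS = ["mount bracket", "attach hose reel", "secure straps"]

        class AssemblyGuide:
            def __init__(self, steps):
                self.steps, self.index = steps, 0

            def handle(self, phrase: str) -> str:
                # Route a recognised phrase to a navigation action.
                if phrase == "next step" and self.index < len(self.steps) - 1:
                    self.index += 1
                elif phrase == "previous step" and self.index > 0:
                    self.index -= 1
                return f"Step {self.index + 1}: {self.steps[self.index]}"

        # Usage: guide = AssemblyGuide(ASSEMBLY_STEPS); guide.handle("next step")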

    AI-Powered Interfaces for Extended Reality to support Remote Maintenance

    Modern industrial systems include high-end components that conduct complicated tasks automatically. However, for these parts to function at the desired level, they need to be maintained by qualified experts. Solutions based on Augmented Reality (AR) have been established with the goal of raising production rates and quality while lowering maintenance costs. In this study, we propose advanced hands-free interactive solutions by introducing two unique interaction interfaces, based on wearable targets and human face orientation, with the goal of reducing the bias towards certain users. A comparison investigation against traditional devices is conducted in real time using the alternative interaction interfaces. The suggested solutions are supported by various AI-powered methods, such as a novel gravity-map-based motion adjustment made possible by predictive deep models, which reduce the bias of traditional hand- or finger-based interaction interfaces.
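
    As one hedged illustration of a face-orientation interaction interface (the abstract gives no model details, so every parameter here is an assumption), estimated yaw and pitch can be projected linearly onto a 2D cursor for hands-free pointing:

        def _clamp(v: float, lo: int, hi: int) -> int:
            return int(max(lo, min(hi, v)))

        def face_to_cursor(yaw_deg: float, pitch_deg: float,
                           width: int = 1920, height: int = 1080,
                           gain: float = 25.0) -> tuple[int, int]:
            # Linear mapping from estimated face yaw/pitch (degrees) to pixel
            # coordinates, centred on the display; gain converts degrees to
            # pixels and is a made-up tuning constant.
            x = width / 2 + yaw_deg * gain
            y = height / 2 - pitch_deg * gain   # looking up moves the cursor up
            return _clamp(x, 0, width - 1), _clamp(y, 0, height - 1)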

    Object and Facial Recognition in Augmented and Virtual Reality: Investigation into Software, Hardware and Potential Uses

    As augmented and virtual reality grow in popularity, and more researchers focus on their development, other fields of technology have grown in the hope of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) system utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams, and similar hardware, has shown potential in assisting with the development of an HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development, and other similar areas.

    Radi-Eye: Hands-Free Radial Interfaces for 3D Interaction using Gaze-Activated Head-Crossing

    Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection, followed by head-crossing as the trigger and for manipulation. The technique leverages natural eye-head coordination, where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
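
    A minimal sketch of the Look & Cross idea as a two-stage state machine, assuming simplified inputs (a resolved gaze target and a per-widget head-pointer hit test); this is our reading of the abstract, not the authors' code.

        class LookAndCross:
            def __init__(self, widgets):
                # widgets: {name: callable(head_point) -> bool boundary hit test}
                self.widgets = widgets
                self.preselected = None

            def update(self, gaze_target: str | None, head_point) -> str | None:
                # Stage 1: gaze pre-selects whichever widget is looked at.
                if gaze_target in self.widgets:
                    self.preselected = gaze_target
                # Stage 2: the widget fires only when the head pointer then
                # crosses its boundary, so neither gaze nor head alone triggers.
                if self.preselected and self.widgets[self.preselected](head_point):
                    fired, self.preselected = self.preselected, None
                    return fired
                return None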