
    Hand-based interface for augmented reality

    Augmented reality (AR) is a highly interdisciplinary field that has received increasing attention since the late 1990s. In essence, it combines the real scene viewed by a user with a computer-generated image, in real time. AR thus allows the user to see the real world supplemented, in general, with information considered useful, enhancing the user's perception and knowledge of the environment. The benefits of reconfigurable hardware for AR have been explored by Luk et al. [4]. However, the vast majority of AR systems so far have been based on PCs or workstations.

    A preliminary study of a hybrid user interface for augmented reality applications

    Augmented Reality (AR) applications are nowadays widely diffused in many fields, especially entertainment, and the market for AR applications on mobile devices is growing rapidly. Moreover, new and innovative hardware for human-computer interaction has been deployed, such as the Leap Motion Controller. This paper presents preliminary results in the design and development of a hybrid interface for hands-free augmented reality applications. The paper introduces a framework for interacting with AR applications through a speech and gesture recognition-based interface. A Leap Motion Controller is mounted on top of AR glasses, and a speech recognition module completes the system. Results have shown that, when the speech or gesture recognition modules are used individually, the robustness of the user interface depends strongly on environmental conditions. A combined usage of both modules, on the other hand, can provide more robust input.
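
    The paper reports evaluation results rather than a fusion algorithm. As a rough illustration of why combining the two modalities can be more robust than either alone, the following Python sketch fuses two recognizer outputs; all names and thresholds are hypothetical and not taken from the paper.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class ModalityResult:
            command: str       # recognized command label
            confidence: float  # recognizer confidence in [0, 1]

        def fuse(speech: ModalityResult, gesture: ModalityResult,
                 agree_threshold: float = 0.4,
                 solo_threshold: float = 0.9) -> Optional[str]:
            # When both modalities agree, accept even at low per-modality
            # confidence; a single modality must be very confident on its own.
            if (speech.command == gesture.command
                    and min(speech.confidence, gesture.confidence) >= agree_threshold):
                return speech.command
            best = max(speech, gesture, key=lambda r: r.confidence)
            return best.command if best.confidence >= solo_threshold else None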

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for programming industrial robots by demonstration, using markerless gesture recognition and mobile augmented reality. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects with the help of the handheld's 2D camera, so the programmer can define the program through natural bare-hand manipulation of these objects, with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller through a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
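
    The abstract does not specify the unified controller interface; the sketch below shows one plausible way a demonstrated assembly sequence could be represented and flattened into generic commands. All type and field names are assumptions for illustration.

        from dataclasses import dataclass, field

        Vec3 = tuple[float, float, float]

        @dataclass
        class Pose:
            position: Vec3                                  # metres, robot base frame
            orientation: tuple[float, float, float, float]  # quaternion (x, y, z, w)

        @dataclass
        class PickPlaceTask:
            object_id: str  # object recognized by the scene reconstruction
            pick: Pose      # grasp pose captured from the demonstration
            place: Pose     # release pose captured from the demonstration

        @dataclass
        class AssemblyProgram:
            tasks: list[PickPlaceTask] = field(default_factory=list)

            def to_commands(self):
                # Flatten the demonstrated sequence into generic move/grip
                # commands that a controller-specific backend can translate.
                for t in self.tasks:
                    yield ("move", t.pick)
                    yield ("grip", t.object_id)
                    yield ("move", t.place)
                    yield ("release", t.object_id)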

    An Evaluation of Virtual Lenses for Object Selection in Augmented Reality

    This paper reports the results of an experiment comparing three selection techniques in a tabletop tangible augmented reality interface. Object selection is an important task in all direct manipulation interfaces because it precedes most other manipulation and navigation actions. Previous work on tangible virtual lenses for visualisation prompted the exploration of how selection techniques can be incorporated into these tools. In this paper, a selection technique based on virtual lenses is compared with the traditional virtual hand and virtual pointer methods. The Lens technique is found to be faster, to require less physical effort, and to be preferred by participants over the other techniques. These results can help guide the development of future augmented reality interfaces.
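
    The paper evaluates the Lens technique empirically and does not give its algorithm; one plausible reading of a lens-based hit test, reduced to two dimensions for brevity, is sketched below. Function and parameter names are illustrative.

        import math

        def lens_select(objects, lens_center, lens_radius):
            # objects: iterable of (object_id, (x, y)) in tabletop coordinates.
            # Select the object nearest the lens centre that falls inside the
            # circular lens region; return None if the lens covers nothing.
            best_id, best_dist = None, lens_radius
            for obj_id, (x, y) in objects:
                d = math.hypot(x - lens_center[0], y - lens_center[1])
                if d <= best_dist:
                    best_id, best_dist = obj_id, d
            return best_id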

    A Portable Augmented Reality Science Laboratory

    Augmented Reality (AR) is a technology that overlays virtual objects on the real world; it generates three-dimensional (3D) virtual objects and provides an interactive interface with which people can work in the real world and interact with the 3D virtual objects at the same time. AR has the potential to engage and motivate learners to explore material from a variety of perspectives, and has been shown to be particularly useful for teaching subject matter that students could not possibly experience first hand in the real world. This report provides a conceptual framework for a simulated augmented reality lab that could be used for teaching science in classrooms. In recent years, the importance of lab-based courses and their significant role in science education has been irrefutable. The use of AR in formal education could prove a key component of future learning environments that are richly populated with a blend of hardware and software applications. The aim of this project is to enhance the teaching and learning of science by complementing the existing traditional lab with a simulated augmented reality lab. The system architecture and technical aspects of the proposed project will be described, and implementation issues and benefits of the proposed AR Lab will be highlighted.

    Computer Vision-Based Hand Tracking and 3D Reconstruction as a Human-Computer Input Modality with Clinical Application

    The recent pandemic has impeded patients with hand injuries from connecting in person with their therapists. To address this challenge and improve hand telerehabilitation, we propose two computer vision-based technologies, photogrammetry and augmented reality, as alternative and affordable solutions for visualization and remote monitoring of hand trauma without costly equipment. In this thesis, we extend the application of 3D rendering and a virtual reality-based user interface to hand therapy. We compare the performance of four popular photogrammetry software packages in reconstructing a 3D model of a synthetic human hand from videos captured with a smartphone, comparing the visual quality, reconstruction time, and geometric accuracy of the output meshes. Reality Capture produces the best result, with an output mesh error of 1 mm and a total reconstruction time of 15 minutes. We developed an augmented reality app using MediaPipe algorithms that extracts hand key points, finger joint coordinates, and angles in real time from hand images or live stream media. We conducted a study to investigate its input variability and validity as a reliable tool for remote assessment of finger range of motion. The intraclass correlation coefficient between DIGITS and in-person measurements is 0.767–0.81 for finger extension and 0.857–0.958 for finger flexion. Finally, we developed and surveyed the usability of a mobile application that collects patient data (medical history, self-reported pain levels, and hand 3D models) and transfers them to therapists. These technologies can improve hand telerehabilitation, aid clinicians in monitoring hand conditions remotely, and support decisions on appropriate therapy, medication, and hand orthoses.
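
    The DIGITS app itself is not reproduced in the abstract; the sketch below shows the general MediaPipe Hands pattern it describes, extracting hand landmarks from a single image and computing one finger joint angle. The landmark indices follow MediaPipe's published hand model; the input file and choice of joint are illustrative.

        import math
        import cv2
        import mediapipe as mp

        def joint_angle(a, b, c):
            # Angle (degrees) at landmark b, formed by segments b->a and b->c.
            v1 = (a.x - b.x, a.y - b.y, a.z - b.z)
            v2 = (c.x - b.x, c.y - b.y, c.z - b.z)
            dot = sum(p * q for p, q in zip(v1, v2))
            norm = math.dist(v1, (0, 0, 0)) * math.dist(v2, (0, 0, 0))
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

        with mp.solutions.hands.Hands(static_image_mode=True,
                                      max_num_hands=1) as hands:
            image = cv2.imread("hand.jpg")  # illustrative input image
            results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                # Index-finger PIP flexion: landmarks 5 (MCP), 6 (PIP), 7 (DIP).
                print(f"PIP angle: {joint_angle(lm[5], lm[6], lm[7]):.1f} deg")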

    AI-Powered Interfaces for Extended Reality to support Remote Maintenance

    Modern industrial systems include high-end components that carry out complicated tasks automatically. However, for these parts to function at the desired level, they need to be maintained by qualified experts. Solutions based on Augmented Reality (AR) have been established with the goal of raising production rates and quality while lowering maintenance costs. In this study, we propose advanced hands-free interaction solutions, introducing two interaction interfaces based on wearable targets and human face orientation, with the goal of reducing the bias towards certain users. A comparative investigation against alternative interaction interfaces using traditional devices is conducted in real time. The proposed solutions are supported by several AI-powered methods, such as a novel gravity-map-based motion adjustment enabled by predictive deep models, which reduce the bias of traditional hand- or finger-based interaction interfaces.

    Procedural content creation in VR

    3D content creation for virtual worlds is a difficult task, requiring specialized tools based on a WIMP interface for modelling, composition, and animation. Natural interaction systems for modelling in augmented or virtual reality are currently being developed and studied, making use of pens, handheld controllers, voice commands, tracked hand gestures such as pinching, tapping, and dragging mid-air, etc. We propose a content creation approach for virtual reality, with a focus on making procedural content generation (PCG) intuitive and generalizable. Our approach starts with a library of 3D assets, with which the user populates an initially empty world by placing and replicating objects individually. The user can then construct procedural rules to automate this process on the fly, creating abstract entities that behave like a block of objects while still being treated and manipulated like other singleton objects. To this end, we design a rule system for procedural content generation appropriate for virtual reality, including nested object replication, relative placement and spacing, and randomized selection. We then design and prototype a natural interaction model for virtual reality suited to this rule system, based on two-handed object manipulation, controller input, and user voice commands. A prototype of this interaction model is built and, finally, a formal user evaluation is conducted to assess its viability and identify avenues for improvement and future work.
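
    The rule system is described only at a high level; the sketch below illustrates how nested replication, relative spacing, and randomized selection could compose into abstract block-like entities, as the abstract describes. All names are invented for illustration.

        import random
        from dataclasses import dataclass

        Vec3 = tuple[float, float, float]

        @dataclass
        class ReplicationRule:
            # child: an asset name, a list of names to pick from at random,
            # or a nested rule (an abstract entity behaving like one object).
            child: "str | list[str] | ReplicationRule"
            count: int
            spacing: Vec3  # relative offset between successive copies

            def expand(self, origin: Vec3 = (0.0, 0.0, 0.0)):
                ox, oy, oz = origin
                dx, dy, dz = self.spacing
                for i in range(self.count):
                    pos = (ox + i * dx, oy + i * dy, oz + i * dz)
                    if isinstance(self.child, ReplicationRule):
                        yield from self.child.expand(pos)     # nested replication
                    elif isinstance(self.child, list):
                        yield random.choice(self.child), pos  # randomized selection
                    else:
                        yield self.child, pos

        # A 3 x 4 grid of randomly chosen trees: rows of four nested in three columns.
        row = ReplicationRule(["oak", "pine"], count=4, spacing=(2.0, 0.0, 0.0))
        grid = ReplicationRule(row, count=3, spacing=(0.0, 0.0, 2.0))
        placements = list(grid.expand())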

    Augmented reality meeting table: a novel multi-user interface for architectural design

    Immersive virtual environments have received widespread attention as possible replacements for the media and systems that designers traditionally use, as well as, more generally, for their support of collaborative work. Relatively little attention has been given to date, however, to the problem of how to merge immersive virtual environments into real-world work settings, and so to add to the media at the disposal of the designer and the design team rather than to replace them. In this paper we report on a research project in which optical see-through augmented reality displays have been developed, together with prototype decision support software for architectural and urban design. We suggest that a critical characteristic of multi-user augmented reality is its ability to generate visualisations from a first-person perspective in which the scale of rendition of the design model follows many of the conventions that designers are used to. Different scales of model appear to allow designers to focus on different aspects of the design under consideration. Augmenting the scene with simulations of pedestrian movement appears to assist both in scale recognition and in moving from a first-person to a third-person understanding of the design. This research project is funded by the European Commission IST program (IST-2000-28559).