22 research outputs found

    Usability Analysis of an off-the-shelf Hand Posture Estimation Sensor for Freehand Physical Interaction in Egocentric Mixed Reality

    This paper explores freehand physical interaction in egocentric Mixed Reality through a usability study of an off-the-shelf hand posture estimation sensor. We report precision, interactivity and usability metrics from a task-based user study that examines the importance of additional visual cues during interaction. A total of 750 interactions were recorded from 30 participants performing 5 different interaction tasks (Move; Rotate: pitch (Y axis) and yaw (Z axis); Uniform scale: enlarge and shrink). Additional visual cues resulted in a shorter average time to interact; however, no consistent statistically significant differences were found between groups for performance and precision. The group with additional visual cues gave the system an average System Usability Scale (SUS) score of 72.33 (SD = 16.24), while the other group scored 68.0 (SD = 18.68). Overall, additional visual cues led to the system being perceived as more usable, even though the two conditions had limited effect on precision and interactivity metrics.
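
    The System Usability Scale scores reported above come from the standard 10-item questionnaire with responses on a 1-5 scale. As a point of reference, the Python sketch below applies the standard SUS scoring rule to a hypothetical set of responses; the example data is illustrative, not taken from this study.

    def sus_score(responses):
        """Standard System Usability Scale score from ten 1-5 Likert responses.

        Odd-numbered items (positive statements) contribute (response - 1),
        even-numbered items (negative statements) contribute (5 - response);
        the sum is scaled by 2.5 onto a 0-100 range.
        """
        if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten responses in the range 1-5")
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1 (odd-numbered)
            for i, r in enumerate(responses)
        ]
        return 2.5 * sum(contributions)

    # Hypothetical participant: positive answers on odd items, negative on even items.
    print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0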

    Natural freehand grasping of virtual objects for augmented reality

    Grasping is a primary form of interaction with the surrounding world and, owing to the highly complex structure of the human hand, an intuitive interaction technique by nature. Translating this versatile technique to Augmented Reality (AR) can give interaction designers more opportunities to implement intuitive and realistic AR applications. The work presented in this thesis uses quantifiable measures to evaluate the accuracy and usability of natural grasping of virtual objects in AR environments, and presents methods for improving this natural form of interaction. Following a review of physical grasping parameters and current methods of mediating grasping interactions in AR, a comprehensive analysis of natural freehand grasping of virtual objects in AR is presented to assess the accuracy, usability and transferability of this natural form of grasping to AR environments. The analysis comprises four independent user studies (120 participants, 30 per study, and 5760 grasping tasks in total), in which natural freehand grasping performance is assessed for a range of virtual object sizes, positions and types in terms of grasping accuracy, task completion time and overall system usability. Findings from the first user study highlighted two key problems for natural grasping in AR: inaccurate depth estimation and inaccurate size estimation of virtual objects. Following the quantification of these errors, three methods for mitigating user errors and assisting users during natural grasping were presented and analysed: dual-view visual feedback, drop shadows, and additional visual feedback combined with user-based tolerances during interaction tasks. Dual-view visual feedback significantly improved user depth estimation, but it also significantly increased task completion time. Drop shadows provided an alternative, more usable solution, significantly improving depth estimation, task completion time and the overall usability of natural grasping. User-based tolerances addressed the fundamental problem of inaccurate size estimation of virtual objects by enabling users to perform natural grasping without needing to be highly accurate, providing evidence that natural grasping can be usable in task-based AR environments. Finally, recommendations for enabling and further improving natural grasping interaction in AR environments are provided, along with guidelines for translating this form of natural grasping to other AR environments and user interfaces.
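
    The user-based tolerances mentioned above can be illustrated with a minimal sketch: a fingertip counts as touching a virtual object when it lies within a tolerance band around the object's surface, so users do not have to judge depth and size perfectly. The sphere geometry, tolerance value and grasp heuristic below are illustrative assumptions, not the method evaluated in the thesis.

    import numpy as np

    def fingers_touching_sphere(fingertips, center, radius, tolerance=0.015):
        """Return which fingertips count as touching a spherical virtual object.

        A fingertip is accepted when its distance to the object's surface is
        within `tolerance` metres. fingertips: (N, 3) positions in metres.
        """
        distances = np.linalg.norm(fingertips - center, axis=1)
        return np.abs(distances - radius) <= tolerance

    def is_grasping(fingertips, thumb_index, center, radius, tolerance=0.015):
        """Simple grasp heuristic: the thumb and at least one other finger must
        lie within the tolerance band around the object's surface."""
        touching = fingers_touching_sphere(fingertips, center, radius, tolerance)
        others = np.delete(touching, thumb_index)
        return bool(touching[thumb_index] and others.any())

    # Hypothetical hand: thumb (index 0) and index finger near a 5 cm sphere at the origin.
    tips = np.array([[0.049, 0.0, 0.0], [-0.052, 0.0, 0.0],
                     [0.0, 0.2, 0.0], [0.0, 0.25, 0.0], [0.0, 0.3, 0.0]])
    print(is_grasping(tips, thumb_index=0, center=np.zeros(3), radius=0.05))  # True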

    Freehand Grasping: An Analysis of Grasping for Docking Tasks in Virtual Reality

    Natural and intuitive interaction in VR, such as grasping virtual objects, is still a significant challenge. While recent studies have begun to explore interactions that aim to seamlessly create virtual environments that mimic reality as closely as possible, the dexterous versatility of the human grasp poses significant challenges for usable and intuitive interaction. At present, the design considerations for natural grasping-based interactions in VR are usually drawn from the body of historical knowledge on real object grasping. While this may be suitable for some applications, recent work has shown that users in VR grasp virtual objects differently than they would grasp real objects. These interaction assumptions may therefore not be directly applicable to furthering the natural interface for users of VR, leaving a gap in knowledge on how users intuitively grasp virtual objects. To begin to address this, we present two experiments in which participants (N=39) grasped 16 virtual objects categorised by shape in a mixed docking task exploring rotation, placement and target location. We report on a Wizard of Oz methodology and extract grasp types, grasp categories and grasp dimensions. We further provide insights into virtual object categorisation for assessing interaction patterns, and into how these could be used to develop natural and intuitive grasp models by parameterising the grasp types found in these experiments. Our results can be taken forward into a framework of recommendations for grasping interactions and thus begin to bridge the gap in understanding natural grasping patterns for VR object interactions.
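
    The extracted grasp annotations lend themselves to a simple tabular representation; the sketch below shows one hypothetical way to record grasp type, category and dimension per trial and to tally interaction patterns per object shape. Field names and example values are illustrative, not the study's actual coding scheme.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GraspObservation:
        """One annotated grasp from a docking trial (illustrative fields)."""
        participant: int
        object_shape: str      # e.g. "cuboid", "cylinder", "sphere"
        grasp_type: str        # e.g. "medium wrap", "precision pinch"
        grasp_category: str    # e.g. "power", "precision", "intermediate"
        grasp_dimension: str   # e.g. "palm", "pad", "side"

    def grasp_type_frequencies(observations, shape):
        """Tally how often each grasp type occurs for a given object shape,
        the kind of interaction pattern used to parameterise grasp models."""
        return Counter(o.grasp_type for o in observations if o.object_shape == shape)

    obs = [
        GraspObservation(1, "cylinder", "medium wrap", "power", "palm"),
        GraspObservation(2, "cylinder", "precision pinch", "precision", "pad"),
        GraspObservation(3, "cylinder", "medium wrap", "power", "palm"),
    ]
    print(grasp_type_frequencies(obs, "cylinder"))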

    Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an interesting means to facilitate human-machine interaction and to help ensure the correct sequence of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel human-computer interaction approaches such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, including consideration of the effect of altered gravity on handling such interfaces.

    Force-Aware Interface via Electromyography for Natural VR/AR Interaction

    While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings to push forward research towards more realistic physicality in future VR/AR. Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2022).
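
    The interface described above maps surface EMG to per-finger forces with a learned model; the sketch below shows a minimal regression network of that general kind in PyTorch. The channel count, window length, layer sizes and non-negativity constraint are illustrative assumptions, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class EMGForceDecoder(nn.Module):
        """Small regression network mapping a window of forearm EMG samples to
        per-finger force estimates (illustrative architecture)."""

        def __init__(self, n_channels: int = 8, window: int = 64, n_fingers: int = 5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),                        # (B, C, T) -> (B, C*T)
                nn.Linear(n_channels * window, 256),
                nn.ReLU(),
                nn.Linear(256, 64),
                nn.ReLU(),
                nn.Linear(64, n_fingers),            # one force value per finger
                nn.Softplus(),                       # forces are non-negative
            )

        def forward(self, emg: torch.Tensor) -> torch.Tensor:
            return self.net(emg)

    # Hypothetical usage: a batch of four 8-channel EMG windows, 64 samples each.
    model = EMGForceDecoder()
    forces = model(torch.randn(4, 8, 64))            # shape (4, 5)
    print(forces.shape)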

    Methodology for the Field Evaluation of the Impact of Augmented Reality Tools for Maintenance Workers in the Aeronautic Industry

    Augmented Reality (AR) enhances the comprehension of complex situations by making the handling of contextual information easier. Maintenance activities in aeronautics consist of complex tasks carried out on various high-technology products under severe constraints from the sector and work environment. AR tools appear to be a potential solution to improve interactions between workers and technical data in order to increase the productivity and quality of aeronautical maintenance activities. However, assessments of the actual impact of AR on industrial processes are limited due to a lack of methods and tools to assist in the integration and evaluation of AR tools in the field. This paper presents a method for deploying AR tools adapted to maintenance workers and for selecting relevant criteria for evaluating their impact in an industrial context. This method is applied to design an AR tool for the maintenance workshop, to experiment on real use cases, and to observe the impact of AR on productivity and user satisfaction for all worker profiles. Further work aims to generalize the results to the whole maintenance process in the aeronautical industry. The use of the collected data should enable prediction of the impact of AR for related maintenance activities.

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With ever-increasing display resolution, more accurate tracking and falling prices, Virtual Reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction. Moreover, the limited feature set of existing software forces users to fall back on conventional PC- or touch-based systems. Collaboration with other users at the same location poses challenges regarding the calibration of different tracking systems and collision avoidance, while in remote collaboration interaction is additionally affected by latency and connection losses. Finally, users have different requirements for the visualization of content within the virtual worlds, e.g. size, orientation, color or contrast. Strictly replicating real environments in VR wastes potential and will not make it possible to account for users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration and augmentation of virtual worlds and users, aiming to increase the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and feature set of existing applications in VR. Virtual proxies of physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements and little effort. Since VR headsets block out the users' real surroundings, the relevance of a full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that increase users' usability, work performance and social presence. Discrepancies between the virtual worlds caused by personal adaptations are compensated for by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to illustrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome limitations of the real world and to enhance the experience of VR environments.
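
    The co-located calibration problem mentioned above can be illustrated with a minimal sketch: assuming both tracking systems observe the same set of corresponding 3D points, a rigid transform aligning one coordinate frame to the other can be estimated with the Kabsch algorithm. The point correspondences and names below are illustrative assumptions, not the algorithm developed in the thesis.

    import numpy as np

    def estimate_rigid_transform(src, dst):
        """Estimate rotation R and translation t with R @ src_i + t ~= dst_i.

        src, dst: (N, 3) corresponding 3D points from two tracking systems.
        Kabsch algorithm via SVD; assumes N >= 3 non-degenerate points.
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c                          # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Hypothetical data: the same points seen by two co-located tracking systems.
    theta = np.deg2rad(30.0)
    true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    points_a = np.random.rand(10, 3)
    points_b = points_a @ true_R.T + np.array([0.5, 0.0, 1.2])
    R, t = estimate_rigid_transform(points_a, points_b)
    print(np.linalg.norm(points_a @ R.T + t - points_b, axis=1).mean())  # ~0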

    A Body-and-Mind-Centric Approach to Wearable Personal Assistants


    Real-time 3D hand reconstruction in challenging scenes from a single color or depth camera

    Hands are one of the main enabling factors for performing complex tasks, and humans naturally use them for interactions with their environment. Reconstruction and digitization of 3D hand motion opens up many possibilities for important applications. Hand gestures can be directly used for human-computer interaction, which is especially relevant for controlling augmented or virtual reality (AR/VR) devices where immersion is of utmost importance. In addition, 3D hand motion capture is a precondition for automatic sign-language translation, activity recognition, or teaching robots. Different approaches for 3D hand motion capture have been actively researched in the past. While accurate, gloves and markers are intrusive and uncomfortable to wear. Hence, markerless hand reconstruction based on cameras is desirable. Multi-camera setups provide rich input; however, they are hard to calibrate and lack the flexibility for mobile use cases. Thus, the majority of more recent methods use a single color or depth camera, which, however, makes the problem harder due to more ambiguities in the input. For interaction purposes, users need continuous control and immediate feedback. This means the algorithms have to run in real time and be robust in uncontrolled scenes. These requirements, achieving 3D hand reconstruction in real time from a single camera in general scenes, make the problem significantly more challenging. While recent research has shown promising results, current state-of-the-art methods still have strong limitations. Most approaches only track the motion of a single hand in isolation and do not take background clutter or interactions with arbitrary objects or the other hand into account. The few methods that can handle more general and natural scenarios run far from real time or use complex multi-camera setups. Such requirements make existing methods unusable for many of the aforementioned applications. This thesis pushes the state of the art for real-time 3D hand tracking and reconstruction in general scenes from a single RGB or depth camera. The presented approaches explore novel combinations of generative hand models, which have been used successfully in the computer vision and graphics community for decades, and powerful cutting-edge machine learning techniques, which have recently emerged with the advent of deep learning. In particular, this thesis proposes a novel method for hand tracking in the presence of strong occlusions and clutter, the first method for full global 3D hand tracking from in-the-wild RGB video, and a method for simultaneous pose and dense shape reconstruction of two interacting hands that, for the first time, combines a set of desirable properties previously unseen in the literature.
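
    The combination of learned detections and generative hand models described above can be illustrated with a minimal sketch: 2D keypoint detections (e.g. from a CNN) constrain the pose of a kinematic model by minimizing reprojection error. The toy two-bone finger chain, bone lengths and camera intrinsics below are assumptions for illustration, not the hand model used in the thesis.

    import numpy as np
    from scipy.optimize import least_squares

    BONE_LENGTHS = np.array([0.04, 0.03])       # metres, illustrative
    FOCAL, CX, CY = 600.0, 320.0, 240.0         # assumed pinhole intrinsics

    def forward_kinematics(angles, root=np.array([0.0, 0.0, 0.4])):
        """3D keypoints of a toy two-joint finger chain in camera coordinates."""
        pts, p, total = [root], root.copy(), 0.0
        for length, angle in zip(BONE_LENGTHS, angles):
            total += angle
            p = p + length * np.array([np.sin(total), np.cos(total), 0.0])
            pts.append(p)
        return np.array(pts)

    def project(points3d):
        """Pinhole projection of (N, 3) camera-space points to pixel coordinates."""
        return np.stack([FOCAL * points3d[:, 0] / points3d[:, 2] + CX,
                         FOCAL * points3d[:, 1] / points3d[:, 2] + CY], axis=1)

    def residuals(angles, detected2d):
        """Reprojection error between model keypoints and 2D detections."""
        return (project(forward_kinematics(angles)) - detected2d).ravel()

    # Hypothetical detections generated from a known pose, then recovered by fitting.
    detections = project(forward_kinematics(np.array([0.4, 0.6])))
    fit = least_squares(residuals, x0=np.zeros(2), args=(detections,))
    print(fit.x)    # close to [0.4, 0.6]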