
    Generalized Trackball and 3D Touch Interaction

    This thesis addresses the problem of 3D interaction by means of touch and mouse input. We propose a multitouch-enabled adaptation of the classical mouse-based trackball interaction scheme. In addition, we introduce a new interaction metaphor based on visiting the space around a virtual object while remaining at a given distance from it. This approach allows intuitive navigation of topologically complex shapes, enabling inexperienced users to visit hard-to-reach parts.
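
    A trackball scheme of this kind maps a 2D drag into a 3D rotation of the object. For reference, here is a minimal Python sketch of one standard formulation of that mapping (Shoemake's arcball); the function names and the normalized screen-coordinate convention are illustrative assumptions, not the thesis's actual implementation.

        import numpy as np

        def to_sphere(x, y, radius=1.0):
            # Project a screen point in [-1, 1]^2 onto a virtual sphere; points
            # outside the sphere land on a hyperbolic sheet so drags stay smooth.
            d2 = x * x + y * y
            if d2 <= radius * radius / 2.0:
                z = np.sqrt(radius * radius - d2)
            else:
                z = radius * radius / (2.0 * np.sqrt(d2))
            v = np.array([x, y, z])
            return v / np.linalg.norm(v)

        def drag_to_quaternion(p0, p1):
            # Unit quaternion (w, x, y, z) rotating sphere point p0 onto p1.
            a, b = to_sphere(*p0), to_sphere(*p1)
            q = np.concatenate(([1.0 + np.dot(a, b)], np.cross(a, b)))
            return q / np.linalg.norm(q)

        # Example: a short drag to the upper right yields a small rotation.
        q = drag_to_quaternion((-0.2, 0.1), (0.3, 0.15))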

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) on its way to practical use, especially for real-world assembly support. In this context, the techniques used to manipulate 3D objects are an important research area. This study therefore developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues related to 3D object position and orientation deviations during manipulation. The research first enhanced the existing device-based 3D object rotation technique with an innovative control structure that uses the tilting and skewing amplitudes of the handheld device to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the threshold amplitudes, the 3D object rotates continuously at a pre-defined angular speed per second, preventing over-rotation of the handheld device. Such over-rotation is common when the existing technique is used for large-range 3D object rotations, and it must be avoided because it causes 3D object registration errors and display issues in which the 3D object no longer appears consistently within the user's view. Secondly, the existing device-based 3D object manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, preventing the position and orientation deviations caused by integrating both DOF into the same control structure. The result is an improved device-based interaction technique with better task completion times for 3D object rotation on its own and for 3D object manipulation as a whole within handheld mobile AR interfaces. A pilot test was carried out before the main experiments to determine several pre-defined values used in the control structure of the proposed rotation technique. A series of 3D object rotation and manipulation tasks was then designed as separate experiments to benchmark the proposed rotation and manipulation techniques against existing ones on task completion time (s). Two groups of participants aged 19-24 took part, with sixteen participants per group. Each participant completed twelve trials, for a total of 192 trials per experiment. Repeated-measures analysis was used to analyze the data. The results show statistically that the developed rotation technique markedly outpaced the existing technique, with task completion times 2.04 s shorter on easy tasks and 3.09 s shorter on hard tasks when comparing mean times over all successful trials. For the failed trials, the proposed rotation technique was 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks than the existing technique. Similar results held for the 3D object manipulation tasks, where the proposed manipulation technique had an overall task completion time 9.529 s shorter than the existing technique. Based on these findings, an improved device-based interaction technique was successfully developed to address the shortcomings of the current technique.
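
    The control structure described above amounts to a dead zone plus a constant-rate rotation. The Python sketch below illustrates that idea only; the threshold and speed constants are placeholders, since the thesis determines its actual values in a pilot test.

        import numpy as np

        # Placeholder constants; the thesis derives its values from a pilot test.
        AMPLITUDE_THRESHOLD_DEG = 15.0   # dead zone before rotation starts
        ANGULAR_SPEED_DEG_S = 45.0       # pre-defined rotation speed once triggered

        def rotation_step(tilt_deg, skew_deg, dt):
            # Map device tilt/skew amplitudes to one frame's object rotation.
            # Returns (axis, signed angle in degrees) for a frame of duration dt.
            # Rotation runs at a constant rate once a threshold is crossed, so
            # the user never has to over-rotate the device itself.
            for axis, amplitude in (("x", tilt_deg), ("y", skew_deg)):
                if abs(amplitude) > AMPLITUDE_THRESHOLD_DEG:
                    return axis, float(np.sign(amplitude)) * ANGULAR_SPEED_DEG_S * dt
            return None, 0.0  # inside the dead zone: no rotation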

    An Inertial Device-based User Interaction with Occlusion-free Object Handling in a Handheld Augmented Reality

    Augmented Reality (AR) is a technology that merges virtual objects with real environments in real time. The interaction between the end user and the AR system has always been a frequently discussed topic, and handheld AR is a newer approach that delivers enriched 3D virtual objects when a user looks through the device's video camera. Among the most widely adopted handheld devices today are smartphones, which combine powerful processors and cameras for capturing still images and video with a range of sensors capable of tracking the location, orientation and motion of the user. These modern smartphones offer a sophisticated platform for implementing handheld AR applications. However, handheld displays often inherit interaction metaphors that were developed for head-mounted displays and may depend on hardware that is inappropriate for handheld use. This paper therefore proposes a real-time inertial device-based interaction technique for 3D object manipulation and explains the methods used for selection, holding, translation and rotation. The technique aims to overcome a key limitation of 3D object manipulation by letting the user hold the device with both hands, without needing to stretch out one hand to manipulate the 3D object. The paper also recaps previous work in the fields of AR and handheld AR. Finally, it presents experimental results that offer new metaphors for manipulating 3D objects using handheld devices.
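
    As a rough illustration of how such a two-handed inertial technique might be structured, the Python sketch below models selection as a tap that toggles a holding state, after which device motion from the inertial sensors drives the object. All names and the pose-delta convention are hypothetical; the paper's actual technique may differ.

        from enum import Enum, auto

        class Mode(Enum):
            IDLE = auto()
            HOLDING = auto()

        class InertialManipulator:
            # Hypothetical sketch: both hands stay on the device; a tap toggles
            # holding the object, and device motion then manipulates it.
            def __init__(self):
                self.mode = Mode.IDLE
                self.target = None

            def on_tap(self, hit_object):
                # Selection/holding: tap an object to grab it, tap again to release.
                if self.mode is Mode.IDLE and hit_object is not None:
                    self.mode, self.target = Mode.HOLDING, hit_object
                elif self.mode is Mode.HOLDING:
                    self.mode, self.target = Mode.IDLE, None

            def on_device_pose_delta(self, dpos, drot):
                # While holding, per-frame device motion from the inertial sensors
                # (dpos: 3-vector, drot: 3x3 rotation matrix) is applied directly
                # to the virtual object.
                if self.mode is Mode.HOLDING:
                    self.target.position = self.target.position + dpos
                    self.target.orientation = drot @ self.target.orientation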

    Gaze-shifting: direct-indirect input with pen and touch modulated by gaze

    Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation, whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input.
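
    The core mode switch reduces to a distance test between the manual input point and the gaze point. The Python sketch below illustrates this; the pixel threshold is an assumed placeholder, not a value from the paper.

        import math

        GAZE_ALIGNMENT_RADIUS_PX = 120  # assumed threshold, not from the paper

        def classify_input(touch_xy, gaze_xy):
            # Direct manipulation if the pen/touch lands within the user's area
            # of visual attention; otherwise the input is redirected to the
            # gaze target and treated as indirect (relative) input.
            if math.dist(touch_xy, gaze_xy) <= GAZE_ALIGNMENT_RADIUS_PX:
                return "direct", touch_xy
            return "indirect", gaze_xy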

    An Exploration of Multi-touch Interaction Techniques

    Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been designed to give the most intuitive mapping between the movement of the hand and the resulting change in the virtual object. As we attempt to design for more complex operations, the effectiveness of spatial manipulation as a metaphor weakens. This work introduces two new contributions to multi-touch computing: a gesture recognition system and a new interaction technique. I present Multi-Tap Sliders, a new interaction technique for operating in what we call non-spatial parametric spaces. Such spaces have no obvious literal spatial representation (e.g., exposure, brightness, contrast and saturation for image editing). Multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface. My research emphasizes ergonomics, clear visual design, and fluid transitions between modes of operation. Through a series of iterations, I develop a new technique for quickly selecting and adjusting multiple numerical parameters. Evaluations of multi-tap sliders show improvements over traditional sliders. To facilitate further research on multi-touch gestural interaction, I developed mGestr, a training and recognition system using hidden Markov models for designing a multi-touch gesture set. Our evaluation shows successful recognition rates of up to 95%. The recognition framework is packaged as a service for easy integration with existing applications.
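
    The multi-tap slider concept can be pictured as tap-count parameter selection followed by relative drag adjustment, which is what lets the eyes stay on the target. The Python sketch below is a hypothetical reading of the abstract; the parameter list and value mapping are illustrative assumptions.

        # Hypothetical reading of the multi-tap slider: N taps select the N-th
        # parameter, and a subsequent horizontal drag adjusts it relatively,
        # so visual focus can stay on the image being edited.
        PARAMS = ["exposure", "brightness", "contrast", "saturation"]

        class MultiTapSlider:
            def __init__(self):
                self.values = {p: 0.5 for p in PARAMS}   # all parameters in [0, 1]
                self.active = None

            def on_taps(self, tap_count):
                # One tap selects the first parameter, two taps the second, etc.
                if 1 <= tap_count <= len(PARAMS):
                    self.active = PARAMS[tap_count - 1]

            def on_drag(self, dx_normalized):
                # Relative adjustment: drag distance (as a fraction of slider
                # width) is added to the active value, clamped to [0, 1].
                if self.active is not None:
                    v = self.values[self.active] + dx_normalized
                    self.values[self.active] = min(1.0, max(0.0, v))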

    Reusing Ambient Light to Recognize Hand Gestures

    In this paper, we explore the feasibility of reusing ambient light to recognize human gestures. We present GestureLite, a system that provides hand gesture detection and classification using the pre-existing light in a room. We observe that in an environment with a reasonably consistent lighting scheme, a given gesture will block some light rays and leave others unobstructed, resulting in the user casting a unique shadow pattern for that movement. GestureLite captures these unique shadow patterns using a small array of light sensors. Using standard machine learning techniques, GestureLite can learn these patterns and recognize new instances of specific gestures when the user performs them. We tested GestureLite using a 10-gesture dictionary in several real-world environments and found it achieves, on average, a gesture recognition accuracy of 98%.
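
    The abstract names only "standard machine learning techniques". As one concrete possibility, the Python sketch below trains a k-nearest-neighbour classifier (scikit-learn) on flattened light-sensor windows; the feature layout and classifier choice are assumptions for illustration, not GestureLite's actual pipeline.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def train(X_train, y_train):
            # X_train: (n_samples, n_sensors * n_timesteps) flattened shadow
            # patterns from the light-sensor array; y_train: gesture labels.
            clf = KNeighborsClassifier(n_neighbors=3)
            clf.fit(X_train, y_train)
            return clf

        def recognize(clf, window):
            # window: one new (n_sensors, n_timesteps) array of sensor readings.
            return clf.predict(np.asarray(window).reshape(1, -1))[0]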

    Efficiency and Ergonomics of Multitouch Interaction: Studies and Prototypes for Evaluating and Optimizing Central Interaction Techniques

    This thesis deals with fundamental questions of the effectiveness, efficiency and satisfaction of multitouch interactions. Using a novel multitouch interface for 3D animation, it was shown that even inexperienced multitouch users are capable of solving highly complex tasks in a coordinated and efficient way. A newly developed coordination measure confirms that users exploit the advantage of multitouch by using several fingers simultaneously to create 3D animations in real time. Three further studies on central multitouch interaction techniques showed that the original formulation of Fitts' law is not sufficient to adequately evaluate and analyze the efficiency of multitouch interactions. Fitts' law is a model for predicting and analyzing interaction times that originally takes into account only the distance of the interaction movement and the target size. This work shows that predictions based on Fitts' law give better results when, in addition to these two factors, the direction of the movement, its starting point and the tilt of the multitouch display are also considered. The results of this work provide guidance for developing efficient and user-friendly interaction techniques, and they could also be used to semi-automatically analyze interaction techniques for multitouch.
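
    For reference, the original (Shannon) formulation of Fitts' law is given below in LaTeX, together with an illustrative extended form carrying additive terms for the extra factors the thesis found relevant. The exact functional form of the extension is an assumption, since the abstract does not specify it.

        % Original Fitts' law (Shannon formulation): movement time MT as a
        % function of movement distance D and target width W.
        MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

        % Illustrative extension with additive terms for movement direction
        % \theta, starting point s, and display tilt \phi (assumed form):
        MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) + c\,g(\theta) + d\,h(s) + e\,k(\phi)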

    Automated Tracking of Hand Hygiene Stages

    The European Centre for Disease Prevention and Control (ECDC) estimates that 2.5 million cases of Hospital Acquired Infections (HAIs) occur each year in the European Union. Hand hygiene is regarded as one of the most important preventive measures for HAIs. If it is implemented properly, hand hygiene can reduce the risk of cross-transmission of an infection in the healthcare environment. Good hand hygiene is not only important for healthcare settings. The recent ongoing coronavirus pandemic has highlighted the importance of hand hygiene practices in our daily lives, with governments and health authorities around the world promoting good hand hygiene practices. The WHO has published guidelines on hand hygiene stages to promote good hand washing practices. A significant amount of existing research has focused on the problem of tracking hands to enable hand gesture recognition. In this work, gesture tracking devices and image processing are explored in the context of the hand washing environment. Hand washing videos of professional healthcare workers were carefully observed and analyzed in order to recognize hand features associated with hand hygiene stages that could be extracted automatically. Selected hand features such as palm shape (flat or curved), palm orientation (palms facing or not) and hand trajectory (linear or circular movement) were then extracted and tracked with the help of a 3D gesture tracking device, the Leap Motion Controller. These features were further coupled together to detect the execution of a required WHO hand hygiene stage, "Rub hands palm to palm", with the help of the Leap sensor in real time. In certain conditions, the Leap Motion Controller enables a clear distinction to be made between the left and right hands. However, whenever the two hands came into contact with each other, sensor data from the Leap, such as palm position and palm orientation, was lost for one of the two hands. Hand occlusion was found to be a major drawback in applying the device to this use case. Therefore, RGB digital cameras were selected for further processing and tracking of the hands. An image processing technique using a skin detection algorithm was applied to extract instantaneous hand positions for further processing, enabling various hand hygiene poses to be detected. Contour and centroid detection algorithms were further applied to track the hand trajectory in hand hygiene video recordings. In addition, feature detection algorithms were applied to a hand hygiene pose to extract useful hand features. The video recordings did not suffer from occlusion as is the case for the Leap sensor, but the segmentation of one hand from another was identified as a major challenge with images, because contour detection resulted in a continuous mass when the two hands were in contact. For future work, the data from gesture trackers, such as the Leap Motion Controller, and cameras (with image processing) could be combined to make a robust hand hygiene gesture classification system.
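
    The camera-based pipeline described above (skin detection, then contour and centroid extraction) can be sketched with OpenCV in Python as follows; the HSV skin bounds and the area threshold are illustrative placeholders that a real system would calibrate per scene.

        import cv2
        import numpy as np

        # Illustrative HSV skin-colour bounds; real systems calibrate these.
        SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
        SKIN_HIGH = np.array([25, 160, 255], dtype=np.uint8)

        def hand_centroids(frame_bgr):
            # Skin-detect, find hand contours, and return their centroids.
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
            centroids = []
            for c in contours:
                m = cv2.moments(c)
                if m["m00"] > 1000:  # ignore small skin-coloured noise
                    centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            # The limitation noted in the abstract: touching hands merge into a
            # single contour, so only one centroid is returned for both hands.
            return centroids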

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result confirmed in a second study that used the augmented touches for a screen lock application.
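
    As a hypothetical Python sketch of how the two reliably producible shapes and three orientations might be discretized from raw sensor values: the area threshold and orientation bins below are assumptions for illustration, not the study's parameters.

        import math

        # Illustrative threshold; actual values would come from calibration.
        FLAT_AREA_MM2 = 80.0   # contact area above this reads as a "flat" touch

        def classify_touch(contact_area_mm2, orientation_rad):
            # Map a sensed contact into one of two shapes and three orientation
            # bins, matching the granularity users produced reliably.
            shape = "flat" if contact_area_mm2 >= FLAT_AREA_MM2 else "tip"
            deg = math.degrees(orientation_rad) % 180.0
            if deg < 60.0:
                orientation = "left"
            elif deg < 120.0:
                orientation = "up"
            else:
                orientation = "right"
            return shape, orientation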