5 research outputs found

    Jointly optimizing sensing pipelines for multimodal mixed reality interaction

    National Research Foundation (NRF) Singapore under International Research Centres in Singapore Funding Initiative; Ministry of Education, Singapore under its Academic Research Funding Tier

    Remote and Deviceless Manipulation of Virtual Objects in Mixed Reality

    Deviceless manipulation of virtual objects in mixed reality (MR) environments is technically achievable with the current generation of Head-Mounted Displays (HMDs), as they track finger movements and let users control object transformations through gestures. However, when the manipulation is performed at a distance, and when the transform includes scaling, it is not obvious how to remap the hand motions onto the degrees of freedom of the object. Different solutions have been implemented in software toolkits, but usability issues remain and there are no clear guidelines for the interaction design. We present a user study evaluating three solutions for the remote translation, rotation, and scaling of virtual objects in the real environment without handheld devices. We analyze their usability on the practical task of docking virtual cubes on a tangible shelf from varying distances. The outcomes of our study show that the usability of the methods is strongly affected by separate versus integrated control of the degrees of freedom, by symmetric versus specialized use of the hands, by the visual feedback, and by the previous experience of the users.
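
    To make the remapping problem concrete, the sketch below shows one generic way to drive the translation, rotation, and scaling of a distant object from two tracked hands. It is not one of the three methods evaluated in the study; the amplification gain and the midpoint/inter-hand-distance mapping are assumptions chosen for illustration.

        import numpy as np

        def remap_two_handed(lp, lc, rp, rc, gain=3.0):
            """Map symmetric two-handed motion to a delta-transform for a remote object.

            lp/lc and rp/rc are the previous/current 3D pinch positions (np.array)
            of the left and right hand. The object follows the amplified motion of
            the hands' midpoint, yaws with the heading of the inter-hand vector,
            and scales with the inter-hand distance.
            """
            translation = gain * ((lc + rc) / 2.0 - (lp + rp) / 2.0)
            v_prev, v_cur = rp - lp, rc - lc
            yaw = np.arctan2(v_cur[0], v_cur[2]) - np.arctan2(v_prev[0], v_prev[2])
            scale = np.linalg.norm(v_cur) / max(np.linalg.norm(v_prev), 1e-6)
            return translation, yaw, scale

    This particular mapping integrates all degrees of freedom into one symmetric bimanual gesture; the study's finding is precisely that such choices (integrated versus separate control, symmetric versus specialized hands) strongly affect usability.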

    Designing ray-pointing using real hand and touch-based in handheld augmented reality for object selection

    Augmented Reality (AR) has been widely explored for its potential to enhance information representation. As technology progresses, smartphones (handheld devices) now have sophisticated processors and cameras for capturing photographs and video, as well as a variety of sensors for tracking the user's position, orientation, and motion. This paper therefore discusses a real-time finger-ray pointing technique for interaction in handheld AR and compares it with the conventional handheld touch-screen interaction technique. The aim of this paper is to explore ray-pointing interaction in handheld AR for 3D object selection. Previous work in handheld AR, also covering Mixed Reality (MR), is reviewed.
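
    At its core, a finger-ray technique of this kind casts a ray from the tracked fingertip into the scene and selects the nearest intersected object. A minimal sketch, assuming objects are approximated by bounding spheres, might look as follows.

        import numpy as np

        def ray_sphere(origin, direction, center, radius):
            """Distance along the ray to the sphere, or None on a miss."""
            d = direction / np.linalg.norm(direction)
            oc = origin - center
            b = np.dot(oc, d)
            disc = b * b - (np.dot(oc, oc) - radius * radius)
            if disc < 0.0:
                return None
            t = -b - np.sqrt(disc)
            return t if t >= 0.0 else None

        def pick(fingertip, finger_dir, objects):
            """Return the id of the nearest object hit by the finger ray.

            objects maps an id to a (center, radius) bounding sphere.
            """
            hits = [(t, oid) for oid, (c, r) in objects.items()
                    if (t := ray_sphere(fingertip, finger_dir, c, r)) is not None]
            return min(hits, key=lambda h: h[0])[1] if hits else None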

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) to reach its practical potential, especially for real-world assembly support. In this context, techniques for manipulating 3D objects are an important research area. This study therefore developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues of 3D object position and orientation deviations during manipulation.

    First, the existing device-based 3D object rotation technique was enhanced with a control structure that uses the tilting and skewing amplitudes of the handheld device to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the threshold amplitudes, the 3D object rotates continuously at a pre-defined angular speed per second, which prevents over-rotation of the handheld device. Such over-rotation is common when the existing technique is used for large-range rotations, and it must be avoided because it causes 3D object registration errors and display issues in which the object does not appear consistent within the user's view. Second, the existing device-based manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, preventing the position and orientation deviations caused by a DOF integration that uses the same control structure for both tasks. The result is an improved device-based interaction technique with better task completion times for 3D object rotation in isolation and for 3D object manipulation as a whole within handheld mobile AR interfaces.

    A pilot test was carried out before the main tests to determine several pre-defined values in the control structure of the proposed rotation technique. A series of 3D object rotation and manipulation tasks was then designed as separate experiments to benchmark the proposed techniques against the existing ones on task completion time (s). Two groups of participants aged 19-24 were recruited, sixteen per group, and each participant completed twelve trials, for a total of 192 trials per experiment. Repeated-measures analysis showed that the proposed rotation technique significantly outperformed the existing one, with mean task completion times over all successful trials 2.04 s shorter on easy tasks and 3.09 s shorter on hard tasks. On the failed trials, the proposed rotation technique was 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks. Similar results extended to the manipulation tasks, where the proposed technique's overall task completion time was a significant 9.529 s shorter than the existing technique's. Based on these findings, an improved device-based interaction technique was successfully developed to address the insufficient functionality of the current technique.
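
    The control structure described above, where tilting or skewing the device past a threshold triggers continuous rotation at a fixed angular speed, can be sketched as follows. The threshold and speed constants are placeholders; the study determined its actual pre-defined values in the pilot test.

        import math

        TILT_THRESHOLD_DEG = 15.0   # assumed dead-zone; the real value came from the pilot test
        ANGULAR_SPEED_DEG = 45.0    # assumed pre-defined angular speed per second

        def rotation_step(pitch_deg, roll_deg, dt):
            """Convert device tilt/skew into a per-frame object-rotation increment.

            While a tilt amplitude exceeds the threshold, the object keeps rotating
            about that axis at a constant rate, so large object rotations never
            require over-rotating the device itself.
            """
            d_pitch = (math.copysign(ANGULAR_SPEED_DEG * dt, pitch_deg)
                       if abs(pitch_deg) > TILT_THRESHOLD_DEG else 0.0)
            d_roll = (math.copysign(ANGULAR_SPEED_DEG * dt, roll_deg)
                      if abs(roll_deg) > TILT_THRESHOLD_DEG else 0.0)
            return d_pitch, d_roll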

    An empirical study of virtual reality menu interaction and design

    This study examined three menu designs, each with its own interactions and organizational structure, to determine which design features performed best. Fifty-four participants completed 27 tasks with each of the three designs. The menus were analyzed on task performance, accuracy, usability, intuitiveness, and user preference, and a further analysis compared two menu organization styles: top-down (Method-TD) and bottom-up (Method-BU). There was no evidence that demographic factors affected the overall results. The Stacked menu design received very positive results and feedback from all the participants. The Spatial design received average feedback: some participants preferred it, while others struggled with it and felt it was too physically demanding. The worst performer was the Radial design, which consistently ranked last and failed the usability and accuracy tests. An NGOMSL study was conducted to determine any performance differences between the top-down and bottom-up organizational approaches, and any differences between predicted and reported task completion times. The NGOMSL analysis predicted that the Spatial design should take the least time, yet the experimental results showed that the Stacked design in fact outperformed the Spatial design's task completion times. A potential explanation is the physical demand of the Spatial design, which the NGOMSL analysis did not anticipate: one design feature made its interactions highly cumbersome. Overall, no statistical differences were found between Method-TD and Method-BU, but large differences were found between the predicted and observed times for the Stacked, Radial, and Spatial designs: participants overwhelmingly beat the predicted completion times with the Stacked design but missed them with the Radial and Spatial designs. This study recommends the Stacked menu for VR environments and proposes further research into a Stacked-Spatial hybrid design that combines the participants' preferred aspects of both designs.
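
    For context on where such predictions come from: NGOMSL estimates task completion time by summing per-operator execution times over the method a user follows. The toy sketch below uses assumed keystroke-level-style operator values, not the study's actual model, and illustrates why an unmodeled cost such as physical effort can make observed times diverge from predictions.

        # Assumed operator times in seconds (illustrative, not the study's values).
        OPERATOR_TIMES = {"M": 1.35, "P": 1.10, "S": 0.20}  # mental prep, point, select

        def predicted_time(operators):
            """NGOMSL-style prediction: sum the execution times of the operators."""
            return sum(OPERATOR_TIMES[op] for op in operators)

        # Selecting an item two menu levels deep: prepare, point, select, twice.
        print(predicted_time(["M", "P", "S", "M", "P", "S"]))  # 5.3 s predicted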