
    Usability of immersive virtual reality input devices

    This research conducts a usability analysis of human interface devices within an Immersive Virtual Reality Environment. The analysis is carried out for two different interface devices: a commercially available InterSense© wand and a home-built pinch glove with wireless receiver. Users were asked to carry out a series of minor tasks involving the placement of shaped blocks into corresponding holes within an Immersive Virtual Reality Environment. Performance was evaluated in terms of speed, accuracy, and precision via the collection of completion times, errors made, and the precision of motion during the experiment.
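
    As a rough illustration of how such trial logs might be reduced to the three reported measures (speed, accuracy, precision), here is a minimal Python sketch; the field names and the spread-based precision measure are assumptions for illustration, not the study's actual instrumentation.

        from statistics import mean, pstdev

        def summarize_trials(trials):
            # Speed: mean task completion time; accuracy: total errors made;
            # precision: dispersion of the sampled motion path.
            times = [t["completion_time_s"] for t in trials]
            errors = sum(t["errors"] for t in trials)
            spreads = []
            for t in trials:
                xs = [p[0] for p in t["path"]]
                ys = [p[1] for p in t["path"]]
                spreads.append(pstdev(xs) + pstdev(ys))  # one simple precision proxy
            return {"mean_time_s": mean(times),
                    "total_errors": errors,
                    "mean_spread": mean(spreads)}

        # Hypothetical logs for two block-placement trials with one device:
        trials = [
            {"completion_time_s": 12.4, "errors": 1, "path": [(0, 0), (1, 2), (2, 2)]},
            {"completion_time_s": 9.8, "errors": 0, "path": [(0, 1), (1, 1), (2, 3)]},
        ]
        print(summarize_trials(trials))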

    Enabling pervasive computing with smart phones

    The authors discuss their experience with a number of mobile telephony projects carried out in the context of the European Union Information Society Technologies research program, which aims to develop mobile information services. They identify areas where use of smart phones can enable pervasive computing and offer practical advice in terms of lessons learned. To this end, they first look at the mobile telephone as:
    * the end point of a mobile information service,
    * the control device for ubiquitous systems management and configuration,
    * the networking hub for personal and body area networks, and
    * an identification token.
    They conclude with a discussion of business and practical issues that play a significant role in deploying research systems in realistic situations.

    Snap2Diverse: Coordinating Information Visualizations and Virtual Environments

    The field of Information Visualization is concerned with improving how users perceive, understand, and interact with visual representations of data sets. Immersive Virtual Environments (VEs) excel at providing researchers and designers a greater comprehension of the spatial features and relations of their data, models, and scenes. This project addresses the intersection of these two fields, where information is visualized in a virtual environment. Specifically, we are interested in visualizing abstract information in relation to spatial information in the context of a virtual environment. We describe a set of design issues for this type of integrated visualization and demonstrate a coordinated, multiple-views system supporting 2D and 3D visualization tasks such as overview, navigation, details-on-demand, and brushing-and-linking selection. Software architecture issues are discussed with details of our implementation applied to the domain of chemical information and visualization. Lastly, we subject our system to an informal usability evaluation and identify usability issues with interaction and navigation that may guide future work in these situations.
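
    The brushing-and-linking the abstract mentions can be pictured as a coordinator that relays a selection made in one view to every other registered view. The Python sketch below is an assumed, minimal design for illustration, not the project's actual architecture.

        class View:
            def __init__(self, name):
                self.name = name
                self.selection = set()

            def on_selection_changed(self, ids):
                # Highlight the same items that were brushed elsewhere.
                self.selection = set(ids)
                print(f"{self.name}: highlighting {sorted(self.selection)}")

        class Coordinator:
            # Broadcasts a brushed selection to all linked views.
            def __init__(self):
                self.views = []

            def register(self, view):
                self.views.append(view)

            def brush(self, source, ids):
                for v in self.views:
                    if v is not source:
                        v.on_selection_changed(ids)

        coord = Coordinator()
        table_2d = View("2D chemical table")
        scene_3d = View("3D structure view")
        coord.register(table_2d)
        coord.register(scene_3d)
        coord.brush(table_2d, {"mol-7", "mol-12"})  # 3D view highlights the same molecules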

    Multimodal fusion: gesture and speech input in augmented reality environment

    Augmented Reality (AR) has the capability to interact with virtual objects and physical objects simultaneously, since it combines the real world with the virtual world seamlessly. However, most AR interfaces apply conventional Virtual Reality (VR) interaction techniques without modification. In this paper we explore multimodal fusion for AR with speech and hand-gesture input. Multimodal fusion enables users to interact with computers through various input modalities such as speech, gesture, and eye gaze. As a first stage in proposing a multimodal interaction, the input modalities must be selected before being integrated into an interface. The paper reviews related work to recap the multimodal approaches that have recently become one of the research trends in AR, and presents the assorted existing work on multimodality for VR and AR. In AR, multimodality is considered a solution for improving the interaction between virtual and physical entities, and it is an ideal interaction technique for AR applications since AR supports interactions in the real and virtual worlds in real time. This paper describes recent studies in AR development that apply gesture and speech inputs, looks into multimodal fusion and its developments, and gives a guideline on how to integrate gesture and speech inputs in an AR environment.
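
    One common way to fuse the two modalities is late fusion: recognize speech and gesture separately, then pair events that fall within a short time window. The Python sketch below illustrates the idea only; the command vocabulary, gesture labels, and window size are assumptions, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Event:
            modality: str   # "speech" or "gesture"
            payload: str    # recognized word or gesture label
            t: float        # timestamp in seconds

        def fuse(speech, gesture, window=1.5):
            # Pair a spoken command with a pointing gesture arriving close in time.
            if abs(speech.t - gesture.t) > window:
                return None
            if speech.payload == "move" and gesture.payload.startswith("point:"):
                target = gesture.payload.split(":", 1)[1]
                return ("move", target)
            return None

        # "Move" uttered at t=3.1 s, pointing at a cube at t=3.6 s:
        print(fuse(Event("speech", "move", 3.1), Event("gesture", "point:cube", 3.6)))
        # -> ('move', 'cube')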

    Design of a Haptic Interface for Medical Applications using Magneto-Rheological Fluid based Actuators

    This thesis reports on the design, construction, and evaluation of a prototype two degrees-of-freedom (DOF) haptic interface, which takes advantage of Magneto-Rheological Fluid (MRF) based clutches for actuation. Haptic information provides important cues in teleoperated systems and enables the user to feel the interaction with a remote or virtual environment during teleoperation. The two main objectives in designing a haptic interface are stability and transparency. Indeed, deficiencies in these factors in haptics-enabled telerobotic systems have hindered the introduction of haptics in medical environments, where safety and reliability are prime considerations. An actuator with poor dynamics, high inertia, large size, and heavy weight can significantly undermine the stability and transparency of a teleoperated system. In this work, the potential benefits of MRF-based actuators to the field of haptics in medical applications are studied. Devices developed with such fluids are known to possess superior mechanical characteristics over conventional servo systems. These characteristics significantly contribute to improved stability and transparency of haptic devices. This idea is evaluated and verified from both theoretical and experimental points of view. The design of a small-scale MRF-based clutch, suitable for a multi-DOF haptic interface, is discussed and its performance is compared with conventional servo systems. This design is developed into four prototype clutches. In addition, a closed-loop torque control strategy is presented. The feedback signal used in this control scheme comes from the magnetic field acquired by Hall sensors embedded in the clutch. The controller uses this feedback signal to compensate for the nonlinear behavior using an estimated model based on Artificial Neural Networks. Such a control strategy eliminates the need for torque sensors to provide feedback signals. The performance of the developed design and the effectiveness of the proposed modeling and control techniques are experimentally validated. Next, a 2-DOF haptic interface based on a distributed antagonistic configuration of MRF-based clutches is constructed for a class of medical applications. This device is incorporated in a master-slave teleoperation setup that is used for applications involving needle insertion and soft-tissue palpation. Phantom and in vitro animal tissue were used to assess the performance of the haptic interface. The results show the great potential of MRF-based actuators for integration in haptic devices for medical interventions that require reliable, safe, accurate, highly transparent, and stable force reflection.
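
    To make the sensorless torque loop concrete, here is a heavily simplified Python sketch: a toy nonlinear field-to-torque curve stands in for the clutch, a closed-form inverse stands in for the thesis's ANN-based estimated model, and a proportional term corrects using the torque inferred from the Hall-sensor field reading. All numbers and the controller form are assumptions for illustration.

        def clutch_torque(field):
            # Toy stand-in for the clutch's nonlinear field-to-torque behavior.
            return 2.0 * field + 0.5 * field ** 2

        def inverse_model(torque):
            # Stand-in for the learned (ANN) inverse: field needed for a torque.
            return -2.0 + (4.0 + 2.0 * torque) ** 0.5

        def control_step(torque_cmd, field, k_p=0.2):
            # Feedforward via the inverse model plus a proportional correction
            # using the torque estimated from the Hall-sensor field reading,
            # so no torque sensor is required.
            torque_est = clutch_torque(field)
            return inverse_model(torque_cmd) + k_p * (torque_cmd - torque_est)

        field = 0.0
        for _ in range(20):
            field = control_step(torque_cmd=3.0, field=field)
        print(f"field={field:.3f}, torque={clutch_torque(field):.3f}")  # torque -> ~3.0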

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar that is controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
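
    The real-to-virtual correspondence can be thought of as a fixed rigid transform between the room's tracking frame and the scanned virtual scene. The Python sketch below assumes the two frames differ only by a yaw rotation and a translation; the actual calibration in MS2 may differ.

        import math

        def real_to_virtual(p, yaw_deg=0.0, offset=(0.0, 0.0, 0.0)):
            # Map a tracked joint position from the room frame into the
            # virtual world built on the room's 3D scan.
            yaw = math.radians(yaw_deg)
            x, y, z = p
            xr = x * math.cos(yaw) - z * math.sin(yaw)
            zr = x * math.sin(yaw) + z * math.cos(yaw)
            ox, oy, oz = offset
            return (xr + ox, y + oy, zr + oz)

        # A hand tracked at (1.0, 1.2, 0.5) m in the room; because the layouts
        # coincide, touching the real table touches its virtual counterpart.
        print(real_to_virtual((1.0, 1.2, 0.5), yaw_deg=90.0, offset=(2.0, 0.0, -1.0)))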

    Efficient & Effective Selective Query Rewriting with Efficiency Predictions

    To enhance effectiveness, a user's query can be rewritten internally by the search engine in many ways, for example by applying proximity, or by expanding the query with related terms. However, approaches that benefit effectiveness often have a negative impact on efficiency, which in turn harms user satisfaction if the query is excessively slow. In this paper, we propose a novel framework for using the predicted execution time of various query rewritings to select between alternatives on a per-query basis, in a manner that ensures both effectiveness and efficiency. In particular, we propose the prediction of the execution time of ephemeral (e.g., proximity) posting lists generated from uni-gram inverted index posting lists, which are used in establishing the permissible query rewriting alternatives that may execute in the allowed time. Experiments examining both the effectiveness and efficiency of the proposed approach demonstrate that a 49% decrease in mean response time (and a 62% decrease in 95th-percentile response time) can be attained without significantly hindering the effectiveness of the search engine.
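
    The per-query selection can be sketched as: order the rewritings from most to least effective, and take the first whose predicted execution time fits a response-time budget, falling back to the original query. In the Python sketch below, the cost model, budget, and rewriting syntax are invented for illustration, not the paper's predictor.

        def choose_rewriting(query, rewritings, predict_ms, budget_ms=200.0):
            # `rewritings` is ordered from most to least effective.
            for rw in rewritings:
                if predict_ms(rw) <= budget_ms:
                    return rw
            return query  # nothing fits the budget: run the plain query

        def predict_ms(rw):
            # Toy predictor: cost grows with the number of posting lists
            # touched, plus a surcharge for an ephemeral proximity list.
            return 40.0 * rw.count(" ") + (120.0 if "#prox" in rw else 0.0)

        q = "solar panel efficiency"
        candidates = [
            "#prox(solar panel) efficiency photovoltaic",  # proximity + expansion
            "solar panel efficiency photovoltaic",         # expansion only
            q,                                             # original query
        ]
        print(choose_rewriting(q, candidates, predict_ms))  # expansion-only fits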

    VizLab: The Design and Implementation of An Immersive Virtual Environment System Using Game Engine Technology and Open Source Software

    Virtual Reality (VR) is a term used to describe computer-simulated environments that can immerse users in a real or unreal world. Immersive systems are an essential component when experiencing virtual environments. Developing VR applications is time-consuming and resource-intensive: the separate components require integration, and the challenges of using public-domain open-source software make development complex. The VizLab Virtual Reality System was created to meet these challenges and provide an integrated suite of tools for VR system development. VizLab supports the development of VR applications by using game engine and CAVE system technology. The system consists of software modules that provide rendering, texturing, collision, physics, window/viewport management, cluster synchronization, input management, multi-processing, stereoscopic 3D, and networking. VizLab combines the main functional aspects of a game engine and a CAVE system for an improved approach to developing VR applications, virtual environments, and immersive environments.
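
    The kind of integration described, where independent subsystems are driven by one engine loop, can be pictured with a minimal module interface. This Python sketch is an assumed illustration of the pattern, not VizLab's actual API.

        class Module:
            # Minimal interface each subsystem (rendering, physics, input,
            # networking, ...) could implement.
            def start(self): ...
            def update(self, dt): ...

        class PhysicsModule(Module):
            def update(self, dt):
                print(f"step physics (dt={dt:.4f}s)")

        class RenderModule(Module):
            def update(self, dt):
                print(f"render frame (dt={dt:.4f}s)")

        class Engine:
            # Ties the registered modules into one per-frame update loop.
            def __init__(self, modules):
                self.modules = modules

            def run(self, frames=2, dt=1 / 60):
                for m in self.modules:
                    m.start()
                for _ in range(frames):
                    for m in self.modules:
                        m.update(dt)

        Engine([PhysicsModule(), RenderModule()]).run()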

    Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays

    Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles, and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics.
    National Science Foundation (U.S.) (Graduate Research Fellowship Grant 1122374)
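
    The sublimation/deposition metaphor amounts to switching one model between two render targets: actuated pins on the shape display and co-located AR graphics. The Python sketch below is a schematic state machine for illustration, not the system's implementation.

        class SublimateModel:
            # Toggle a model between physical (shape display) and
            # virtual (co-located AR graphics) states.
            def __init__(self):
                self.state = "physical"

            def sublimate(self):
                if self.state == "physical":
                    self.state = "virtual"
                    print("pins retract; model persists as 3D graphics")

            def deposit(self):
                if self.state == "virtual":
                    self.state = "physical"
                    print("display actuates; graphics become a touchable shape")

        m = SublimateModel()
        m.sublimate()  # physical -> virtual
        m.deposit()    # virtual -> physical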