13,777 research outputs found

    Development and Evaluation of a Learning-based Model for Real-time Haptic Texture Rendering

    Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface. Adding realistic haptic textures to VR environments requires a model that generalizes to variations of a user's interaction and to the wide variety of textures that exist in the world. Current methodologies for haptic texture rendering exist, but they usually develop one model per texture, resulting in low scalability. We present a deep learning-based, action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. This model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface conditioned on the user's action in real time. For rendering texture, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. The results of our user study show that our learning-based method creates high-frequency texture renderings with comparable or better quality than state-of-the-art methods, without the need to learn a separate model per texture. Furthermore, we show that the method is capable of rendering previously unseen textures using a single GelSight image of their surface. Comment: 10 pages, 8 figures
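    As a minimal sketch of the action-conditional idea described above (not the authors' implementation), the PyTorch model below encodes a single GelSight image into a texture embedding and, conditioned on the user's action, predicts the next window of vibrotactile samples. The layer sizes, the two-component action vector (scan velocity, normal force), and the output window length are all illustrative assumptions.

```python
# Sketch only: layer sizes, action encoding, and output length are assumed.
import torch
import torch.nn as nn

class TextureRenderer(nn.Module):
    def __init__(self, embed_dim=64, action_dim=2, out_samples=100):
        super().__init__()
        # Encoder that compresses a GelSight surface image into one embedding,
        # which is what lets a single model cover all materials.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Decoder conditioned on the embedding and the current user action.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, out_samples),  # next window of vibration samples
        )

    def forward(self, gelsight_image, action):
        z = self.encoder(gelsight_image)                 # texture embedding
        return self.decoder(torch.cat([z, action], dim=-1))

# Usage: one GelSight image suffices even for an unseen texture; at run time
# only the action changes while the embedding can be reused.
model = TextureRenderer()
img = torch.rand(1, 3, 64, 64)            # placeholder GelSight image
action = torch.tensor([[0.05, 1.2]])      # assumed [velocity m/s, force N]
vibration = model(img, action)            # shape: (1, 100)
```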

    User quality of experience of mulsemedia applications

    User Quality of Experience (QoE) is of fundamental importance in multimedia applications and has been studied extensively for decades. However, user QoE in the context of emerging multiple-sensorial media (mulsemedia) services, which involve different media components than traditional multimedia applications, has not been comprehensively studied. This article presents the results of subjective tests that investigated user perception of mulsemedia content. In particular, the impact of the intensity of certain mulsemedia components, including haptic and airflow, on user-perceived experience is studied. Results demonstrate that by making use of mulsemedia, overall user enjoyment levels increased by up to 77%.

    Beyond multimedia adaptation: Quality of experience-aware multi-sensorial media delivery

    Multiple sensorial media (mulsemedia) combines multiple media elements which engage three or more human senses and, like most other media content, requires support for delivery over existing networks. This paper proposes an adaptive mulsemedia framework (ADAMS) for delivering scalable video and sensorial data to users. Unlike existing two-dimensional joint source-channel adaptation solutions for video streaming, the ADAMS framework includes three joint adaptation dimensions: video source, sensorial source, and network optimization. Using an MPEG-7 description scheme, ADAMS recommends the integration of multiple sensorial effects (e.g., haptic, olfaction, air motion) as metadata into multimedia streams. The ADAMS design includes both coarse- and fine-grained adaptation modules on the server side: mulsemedia flow adaptation and packet priority scheduling. Feedback from subjective quality evaluation and network conditions is used to drive the two modules. The subjective evaluation investigated users' enjoyment levels when exposed to mulsemedia and multimedia sequences, respectively, and studied users' preference levels for certain sensorial effects in the context of mulsemedia sequences with video components at different quality levels. Results of the subjective study inform guidelines for an adaptive strategy that selects the optimal combination of video segments and sensorial data for a given bandwidth constraint and user requirement. User perceptual tests show how ADAMS outperforms existing multimedia delivery solutions in terms of both user-perceived quality and user enjoyment during adaptive streaming of various mulsemedia content. In doing so, it highlights the case for tailored, adaptive mulsemedia delivery over traditional multimedia adaptive transport mechanisms.
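    A minimal sketch of the coarse-grained selection problem ADAMS addresses: choosing a video quality level and a subset of sensorial effects that maximize estimated user enjoyment within a bandwidth budget. The bitrates, utility scores, and exhaustive search below are illustrative assumptions, not the framework's actual algorithm or parameters.

```python
# Sketch only: all rates and utility values are assumed for illustration.
from itertools import combinations

VIDEO_LEVELS = {"low": (500, 2.0), "medium": (1500, 3.5), "high": (3000, 4.5)}  # kbps, utility
EFFECTS = {"haptic": (50, 0.8), "olfaction": (20, 0.5), "airflow": (10, 0.4)}   # kbps, utility

def select_combination(bandwidth_kbps):
    """Pick the video level and effect subset with the highest total utility
    that still fits within the available bandwidth."""
    best, best_utility = None, -1.0
    for level, (v_rate, v_util) in VIDEO_LEVELS.items():
        for r in range(len(EFFECTS) + 1):
            for subset in combinations(EFFECTS, r):
                rate = v_rate + sum(EFFECTS[e][0] for e in subset)
                util = v_util + sum(EFFECTS[e][1] for e in subset)
                if rate <= bandwidth_kbps and util > best_utility:
                    best, best_utility = (level, subset), util
    return best

# At 1600 kbps, medium video plus all three effects fits and wins.
print(select_combination(1600))  # ('medium', ('haptic', 'olfaction', 'airflow'))
```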

    Congestion Control for Network-Aware Telehaptic Communication

    Telehaptic applications involve delay-sensitive multimedia communication between remote locations, with distinct Quality of Service (QoS) requirements for different media components. These QoS constraints pose a variety of challenges, especially when the communication occurs over a shared network with unknown and time-varying cross-traffic. In this work, we propose a transport-layer congestion control protocol for telehaptic applications operating over shared networks, termed the dynamic packetization module (DPM). DPM is a lossless, network-aware protocol which tunes the telehaptic packetization rate based on the level of congestion in the network. To monitor network congestion, we devise a novel network feedback module, which communicates the end-to-end delays encountered by the telehaptic packets to the respective transmitters with negligible overhead. Via extensive simulations, we show that DPM meets the QoS requirements of telehaptic applications over a wide range of network cross-traffic conditions. We also report qualitative results of a real-time telepottery experiment with several human subjects, which reveal that DPM preserves the quality of telehaptic activity even under heavily congested network scenarios. Finally, we compare the performance of DPM with several previously proposed telehaptic communication protocols and demonstrate that DPM outperforms them. Comment: 25 pages, 19 figures
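    A minimal sketch of delay-driven rate adaptation in the spirit of DPM: the transmitter lowers its packetization rate when the reported end-to-end delay exceeds the haptic QoS deadline and probes upward as the network drains. The deadline, step sizes, and rate bounds are assumptions, not the protocol's actual parameters.

```python
# Sketch only: thresholds, step sizes, and bounds are assumed values.

QOS_DEADLINE_MS = 30.0   # assumed end-to-end delay budget for haptic data
MIN_RATE_HZ = 100        # never drop below this packetization rate
MAX_RATE_HZ = 1000       # standard 1 kHz haptic update rate

def adapt_rate(current_rate_hz, reported_delay_ms):
    """Return the next packetization rate given the feedback delay."""
    if reported_delay_ms > QOS_DEADLINE_MS:
        # Congestion: back off multiplicatively, but remain lossless by
        # aggregating more samples per packet rather than dropping them.
        return max(MIN_RATE_HZ, int(current_rate_hz * 0.8))
    # Headroom: probe upward additively toward the full haptic rate.
    return min(MAX_RATE_HZ, current_rate_hz + 50)

rate = 1000
for delay in [12, 18, 45, 52, 40, 25, 15]:   # simulated feedback (ms)
    rate = adapt_rate(rate, delay)
    print(f"delay={delay}ms -> rate={rate}Hz")
```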

    Virtual bloXing - assembly rapid prototyping for near net shapes

    Virtual reality (VR) provides another dimension to many engineering applications. Its immersive and interactive nature allows an intuitive approach to studying both cognitive activities and performance evaluation. Market competitiveness means having products meet form, fit, and function quickly. Rapid Prototyping and Manufacturing (RP&M) technologies are increasingly being applied to produce functional prototypes and to directly manufacture small components. Despite their flexibility, these systems have common drawbacks such as slow build rates, a limited number of build axes (typically one), and the need for post-processing. This paper presents the Virtual Assembly Rapid Prototyping (VARP) project, which evaluates cognitive activities in assembly tasks based on the adoption of immersive virtual reality along with a novel non-layered rapid prototyping process for near net shape (NNS) manufacturing of components. It is envisaged that this integrated project will facilitate a better understanding of design for manufacture and assembly by utilising equivalent-scale digital and physical prototyping in one rapid prototyping system. The state of the art of the VARP project is also presented in this paper.

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and allows users to feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience, where interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are shown only a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR. Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
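    A minimal sketch of the real-to-virtual correspondence MS2 relies on: since the virtual world is built on a 3D scan of the physical room, a tracked joint position can be mapped into the virtual scene with a single rigid transform. The calibration values below are illustrative assumptions.

```python
# Sketch only: the rotation and translation are assumed calibration values.
import numpy as np

# Assumed calibration aligning the tracker frame with the scanned scene.
R = np.eye(3)                      # rotation (identity: axes already aligned)
t = np.array([1.5, 0.0, -2.0])     # translation of tracker origin (meters)

def to_virtual(p_real):
    """Map a tracked real-world point into virtual-scene coordinates."""
    return R @ np.asarray(p_real) + t

# A hand tracked at this position touches the same spot on the physical
# table and on its virtual counterpart, which is what enables passive haptics.
print(to_virtual([0.3, 1.1, 0.7]))   # -> [ 1.8  1.1 -1.3]
```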
