    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support manipulation of bones and their segments during surgical operation planning in a virtual environment using a haptic interface. To perform effective dental surgery, it is important to have all operation-related patient information available beforehand in order to plan the procedure and avoid complications. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical, useful and necessary for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: the haptic tool is used to determine and define the cutting plane in the bone, and this approach creates three new meshes from the original model. In this way the computational cost is kept low and real-time feedback can be achieved during all bone manipulations. During mesh cutting, a friction profile predefined in the haptic system simulates the force-feedback feel of the different densities within the bone.
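
    The cutting step lends itself to a short illustration. The sketch below is not the authors' implementation; it only shows, under assumed data structures (a vertex array, a triangle index array, and a haptic-defined plane), how a mesh can be partitioned by a cutting plane into the three sub-meshes the abstract mentions, and how a simple density-scaled friction force could be rendered.

    ```python
    import numpy as np

    def classify_triangles(vertices, triangles, plane_point, plane_normal):
        """Partition triangle indices into fully-above, fully-below and
        plane-crossing sets relative to the cutting plane; the crossing set is
        the small third mesh that must be re-triangulated along the cut."""
        signed = (vertices - plane_point) @ plane_normal   # signed distance of each vertex
        side = np.sign(signed)[triangles]                  # sign at each triangle corner
        above = (side > 0).all(axis=1)
        below = (side < 0).all(axis=1)
        crossing = ~(above | below)
        return triangles[above], triangles[below], triangles[crossing]

    def friction_force(tool_velocity, local_density, k_friction=0.8):
        """Velocity-opposing friction scaled by local bone density, so denser
        bone feels stiffer through the haptic device (gain value assumed)."""
        speed = np.linalg.norm(tool_velocity)
        if speed < 1e-9:
            return np.zeros(3)
        return -k_friction * local_density * np.asarray(tool_velocity) / speed
    ```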

    Augmenting Graphical User Interfaces with Haptic Assistance for Motion-Impaired Operators

    Haptic assistance is an emerging field of research that is designed to improve human-computer interaction (HCI) by reducing error rates and targeting times through the use of force feedback. Haptic feedback has previously been investigated as an aid for motion-impaired computer users; however, limitations such as target distracters have hampered its integration with graphical user interfaces (GUIs). In this paper two new haptic assistive techniques are presented that utilise the 3-DOF capabilities of the Phantom Omni: deformable haptic cones and deformable virtual switches. The assistance is designed specifically to enable motion-impaired operators to use existing GUIs more effectively. Experiment 1 investigates the performance benefits of the new haptic techniques when used in conjunction with the densely populated Windows on-screen keyboard (OSK). Experiment 2 uses the ISO 9241-9 point-and-click task to investigate the effects of target size and shape. The results show that the proposed techniques improve interaction rates and can be integrated with existing software without many of the drawbacks of traditional haptic assistance: deformable haptic cones and deformable virtual switches reduced the mean number of missed clicks by at least 75% and targeting times by at least 25%.
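
    As a rough illustration of target-attraction assistance of this general kind (not the paper's deformable-cone or virtual-switch design; the radius, gain and force cap below are assumptions), a capped "gravity well" force might look like this:

    ```python
    import numpy as np

    def gravity_well_force(cursor_pos, target_pos, radius=0.02, k=150.0, f_max=1.5):
        """Pull the haptic cursor toward the target centre while it lies within
        the attraction radius; capping the force lets the user push through and
        escape, a crude stand-in for the 'deformable' behaviour described above."""
        offset = np.asarray(target_pos) - np.asarray(cursor_pos)
        dist = np.linalg.norm(offset)
        if dist > radius or dist < 1e-9:
            return np.zeros(3)
        magnitude = min(k * dist, f_max)   # spring pull, saturated at f_max newtons
        return magnitude * offset / dist
    ```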

    Congestion Control for Network-Aware Telehaptic Communication

    Telehaptic applications involve delay-sensitive multimedia communication between remote locations, with distinct Quality of Service (QoS) requirements for the different media components. These QoS constraints pose a variety of challenges, especially when the communication occurs over a shared network with unknown and time-varying cross-traffic. In this work, we propose a transport-layer congestion control protocol for telehaptic applications operating over shared networks, termed the dynamic packetization module (DPM). DPM is a lossless, network-aware protocol which tunes the telehaptic packetization rate based on the level of congestion in the network. To monitor the network congestion, we devise a novel network feedback module, which communicates the end-to-end delays encountered by the telehaptic packets to the respective transmitters with negligible overhead. Via extensive simulations, we show that DPM meets the QoS requirements of telehaptic applications over a wide range of network cross-traffic conditions. We also report qualitative results of a real-time telepottery experiment with several human subjects, which reveal that DPM preserves the quality of telehaptic activity even under heavily congested network scenarios. Finally, we compare the performance of DPM with several previously proposed telehaptic communication protocols and demonstrate that DPM outperforms these protocols. Comment: 25 pages, 19 figures.
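
    The core idea of delay-driven packetization-rate control can be sketched as follows; the delay budget, step sizes and rate bounds are illustrative assumptions, not DPM's actual parameters.

    ```python
    def adapt_packet_rate(current_rate_hz, measured_delay_ms,
                          delay_budget_ms=30.0, min_rate_hz=100, max_rate_hz=1000):
        """Reduce the haptic packetization rate when the end-to-end delay reported
        by the receiver-side feedback module approaches the QoS budget, and probe
        back up gradually when the network has headroom."""
        if measured_delay_ms > delay_budget_ms:        # congestion detected: back off
            return max(min_rate_hz, int(current_rate_hz * 0.8))
        if measured_delay_ms < 0.5 * delay_budget_ms:  # plenty of headroom: probe upward
            return min(max_rate_hz, int(current_rate_hz * 1.1))
        return current_rate_hz                         # within budget: hold steady
    ```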

    V-ANFIS for Dealing with Visual Uncertainty for Force Estimation in Robotic Surgery

    Accurate and robust estimation of the forces applied in Robotic-Assisted Minimally Invasive Surgery is a very challenging task. Many vision-based solutions attempt to estimate the force by measuring the surface deformation produced by contact with the surgical tool. However, visual uncertainty due to tool occlusion is a major concern and can strongly affect the precision of the results. In this paper, a novel design of an adaptive neuro-fuzzy inference strategy with a voting step (V-ANFIS) is used to compensate for this loss of information. Experimental results show a significant accuracy improvement, from 50% to 77%, with respect to other proposals.
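
    The voting step can be illustrated by a simple occlusion-aware fusion of per-region force estimates; the weighting rule below is a hypothetical stand-in for illustration, not the V-ANFIS design itself.

    ```python
    import numpy as np

    def voted_force(region_estimates_newton, region_visibility):
        """Fuse force estimates from several deformation-tracking regions, giving
        less weight to regions currently occluded by the surgical tool; fall back
        to the median when almost everything is occluded."""
        estimates = np.asarray(region_estimates_newton, dtype=float)
        weights = np.asarray(region_visibility, dtype=float)  # 0 = occluded, 1 = fully visible
        if weights.sum() < 1e-6:
            return float(np.median(estimates))
        return float(np.average(estimates, weights=weights))
    ```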

    Haptic dancing: human performance at haptic decoding with a vocabulary

    The inspiration for this study is the observation that swing dancing involves coordination of actions between two humans that can be accomplished by pure haptic signaling. This study implements a leader-follower dance to be executed between a human and a PHANToM haptic device. The data demonstrate that the participants' understanding of the motion as a random sequence of known moves informs their following, making this vocabulary-based interaction fundamentally different from closed-loop pursuit tracking. The robot leader does not respond to the follower's movement other than to display error from a nominal path. This work is the first step in an investigation of the successful haptic coordination between dancers, which will inform a subsequent design of a truly interactive robot leader.
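
    The leader behaviour described above, rendering only the deviation from a nominal path, amounts to a simple restoring force; the gain in the sketch below is an assumed value for illustration, not a parameter from the study.

    ```python
    import numpy as np

    def leader_force(follower_pos, nominal_pos, k=200.0):
        """Spring-like force pulling the follower's hand toward the point it
        should currently occupy on the scripted move sequence; the leader does
        not otherwise react to the follower's motion."""
        return k * (np.asarray(nominal_pos) - np.asarray(follower_pos))
    ```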

    Beyond multimedia adaptation: Quality of experience-aware multi-sensorial media delivery

    Multiple sensorial media (mulsemedia) combines multiple media elements that engage three or more of the human senses and, like most other media content, requires support for delivery over existing networks. This paper proposes an adaptive mulsemedia framework (ADAMS) for delivering scalable video and sensorial data to users. Unlike existing two-dimensional joint source-channel adaptation solutions for video streaming, the ADAMS framework includes three joint adaptation dimensions: video source, sensorial source, and network optimization. Using an MPEG-7 description scheme, ADAMS recommends the integration of multiple sensorial effects (e.g., haptic, olfaction, air motion) as metadata into multimedia streams. The ADAMS design includes both coarse- and fine-grained adaptation modules on the server side: mulsemedia flow adaptation and packet priority scheduling. Feedback from subjective quality evaluation and network conditions is used to drive the two modules. The subjective evaluation investigated users' enjoyment levels when exposed to mulsemedia and multimedia sequences, respectively, as well as users' preference levels for selected sensorial effects in mulsemedia sequences with video components at different quality levels. The results of the subjective study inform guidelines for an adaptive strategy that selects the optimal combination of video segments and sensorial data for a given bandwidth constraint and user requirement. User perceptual tests show that ADAMS outperforms existing multimedia delivery solutions in terms of both user-perceived quality and user enjoyment during adaptive streaming of various mulsemedia content, highlighting the case for tailored, adaptive mulsemedia delivery over traditional multimedia adaptive transport mechanisms.
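
    The selection step of such an adaptive strategy can be sketched as a small utility-maximisation under a bandwidth budget; the bitrates and utility scores below are illustrative assumptions, not values from the study.

    ```python
    from itertools import combinations

    # (name, bitrate in Mbps, utility score) -- all values assumed for illustration
    VIDEO_LAYERS = [("low", 1.0, 1.0), ("medium", 2.5, 2.0), ("high", 5.0, 3.0)]
    EFFECTS = [("haptic", 0.10, 0.8), ("olfaction", 0.05, 0.5), ("air-motion", 0.05, 0.4)]

    def select_combination(bandwidth_mbps):
        """Return the video layer and subset of sensorial effects with the highest
        total utility that fits within the available bandwidth."""
        best, best_utility = None, -1.0
        for v_name, v_rate, v_util in VIDEO_LAYERS:
            for r in range(len(EFFECTS) + 1):
                for subset in combinations(EFFECTS, r):
                    rate = v_rate + sum(e[1] for e in subset)
                    utility = v_util + sum(e[2] for e in subset)
                    if rate <= bandwidth_mbps and utility > best_utility:
                        best, best_utility = (v_name, [e[0] for e in subset]), utility
        return best
    ```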