    3D sound for simulation of arthroscopic surgery

    Get PDF
    Arthroscopic surgery offers many advantages over traditional open surgery, but the skills it requires call for specific training. Surgical simulators are used to train apprentice surgeons in specific gestures. In this paper, we present a study on the contribution of 3D sound to assisting the triangulation gesture in arthroscopic surgery simulation, i.e. the subject's ability to manipulate the instruments while working with the modified and limited view provided by the simulator's video camera. Our approach, based on 3D sound metaphors, gives subjects interaction cues about the real position of the instrument. The paper reports a performance evaluation study of 3D sound perception integrated into the training of a surgical task. Although 3D sound cueing did not prove useful to all subjects in terms of execution time, the results show that the majority of participants in the experiment confirmed the added value of 3D sound in terms of ease of use.
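    The paper does not describe its sonification in detail; as a rough illustration of the kind of mapping such 3D sound metaphors imply, the sketch below turns the distance between a tracked instrument tip and a target position into the gain and pitch of a guidance tone. The function name, units and mapping constants are hypothetical, not taken from the paper.

```python
import math

def guidance_cue(instrument_pos, target_pos, max_dist=0.3):
    """Map instrument-to-target distance onto a simple audio cue.

    Returns (gain, pitch_hz): the cue gets louder and higher-pitched as the
    instrument tip approaches the target. Purely illustrative; positions are
    in metres and the mapping constants are arbitrary.
    """
    dx, dy, dz = (t - i for i, t in zip(instrument_pos, target_pos))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    closeness = max(0.0, 1.0 - min(dist, max_dist) / max_dist)  # 0 = far, 1 = on target
    gain = 0.2 + 0.8 * closeness             # never fully silent, full volume on target
    pitch_hz = 220.0 * 2 ** (2 * closeness)  # sweep roughly two octaves upward
    return gain, pitch_hz

# Example: instrument tip 5 cm away from the target along x
print(guidance_cue((0.05, 0.0, 0.0), (0.0, 0.0, 0.0)))
```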

    Towards Understanding and Developing Virtual Environments to Increase Accessibilities for People with Visual Impairments

    Get PDF
    The primary goal of this research is to investigate the possibilities of using audio feedback to support effective human-computer interaction in virtual environments (VEs) without visual feedback for people with visual impairments (VI). Efforts have been made to apply virtual reality (VR) technology to training and educational applications for diverse population groups, such as children and stroke patients. These applications have been shown to increase motivation, provide safer training environments and offer more training opportunities; however, they all rely on visual feedback. With head-related transfer functions (HRTFs), it is possible to design considerably safer and more diverse training environments that could greatly benefit individuals with VI. To explore this, I ran three studies in sequence, examining: 1) whether and how users could navigate with different types of 3D auditory feedback in the same VE; 2) whether users could effectively recognize the distance and direction of a virtual sound source in the VE; and 3) whether participants with and without VI could recognize the positions and distinguish the moving directions of 3D sound sources in the VE. The results showed some possibilities for designing effective human-computer interaction methods and shed light on how participants with VI experienced the scenarios differently from participants without VI. This research therefore contributes new knowledge on how a visually impaired person interacts with computer interfaces, which can be used to derive guidelines for the design of effective VEs for rehabilitation and exercise.
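    Measured HRTFs are usually applied by convolving a mono source with a head-related impulse response for the source direction. As a minimal stand-in that avoids an HRIR dataset, the sketch below approximates the two strongest directional cues, interaural time and level differences, for a spherical head; this is an illustrative assumption, not the processing used in this research.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average adult head (spherical-head approximation)

def binaural_pan(mono, azimuth_deg, fs=44100):
    """Pan a mono signal using approximate interaural time and level
    differences (Woodworth ITD for a spherical head); a crude stand-in
    for measured HRTFs, valid for azimuths between -90 and +90 degrees."""
    az = np.radians(azimuth_deg)                               # 0 = front, +90 = right
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))   # interaural time difference, s
    delay = int(round(abs(itd) * fs))                          # samples to delay the far ear
    ild = 1.0 + 0.5 * abs(np.sin(az))                          # crude near-ear level boost

    near = mono * ild
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    # Source on the right: the right ear is the near (louder, earlier) ear.
    return (far, near) if azimuth_deg >= 0 else (near, far)

# Example: a 1 kHz tone placed 60 degrees to the right
fs = 44100
t = np.arange(fs) / fs
left, right = binaural_pan(np.sin(2 * np.pi * 1000 * t), 60, fs)
```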

    Architectural visualisation toolkit for 3D Studio Max users

    Get PDF
    Architectural visualisation has become a vital part of the design process for architects and engineers. Modelling and rendering an architectural visualisation can be complex and time consuming, with only a few tools available to assist novice modellers. This thesis looks at available solutions for visualisation specialists, including AutoCAD, 3D Studio Max and Google SketchUp, as well as solutions which attempt to automate the process, such as Batzal Roof Designer. It then details a new program developed to automate the modelling and rendering stages of the architectural visualisation process. The tool created for this thesis is written in MAXScript and runs alongside 3D Studio Max. N.B.: Audio files were attached to this thesis at the time of its submission; please refer to the author for further details.

    Analysis of the performance of users with and without visual impairments in localizing sounds rendered by low-cost immersive acoustics

    Get PDF
    This thesis addresses the localization of the virtual position of sound sources rendered through low-cost immersive acoustic reproduction. The study compares the performance of users with and without visual impairments. Studies from multiple sources show that a significant percentage of people with visual impairments live on very low annual incomes. This finding shaped the research project: the acoustic technologies used in the trials were deliberately low-cost reproduction technologies, so as to cover the acoustic context most likely to be available to users of this technology. Building several immersive acoustic games in an academic setting exposed the need for a perceptual model able to guide designers of acoustic scenarios in choosing which sound samples to use (the frequency of the sound and its positioning). A usability model for the use of sound samples in a low-cost immersive acoustic context is defined as a result of the experimental findings. The literature reports an increased ability of blind people to determine the location of objects and interlocutors. This increased ability appears to stem from a more efficient use of auditory information or from a specialization of part of the visual cortex for processing sound. The present experiment suggests that, in a low-cost immersive acoustic context, people with and without visual impairments perform similarly in localizing the virtual position of the rendered sounds. The conclusions are drawn from a comparative analysis of objective metrics for 55 participants (19 with visual impairments and 36 without) performing an acoustic game activity with no visual interface. The analysis of the participants' localization actions across 10,500 sound events yields results consistent with expectations and underpins the findings and the usability model. Strategies for localizing the virtual position of sound were analyzed and the conclusions are presented. An acoustic usability model applied to this context is proposed: a simple, easy-to-read chart intended for designers of acoustics for immersive environments.
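    The thesis does not publish its analysis code; as a minimal sketch of the kind of objective metric such a comparison relies on, the snippet below computes the angular error between the true virtual source direction and the direction a participant indicated, averaged over logged events. The data layout and field choices are hypothetical.

```python
import math
from statistics import mean

def azimuth_error_deg(source_xy, pointed_xy, listener_xy=(0.0, 0.0)):
    """Unsigned angular error (degrees) between the true virtual source
    direction and the direction the participant indicated, both measured
    from the listener's position. Hypothetical layout, not the thesis' own."""
    def azimuth(p):
        # 0 degrees = straight ahead (+y), positive to the right (+x)
        return math.degrees(math.atan2(p[0] - listener_xy[0], p[1] - listener_xy[1]))
    err = azimuth(pointed_xy) - azimuth(source_xy)
    return abs((err + 180.0) % 360.0 - 180.0)   # wrap into [0, 180]

# Hypothetical logged events: (true source position, position the participant indicated)
events = [((1.0, 1.0), (0.8, 1.1)), ((-2.0, 0.5), (-1.5, 0.9))]
print(mean(azimuth_error_deg(src, ans) for src, ans in events))
```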

    Evaluation of a Low-Cost 3D Sound System for Immersive Virtual Reality Training Systems (IEEE Transactions on Visualization and Computer Graphics)

    No full text
    Since tracking systems and powerful computer graphics resources are nowadays in an affordable price range, the use of PC-based 'Virtual Training Systems' has become very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit fully from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided so that a broader range of applications can take advantage of this modality. To address this issue, this paper focuses on the evaluation of a low-cost 3D sound simulation capable of providing traceable 3D sound events. We describe our experimental system setup, using conventional stereo headsets in combination with a tracked HMD device, and present our results with regard to precision, speed and the signal types used for localizing simulated sound events in a virtual training environment.
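    The paper's exact rendering scheme is not reproduced here; the sketch below shows one low-cost way to make sound events traceable over a conventional stereo headset, combining a constant-power pan driven by tracked head yaw with simple distance attenuation. The function name, panning law and attenuation model are assumptions for illustration.

```python
import math

def pan_for_tracked_head(source_pos, head_pos, head_yaw_rad):
    """Constant-power stereo pan plus 1/r attenuation for a virtual sound
    event, driven by tracked head position and yaw (positions are (x, z)
    on the horizontal plane). A low-cost stand-in for full 3D sound
    rendering; the paper's actual scheme is not reproduced here."""
    dx = source_pos[0] - head_pos[0]
    dz = source_pos[1] - head_pos[1]
    azimuth = math.atan2(dx, dz) - head_yaw_rad      # angle relative to where the head faces
    pan = max(-1.0, min(1.0, math.sin(azimuth)))     # -1 = hard left, +1 = hard right
    left = math.cos((pan + 1.0) * math.pi / 4.0)     # constant-power panning law
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    distance = max(0.25, math.hypot(dx, dz))         # clamp so gain stays bounded near 0 m
    gain = 1.0 / distance
    return left * gain, right * gain

# Example: source 2 m ahead and 2 m to the right of a head facing straight ahead
print(pan_for_tracked_head((2.0, 2.0), (0.0, 0.0), 0.0))
```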