720 research outputs found

    Evaluating Semi-Natural Travel and Viewing Techniques in Virtual Reality

    With seated virtual reality (VR), travel and viewing techniques must be designed around the user's seating conditions. The most natural viewing interaction in seated VR is standard 360-degree rotation, typically performed in a swivel chair that spins around the vertical axis. However, VR users do not always have a swivel chair or the physical space to turn around, which limits their VR usage to settings where such physical setups are available. Moreover, for prolonged use, users may prefer more convenient viewing interactions, such as sitting on a couch without physically rotating all the way around. Our research addresses these scenarios by studying new and existing semi-natural travel and viewing techniques that can be used when full 360-degree rotation is infeasible or not preferred. Two new techniques, guided head rotation and user-controlled resetting, were developed and compared with existing techniques in three controlled experiments. The existing techniques were standard 360-degree rotation and three joystick-controlled viewing techniques: discrete rotation, continuous rotation, and continuous rotation with reduced field of view (FOV). Since the new techniques and some of the existing techniques involve rotation manipulations that are not natural, they could disorient users during a virtual experience, so two VR puzzle games were designed to study the techniques' effects on users' spatial awareness. Convenience, simulator sickness, comfort, and preferences for home entertainment were also investigated in the experiments. We found that the results depended on the participants' 3D gaming experience: participants who played 3D games for one or more hours per week had a higher tolerance for the new techniques with rotational manipulations than participants who did not play 3D games at all. Among the joystick rotation techniques, discrete rotation was rated best by users in terms of simulator sickness. In addition to these experiments, we also present a case study that demonstrates the application of guided head rotation in an experiment studying natural hand interaction with virtual objects under constrained physical conditions.
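
    As a concrete illustration of the kind of rotation manipulation studied here, the sketch below (Python; the parameter names and the linear gain profile are hypothetical, not the authors' implementation) shows how a guided-head-rotation-style technique might amplify a limited physical head yaw so that a user on a couch can cover a full virtual turn:

        def remap_yaw(physical_yaw_deg, max_physical_deg=60.0, max_virtual_deg=180.0):
            """Map a limited physical head yaw onto a larger virtual yaw.

            Illustrative only: the gain is 1:1 at the center of the range and
            grows linearly toward the physical limit, so +/-60 degrees of real
            head rotation reaches +/-180 degrees virtually.
            """
            # Normalize and clamp the physical yaw to [-1, 1].
            t = max(-1.0, min(1.0, physical_yaw_deg / max_physical_deg))
            g_max = max_virtual_deg / max_physical_deg  # gain at the limit
            gain = 1.0 + (g_max - 1.0) * abs(t)         # eases up from 1.0
            return physical_yaw_deg * gain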

    Automatic Speed Control For Navigation in 3D Virtual Environment

    As technology progresses, the scale and complexity of 3D virtual environments increase as well. This leads to multi-scale virtual environments: environments that contain groups of objects at extremely unequal levels of scale. Ideally, the user should be able to navigate such environments efficiently and robustly, yet most previous methods for automatically controlling navigation speed do not generalize well to environments with widely varying scales. I present an improved method to automatically control the user's navigation speed in 3D virtual environments. The main benefit of my approach is that it automatically adapts the navigation speed in multi-scale environments in a manner that enables efficient navigation with maximum freedom while still avoiding collisions. The results of a usability test show a significant reduction in completion time for a multi-scale navigation task.
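
    A minimal sketch of this class of technique, assuming some query for the distance to the nearest scene geometry (the constants below are illustrative, not the thesis's tuned values):

        def navigation_speed(distance_to_nearest_surface, gain=1.0,
                             min_speed=0.01, max_speed=100.0):
            """Scale flying speed with the distance to the closest geometry.

            In a multi-scale scene this makes travel fast in open space and
            slow near small, detailed structures, which also leaves time to
            avoid collisions. The distance value could come from a collision
            structure or a depth-buffer minimum (an assumption here).
            """
            return min(max_speed, max(min_speed, gain * distance_to_nearest_surface))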

    Designing virtual environments for brain injury rehabilitation

    Virtual Reality (VR) has shown great potential in various training applications. In the field of cognitive rehabilitation it has been shown that VR technology can become a useful complement to conventional rehabilitation techniques (e.g. Rizzo et al. (2002), Brown et al. (2002) and Kizony et al. (2002)). An important part of a brain injury patient's rehabilitation process is practicing instrumental activities of daily living (IADL), such as preparing meals, cleaning, shopping and using a telephone. A pilot study by Lindén et al. (2000) concluded that activities like these can be practiced using desktop VR. The question addressed in this thesis is how a Virtual Environment (VE) should be designed to be a usable tool in brain injury rehabilitation. The thesis consists of three papers describing three studies performed to further explore this area of research.
    Paper I describes the design of a practical VE application in the shape of a cash dispenser. A paper prototype was constructed and first used to generate ideas from three occupational therapists. The prototype was then tested on six people with little or moderate computer knowledge and no experience of 3D computer simulations. The results from this evaluation were used to implement a computer prototype with the VR development tool World Up. The computer prototype had automatic navigation, meaning that the position and orientation of the viewpoint, the user's view into the VE, was controlled by the computer. The point-and-click method, which allows the user to move and manipulate objects with single mouse clicks, was used for interaction with objects. The computer prototype was then tested on five brain injury patients; the results of this evaluation are not included in paper I but are described in the thesis summary. Overall, all five subjects learned to handle the computer prototype sufficiently well. However, the interaction with objects posed some problems for them; for example, they initially tried to move the bankcard with drag-and-drop instead of point-and-click. Three subjects also pointed out that some parts of the VE, for example the display and the keypad, were unclear. All five subjects showed a positive attitude to the virtual cash dispenser.
    The aim of paper II was to find a usable navigation input device for people with no experience of 3D computer graphics. After an initial discussion about various input devices, it was decided to compare a Microsoft Sidewinder joystick and an IntelliKeys keyboard, each programmed with two and three degrees of freedom (DOF), in an experiment. Sixty able-bodied people with no experience of 3D computer graphics were divided into four groups, and each group performed a navigation task in a VE consisting of a kitchen and a corridor using one of the four input device configurations. The navigation task was designed to evaluate both fine adjustments of the viewpoint (maneuvering task) and transportation of the viewpoint from one location to another (search task). Each subject performed the task five times in a row and then answered a questionnaire consisting of five questions. Data logging and video recording were used to collect data. The study showed that both the keyboard and the joystick have their advantages and disadvantages. The keyboard seemed to be easier to control than the joystick for the maneuvering task. The keyboard was also slightly easier to control for the search task, but it was much slower than the joystick, which might make it an inconvenient input device for VEs that only involve search navigation. No significant difference could be found between two and three DOF for the maneuvering task, but the third DOF (sideways movement) seemed to facilitate the subjects' navigation in some situations. Two DOF were found to be slightly easier to control than three DOF for the search task.
    The study described in paper III aimed at 1) evaluating a method for interaction with objects in VEs on people with no 3D computer graphics experience, and 2) finding a sufficiently usable input device for this purpose. After an initial discussion of possible methods for interaction with objects and various input devices, an experiment was conducted with 20 able-bodied people with no experience of 3D computer graphics. Our experiences with point-and-click from paper I and the pilot study (Lindén et al., 2000) suggested that people might have a more inherent understanding of drag-and-drop. We had also discussed using a virtual hand for carrying objects to simplify object movement. We therefore evaluated the following method for interaction with objects: 1) a virtual hand was used for carrying objects, 2) drag-and-drop was used for moving objects, 3) a single click was used for activating objects, and 4) objects were given a proper orientation automatically. Ten subjects used a regular desktop mouse and the other ten a touch screen to perform four interaction tasks in a kitchen VE five times in a row. Video recording was used to document the trials and the interviews conducted afterwards. Broadly, the method for interaction with objects worked well. The majority of the subjects used the virtual hand for carrying objects; however, the fact that some subjects needed information before they started to use it indicates that its visibility and affordance need to be improved. Opening and closing cupboard doors caused some problems, especially for the subjects in the touch screen group, who tried to open them with drag-and-drop in a manner that resembled reality. Apart from the problem with the cupboard doors, no large difference in performance could be seen between the mouse group and the touch screen group. A sketch of the evaluated interaction method follows this summary.
    The three studies described in this thesis are a step closer towards understanding how a VE should be designed to be a usable tool for people with brain injury. In particular, knowledge has been gained on how to make it as easy as possible for the user to navigate the viewpoint and interact with objects. The work has also provided a deeper understanding of the effects the choice of input device has on the usability of a VE.
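
    The sketch below (hypothetical Python, not the World Up implementation; all names are illustrative) captures the essence of the interaction method evaluated in paper III: drag-and-drop moves objects, a single click activates them, a virtual hand carries objects between locations, and orientation is handled automatically.

        class KitchenInteraction:
            """Illustrative sketch of the paper III interaction method."""

            def __init__(self):
                self.virtual_hand = []  # objects currently being carried

            def click(self, obj):
                # A single click activates fixed objects,
                # e.g. turns a tap on or opens a cupboard door.
                if obj.activatable:
                    obj.activate()

            def drop(self, obj, target):
                # A drag-and-drop gesture ends here; the object's final
                # orientation is set automatically rather than by the user.
                if target == "virtual_hand":
                    self.virtual_hand.append(obj)      # carry for later
                else:
                    if obj in self.virtual_hand:
                        self.virtual_hand.remove(obj)
                    obj.place_at(target, auto_orient=True)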

    Ankle-Actuated Human-Machine Interface for Walking in Virtual Reality

    This thesis presents the design, implementation, and experimental study of an impedance-type ankle haptic interface for providing users with an immersive navigation experience in virtual reality (VR). The ankle platform enables the use of foot-tapping gestures to reproduce a realistic walking experience in VR and to haptically render different types of walking terrain. The system is designed for seated users, allowing more comfort and causing less fatigue and motion sickness. The custom-designed ankle interface is built around a single actuator and sensor system, making it a cost-efficient solution for VR applications. The interface consists of a single-degree-of-freedom actuated platform that rotates around the user's ankle joint. The platform is impedance controlled around the horizontal position by an electric motor and a capstan transmission system. To walk in a virtual scene, a seated user performs walking gestures in the form of ankle plantar-flexion and dorsiflexion movements, tilting the platform forward and backward. We present three algorithms for simulating the immersive locomotion of a VR avatar from the platform movement information. We also designed multiple impedance controllers to render haptic feedback for different virtual terrains during walking. We carried out experiments to understand how quickly users adapt to the interface, how well they can control their locomotion speed in VR, and how well they can distinguish different types of terrain presented through haptic feedback. We administered qualitative questionnaires on the usability of the device and the task load of the experimental procedures. The experimental studies demonstrated that the interface can easily be used to navigate in VR and is capable of rendering dynamic multi-layer complex terrains containing structures with different stiffness and brittleness properties.
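
    A one-degree-of-freedom impedance law of the general kind described above can be sketched as follows (the gains and names are illustrative, not the thesis's actual controllers):

        def ankle_platform_torque(theta, omega, theta_rest=0.0,
                                  stiffness=5.0, damping=0.1):
            """Virtual spring-damper about the horizontal rest pose.

            theta and omega are the platform angle (rad) and angular velocity
            (rad/s); the returned motor torque (N*m) yields to the user's
            plantar-flexion and dorsiflexion while pulling the platform back
            toward neutral. Rendering a softer terrain would lower the
            stiffness; a brittle crust could drop it abruptly once a
            penetration threshold is crossed.
            """
            return stiffness * (theta_rest - theta) - damping * omega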

    Convex Interaction: Extending Spatial Interaction through Action Compression Using VR


    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely on audition for a variety of purposes, among them the use of sound to identify the position of objects in the surrounding environment. This is not limited to localising sound-emitting objects; it also extends to obstacles and environmental boundaries, thanks to their ability to extract information from reverberation and sound reflections, all of which can contribute to effective and safe navigation, as well as serving a function in certain assistive technologies built on binaural auditory virtual reality. It is known that head movements in the presence of sound elicit changes in the acoustical signals arriving at each ear, and these changes can mitigate common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether visually impaired people naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation and a questionnaire assessing the self-reported use of head movement in auditory perception (each comparing visually impaired and sighted participants), together with an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. Visually impaired people self-report using head movement for auditory distance perception, which is supported by the head movements observed during the field study. The acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener decreased as head angle or distance to the source increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed.
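
    The interaural cross-correlation measure at the heart of such an acoustical analysis can be sketched as follows (NumPy; the signal names and the +/-1 ms lag window are assumptions, not the study's exact procedure):

        import numpy as np

        def interaural_cross_correlation(left, right, fs, max_lag_ms=1.0):
            """Peak normalized cross-correlation between the two ear signals
            over physically plausible interaural lags (about +/-1 ms).
            Lower peaks indicate more decorrelated ear signals, the trend
            reported for larger head angles and source distances."""
            max_lag = int(fs * max_lag_ms / 1000.0)
            left = left - left.mean()
            right = right - right.mean()
            denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
            best = 0.0
            for k in range(-max_lag, max_lag + 1):
                # Overlapping segments of left[n] and right[n + k].
                a = left[max(0, -k):len(left) - max(0, k)]
                b = right[max(0, k):len(right) - max(0, -k)]
                best = max(best, np.sum(a * b) / denom)
            return best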

    Visuohaptic Simulation of a Borescope for Aircraft Engine Inspection

    Consisting of a long fiber-optic probe containing a small CCD camera controlled by a hand-held articulation interface, a video borescope is used for remote visual inspection of hard-to-reach components in an aircraft. The knowledge and psychomotor skills, specifically the hand-eye coordination, required for effective inspection are hard to acquire through the limited exposure to the borescope offered in aviation maintenance schools. Inexperienced aircraft maintenance technicians gain proficiency through repeated hands-on learning in the workplace, along a steep learning curve, while transitioning from the classroom to the workforce. Using an iterative process combined with focused user evaluations, this dissertation details the design, implementation, and evaluation of a novel visuohaptic simulator for training novice aircraft maintenance technicians in the task of engine inspection with a borescope. First, we describe the development of the visual components of the simulator, along with the acquisition and modeling of a representative model of a PT-6 aircraft engine. Subjective assessments with both expert and novice aircraft maintenance engineers evaluated the visual realism and the control interfaces of the simulator. In addition to visual feedback, probe contact feedback is provided through a specially designed custom haptic interface that simulates tip contact forces as the virtual probe intersects the 3D model surfaces of the engine. Compared to other haptic interfaces, the custom design is unique in that it is inexpensive and uses a real borescope probe to simulate camera insertion and withdrawal. User evaluation of this simulator with probe tip feedback suggested a trend of improved performance with haptic feedback. Next, we describe the development of a physically based camera model for improved behavioral realism of the simulator. Unlike a point-based camera, the enhanced model simulates the interaction of the borescope probe with the engine, including multiple points of contact along the length of the probe. We present visual comparisons of a real probe's motion with the simulated probe model and develop a simple algorithm for computing the resultant contact forces. User evaluation comparing our custom haptic device with two commonly available haptic devices, the Phantom Omni and the Novint Falcon, suggests that the improved camera model, as well as probe contact feedback with the 3D engine model, plays a significant role in the overall engine inspection process. Finally, we present results from a skill transfer study comparing classroom-only instruction with both simulator and hands-on training. Students trained using the simulator and the video borescope completed engine inspection with the real video borescope significantly faster than students who received classroom-only training. The speed improvements can be attributed to reduced borescope probe maneuvering time within the engine and improved psychomotor skills due to training. Given the usual constraints of limited time and resources, simulator training may provide beneficial skills needed by novice aircraft maintenance technicians to augment classroom instruction, resulting in a faster transition into the aviation maintenance workforce.
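
    A penalty-style resultant force over points sampled along the probe, in the spirit of the multi-point contact model described above, might look like the following sketch (the signed-distance and surface-normal queries against the engine model are assumed callbacks, and the stiffness constant is illustrative):

        import numpy as np

        def probe_contact_force(probe_points, signed_distance, surface_normal,
                                stiffness=200.0):
            """Sum penalty forces over sampled points along the probe.

            signed_distance(p) < 0 means point p penetrates the engine
            geometry (e.g. queried from a precomputed distance field); the
            force pushes outward along the local surface normal in
            proportion to penetration depth. Illustrative only."""
            total = np.zeros(3)
            for p in probe_points:
                d = signed_distance(p)
                if d < 0.0:  # this probe point is inside the geometry
                    total += stiffness * (-d) * surface_normal(p)
            return total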

    A framework for tumor segmentation and interactive immersive visualization of medical image data for surgical planning

    This dissertation presents a framework for analyzing and visualizing digital medical images. Two new segmentation methods have been developed: a probability-based segmentation algorithm and a segmentation algorithm that uses a fuzzy rule-based system to generate similarity values for segmentation. A visualization software application has also been developed to effectively view and manipulate digital medical images on a desktop computer as well as in an immersive environment.
    For the probabilistic segmentation algorithm, image data are first enhanced by manually setting the appropriate window center and width and, if needed, applying a sharpening or noise removal filter. To initialize the segmentation process, a user places a seed point within the object of interest and defines a search region for segmentation. Based on the pixels' spatial and intensity properties, a probabilistic selection criterion is used to extract pixels with a high probability of belonging to the object. To facilitate the segmentation of multiple slices, an automatic seed selection algorithm was developed to keep the seeds in the object as its shape and/or location changes between consecutive slices.
    The second segmentation method uses a fuzzy rule-based system to segment tumors in three-dimensional CT data. To initialize the segmentation process, the user selects a region of interest (ROI) within the tumor in the first image of the CT study set. Using the ROI's spatial and intensity properties, fuzzy inputs are generated for use in the fuzzy rule inference system. Using a set of predefined fuzzy rules, the system generates a defuzzified output for every pixel in terms of similarity to the object, and the pixels with the highest similarity values are selected as tumor. This process is automatically repeated for every subsequent slice in the CT set without further user input, as the segmented region from the previous slice is used as the ROI for the current slice, propagating information from previous slices into the segmentation of the current one. The membership functions used during the fuzzification and defuzzification processes adapt to changes in the size and pixel intensities of the current ROI. The proposed method is highly customizable to suit the different needs of a user, requiring information from only a single two-dimensional image.
    Segmentation results from both algorithms showed success in segmenting the tumor from seven of the ten CT datasets with less than 10% false positive errors, and five test cases had less than 10% false negative errors. The consistency of the segmentation statistics also showed high repeatability, with low inter- and intra-user variability for both methods.
    The visualization software is designed to load and display any DICOM/PACS-compatible three-dimensional image data for visualization and interaction in an immersive virtual environment. The software uses the open-source libraries DCMTK (DICOM Toolkit) for parsing digital medical images, Coin3D and SimVoleon for scenegraph management and volume rendering, and VRJuggler for virtual reality display and interaction. A user can apply pseudo-coloring in real time, with multiple interactive clipping planes to slice into the volume for an interior view. A windowing feature controls the range of tissue densities to display. A wireless gamepad controller and a simple, intuitive menu interface control user interactions. The software is highly scalable, running on anything from a single desktop computer to a cluster of computers driving an immersive multi-projection virtual environment. By wearing a pair of stereo goggles, the surgeon is immersed within the model itself, providing a sense of realism, as if the surgeon were inside the patient.
    The tools developed in this framework are designed to improve patient care by fostering the widespread use of advanced visualization and computational intelligence in preoperative planning, surgical training, and diagnostic assistance. Future work includes further improvements to both segmentation methods, with plans to incorporate deformable models and level set techniques so that tumor shape features become part of the segmentation criteria. For the surgical planning components, additional controls and interactions with the simulated endoscopic camera, and the ability to segment the colon or a selected region of the airway for fixed-path navigation as a full virtual endoscopy tool, will also be implemented. (Abstract shortened by UMI.)
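
    A seeded region-growing pass in the spirit of the probabilistic algorithm described above might look like this sketch (2D slice only; the neighborhood statistics and thresholds are illustrative stand-ins, not the dissertation's actual selection criterion):

        import numpy as np
        from collections import deque

        def region_grow(image, seed, intensity_tol=2.0, max_radius=50):
            """Grow a region outward from a user-placed seed point.

            A pixel joins when it is 4-connected to the region, lies within
            the search radius of the seed, and its intensity falls within
            intensity_tol standard deviations of the seed neighborhood."""
            h, w = image.shape
            sy, sx = seed
            # Estimate object intensity from a small patch around the seed.
            patch = image[max(0, sy - 2):sy + 3, max(0, sx - 2):sx + 3]
            mu, sigma = patch.mean(), patch.std() + 1e-6
            mask = np.zeros((h, w), dtype=bool)
            mask[sy, sx] = True
            queue = deque([seed])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                            and (ny - sy) ** 2 + (nx - sx) ** 2 <= max_radius ** 2
                            and abs(image[ny, nx] - mu) <= intensity_tol * sigma):
                        mask[ny, nx] = True
                        queue.append((ny, nx))
            return mask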