
    The Effects of Virtual Reality on Motor Performance in the First Person Point of View

    Previous research has shown that visualization is an effective method for improving motor performance (Ridderinkhof, 2015) and that similar neural pathways are activated while visualizing and performing a task (Decety, 1989). More recent research has begun to examine whether virtual reality similarly improves motor performance (Bideau, 2004). The advantages of virtual reality include the ability to practice without physical exertion (Ridderinkhof, 2015) and a better cognitive understanding of complex tactics (Science-based cognitive assessment & training, 2019). In the current study, the effects of virtual reality and visualization on motor performance in sports were tested by comparing free-throw success rates among the Control, Visualization, and Virtual Reality groups. I hypothesized that the Virtual Reality group would make more shots than the Visualization or Control groups because of the more interactive nature of virtual reality. I also hypothesized that participants' self-efficacy would increase after using virtual reality. The results showed no significant difference in shooting ability between groups. On the other hand, participants in the Virtual Reality group found virtual reality to be significantly more useful than the Control group found counting backwards to be. The Virtual Reality group did not differ significantly from the other groups on any of the remaining self-efficacy questions. Future research should continue to examine the possible effects of virtual reality on motor performance as well as on self-efficacy improvement.
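    The between-group comparison described above could, for instance, be run as a one-way ANOVA on free-throw makes across the three groups. The sketch below uses scipy for that test; the group labels follow the abstract, but the per-participant counts are invented placeholders, not the study's data.

```python
# Hypothetical sketch of the between-group comparison: a one-way ANOVA on
# free-throw makes for the Control, Visualization, and Virtual Reality groups.
# The numbers below are placeholders, not the study's actual results.
from scipy import stats

# Made free-throws (out of 10 attempts) per participant -- illustrative only.
control       = [4, 5, 3, 6, 4, 5]
visualization = [5, 6, 4, 5, 6, 5]
virtual_real  = [6, 5, 5, 7, 6, 5]

f_stat, p_value = stats.f_oneway(control, visualization, virtual_real)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the reported finding of
# no significant difference in shooting ability between groups.
```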

    NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization

    The visualization of radiological results with more advanced techniques than the current ones, such as Augmented Reality and Virtual Reality technologies, represents a great advance for medical professionals, because it removes the need for mental reconstruction as a prerequisite for understanding medical images. The problem is that applying these techniques requires segmenting the anatomical areas of interest, which currently involves human intervention. The Nextmed project is presented as a complete solution that includes DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all provided by the different software platforms that have been implemented and are detailed here, including results obtained from real patients. We focus on the visualization platform, which uses both Augmented and Virtual Reality technologies to allow medical professionals to work with 3D model representations of medical images in a new way, taking advantage of these new technologies.
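    As a rough illustration of the pipeline outlined above (DICOM import, automatic segmentation, 3D mesh generation for AR/VR viewing), the sketch below reads a DICOM series, applies a plain intensity threshold as a stand-in for Nextmed's segmentation algorithms (which are not detailed in the abstract), and extracts a surface mesh with marching cubes. The directory name and threshold value are assumptions.

```python
# Minimal sketch of a DICOM-to-3D-mesh pipeline, assuming a directory of
# single-frame DICOM slices. The threshold-based "segmentation" is only a
# placeholder for the automatic algorithms described in the abstract.
import glob
import numpy as np
import pydicom
from skimage import measure
import trimesh

# 1. DICOM import: read all slices and stack them into a volume, sorted by position.
slices = [pydicom.dcmread(f) for f in glob.glob("dicom_series/*.dcm")]  # hypothetical path
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

# 2. "Segmentation": a simple intensity threshold (placeholder for the real algorithm).
mask = volume > 300  # e.g. bone-like intensities; the threshold is an assumption

# 3. 3D mesh generation: marching cubes over the binary mask.
verts, faces, normals, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5)

# 4. Export a mesh that an AR/VR viewer (or a 3D printer) can consume.
trimesh.Trimesh(vertices=verts, faces=faces).export("segmented_structure.stl")
```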

    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time-consuming because a doctor is required, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, through its different software applications, the Nextmed project permits the import of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then generated automatically and can be 3D printed or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on the application of augmented and virtual reality to 3D medical image visualization; however, these efforts are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study. Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization.
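    Since the reported case is a lung study, a concrete (and deliberately simplified) example of automatic segmentation is sketched below: classic air thresholding on a CT volume in Hounsfield units, followed by connected-component filtering. This is a stand-in for illustration only, not the algorithms the platform actually uses.

```python
# Illustrative classic lung segmentation on a CT volume in Hounsfield units (HU).
# A simplified stand-in for the automatic algorithms the abstract describes.
import numpy as np
from scipy import ndimage

def segment_lungs(ct_hu: np.ndarray) -> np.ndarray:
    """Return a boolean mask of the two largest air-filled regions inside the body."""
    air = ct_hu < -320                       # lungs and surrounding air are low-HU
    # Remove air connected to the volume border (the scanner background).
    labels, _ = ndimage.label(air)
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel(),
    ]))
    air[np.isin(labels, border_labels)] = False
    # Keep the two largest remaining components (left and right lung).
    labels, n = ndimage.label(air)
    if n < 2:
        return air
    sizes = ndimage.sum(air, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1        # label ids of the two largest regions
    mask = np.isin(labels, keep)
    # Fill small holes (vessels, airways) inside the lung regions.
    return ndimage.binary_closing(mask, iterations=3)
```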

    On the use of virtual reality in software visualization: The case of the city metaphor

    Background: Researchers have been exploring 3D representations for visualizing software. Among these representations, one of the most popular is the city metaphor, which represents a target object-oriented system as a virtual city. Recently, this metaphor has also been implemented in interactive software visualization tools that use virtual reality in an immersive 3D environment. Aims: We assessed the city metaphor displayed on a standard computer screen and in an immersive virtual reality with respect to the support provided in the comprehension of Java software systems. Method: We conducted a controlled experiment in which we asked the participants to fulfill program comprehension tasks with the support of (i) an integrated development environment (Eclipse) with a plugin for gathering code metrics and identifying bad smells; and (ii) a visualization tool of the city metaphor displayed on a standard computer screen and in an immersive virtual reality. Results: The use of the city metaphor displayed on a standard computer screen and in an immersive virtual reality significantly improved the correctness of the solutions to program comprehension tasks with respect to Eclipse. Moreover, when carrying out these tasks, the participants using the city metaphor in an immersive virtual reality were significantly faster than those using the city metaphor on a standard computer screen. Conclusions: Virtual reality is a viable means for software visualization.
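    To make the city metaphor concrete: each class of the analyzed system typically becomes a building whose geometry encodes code metrics. The sketch below shows one possible mapping (methods to height, attributes to footprint, lines of code to shade); the specific metric-to-dimension choices and the example classes are illustrative, not those of the evaluated tool.

```python
# Illustrative metric-to-geometry mapping for a software city, where each
# class of the analyzed system becomes one "building". The particular mapping
# (methods -> height, attributes -> footprint, LOC -> shade) is an example,
# not necessarily the one used by the tool under study.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    methods: int
    attributes: int
    loc: int

@dataclass
class Building:
    name: str
    height: float
    footprint: float
    shade: float   # 0 (light) .. 1 (dark), driven by lines of code

def to_building(m: ClassMetrics, max_loc: int) -> Building:
    return Building(
        name=m.name,
        height=1.0 + m.methods,            # taller buildings = more methods
        footprint=1.0 + m.attributes,      # wider buildings = more attributes
        shade=m.loc / max_loc if max_loc else 0.0,
    )

# Hypothetical metrics for two classes of a Java system under analysis.
metrics = [ClassMetrics("OrderService", 24, 6, 830),
           ClassMetrics("OrderDTO", 4, 12, 120)]
max_loc = max(m.loc for m in metrics)
city = [to_building(m, max_loc) for m in metrics]
for b in city:
    print(f"{b.name}: height={b.height}, footprint={b.footprint}, shade={b.shade:.2f}")
```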

    Using Virtual Reality Technology in Oil and Gas Industry

    This article presents research on virtual reality technologies used in the oil and gas industry. The industry is so vast that the technologies used within it differ radically. Various aspects of oil and gas production were considered, such as geodata modeling and real-time production visualization technology. The problems of translating CAD models into virtual reality applications, and possible solutions, are indicated. Using virtual reality technology can also increase the speed of work and reduce the risk of errors, which is extremely important in the oil and gas industry. The benefits of using virtual reality to improve learning and understanding of production processes are also discussed.
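    One recurring issue when bringing CAD models into a VR runtime, offered here only as a hypothetical illustration since the abstract does not spell out the problems it addresses, is that CAD exports often use millimeters and arbitrary origins while VR engines expect meters near the scene origin. A minimal normalization sketch using trimesh, with placeholder file names:

```python
# Hypothetical normalization step when preparing a CAD export for a VR scene:
# rescale from millimeters to meters and recenter the model at the origin.
# File names and the unit assumption are placeholders.
import trimesh

mesh = trimesh.load("plant_equipment.stl")   # CAD model exported as a mesh
mesh.apply_scale(0.001)                      # assume source units are millimeters
mesh.apply_translation(-mesh.centroid)       # recenter at the origin for the VR scene
mesh.export("plant_equipment_vr.glb")        # glTF is widely consumed by VR engines
```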

    Mission planning and remote operated vehicle simulation in a virtual reality interface

    Virtual reality simulations are finding applications in a wide range of disciplines, such as surgical simulation, electronics training, and crime scene investigation. During the Mars Pathfinder Mission, in summer 1997, NASA scientists unveiled a new application of virtual reality for the visualization of a planetary surface. The success of this application led to a more concentrated effort to use virtual reality visualization tools during future missions. The thrust of this effort was to develop a new interface that would allow scientists to interactively plan experiments to be performed by the mission robots. This thesis covers two of the primary aspects of implementing this system. The first was to develop a kinematic model of one of NASA's rovers for use in a virtual reality simulation. The second was to implement the tools required for the mission planning module, namely the interfaces that the scientists use to plan the experiments for the rover.
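    The thesis's actual kinematic model is not reproduced in the abstract; as a simplified illustration of what a rover kinematic model inside a virtual reality simulation can look like, the sketch below integrates a planar differential-drive approximation from left/right wheel speeds (the real rover's suspension and steering are considerably more complex).

```python
# Simplified planar kinematic update for a rover in a virtual reality simulation.
# An illustrative differential-drive approximation, not the thesis's actual
# kinematic model of the NASA rover.
import math
from dataclasses import dataclass

@dataclass
class RoverPose:
    x: float = 0.0      # meters
    y: float = 0.0      # meters
    theta: float = 0.0  # heading in radians

def step(pose: RoverPose, v_left: float, v_right: float, track: float, dt: float) -> RoverPose:
    """Advance the pose by dt given left/right wheel speeds (m/s) and wheel track (m)."""
    v = (v_left + v_right) / 2.0          # forward speed of the body
    omega = (v_right - v_left) / track    # yaw rate
    return RoverPose(
        x=pose.x + v * math.cos(pose.theta) * dt,
        y=pose.y + v * math.sin(pose.theta) * dt,
        theta=pose.theta + omega * dt,
    )

# Example: drive a gentle left arc for 10 seconds at 50 ms simulation steps.
pose = RoverPose()
for _ in range(200):
    pose = step(pose, v_left=0.08, v_right=0.10, track=0.4, dt=0.05)
print(f"x={pose.x:.2f} m, y={pose.y:.2f} m, heading={math.degrees(pose.theta):.1f} deg")
```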