
    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must first be segmented. Manual segmentation, currently the most common technique, and semi-automatic approaches are time consuming because they require a doctor, making segmentation of each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. The resulting Nextmed project, through its different software applications, permits the import of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then generated automatically; it can be 3D printed or visualized with the designed software systems using both augmented and virtual reality. The Nextmed project is unique in covering the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on applying augmented and virtual reality to 3D medical image visualization, but those systems are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study.
Applying the platform to more than 1000 DICOM images and reviewing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a tool for medical image visualization
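The abstract does not publish Nextmed's segmentation algorithms, but the pipeline step it describes (isolate an anatomical region from image intensities before meshing) can be sketched with a simple, assumed stand-in: an intensity threshold plus largest-connected-component selection on a toy 2D slice. All pixel values here are invented for illustration.

```python
# Minimal sketch of an automatic-segmentation step: threshold a toy
# intensity slice, then keep only the largest 4-connected region so
# isolated bright pixels are discarded as noise.

def segment_slice(pixels, threshold):
    """Return a binary mask: 1 where intensity exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row] for row in pixels]

def largest_component(mask):
    """Keep only the largest 4-connected region of the mask (flood fill)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, region = [(r, c)], set()
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) > len(best):
                    best = region
    return [[1 if (r, c) in best else 0 for c in range(cols)]
            for r in range(rows)]

# Hypothetical pixel intensities standing in for DICOM slice data.
slice_ = [
    [10, 12, 300, 310, 11],
    [ 9, 11, 320, 305, 10],
    [ 8, 250, 12,  10,  9],   # lone bright pixel, dropped as noise
]
mask = largest_component(segment_slice(slice_, 200))
```

A production system would of course operate on full 3D volumes and learned models rather than a fixed threshold, but the input/output shape (image in, binary mask out, mesh generated from the mask) is the same.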

    NextMed, Augmented and Virtual Reality platform for 3D medical imaging visualization

    The visualization of radiological results with techniques more advanced than the current ones, such as Augmented Reality and Virtual Reality, represents a great advance for medical professionals, because understanding the images no longer depends on the viewer's capacity to mentally reconstruct them in 3D. The problem is that applying these techniques requires segmenting the anatomical areas of interest, which currently involves human intervention. The Nextmed project is presented as a complete solution comprising DICOM image import, automatic segmentation of certain anatomical structures, 3D mesh generation of the segmented area, and a visualization engine with Augmented Reality and Virtual Reality, all built on different software platforms that are implemented and detailed here, including results obtained from real patients. We focus on the visualization platform, which uses both Augmented and Virtual Reality technologies to allow medical professionals to work with 3D model representations of medical images in a new way, taking advantage of new technologies

    The Medium of Visualization for Software Comprehension

    Although abundant studies have shown how visualization can help software developers understand software systems, visualization is still not common practice, since developers (i) have little support for finding a proper visualization for their needs, and, once they find a suitable visualization tool, (ii) are unsure of its effectiveness. We aim to offer support for identifying proper visualizations and to increase the effectiveness of visualization techniques. In this dissertation, we characterize proposed software visualizations. To fill the gap between proposed visualizations and their practical application, we encapsulate their characteristics in an ontology and propose a meta-visualization approach to find suitable visualizations. Among other characteristics of software visualizations, we identify that the medium used to display them can be a means to increase the effectiveness of visualization techniques for particular comprehension tasks. We implement visualization prototypes and validate our thesis via experiments. We found that even though developers using a physical 3D model medium required the least time for tasks that involve identifying outliers, they perceived the least difficulty when visualizing systems on the standard computer screen medium. Moreover, developers using immersive virtual reality obtained the highest recollection. We conclude that the effectiveness of software visualizations that use the city metaphor to support comprehension tasks can be increased when city visualizations are rendered in an appropriate medium, and that visualizing software visualizations themselves can be a suitable means of exploring their multiple characteristics, which can be properly encapsulated in an ontology

    Internship in Augmented and Virtual Reality - Rapid Model Import Tool

    The integration of virtual and augmented reality, sometimes called mixed reality, is an emerging technology that will likely skyrocket much the way smartphones did a decade ago. Kennedy Space Center's Augmented and Virtual Reality (AVR) Lab is developing a Rapid Model Import Tool (RMIT) to create a quick and efficient way to bring NASA's complex engineering 3D models into virtual and augmented environments. The long-term objective is to create a tool that will ultimately benefit KSC engineers. Its uses within NASA could span astronaut training, marketing, and public outreach, to name a few. Unity is a prolific cross-platform game engine that lets users build high-quality 2D and 3D games for desktop, mobile, web, and game console platforms. It is perhaps also the most widely used software for virtual reality game development. At the AVR Lab, we are looking at alternative uses of Unity to build tools for NASA engineers to perform design, development, testing, and training on spacecraft, rocket delivery systems, ground support equipment, and facilities at KSC. As an intern on the RMIT project, I am charged with researching Unity-compatible file types to develop an efficient, affordable, fidelity-preserving process for bringing models from the CATIA 3D engineering software into the Unity environment. With a tool called the NASA Enterprise Visualization Application (NEVA), developed by the Boeing Design Visualization group at KSC, we can easily convert CATIA's design models to .DAE (also known as COLLADA) and .OBJ file formats. I first reduce the polygon count of the model within CATIA itself, make any necessary tweaks to reduce the model further, and then export using NEVA. The resulting .OBJ or .DAE files are then converted by another intern to a Unity-compatible file format using a custom Python script. I have documented this process extensively in a NEVA User Guide.
By the end of this semester, we will have built a solid framework for RMIT based on a thorough understanding of virtual reality specifications and file requirements, allowing future software development teams to go forward with development of the custom tool
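Since polygon-count reduction is central to the workflow described, the kind of sanity check applied after export can be sketched in a few lines of Python: count the vertex and face records in a Wavefront .OBJ file. The cube text below is a made-up example, and this toy parser handles only plain "v"/"f" records, not the full OBJ format.

```python
# Sketch: report vertex and polygon (face) counts of a Wavefront .OBJ
# model, the kind of check used when verifying a reduced model before
# Unity import. Only plain "v" and "f" records are handled.

def obj_stats(obj_text):
    vertices = faces = 0
    for line in obj_text.splitlines():
        tag = line.split(maxsplit=1)[0] if line.strip() else ""
        if tag == "v":
            vertices += 1
        elif tag == "f":
            faces += 1
    return {"vertices": vertices, "faces": faces}

# A single quad face with four vertices (hypothetical model fragment).
quad = """\
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
f 1 2 3 4
"""
stats = obj_stats(quad)
```

Running the same count before and after reduction gives a quick measure of how much geometry the export pipeline removed.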

    Mobile augmented reality: A pedagogical strategy in the university setting

    Mobile augmented reality (M-AR), besides being a booming computer technology, is an innovative tool that can support the pedagogical process in university classrooms. This research therefore presents a methodological proposal for its implementation, aimed at facilitating students' learning of spatial reasoning through the visualization and manipulation of three-dimensional virtual objects, and at promoting motivation to learn the knowledge and topics of the industrial design and technical drawing course in the industrial engineering program. A collection of geometric figures has been created with the help of technological tools such as 2D and 3D modeling software, computer-aided design software, and augmented reality application software. An updated methodology is proposed, available to any teacher, oriented toward stimulating the mental processes related to students' spatial reasoning and integrating technological tools into the didactics of the dihedral system and the different graphic projections

    Augmented Reality and Gesture-Based Control

    This research investigates methods for interacting with 3D visualizations of science data. Even with higher-resolution, large-format, and stereoscopic displays, most visualization still involves the user looking at a result rendered on a flat panel. Changing perspective, zooming, and interpreting depth is often disorienting and frustrating. Specialized hardware and software solutions like large-format displays and CAVEs address these issues with infrastructure limited by cost, complexity, and size. We investigate low-cost commercial hardware solutions for their potential application to this problem. The Leap Motion Controller and Kinect motion sensor are assessed for gesture-based visualization control. The Oculus Rift is considered for immersive virtual reality, combining head tracking with a close-to-eye, wide-angle display. Finally, Android devices are used for augmented reality by overlaying rendered 3D objects on a camera video stream that reacts to the user's perspective. These devices are integrated with the Unity 3D gaming engine as a tool for connecting input from the sensors to both the Oculus and flat panel displays. The visualizations use example models created from scientific data
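The core of gesture-based visualization control as described above is a mapping from raw sensor samples to camera commands. The sketch below illustrates that idea with an assumed, simplified model: hand positions (as a Leap Motion-style device might report them, in millimetres) are differenced and mapped to rotate/zoom commands. The gains, dead zone, and axis conventions are all invented for the example.

```python
# Illustrative sensor-to-camera mapping: turn the displacement between
# two hand samples into a rotate or zoom command for a 3D view.
# Thresholds and axis conventions are hypothetical.

ROTATE_GAIN = 0.5   # degrees of yaw per millimetre of sideways motion
ZOOM_GAIN = 0.01    # zoom-factor change per millimetre of depth motion
DEADZONE = 2.0      # ignore jitter below this displacement (mm)

def hand_delta_to_command(prev, curr):
    """Map the displacement between two (x, y, z) hand samples
    to a ("rotate" | "zoom" | "idle", magnitude) command."""
    dx = curr[0] - prev[0]          # sideways motion -> rotate
    dz = curr[2] - prev[2]          # toward/away from sensor -> zoom
    if abs(dx) < DEADZONE and abs(dz) < DEADZONE:
        return ("idle", 0.0)
    if abs(dx) >= abs(dz):
        return ("rotate", dx * ROTATE_GAIN)
    return ("zoom", -dz * ZOOM_GAIN)  # moving the hand closer zooms in

# 20 mm sideways, 4 mm forward: sideways motion dominates.
cmd = hand_delta_to_command((0.0, 0.0, 0.0), (20.0, 0.0, 4.0))
```

In an actual Unity integration this mapping would live in a per-frame update that reads the device SDK and drives the camera transform; the point here is only the command-classification step.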

    A Software Framework to Create 3D Browser-Based Speech Enabled Applications

    Advances in automatic speech recognition have pushed human-computer interface researchers to adopt speech as one means of input. It is natural to humans and complements other input interfaces very well. However, integrating an automatic speech recognizer into a complex system (such as a 3D visualization system or a Virtual Reality system) can be a difficult and time-consuming task. In this paper we present our approach to the problem: a software framework requiring minimal additional coding from the application developer. The framework combines voice commands with existing interaction code, automating the task of creating a new speech grammar (to be used by the recognizer). A new listener component for Xj3D was created, which makes the integration between the 3D browser and the recognizer transparent to the user. We believe this is a desirable feature for virtual reality system developers, and also useful as a rapid prototyping tool when experimenting with speech technology
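The abstract does not specify the grammar format its framework generates, so the automated grammar-creation step can be illustrated with an assumed target: JSGF (Java Speech Grammar Format), a common input format for speech recognizers. The command names and phrases below are hypothetical.

```python
# Sketch of automated speech-grammar generation: given the voice
# commands an application registers, emit a minimal JSGF grammar
# string for the recognizer. Commands and phrases are hypothetical.

def build_jsgf(grammar_name, commands):
    """Build a JSGF grammar with one rule per registered command."""
    lines = ["#JSGF V1.0;", f"grammar {grammar_name};"]
    alternatives = " | ".join(f"<{name}>" for name in commands)
    lines.append(f"public <command> = {alternatives};")
    for name, phrases in commands.items():
        lines.append(f"<{name}> = {' | '.join(phrases)};")
    return "\n".join(lines)

jsgf = build_jsgf("viewer", {
    "rotate": ["rotate left", "rotate right"],
    "zoom": ["zoom in", "zoom out"],
})
```

The appeal of generating the grammar from the application's existing interaction code, as the paper proposes, is that the developer registers each command once and the recognizer configuration stays in sync automatically.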