
    The virtual human face – superimposing the simultaneously captured 3D photorealistic skin surface of the face on the untextured skin image of the CBCT Scan

    The aim of this study was to evaluate the impact of simultaneous capture of the three-dimensional (3D) surface of the face and cone beam computed tomography (CBCT) scan of the skull on the accuracy of their registration and superimposition. 3D facial images were acquired in 14 patients using the Di3d (Dimensional Imaging, UK) imaging system and the i-CAT CBCT scanner. One stereophotogrammetric image was captured at the same time as the CBCT scan and another one hour later. The two stereophotographs were then individually superimposed over the CBCT using VRmesh. Seven patches were isolated on the final merged surfaces. For the whole face and each individual patch, the maximum and minimum range of deviation between surfaces, the absolute average distance between surfaces, and the standard deviation for the 90th percentile of the distance errors were calculated. The superimposition errors of the whole face differed significantly between the two captures (P = 0.00081). The absolute average distances for the separate and simultaneous captures were 0.47 mm and 0.27 mm, respectively. The level of superimposition accuracy in patches from separate captures ranged between 0.3 and 0.9 mm, while that of simultaneous captures was 0.4 mm. Simultaneous capture of Di3d and CBCT images significantly improved the accuracy of superimposition of these image modalities.
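
    The deviation metrics reported above (minimum and maximum deviation, absolute average distance, and the 90th-percentile statistics) can be sketched in a few lines; the study itself used VRmesh, so the following Python snippet is only an illustrative, assumed way of computing such figures from two registered surfaces sampled as point clouds.

        # Illustrative sketch (not the VRmesh workflow used in the study): compute
        # surface-deviation statistics between two registered face scans given as
        # Nx3 arrays of points sampled from each surface.
        import numpy as np
        from scipy.spatial import cKDTree

        def deviation_stats(surface_a, surface_b):
            """Nearest-neighbour distances from surface_a to surface_b (both Nx3)."""
            distances, _ = cKDTree(surface_b).query(surface_a)
            p90 = np.percentile(distances, 90)
            return {
                "min": distances.min(),
                "max": distances.max(),
                "absolute_average": np.mean(np.abs(distances)),
                "p90": p90,
                # One interpretation of "SD for the 90th percentile of the errors".
                "std_within_p90": np.std(distances[distances <= p90]),
            }

        # Example with synthetic data: two noisy samplings of the same plane.
        rng = np.random.default_rng(0)
        a = rng.uniform(0, 50, size=(5000, 3)); a[:, 2] = 0.0
        b = a + rng.normal(scale=0.3, size=a.shape)
        print(deviation_stats(a, b))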

    Sketchy rendering for information visualization

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users’ ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgement is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or as an indication of general uncertainty.
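
    The core idea of the sketchy primitives described above can be illustrated with a minimal sketch; the actual framework is a renderer for Processing and its algorithm is not reproduced here, so the following Python/matplotlib example only assumes one plausible approach: a line is drawn as several jittered strokes whose perpendicular perturbation grows with a sketchiness parameter.

        # Minimal illustration of one sketchy primitive: a 2D line drawn as several
        # hand-jittered strokes whose perturbation scales with a sketchiness value.
        # This mimics the idea only; the paper's Processing renderer is not shown here.
        import numpy as np
        import matplotlib.pyplot as plt

        def sketchy_line(ax, p0, p1, sketchiness=1.0, strokes=3, segments=20):
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            length = np.linalg.norm(p1 - p0)
            normal = np.array([-(p1 - p0)[1], (p1 - p0)[0]]) / max(length, 1e-9)
            t = np.linspace(0.0, 1.0, segments)
            for _ in range(strokes):
                # Random offsets perpendicular to the line, larger for higher sketchiness.
                jitter = np.random.normal(scale=0.01 * length * sketchiness, size=segments)
                jitter[[0, -1]] *= 0.2  # keep the endpoints roughly anchored
                pts = p0 + np.outer(t, p1 - p0) + np.outer(jitter, normal)
                ax.plot(pts[:, 0], pts[:, 1], color="k", lw=1, alpha=0.7)

        fig, ax = plt.subplots()
        for i, s in enumerate([0.0, 1.0, 3.0]):  # increasing degree of sketchiness
            sketchy_line(ax, (0, i), (10, i), sketchiness=s)
        ax.set_aspect("equal"); plt.show()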

    The paranasal sinuses: three-dimensional reconstruction, photo-realistic imaging, and virtual endoscopy

      Background: The purpose of the study was to create computer-aided design models of the paranasal sinuses (frontal, maxillary, and sphenoid) and to perform virtual endoscopy (VE) on them using the virtual reality modelling language technique. Materials and methods: The Visible Human dataset was used as the input imaging data. The Surfdriver software package was applied to these images to reconstruct the paranasal sinuses as three-dimensional (3D) computer-aided design models. These models were post-processed in Cinema 4D to perform photorealistic imaging and VE of the paranasal sinuses. Results: The volumes of the maxillary sinuses were 24747.89 mm³ on the right and 29008.78 mm³ on the left. As for the sphenoidal sinuses, an enormous variation was seen between the right and left cavities: they measured 1995.90 mm³ on the right and 10228.93 mm³ on the left, while the frontal sinuses were 20805.67 mm³ on the right and 18048.85 mm³ on the left. The left maxillary sinus was the largest sinus by volume; the right frontal sinus was the largest by surface area, calculated as 6002.73 mm². Our methodological outcomes showed that Surfdriver and Cinema 4D can be reliably used together for 3D reconstruction, photorealistic imaging, and the creation of 3D virtual environments from serial sections of anatomical structures. Conclusions: This technique allows students, researchers, and surgeons to perform noninvasive visualisation, simulation, and precise quantitative measurements of internal structures of the body. It was developed as a complementary tool for endoscopic surgery and could be especially preferable for patients who cannot tolerate flexible or rigid endoscopy.
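
    The sinus volumes and surface areas above were measured on the reconstructed 3D models; as a hedged illustration (not the Surfdriver/Cinema 4D workflow used in the study), the following sketch shows how such quantities can be computed from a closed triangle mesh using signed tetrahedron volumes and triangle areas.

        # Illustrative computation (not the Surfdriver/Cinema 4D workflow): volume and
        # surface area of a closed triangle mesh given vertices (Vx3) and faces (Fx3).
        import numpy as np

        def mesh_volume_and_area(vertices, faces):
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            # Signed volume of the tetrahedron formed by each triangle and the origin;
            # for a closed, consistently oriented mesh the sum is the enclosed volume.
            signed_vols = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
            areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
            return abs(signed_vols.sum()), areas.sum()

        # Example: a unit cube split into 12 triangles should give volume 1, area 6.
        verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
        faces = np.array([[0,1,3],[0,3,2],[4,6,7],[4,7,5],[0,4,5],[0,5,1],
                          [2,3,7],[2,7,6],[0,2,6],[0,6,4],[1,5,7],[1,7,3]])
        print(mesh_volume_and_area(verts, faces))  # -> (1.0, 6.0)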

    Visual SLAM for Measurement and Augmented Reality in Laparoscopic Surgery

    In spite of the great advances in laparoscopic surgery, this type of surgery still presents difficulties, mainly caused by its complex manoeuvres and, above all, by the loss of depth perception. Unlike classical open surgery (laparotomy), where surgeons have direct contact with the organs and complete 3D perception, laparoscopy is carried out by means of specialised instruments and a monocular camera (the laparoscope), in which the 3D scene is projected onto a 2D image plane. The main goal of this thesis is to address this loss of depth perception by making use of Simultaneous Localization and Mapping (SLAM) algorithms developed in the fields of robotics and computer vision in recent years. These algorithms localize, in real time (25-30 frames per second), a camera that moves freely inside an unknown rigid environment while, at the same time, building a map of this environment from the images gathered by that camera. They have been extensively validated both in man-made environments (buildings, rooms, ...) and outdoors, showing robustness to occlusions, sudden camera motions, and clutter. This thesis extends the use of these algorithms to laparoscopic surgery. Due to the intrinsic nature of internal body images (deformations, specularities, variable illumination conditions, limited movements, ...), applying this type of algorithm to laparoscopy poses a real challenge. Knowing the location of the camera (laparoscope) with respect to the scene (the abdominal cavity), together with the 3D map of that scene, opens interesting new possibilities in the surgical field: augmented reality annotations directly on the laparoscopic images (e.g. alignment of preoperative 3D CT models), intracavity 3D distance measurements, and photorealistic 3D reconstructions of the abdominal cavity that synthetically recover the lost depth. These new facilities add safety and speed to surgical procedures without disturbing the classical procedure workflow; the tools are available in the surgeon's armoury, and it is the surgeon who decides whether or not to use them. Additionally, knowledge of the camera location with respect to the patient's abdominal cavity is fundamental for the future development of robots that can operate automatically since, knowing this location, a robot will be able to localize other tools under its control with respect to the patient. In detail, the contributions of this thesis are:
    - To demonstrate the feasibility of applying SLAM algorithms to laparoscopy, showing experimentally that using robust data association is a must.
    - To robustify one of these algorithms, in particular the monocular EKF-SLAM algorithm, by adapting a relocalization system and improving data association with a robust matching algorithm.
    - To develop a robust matching method (the 1-Point RANSAC algorithm).
    - To develop a new surgical procedure to ease the use of visual SLAM in laparoscopy.
    - To validate the robust EKF-SLAM (EKF + relocalization + 1-Point RANSAC) extensively, obtaining millimetric errors and real-time performance both in simulation and in real human surgeries; the selected surgery was ventral hernia repair.
    - To demonstrate the potential of these algorithms in laparoscopy: they synthetically recover the depth of the operative field, which is lost when using monocular laparoscopes, enable the insertion of augmented reality annotations, and allow distance measurements to be performed using only a laparoscopic tool (to define the real scale) and laparoscopic images.
    - To carry out a clinical validation showing that these algorithms help shorten operating times and add safety to surgical procedures.
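
    One of the measurement capabilities listed above relies on a laparoscopic tool of known size to fix the scale of the monocular SLAM map; the following minimal sketch (not the thesis implementation, and with made-up coordinates) illustrates how a known tool length converts map units into millimetres so that intracavity distances can be read out metrically.

        # Hedged sketch of the scale-recovery idea (not the thesis code): a monocular
        # SLAM map is defined only up to scale, so the known length of a laparoscopic
        # tool visible in the map is used to convert map units to millimetres.
        import numpy as np

        def metric_scale(tool_tip_map, tool_base_map, tool_length_mm):
            """Scale factor from map units to mm, given two map points on the tool."""
            return tool_length_mm / np.linalg.norm(tool_tip_map - tool_base_map)

        def measure_mm(p_map, q_map, scale):
            """Metric distance between two reconstructed map points."""
            return scale * np.linalg.norm(p_map - q_map)

        # Example with made-up map coordinates: a 50 mm tool segment fixes the scale,
        # then the distance between two anatomical landmarks is read out in mm.
        scale = metric_scale(np.array([0.0, 0.0, 0.0]), np.array([0.12, 0.0, 0.0]), 50.0)
        print(measure_mm(np.array([0.05, 0.02, 0.30]), np.array([0.11, 0.07, 0.33]), scale))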

    MBAPose: Mask and Bounding-Box Aware Pose Estimation of Surgical Instruments with Photorealistic Domain Randomization

    Surgical robots are controlled using a priori models based on the robots' geometric parameters, which are calibrated before the surgical procedure. One of the challenges in using robots in real surgical settings is that these parameters change over time, consequently deteriorating control accuracy. In this context, our group has been investigating online calibration strategies without added sensors. As one step toward that goal, we have developed an algorithm to estimate the pose of the instruments' shafts in endoscopic images. In this study, we build upon that earlier work and propose a new framework to more precisely estimate the pose of a rigid surgical instrument. Our strategy is based on a novel pose estimation model called MBAPose and the use of synthetic training data. Our experiments demonstrated an improvement of 21% in translation error and 26% in orientation error on synthetic test data with respect to our previous work. Results with real test data provide a baseline for further research.
    Comment: 8 pages, submitted to IROS202
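
    The translation and orientation improvements quoted above are relative error reductions; the exact error definitions are not stated here, so the following sketch merely assumes the usual metrics: Euclidean distance for translation and the geodesic rotation angle for orientation.

        # Common pose-error metrics (an assumption about the exact definitions used in
        # the paper): Euclidean translation error and geodesic rotation angle between
        # an estimated and a ground-truth rigid transform.
        import numpy as np

        def translation_error(t_est, t_gt):
            return np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))

        def rotation_error_deg(R_est, R_gt):
            """Geodesic angle of R_est @ R_gt.T, in degrees."""
            cos_angle = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
            return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

        # Example: ground-truth identity pose versus an estimate rotated 5 degrees
        # about the z-axis and shifted 2 mm along x.
        theta = np.radians(5.0)
        R_est = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                          [np.sin(theta),  np.cos(theta), 0.0],
                          [0.0,            0.0,           1.0]])
        print(translation_error([2.0, 0.0, 0.0], [0.0, 0.0, 0.0]),
              rotation_error_deg(R_est, np.eye(3)))  # -> 2.0, ~5.0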

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally-invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and to design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was, therefore, motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally-invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly that of novices, in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully-immersive simulation environments in surgeons' non-technical skills when performing vertebroplasty procedures is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    The application of virtual reality and augmented reality in oral & maxillofacial surgery

    Background: Virtual reality is the science of creating a virtual environment for the assessment of various anatomical regions of the body for diagnosis, planning, and surgical training. Augmented reality is the superimposition of a 3D real environment specific to the individual patient onto the surgical field using semi-transparent glasses to augment the virtual scene. The aim of this study is to provide an overview of the literature on the application of virtual and augmented reality in oral & maxillofacial surgery. Methods: We reviewed the literature and the existing databases using Ovid MEDLINE, the Cochrane Library, and PubMed. All studies in the English literature from the last 10 years, 2009 to 2019, were included. Results: We identified 101 articles related to the broad application of virtual reality in oral & maxillofacial surgery. These comprised 8 systematic reviews, 4 expert reviews, 9 case reports, 5 retrospective surveys, 2 historical perspectives, 13 manuscripts on virtual education and training, 5 on haptic technology, 4 on augmented reality, 10 on image fusion, and 41 articles on prediction planning for orthognathic surgery and maxillofacial reconstruction. Dental implantology and orthognathic surgery are the most frequent applications of virtual reality and augmented reality. Virtual planning improved the accuracy of dental implant placement using either static guidance or dynamic navigation. In orthognathic surgery, prediction planning and intraoperative navigation are the main applications of virtual reality. Virtual reality has also been utilised to improve the delivery of education and the quality of training in oral & maxillofacial surgery by creating a virtual environment of the surgical procedure. Haptic feedback provided an additional layer of immersion to improve manual dexterity and clinical training. Conclusion: Virtual and augmented reality have contributed to the planning of maxillofacial procedures and to surgical training. Few articles highlighted the importance of this technology in improving the quality of patient care, and there are limited prospective randomized studies comparing the impact of virtual reality with standard methods of delivering oral surgery education.

    Emergency Response Virtual Environment for Safe Schools

    An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness as well as training and support for safe schools is presented. ERVE incorporates an intelligent agent facility for guiding and assisting the user in the context of emergency response operations. Response information folders capture key information about each school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors, in an easy-to-use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations, such as a chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effect on schools in a geographic area. ERVE can also be integrated with emergency text messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scalable, seamlessly integrated, fully interactive, intelligent, and visually compelling solution.
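
    For illustration only, a minimal data-model sketch of the kind of per-school record the system ties together (GIS position, response information folder, live camera and sensor feeds) is shown below; the field names are hypothetical and do not reflect ERVE's actual API.

        # Purely illustrative data model (field names are hypothetical, not ERVE's API):
        # one record per school tying GIS coordinates, response information, and feeds.
        from dataclasses import dataclass, field

        @dataclass
        class SchoolRecord:
            name: str
            latitude: float   # GIS position used for situational awareness
            longitude: float
            response_folder: dict = field(default_factory=dict)  # key facts about the school
            camera_feeds: list = field(default_factory=list)      # URLs of live camera streams
            sensor_feeds: list = field(default_factory=list)      # IDs/URLs of sensor streams

        school = SchoolRecord(
            name="Example High School",
            latitude=38.90, longitude=-77.03,
            response_folder={"floor_plans": "floors.pdf", "evacuation_routes": 3},
            camera_feeds=["rtsp://example/cam1"],
        )
        print(school.name, school.latitude, school.longitude)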