3,965 research outputs found

    Using a smart phone for information rendering in Computer-Aided Surgery

    Full text link
    Computer-aided surgery makes intensive use of the concept of navigation: after CT data have been collected from a patient and registered to the operating-room coordinate system, the surgical instrument (a puncture needle, for instance) is localized and its position is visualized with respect to the patient's organs, which are not directly visible. This approach is very similar to the GPS paradigm. Traditionally, three orthogonal slices through the patient data are presented on a distant screen; sometimes a 3D representation is also added. In this study we evaluated the potential of adding a smartphone as a human-machine interaction device. Experiments in which operators punctured a phantom are reported in this paper.
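    To make the navigation concept concrete, here is a minimal sketch (not the paper's implementation) of the core bookkeeping: a rigid transform, assumed to come from a prior registration step, maps a tracked needle tip into the CT frame, and the tip is then converted to voxel indices to select the three orthogonal slices. All names and values below are illustrative.

    ```python
    import numpy as np

    # Hypothetical rigid registration (rotation R, translation t) mapping
    # tracker coordinates to the CT volume's frame; in practice this comes
    # from a fiducial- or surface-based registration step.
    R = np.eye(3)                      # placeholder rotation
    t = np.array([12.0, -4.5, 30.0])   # placeholder translation, mm

    def tracker_to_ct(p_tracker: np.ndarray) -> np.ndarray:
        """Map a tracked instrument-tip position into CT coordinates."""
        return R @ p_tracker + t

    def ct_to_voxel(p_ct: np.ndarray, origin: np.ndarray, spacing: np.ndarray) -> np.ndarray:
        """Convert a CT-space point (mm) to voxel indices for slice display."""
        return np.round((p_ct - origin) / spacing).astype(int)

    # Example: locate the needle tip in the volume, then display the axial,
    # sagittal, and coronal slices passing through voxel (i, j, k).
    tip_ct = tracker_to_ct(np.array([5.0, 5.0, 100.0]))
    i, j, k = ct_to_voxel(tip_ct, origin=np.zeros(3), spacing=np.array([0.5, 0.5, 1.0]))
    ```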

    Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality

    Get PDF
    Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has been a progression towards alternative digital and interactive 3D models to engage the learner. However, digital innovations in the curriculum have typically involved the medical rather than the veterinary curriculum. We therefore aimed to create a simple workflow methodology showing how straightforward it can be to create a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was applied to augmented reality on a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions for the creation of a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this into other areas of veterinary education and beyond.
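    The paper builds its model with off-the-shelf software, but the equivalent CT-to-mesh step can be sketched with open-source Python tools. The threshold value, function names, and OBJ export below are illustrative assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from skimage import measure

    def ct_to_mesh(volume: np.ndarray, threshold: float):
        """Extract an isosurface (e.g., bone) from a CT volume via marching cubes."""
        verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold)
        return verts, faces

    def save_obj(path: str, verts: np.ndarray, faces: np.ndarray) -> None:
        """Write a Wavefront OBJ file that AR engines (e.g., Unity) can import."""
        with open(path, "w") as f:
            for v in verts:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for tri in faces + 1:  # OBJ indices are 1-based
                f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")
    ```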

    Assessment of 3D Facial Scan Integration in 3D Digital Workflow Using Radiographic Markers and Iterative Closest Point Algorithm

    Get PDF
    Introduction: Integration of 3-dimensional (3D) facial scanning into digital smile design workflows is available in multiple commercial systems. Limited data exist on the accuracy of facial scans and of the various methods for merging facial scans with cone beam computed tomography (CBCT) scans. Objective: The purpose of this prospective clinical study was to evaluate the accuracy of 2 methods used to integrate soft tissue facial scans with CBCT scans, and to propose a novel approach for integrating a 3D facial scan into a 3D digital workflow using facial radio-opaque (RO) markers. Material and methods: Fifteen CBCT and 3D face scans were obtained from patients undergoing treatment at MUSoD. A DICOM file with RO markers and 3 STL files from the facial scans were obtained for each patient. These files were superimposed using Exocad software, and the accuracy of the superimpositions was evaluated by measuring distances between RO markers in the DICOM and STL data. The resulting dataset was analyzed using the paired t-test. Results: The mean values for the 6 subsets merged through the ICP algorithm were 1.47 to 2 mm, whereas the mean value when merging by RO markers was 0.14 mm. By paired t-test, the novel RO points method was statistically more accurate than the ICP algorithm method (
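    As a sketch of the marker-based merging idea, the best-fit rigid transform between corresponding RO marker positions can be computed with the Kabsch/Procrustes method. This is offered as an assumption about the underlying mathematics, not Exocad's actual implementation.

    ```python
    import numpy as np

    def rigid_align(src: np.ndarray, dst: np.ndarray):
        """Kabsch/Procrustes: best-fit rotation R and translation t mapping
        corresponding points src -> dst (e.g., RO markers located on both
        the face-scan STL and the CBCT DICOM)."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t

    def residuals(src: np.ndarray, dst: np.ndarray, R: np.ndarray, t: np.ndarray):
        """Per-marker distance after alignment, mirroring the study's
        between-marker accuracy measurements."""
        return np.linalg.norm((src @ R.T + t) - dst, axis=1)
    ```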

    CATRA: Interactive Measuring and Modeling of Cataracts

    Get PDF
    We introduce an interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly trained clinicians to check back scattering in the crystalline lens and test their predictions with visual acuity tests. Close-range parallax barriers create collimated beams of light that scan through sub-apertures, scattering light as they strike a cataract. User feedback generates maps of opacity, attenuation, contrast, and sub-aperture point-spread functions. The goal is to allow a general audience to operate a portable high-contrast light-field display and gain a meaningful understanding of their own visual conditions. User evaluations and validation with modified camera optics are performed. The compiled data are used to reconstruct the individual's cataract-affected view, offering a novel approach to capturing information for screening, diagnostic, and clinical analysis. Funding: Alfred P. Sloan Foundation (Research Fellowship); United States Defense Advanced Research Projects Agency (Young Faculty Award)
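    A minimal sketch of the final reconstruction step, under the assumption that the measured sub-aperture data have been combined into a single effective point-spread function (PSF): the cataract-affected view of a scene can then be approximated by convolution. This illustrates the idea only; it is not the authors' reconstruction code.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def simulate_view(image: np.ndarray, psf: np.ndarray) -> np.ndarray:
        """Approximate the perceived image as scene convolved with the PSF
        (grayscale; apply per channel for color images)."""
        psf = psf / psf.sum()                     # conserve overall brightness
        return fftconvolve(image, psf, mode="same")
    ```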

    Haptics and the Biometric Authentication Challenge

    Get PDF

    Management of a Complex Case during COVID-19 Time Using One-day Digital Dentistry: A Case Report

    Get PDF
    Aim and objective: The aim of this case report is to describe the digital management of an implant prosthetic rehabilitation performed with several digital technologies, which made it possible to complete both the surgical and the prosthetic stages in 1 day with a minimally invasive approach and a high standard of care. Background: The coronavirus disease-2019 (COVID-19) pandemic is affecting everyday dental practice. Clinicians have to reduce the number of patients per day and the time patients spend in the dental office. Minimally invasive and digital approaches, with the least possible exposure and interaction, are suggested to reduce the risk of infection. Case description: The failure of a short-span implant prosthetic rehabilitation, combined with pain and mobility of the involved teeth, was the main complaint of a 78-year-old male patient who asked for an urgent appointment. An intraoral scanner allowed the clinician to immediately take a preliminary digital impression of the arch to be treated. The resulting 3D files were sent by e-mail to the dental technician, who provided a digital wax-up for the computerized workflow. Computer-aided implantology (CAI), performed using an in-office cone-beam computed tomography (CBCT) unit, allowed the clinician to guide the surgery in a prosthetically driven manner. This integration within a well-defined workflow was the key to a successful and rapid treatment. Conclusion: By using innovative digital technology, the treatment was completed in 1 day, reducing the risk of COVID-19 transmission by limiting the number of appointments and reducing contact in confined environments such as the dental office and public transportation. It also reduced the movement of materials and people in the treatment of a dental emergency. Clinical significance: The possibility of performing an effective, time-saving treatment with efficient technology and a minimally invasive procedure highlights the importance of digital planning for optimizing every step of the treatment. A digital workflow also reduces the movement of potentially infected materials between the office and the dental laboratory.

    Mobile Wound Assessment and 3D Modeling from a Single Image

    Get PDF
    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of a wound on the body, for example when the image is taken at close range. In our solution, end users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to let users specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To view wounds interactively in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
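    The core of projective texture mapping can be sketched as follows: the wound photo is treated as if projected from a camera onto the anatomy mesh, so each mesh vertex receives a texture coordinate by projection into the image. The intrinsics K, pose Rt, and image size are illustrative placeholders, not the paper's actual parameters, and visibility/occlusion handling is omitted.

    ```python
    import numpy as np

    def project_uv(verts: np.ndarray, K: np.ndarray, Rt: np.ndarray, wh: tuple):
        """Project mesh vertices (N, 3) into the wound photo to obtain
        per-vertex texture coordinates.

        K  : 3x3 camera intrinsics (assumed)
        Rt : 3x4 camera extrinsics [R | t] (assumed)
        wh : (width, height) of the wound image in pixels
        """
        homo = np.hstack([verts, np.ones((len(verts), 1))])  # (N, 4) homogeneous
        cam = (Rt @ homo.T).T                                # to camera space
        pix = (K @ cam.T).T
        pix = pix[:, :2] / pix[:, 2:3]                       # perspective divide
        w, h = wh
        return pix / np.array([w, h])                        # normalized UVs in [0, 1]
    ```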