1,311 research outputs found

    A New 3D Tool for Planning Plastic Surgery

    Facial plastic surgery (PS) plays a major role in today's medicine. For both reconstructive and cosmetic surgery, achieving harmony of facial features is an important, if not the major, goal. Several systems have been proposed for presenting possible outcomes of the surgical procedure to patient and surgeon. In this paper, we present a new 3D system able to automatically suggest, for selected facial features such as the nose, chin, etc., shapes that aesthetically match the patient's face. The basic idea is to suggest shape changes that bring the patient closer to similar but more harmonious faces. To this end, our system compares the 3D scan of the patient with a database of scans of harmonious faces, excluding the feature to be corrected. Then, the corresponding features of the k most similar harmonious faces, as well as their average, are suitably pasted onto the patient's face, producing k+1 aesthetically effective surgery simulations. The system has been fully implemented and tested. To demonstrate the system, a 3D database of harmonious faces has been collected and a number of PS treatments have been simulated. The ratings of the outcomes of the simulations, provided by panels of human judges, show that the system and the underlying idea are effective.
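    To make the retrieval step concrete, here is a minimal sketch of ranking harmonious faces by similarity to the patient while ignoring the feature under revision. It assumes each face is encoded as a fixed-length descriptor vector (e.g., stacked 3D landmark coordinates) and that a boolean mask marks the entries belonging to the feature region; all names and data are illustrative, not the paper's implementation.

        # Sketch of the retrieval step: rank harmonious faces by similarity to the
        # patient, comparing only the part of the face outside the feature region.
        import numpy as np

        def k_most_similar(patient_vec, harmonious_db, feature_mask, k=3):
            """Return indices of the k harmonious faces closest to the patient,
            measured on the unmasked (untouched) part of the descriptor."""
            keep = ~feature_mask                               # entries outside the feature
            dists = np.linalg.norm(harmonious_db[:, keep] - patient_vec[keep], axis=1)
            return np.argsort(dists)[:k]

        # Toy usage: 100 harmonious faces, 60-dimensional descriptors, nose = dims 20..29.
        rng = np.random.default_rng(0)
        db = rng.normal(size=(100, 60))
        patient = rng.normal(size=60)
        mask = np.zeros(60, dtype=bool)
        mask[20:30] = True
        idx = k_most_similar(patient, db, mask, k=3)
        # The k retrieved feature shapes plus their average give the k+1 candidates.
        candidates = np.vstack([db[idx][:, mask], db[idx][:, mask].mean(axis=0)])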

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have become among the most significant challenges facing humanity. Interdisciplinary frontier research combining information science, computer technology, electronic engineering, and biomedical engineering applies modern engineering methods to explore early diagnosis, treatment, and rehabilitation for diseases such as cancer. This paper reviews computer-assisted navigation for minimally invasive surgery, multimodal medical big data, its methodology, and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces the preoperative and intraoperative multimodal medical imaging methods behind medical big data; describes the core workflow of advanced surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, 3D visualization methods, and interactive software techniques; and summarizes the clinical applications of the various minimally invasive surgical approaches. The strengths and weaknesses of surgical navigation techniques in clinical use worldwide are discussed, and the latest technical methods in the field are analyzed. On this basis, the review identifies the trend of minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and treatment, robotization, and high levels of intelligence.
    [English abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canada Foundation for Innovation, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
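    As a generic illustration of one building block of such navigation systems (not a method from this review), the sketch below maps positions reported by an external tracker into the coordinate frame of a preoperative CT using a rigid point-based registration between paired fiducials; all names and data are assumptions for the example.

        # Generic illustration: tracker-to-CT rigid registration (Kabsch / Procrustes)
        # so that a tracked tool tip can be overlaid on the patient's CT anatomy.
        import numpy as np

        def rigid_register(src, dst):
            """Least-squares rigid transform (R, t) mapping src points onto dst points."""
            c_src, c_dst = src.mean(0), dst.mean(0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t

        # Paired fiducials touched with a tracked pointer (tracker frame) and picked in CT.
        tracker_fids = np.array([[0., 0., 0.], [50., 0., 0.], [0., 40., 0.], [0., 0., 30.]])
        R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        ct_fids = tracker_fids @ R_true.T + np.array([100., 20., -5.])
        R, t = rigid_register(tracker_fids, ct_fids)
        tool_tip_tracker = np.array([10., 5., 2.])
        tool_tip_ct = R @ tool_tip_tracker + t   # position to display on the CT volume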

    Planning Plastic Surgery in 3D. An innovative approach and tool

    Facial plastic surgery (PS) plays a major role in today's medicine. For both reconstructive and cosmetic surgery, achieving harmony of facial features is an important, if not the major, goal. Several systems have been proposed for presenting possible outcomes of the surgical procedure to patient and surgeon. In this work, we present a new 3D system able to automatically suggest, for selected facial features such as the nose, chin, etc., shapes that aesthetically match the patient's face. The basic idea is to suggest shape changes that bring the patient closer to similar but more harmonious faces. To this end, our system compares the 3D scan of the patient with a database of scans of harmonious faces, excluding the feature to be corrected. Then, the corresponding features of the k most similar harmonious faces, as well as their average, are suitably pasted onto the patient's face, producing k+1 aesthetically effective surgery simulations. The system has been fully implemented and tested. To demonstrate the system, a 3D database of harmonious faces has been collected and a number of PS treatments have been simulated. The ratings of the outcomes of the simulations, provided by panels of human judges, show that the system and the underlying idea are effective.
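    Complementing the retrieval sketch above, the following is a simplified illustration of the "pasting" step: replacing the patient's feature region with a donor's and feathering the transition at the boundary. It assumes both meshes are in dense vertex correspondence; the blending scheme is an assumption for illustration, not the authors' actual pasting algorithm.

        # Simplified feature pasting: swap the vertices of the feature region and
        # blend near the boundary so the surfaces join smoothly.
        import numpy as np

        def paste_feature(patient_verts, donor_verts, region_idx, blend_weights):
            """Return patient vertices with the feature region replaced by the donor's.

            blend_weights[i] in [0, 1]: 1 = take the donor vertex, 0 = keep the patient's;
            intermediate values feather the transition at the region boundary."""
            out = patient_verts.copy()
            w = blend_weights[:, None]
            out[region_idx] = w * donor_verts[region_idx] + (1.0 - w) * patient_verts[region_idx]
            return out

        # Toy usage: 1000-vertex meshes in correspondence, nose = vertices 100..199.
        rng = np.random.default_rng(1)
        patient = rng.normal(size=(1000, 3))
        donor = rng.normal(size=(1000, 3))
        nose_idx = np.arange(100, 200)
        weights = np.linspace(0.0, 1.0, nose_idx.size)   # crude boundary feathering
        simulated = paste_feature(patient, donor, nose_idx, weights)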

    3D Soft Tissue Laser Scanning Technology to Evaluate Soft Tissue Changes after Orthognathic Surgery

    INTRODUCTION: Different technologies can be used to evaluate soft tissue changes after orthognathic surgery. Each technology comes with its own limitations, advantages, and costs. We compared 2D and 3D soft tissue evaluation techniques. AIM: The purpose of this study is to evaluate the limitations, advantages, and cost of two different techniques for the evaluation of soft tissue changes after orthognathic surgery. METHODS: A pre-surgical lateral cephalogram and a laser soft tissue scan were taken. A post-surgical cephalogram and laser scan were taken 6 months after orthognathic surgery. Soft tissue evaluation was done using the Arnett and Bergman analysis. Pre-surgical and post-surgical values were compared to assess soft tissue changes, and the 2D and 3D soft tissue evaluation techniques were compared to assess the advantages and disadvantages of each. RESULT: The laser soft tissue scanner is an effective, more accurate, and convenient tool for evaluating soft tissue change. CONCLUSION: An advanced 3D laser scanner gives exact 3D information about a 3D object. The technique is easy and needs less processing time, all measuring tools are incorporated in the data-reading software, and measurements are accurate at the micron level. The lateral cephalogram is cost effective, but it represents a 2D aspect of a 3D object, and examiner-level measurement error is common in the 2D soft tissue evaluation technique.
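    The core comparison, measuring how far each soft tissue landmark moved between the pre- and post-surgical records, can be illustrated with the sketch below. It assumes landmarks are already digitized and the two time points registered in a common frame; it is not the Arnett and Bergman analysis itself, which uses specific angular and linear measures.

        # Illustrative only: per-landmark displacement in 3D (laser scan) versus the
        # 2D sagittal projection available on a lateral cephalogram.
        import numpy as np

        def displacement(pre, post):
            """Per-landmark Euclidean displacement (works for 2D or 3D coordinates)."""
            return np.linalg.norm(post - pre, axis=1)

        # Toy data: two soft tissue landmarks, coordinates in millimetres (x = lateral).
        pre_3d  = np.array([[0.0, -60.0, 95.0], [0.0, -35.0, 102.0]])
        post_3d = np.array([[0.0, -58.0, 99.0], [1.5, -34.0, 104.0]])
        d3 = displacement(pre_3d, post_3d)
        d2 = displacement(pre_3d[:, 1:], post_3d[:, 1:])  # drop the lateral axis, as on a cephalogram
        print(d3, d2)   # the 3D scan captures the lateral component that the 2D projection misses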

    3D Face Reconstruction from 2D Pictures: First Results of a Web-Based Computer Aided System for Aesthetic Procedures

    The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods together in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations to be used in clinics (within an average error of 2 mm) in less than 5 minutes.
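    One common way to score a reconstruction against ground truth is a surface-to-surface error: for each reconstructed vertex, take the distance to the nearest ground-truth vertex and average. The sketch below shows this generic recipe under the assumption that the meshes are rigidly aligned beforehand; it is not necessarily the metric used in the paper.

        # Generic evaluation sketch: one-sided mean surface error in millimetres.
        import numpy as np
        from scipy.spatial import cKDTree

        def mean_surface_error(recon_verts, gt_verts):
            """Mean nearest-neighbour distance from reconstructed to ground-truth vertices."""
            tree = cKDTree(gt_verts)
            dists, _ = tree.query(recon_verts)
            return float(dists.mean())

        # Toy usage with random point clouds standing in for aligned face meshes.
        rng = np.random.default_rng(2)
        gt = rng.uniform(-80, 80, size=(5000, 3))                   # ground-truth scan (mm)
        recon = gt[:3000] + rng.normal(scale=1.5, size=(3000, 3))   # noisy reconstruction
        print(f"mean error: {mean_surface_error(recon, gt):.2f} mm")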

    Latent Disentanglement for the Analysis and Generation of Digital Human Shapes

    Analysing and generating digital human shapes is crucial for a wide variety of applications ranging from movie production to healthcare. The most common approaches for the analysis and generation of digital human shapes involve the creation of statistical shape models. At the heart of these techniques is the definition of a mapping between shapes and a low-dimensional representation. However, making these representations interpretable is still an open challenge. This thesis explores latent disentanglement as a powerful technique to make the latent space of geometric deep learning-based statistical shape models more structured and interpretable. In particular, it introduces two novel techniques to disentangle the latent representation of variational autoencoders and generative adversarial networks with respect to the local shape attributes characterising the identity of the generated body and head meshes. This work was inspired by a shape completion framework that was proposed as a viable alternative to intraoperative registration in minimally invasive surgery of the liver. In addition, one of these methods for latent disentanglement was also applied to plastic surgery, where it was shown to improve the diagnosis of craniofacial syndromes and aid surgical planning.
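    To give a flavour of what a disentangled latent space buys you, the toy sketch below partitions a latent code into blocks, each controlling one local shape attribute, so that swapping a block between two identities should change only that attribute of the decoded mesh. This is a schematic of the concept only; the thesis's actual architectures and training losses are not reproduced here.

        # Conceptual sketch of an attribute-partitioned latent space.
        import numpy as np

        LATENT_DIM = 12
        BLOCKS = {"nose": slice(0, 4), "chin": slice(4, 8), "ears": slice(8, 12)}

        def swap_attribute(z_a, z_b, attribute):
            """Return a copy of z_a whose `attribute` block is taken from z_b."""
            z = z_a.copy()
            z[BLOCKS[attribute]] = z_b[BLOCKS[attribute]]
            return z

        def decode(z):
            """Stand-in decoder: a fixed linear map from latent code to mesh vertices."""
            rng = np.random.default_rng(42)            # fixed weights for reproducibility
            W = rng.normal(size=(300, LATENT_DIM))     # 100 vertices x 3 coordinates
            return (W @ z).reshape(100, 3)

        z_patient, z_donor = np.random.randn(LATENT_DIM), np.random.randn(LATENT_DIM)
        mesh_edit = decode(swap_attribute(z_patient, z_donor, "nose"))
        # With a properly disentangled decoder, only the nose region of the mesh would
        # change; the stand-in linear decoder above does not enforce that and only
        # illustrates the latent-space interface.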

    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians have to learn can be as subtle as regulating finger pressure with breathing, choosing the palpation action, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide a safe learning platform for novice physicians before they examine real patients. This paper reviews, for the first time, state-of-the-art medical simulators for this training, with a focus on providing multimodal feedback so that as many manual examination techniques as possible can be learned. The study summarizes current advances in tissue examination training devices that simulate different medical conditions and provide different types of feedback modalities. Opportunities arising from developments in pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators.

    Fully automated landmarking and facial segmentation on 3D photographs

    Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training and a test dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and comparing them to the intra-observer and inter-observer variability of manual annotation and to a semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 (±1.15) mm was comparable to the inter-observer variability (1.31 ±0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69% of cases. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning. Comment: 13 pages, 4 figures, 7 tables, repository https://github.com/rumc3dlab/3dlandmarkdetection
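    The precision evaluation described above reduces to per-landmark Euclidean distances between automated and manual annotations, summarized as mean ± SD and the share within 2 mm. The sketch below shows that computation on illustrative data; the paper's actual evaluation code lives in the linked repository.

        # Minimal sketch of the landmark precision evaluation.
        import numpy as np

        def landmark_precision(auto_pts, manual_pts, threshold_mm=2.0):
            """auto_pts, manual_pts: (n_subjects, n_landmarks, 3) coordinates in mm."""
            d = np.linalg.norm(auto_pts - manual_pts, axis=-1)     # (n_subjects, n_landmarks)
            return d.mean(), d.std(), (d <= threshold_mm).mean()

        # Toy data: 50 test subjects, 10 landmarks, roughly 1.7 mm average deviation.
        rng = np.random.default_rng(3)
        manual = rng.uniform(-80, 80, size=(50, 10, 3))
        auto = manual + rng.normal(scale=1.05, size=(50, 10, 3))
        mean_mm, sd_mm, frac_2mm = landmark_precision(auto, manual)
        print(f"{mean_mm:.2f} +/- {sd_mm:.2f} mm, {frac_2mm:.0%} within 2 mm")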