504 research outputs found

    Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation

    Get PDF
    Development of image-guided interventional systems has grown rapidly in recent years. These systems have become an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly and high-risk patients. The placement of the stented aortic valve prosthesis is crucial and is typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the procedure, a new fluoroscopy-based TAVI assistance system has been developed. The assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area for valve implantation is automatically estimated from these aortic mesh models. Using a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, landmarks, and target area of implantation onto the fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter when no contrast agent is present. A rigid intensity-based registration method is used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in the fluoroscopic images to guide the surgeon in placing the prosthesis within the estimated target area of implantation. An interactive graphical user interface allows the surgeon to initialize the system algorithms, control the visualization of the guidance results, and manually correct overlay errors if needed. Retrospective experiments were carried out on several patient datasets from the clinical TAVI routine in a hybrid operating room. The maximum displacement errors were small for both the dynamic overlay of the aortic mesh models and the prosthesis tracking, and lay within clinically accepted ranges. High success rates of the assistance system were obtained for all tested patient datasets. The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure.
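
    The pigtail-catheter based motion compensation described above is, at its core, 2D template tracking in successive fluoroscopy frames. The following sketch illustrates that idea with OpenCV normalized cross-correlation template matching; the frame and template arrays, the function names, and the simple translation-only overlay update are illustrative assumptions, not the actual TAVI assistance system code.

```python
# Minimal sketch (not the described system): track a pigtail-catheter template across
# fluoroscopy frames with normalized cross-correlation and shift the 3D-model overlay
# by the estimated in-plane displacement. Assumes 8-bit grayscale frames as NumPy arrays.
import cv2
import numpy as np

def track_template(frame: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Return the top-left (x, y) of the best template match in the frame."""
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _min_val, _max_val, _min_loc, max_loc = cv2.minMaxLoc(scores)
    return max_loc

def overlay_updates(frames, template, initial_pos, overlay_points_2d):
    """Yield the overlay vertices translated by the catheter motion in each frame."""
    x0, y0 = initial_pos
    for frame in frames:
        x, y = track_template(frame, template)
        shift = np.array([x - x0, y - y0], dtype=float)
        yield overlay_points_2d + shift  # projected mesh/landmark points, shape (N, 2)
```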

    Interfaces for Modular Surgical Planning and Assistance Systems

    Get PDF
    Modern surgery in the 21st century relies in many respects on computers or, in a wider sense, digital data processing. Department administration, OR scheduling, billing, and - with increasing pervasiveness - patient data management are performed with the aid of so-called Surgical Information Systems (SIS) or, more generally, Hospital Information Systems (HIS). Computer Assisted Surgery (CAS) summarizes techniques which assist a surgeon in the preparation and conduct of surgical interventions. Today still predominantly based on radiology images, these techniques include the preoperative determination of an optimal surgical strategy and intraoperative systems which aim at increasing the accuracy of surgical manipulations. CAS is a relatively young field of computer science. One of its unsolved "teething troubles" is the absence of technical standards for the interconnectivity of CAS systems. Current CAS systems are usually "islands of information" with no connection to other devices within the operating room or to hospital-wide information systems. Several workshop reports and individual publications point out that this situation leads to ergonomic, logistic, and economic limitations in hospital work. Perioperative processes are prolonged by the manual installation and configuration of an increasing number of technical devices. Intraoperatively, a large amount of the surgeons' attention is absorbed by the requirement to monitor and operate these systems. The need for open infrastructures which enable the integration of CAS devices from different vendors, so that information and commands can be exchanged among these devices through a network, has been identified by numerous experts with backgrounds in medicine as well as engineering. This thesis contains two approaches to the integration of CAS systems: - For perioperative data exchange, the specification of new data structures as an amendment to the existing DICOM standard for radiology image management is presented. The extension of DICOM towards surgical applications allows for the seamless integration of surgical planning and reporting systems into DICOM-based Picture Archiving and Communication Systems (PACS), as they are installed in most hospitals for the exchange and long-term archival of patient images and image-related patient data. - For the integration of intraoperatively used CAS devices, such as navigation systems, video image sources, or biosensors, the concept of a surgical middleware is presented. A C++ class library, the TiCoLi, is presented which facilitates the configuration of ad-hoc networks among the modules of a distributed CAS system as well as the exchange of data streams, singular data objects, and commands between these modules. The TiCoLi is the first software library for a surgical field of application to implement all of these services. To demonstrate the suitability of the presented specifications and their implementation, two modular CAS applications are presented which utilize the proposed DICOM extensions for the perioperative exchange of surgical planning data as well as the TiCoLi for establishing an intraoperative network of autonomous, yet not independent, CAS modules.
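
    To make the perioperative-exchange idea more concrete, the sketch below stores a small surgical plan inside a DICOM object using private tags with pydicom. The private group, creator string, and JSON payload are hypothetical stand-ins chosen for illustration; they are not the DICOM supplement structures specified in the thesis.

```python
# Minimal sketch (hypothetical tags, not the proposed DICOM supplement): wrap a small
# surgical plan in a DICOM object via private tags so a PACS can archive it with the
# patient's images. Uses pydicom; the plan payload is serialized as JSON text.
import json
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

plan = {"procedure": "osteotomy", "target": "left femur", "implant": "template_017.stl"}

file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset("surgical_plan.dcm", {}, file_meta=file_meta,
                 preamble=b"\0" * 128, is_implicit_VR=False, is_little_endian=True)
ds.PatientName = "DOE^JOHN"
ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.add_new(0x00770010, "LO", "SURGICAL PLANNING DEMO")  # private creator (hypothetical)
ds.add_new(0x00771001, "UT", json.dumps(plan))          # plan payload (hypothetical)
ds.save_as("surgical_plan.dcm", write_like_original=False)
```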

    SenseCare: A Research Platform for Medical Image Informatics and Interactive 3D Visualization

    Full text link
    Clinical research on smart healthcare has an increasing demand for intelligent and clinic-oriented medical image computing algorithms and platforms that support various applications. To this end, we have developed the SenseCare research platform for smart healthcare, which is designed to boost translational research on intelligent diagnosis and treatment planning in various clinical scenarios. To facilitate clinical research with Artificial Intelligence (AI), SenseCare provides a range of AI toolkits for different tasks, including image segmentation, registration, and lesion and landmark detection from various image modalities ranging from radiology to pathology. In addition, SenseCare is clinic-oriented and supports a wide range of clinical applications such as diagnosis and surgical planning for lung cancer, pelvic tumor, coronary artery disease, etc. SenseCare provides several appealing functions and features such as advanced 3D visualization, concurrent and efficient web-based access, fast data synchronization and high data security, multi-center deployment, and support for collaborative research. In this paper, we present an overview of SenseCare as an efficient platform providing comprehensive toolkits and high extensibility for intelligent image analysis and clinical research in different application scenarios. Comment: 11 pages, 10 figures

    A 3D computed tomography based tool for orthopedic surgery planning

    Get PDF
    Series: Lecture Notes in Computational Vision and Biomechanics, vol. 19. The preparation of a plan is essential for a surgery to take place in the best way possible and also for shortening the patient's recovery time. In the orthopedic case, planning has an accentuated significance due to the close relation between the degree of success of the surgery and the patient's recovery time. It is important that surgeons are provided with tools that help them in the planning task, in order to make it more reliable and less time consuming. In this paper, we present a 3D Computed Tomography based solution and its implementation as an OsiriX plugin for orthopedic surgery planning. With the developed plugin, the surgeon is able to manipulate a three-dimensional isosurface rendered from the selected imaging study (a CT scan). It is possible to add digital representations of physical implants (surgical templates) in order to evaluate the feasibility of a plan. These templates are STL files generated from CAD models. The plugin can also extract new isosurfaces at different voxel values and slice the final 3D model along a predefined plane, enabling a 2D analysis of the planned solution. Finally, we discuss how the proposed application assists the surgeon in the planning process in an alternative way, in which it is possible to three-dimensionally analyze the impact of a surgical intervention on the patient.
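
    As an illustration of the isosurface extraction and template loading described above, the sketch below runs marching cubes on a CT volume at a chosen voxel value and loads an implant template from an STL file. The threshold, voxel spacing, file names, and the use of scikit-image/trimesh are assumptions made for the example, not the OsiriX plugin's implementation.

```python
# Minimal sketch (illustrative, not the plugin code): extract a bone isosurface from a
# CT volume and load an implant template (STL) for joint 3D inspection of a plan.
import numpy as np
import trimesh
from skimage import measure

def ct_isosurface(volume: np.ndarray, level: float,
                  spacing=(1.0, 1.0, 1.0)) -> trimesh.Trimesh:
    """Run marching cubes at the given voxel value and return a surface mesh."""
    verts, faces, _normals, _values = measure.marching_cubes(
        volume, level=level, spacing=spacing)
    return trimesh.Trimesh(vertices=verts, faces=faces)

# Hypothetical usage: bone threshold around 300 HU, implant template exported from CAD.
ct = np.load("ct_volume.npy")                # (slices, rows, cols) array of HU values
bone = ct_isosurface(ct, level=300.0, spacing=(1.25, 0.7, 0.7))
implant = trimesh.load("template_017.stl")   # surgical template (STL from CAD model)
scene = trimesh.Scene([bone, implant])       # inspect the planned placement in 3D
# A 2D analysis could start from bone.section(plane_origin=..., plane_normal=...).
```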

    A Dedicated Tool for Presurgical Mapping of Brain Tumors and Mixed-Reality Navigation During Neurosurgery

    Get PDF
    Brain tumor surgery requires a delicate tradeoff between complete removal of neoplastic tissue and minimizing loss of brain function. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) have emerged as valuable tools for non-invasive assessment of human brain function and are now used to determine brain regions that should be spared to prevent functional impairment after surgery. However, image analysis requires different software packages, mainly developed for research purposes and often difficult to use in a clinical setting, preventing large-scale adoption of presurgical mapping. We developed specialized software able to perform an automatic analysis of multimodal MRI presurgical mapping in a single application and to transfer the results to the neuronavigator. Moreover, the imaging results are integrated into a commercially available wearable device using an optimized mixed-reality approach, automatically anchoring 3-dimensional holograms obtained from MRI to the physical head of the patient. This allows the surgeon to virtually explore deeper tissue layers, highlighting critical brain structures that need to be preserved, while retaining natural oculo-manual coordination. The enhanced ergonomics of this procedure will significantly improve the accuracy and safety of the surgery, with large expected benefits for health care systems and related industrial investors.
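
    The hologram-to-patient anchoring mentioned above is essentially a rigid registration between corresponding 3D points, for example anatomical landmarks identified both in MRI space and on the physical head. The sketch below shows a standard SVD-based (Kabsch) rigid alignment; the landmark coordinates are made up, and this is a generic algorithm rather than the authors' mixed-reality pipeline.

```python
# Minimal sketch (generic algorithm, not the presented software): rigid alignment of
# MRI-space landmarks to patient-space landmarks via the Kabsch/SVD method.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Return R (3x3) and t (3,) such that R @ src_i + t ~= dst_i in a least-squares sense."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance of centered points
    U, _S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical landmark sets (mm): nasion, left/right tragus, inion in both spaces.
mri_pts = np.array([[0.0, 95.0, 10.0], [-70.0, 0.0, 0.0], [70.0, 0.0, 0.0], [0.0, -90.0, 15.0]])
head_pts = np.array([[12.0, 96.0, 8.0], [-58.0, 3.0, -1.0], [82.0, 1.0, 2.0], [11.0, -89.0, 14.0]])
R, t = rigid_align(mri_pts, head_pts)
hologram_in_patient_space = (R @ mri_pts.T).T + t  # apply to any MRI-derived vertices
```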

    Diffusion MRI tractography for oncological neurosurgery planning: Clinical research prototype

    Get PDF

    Lead-OR: A multimodal platform for deep brain stimulation surgery

    Get PDF
    Background: Deep brain stimulation (DBS) electrode implant trajectories are stereotactically defined using preoperative neuroimaging. To validate the correct trajectory, microelectrode recordings (MER) or local field potential recordings can be used to extend the neuroanatomical information (defined by MRI) with neurophysiological activity patterns recorded from micro- and macroelectrodes probing the surgical target site. Currently, these two sources of information (imaging vs. electrophysiology) are analyzed separately, and means to fuse both data streams have not been introduced. Methods: Here, we present a tool that integrates resources from stereotactic planning, neuroimaging, MER, and high-resolution atlas data to create a real-time visualization of the implant trajectory. We validate the tool offline on a retrospective cohort of DBS patients (N = 52) and present single use cases of the real-time platform. Results: We establish an open-source software tool for multimodal data visualization and analysis during DBS surgery. We show a general correspondence between features derived from neuroimaging and electrophysiological recordings and present examples that demonstrate the functionality of the tool. Conclusions: This novel software platform for multimodal data visualization and analysis bears translational potential to improve the accuracy of DBS surgery. The toolbox is made openly available and is extendable to integrate with additional software packages.
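
    A core step in fusing MER with imaging is placing each recording site along the planned trajectory so that electrophysiological features can be displayed next to the MRI and atlas data. The sketch below interpolates recording-site coordinates from the stereotactic entry and target points and the distance-to-target readings; the coordinates, depths, and function names are illustrative assumptions, not Lead-OR's implementation.

```python
# Minimal sketch (illustrative, not Lead-OR code): convert MER "distance to target"
# readings into 3D coordinates along the planned DBS trajectory so recordings can be
# overlaid on imaging/atlas data. Entry/target points and depths are made-up examples.
import numpy as np

def mer_sites_on_trajectory(entry: np.ndarray, target: np.ndarray,
                            dist_to_target_mm: np.ndarray) -> np.ndarray:
    """Return one 3D point per recording depth, measured along the entry->target line."""
    direction = (target - entry) / np.linalg.norm(target - entry)  # unit insertion vector
    # A positive distance-to-target lies above the target, i.e. back toward the entry.
    return target[None, :] - dist_to_target_mm[:, None] * direction[None, :]

# Hypothetical plan in scanner coordinates (mm) and typical recording depths.
entry = np.array([30.0, 60.0, 70.0])
target = np.array([12.0, -2.0, -4.0])                  # e.g. a subthalamic target point
depths = np.array([10.0, 7.5, 5.0, 2.5, 0.0, -2.0])    # mm above (+) / below (-) target
sites = mer_sites_on_trajectory(entry, target, depths)
print(sites.shape)  # (6, 3): one coordinate per recording site
```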

    Development of a Mesh-Structured 3D Modeling Technique for Three-Dimensional Tumor Localization and Evaluation of Its Clinical Usefulness

    Get PDF
    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, College of Medicine, Department of Medicine, February 2022 (advisor: Hee Chan Kim). Background: As a method for three-dimensional (3D) localization of tumors, 3D printing has been introduced to medicine. However, the high costs and lengthy production times required have limited its application. 
    Objectives: The goal of the first study was to develop a new and less costly 3D modeling method, "mesh-type 3D modeling", to depict organ-tumor relations. The second study was designed to evaluate the clinical usefulness of a personalized mesh-type 3D-printed thyroid gland model for obtaining informed consent. Methods: For the mesh-type 3D modeling, coordinates were extracted at a specified distance interval from tomographic images and connected to create meshwork replicas. Adjacent constructs were depicted by density variations, showing anatomical targets (i.e., tumors) in contrasting colors. A randomized, controlled, prospective clinical trial (KCT0005069) was designed. A total of 53 patients undergoing thyroid surgery were randomly assigned to two groups: with or without a 3D-printed model of their thyroid lesion upon obtaining informed consent. A U-Net-based deep learning architecture and the mesh-type 3D modeling technique were used to fabricate the personalized 3D models. Results: To establish the mesh-type 3D modeling technique, an array of organ-solid tumor models was printed via a Fused Deposition Modeling (FDM) 3D printer at a low cost (USD 0.05/cm3) and time expenditure (1.73 min/cm3). The printed models helped promote visual appreciation of organ-tumor anatomy and adjacent tissues. In the prospective clinical study, the mean 3D printing time was 258.9 min, and the mean production cost was USD 4.23 per patient. The size, location, and anatomical relationship of the tumor with respect to the thyroid gland could be effectively presented. The group provided with personalized 3D-printed models showed statistically significant improvements across all four categories (general knowledge, benefits of surgery, risks of surgery, and satisfaction; all p < 0.05). All patients received their personalized 3D model after surgery and found it helpful for understanding the disease, operation, and possible complications, as well as enhancing their overall satisfaction. Conclusion: The personalized 3D-printed thyroid gland model may be an effective tool for improving a patient's understanding and satisfaction during the informed consent process. Furthermore, the mesh-type 3D modeling reproduced glandular size/contour and tumor location, readily approximating the surgical specimen. This newly devised mesh-type 3D printing method may facilitate anatomical modeling for personalized care and improve patient awareness during informed surgical consent.
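
    The mesh-type modeling described above amounts to sampling contour points at a fixed interval on each tomographic slice of the segmentation and connecting corresponding points on neighboring slices into a printable wireframe. The sketch below illustrates that idea with scikit-image contours on a binary mask volume; the sampling density, slice thickness, and input data are assumptions for the example, not the thesis implementation.

```python
# Minimal sketch (illustrative, not the thesis code): build a wireframe replica from a
# stack of binary segmentation slices by sampling each slice outline at a fixed number
# of points and connecting matched points on adjacent slices with struts.
import numpy as np
from skimage import measure

def resample_outline(mask_slice: np.ndarray, n_points: int = 36) -> np.ndarray:
    """Largest outline of a binary slice, resampled to n_points (row, col) vertices."""
    contours = measure.find_contours(mask_slice.astype(float), level=0.5)
    outline = max(contours, key=len)            # keep the dominant outline per slice
    idx = np.linspace(0, len(outline) - 1, n_points).astype(int)
    return outline[idx]

def mesh_struts(mask_volume: np.ndarray, slice_thickness_mm: float = 2.0):
    """Yield (start, end) strut segments linking matched outline points on adjacent slices.

    Assumes every slice contains the structure and that outlines are consistently
    ordered; a real implementation would align starting points between slices.
    """
    rings = [resample_outline(s) for s in mask_volume]
    for z, (a, b) in enumerate(zip(rings[:-1], rings[1:])):
        for p, q in zip(a, b):
            start = np.array([p[1], p[0], z * slice_thickness_mm])        # (x, y, z)
            end = np.array([q[1], q[0], (z + 1) * slice_thickness_mm])
            yield start, end  # each strut becomes printable material in the meshwork
```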