3,015 research outputs found

    3D mesh processing using GAMer 2 to enable reaction-diffusion simulations in realistic cellular geometries

    Full text link
    Recent advances in electron microscopy have enabled the imaging of single cells in 3D at nanometer length scale resolutions. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. Enabling such simulations requires watertight meshing of electron micrograph images into 3D volume meshes, which can then form the basis of computer simulations of such processes using numerical techniques such as the Finite Element Method. In this paper, we describe the use of our recently rewritten mesh processing software, GAMer 2, to bridge the gap between poorly conditioned meshes generated from segmented micrographs and boundary-marked tetrahedral meshes which are compatible with simulation. We demonstrate the application of a workflow using GAMer 2 to a series of electron micrographs of neuronal dendrite morphology explored at three different length scales and show that the resulting meshes are suitable for finite element simulations. This work is an important step towards making physical simulations of biological processes in realistic geometries routine. Innovations in algorithms to reconstruct and simulate cellular length scale phenomena based on emerging structural data will enable realistic physical models and advance discovery at the interface of geometry and cellular processes. We posit that a new frontier at the intersection of computational technologies and single cell biology is now open. Comment: 39 pages, 14 figures. High resolution figures and supplemental movies available upon request.
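    For context, the reaction-diffusion problems such volume meshes are built to support take a generic form; the statement below and its finite element weak form are a standard sketch, not the specific model solved in the paper. With concentration c, diffusion coefficient D, reaction term R(c), and a flux j prescribed on the marked boundary:

        \frac{\partial c}{\partial t} = D \nabla^2 c + R(c) \quad \text{in } \Omega, \qquad D\,\nabla c \cdot \mathbf{n} = j \quad \text{on } \partial\Omega

    The weak form discretised on the tetrahedral mesh then asks for c \in V such that, for all test functions v \in V,

        \int_\Omega \frac{\partial c}{\partial t}\, v \,\mathrm{d}x + D \int_\Omega \nabla c \cdot \nabla v \,\mathrm{d}x = \int_\Omega R(c)\, v \,\mathrm{d}x + \int_{\partial\Omega} j\, v \,\mathrm{d}s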

    RGB-D datasets using Microsoft Kinect or similar sensors: a survey

    Get PDF
    RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, with those of the depth image, which is immune to variations in color, illumination, rotation angle, and scale. With the invention of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and the difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
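    As a minimal illustration of how the depth channel of RGB-D data is typically turned into 3D geometry (a generic sketch, not code from any surveyed dataset), the snippet below back-projects a depth image through an assumed pinhole camera model; the intrinsic values are illustrative placeholders, not calibrated Kinect parameters.

        import numpy as np

        def depth_to_pointcloud(depth_m, fx, fy, cx, cy):
            # Back-project a depth image (metres) into an N x 3 point cloud using
            # the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
            h, w = depth_m.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth_m
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
            return points[points[:, 2] > 0]        # drop pixels with no depth reading

        # Illustrative intrinsics roughly in the range of a Kinect v1 depth camera.
        depth = np.random.uniform(0.5, 4.0, size=(480, 640))
        cloud = depth_to_pointcloud(depth, fx=575.0, fy=575.0, cx=319.5, cy=239.5)
        print(cloud.shape)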

    Cellular 3D-reconstruction and analysis in the human cerebral cortex using automatic serial sections

    Get PDF
    Techniques for three-dimensional (3D) tissue reconstruction and analysis provide a better understanding of changes in molecular structure and function. We have developed AutoCUTS-LM, an automated system that brings the latest advances in 3D tissue reconstruction and cellular analysis to light microscopy of various tissues, including archived tissue. The workflow in this paper involved advanced tissue sampling methods of the human cerebral cortex, an automated serial section collection system, a digital tissue library, cell detection using a convolutional neural network, 3D cell reconstruction, and advanced analysis. Our results demonstrated the detailed structure of pyramidal cells (number, volume, diameter, sphericity, and orientation) and showed that their 3D spatial organization is arranged in a columnar structure. The pipeline of these combined techniques provides a detailed analysis of tissues and cells in biology and pathology.
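    Of the per-cell shape descriptors listed above, sphericity is the least self-explanatory; the following is a generic sketch of how it can be computed from a binary 3D cell mask (sphericity = pi^(1/3) (6V)^(2/3) / A), not the AutoCUTS-LM implementation, and the function name, mask handling, and voxel size are assumptions.

        import numpy as np
        from skimage import measure

        def sphericity(cell_mask, voxel_size=1.0):
            # Sphericity = pi^(1/3) * (6V)^(2/3) / A, computed from voxel volume V
            # and a marching-cubes estimate of the surface area A.
            volume = cell_mask.sum() * voxel_size ** 3
            verts, faces, _, _ = measure.marching_cubes(cell_mask.astype(float),
                                                        level=0.5,
                                                        spacing=(voxel_size,) * 3)
            area = measure.mesh_surface_area(verts, faces)
            return (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / area

        # Toy check: a solid ball of radius 10 voxels should give a value close to 1.
        zz, yy, xx = np.mgrid[:32, :32, :32]
        ball = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) <= 10 ** 2
        print(round(sphericity(ball), 3))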

    Development and application of quantitative image analysis for preclinical MRI research

    Get PDF
    The aim of this thesis is to develop quantitative analysis methods to validate MRI and improve the detection of tumour infiltration. The major components include the development of quantitative methods to better validate imaging biomarkers and to detect infiltration of tumour cells into normal tissue, which were then applied to a mouse model of glioblastoma invasion. To do this, a new histology model, called Stacked In-plane Histology (SIH), was developed to allow a quantitative analysis of MRI.

    Validating imaging biomarkers for glioblastoma infiltration: Cancer can be defined as a disease in which a group of abnormal cells grows uncontrollably, often with fatal outcomes. According to Cancer Research UK (2019), there are more than 363,000 new cancer cases in the UK every year (around 990 cases per day in 2014-2016), and only about half of all patients survive. Glioblastoma (GB) is the most frequent and malignant form of primary brain tumour, with a very poor prognosis. Even with the development of modern diagnostic strategies and new therapies, the five-year survival rate is just 5%, with a median survival time of only 14 months. Unfortunately, glioblastoma can affect patients at any age, including young children, but has a peak occurrence between the ages of 65 and 75 years. The standard treatment for GB consists of surgical resection, followed by radiotherapy and chemotherapy. However, the infiltration of GB cells into healthy adjacent brain tissue is a major obstacle to successful treatment, making complete removal of a tumour by surgery a difficult task, with the potential for tumour recurrence. Magnetic Resonance Imaging (MRI) is a non-invasive, multipurpose imaging tool used for the diagnosis and monitoring of cancerous tumours. It can provide morphological, physiological, and metabolic information about the tumour. Currently, MRI is the standard diagnostic tool for GB before the pathological examination of tissue from surgical resection or biopsy specimens. The standard MRI sequences used for diagnosis of GB include T2-Weighted (T2W), T1-Weighted (T1W), Fluid-Attenuated Inversion Recovery (FLAIR), and Contrast-Enhanced T1 gadolinium (CE-T1) scans. These conventional scans are used to localize the tumour, in addition to associated oedema and necrosis. Although these scans can identify the bulk of the tumour, it is known that they do not detect regions infiltrated by GB cells. The MRI signal depends upon many physical parameters including water content, local structure, tumbling rates, diffusion, and hypoxia (Dominietto, 2014). There has been considerable interest in identifying whether such biologically indirect image contrasts can be used as non-invasive imaging biomarkers, either for normal biological functions, pathogenic processes, or pharmacological responses to therapeutic interventions (Atkinson et al., 2001). In fact, when new MRI methods are proposed as imaging biomarkers of particular diseases, it is crucial that they are validated against histopathology. In humans, such validation is limited to a biopsy, which is the gold standard of diagnosis for most types of cancer. Some types of biopsies can take an image-guided approach using MRI, Computed Tomography (CT), or Ultrasound (US). However, a biopsy may miss the most malignant region of the tumour and is difficult to repeat. Biomarker validation can be performed in preclinical disease models, where the animal can be terminated immediately after imaging for histological analysis. Here, in principle, co-registration of the biomarker images with the histopathology would allow for direct validation. However, in practice, most preclinical validation studies have been limited to using simple visual comparisons to assess the correlation between the imaging biomarker and the underlying histopathology.

    First objective (Chapter 5): Histopathology is the gold standard for assessing non-invasive imaging biomarkers, with most validation approaches involving a qualitative visual inspection. To allow a more quantitative analysis, previous studies have attempted to co-register MRI with histology. However, these studies have focused on developing better algorithms to deal with the distortions common in histology sections. By contrast, we have taken an approach that improves the quality of the histological processing and analysis, for example by taking into account the imaging slice orientation and thickness. Multiple histology sections were cut in the MR imaging plane to produce a Stacked In-plane Histology (SIH) map. This approach, which is applied to the next two objectives, creates a histopathology map which can be used as the gold standard to quantitatively validate imaging biomarkers.

    Second objective (Chapter 6): Glioblastoma is the most malignant form of primary brain tumour, and recurrence following treatment is common. Non-invasive MR imaging is an important component of brain tumour diagnosis and treatment planning. Unfortunately, clinical MRI (T1W, T2W, CE-T1, and FLAIR) fails to detect regions of glioblastoma cell infiltration beyond the solid tumour region identified by contrast-enhanced T1 scans. However, advanced MRI techniques such as Arterial Spin Labelling (ASL) could provide extra information (perfusion) which may allow better detection of infiltration. In order to assess whether local perfusion perturbation could provide a useful biomarker for glioblastoma cell infiltration, we quantitatively analysed the correlation between perfusion MRI (ASL) and stacked in-plane histology. This work used a mouse model of glioblastoma that mimics the infiltrative behaviour found in human patients. The results demonstrate the ability of perfusion imaging to probe regions of low tumour cell infiltration, while confirming the sensitivity limitations of clinical imaging modalities.

    Third objective (Chapter 7): It is widely hypothesised that Multiparametric MRI (mpMRI) can extract more information than is obtained from the constituent individual MR images, by reconstructing a single map that contains complementary information. Using the MRI and histology dataset from objective 2, we used a multi-regression algorithm to reconstruct a single map which was highly correlated (r > 0.6) with histology. The results are promising, showing that mpMRI can better predict the whole tumour region, including the region of tumour cell infiltration.
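    As a rough illustration of the voxel-wise multi-regression described in the third objective (a sketch under assumed inputs, not the thesis pipeline: the function and variable names are hypothetical, and real use would involve co-registration and cross-validation), the snippet below fits co-registered MRI parameter maps to a stacked in-plane histology reference and reports the resulting Pearson correlation.

        import numpy as np

        def mpmri_regression(mri_maps, histology_density):
            # Fit one voxel-wise linear combination of co-registered MRI parameter
            # maps (e.g. T1W, T2W, FLAIR, ASL) to the SIH tumour-cell density map,
            # and report how well the combined map correlates with histology.
            n_channels, h, w = mri_maps.shape
            X = np.column_stack([m.ravel() for m in mri_maps] + [np.ones(h * w)])
            y = histology_density.ravel()
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            combined_map = (X @ coef).reshape(h, w)
            r = np.corrcoef(combined_map.ravel(), y)[0, 1]   # Pearson correlation
            return combined_map, r

        # Synthetic demo: two noisy channels that jointly encode the reference map.
        reference = np.random.rand(64, 64)
        channels = np.stack([0.7 * reference + 0.1 * np.random.rand(64, 64),
                             0.3 * reference + 0.1 * np.random.rand(64, 64)])
        combined, r = mpmri_regression(channels, reference)
        print(f"r = {r:.2f}")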

    Automatic segmentation of ventricular volume by 3D ultrasonography in post-haemorrhagic ventricular dilatation among preterm infants

    Get PDF
    The aim was to train, evaluate, and validate the application of a deep learning framework in three-dimensional ultrasound (3D US) for the automatic segmentation of ventricular volume in preterm infants with post-haemorrhagic ventricular dilatation (PHVD). We trained a 2D convolutional neural network (CNN) for automatic segmentation of ventricular volume from 3D US of preterm infants with PHVD. The method was validated against manual segmentation using the Dice similarity coefficient (DSC) and the intra-class correlation coefficient (ICC). The mean birth weight of the included patients was 1233.1 g (SD 309.4) and the mean gestational age was 28.1 weeks (SD 1.6). A total of 152 serial 3D US scans from 10 preterm infants with PHVD were analysed, and 230 ventricles were manually segmented. Of these, 108 were used for training the 2D CNN and 122 for validating the automatic segmentation methodology. The global agreement between manual and automated measures in the validation data (n = 122) was excellent, with an ICC of 0.944 (0.874-0.971). The Dice similarity coefficient was 0.8 (+/- 0.01). 3D US-based ventricular volume estimation through automatic segmentation software developed with deep learning improves accuracy and reduces the processing time needed for manual segmentation using VOCAL. 3D US should be considered a promising tool to help deepen our current understanding of the complex evolution of PHVD.
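    For reference, the overlap metric reported above can be computed as in the generic sketch below; this illustrates the Dice similarity coefficient on binary masks and is not the study's validation code.

        import numpy as np

        def dice_coefficient(auto_mask, manual_mask):
            # Dice similarity coefficient between two binary segmentations:
            # DSC = 2 * |A intersect B| / (|A| + |B|).
            auto_mask = auto_mask.astype(bool)
            manual_mask = manual_mask.astype(bool)
            intersection = np.logical_and(auto_mask, manual_mask).sum()
            denominator = auto_mask.sum() + manual_mask.sum()
            return 2.0 * intersection / denominator if denominator else 1.0

        # Toy check: masks agreeing on 2 of their 3 labelled voxels each -> DSC ~ 0.67.
        a = np.array([[1, 1], [1, 0]])
        b = np.array([[1, 1], [0, 1]])
        print(round(dice_coefficient(a, b), 2))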

    ์ข…์–‘์˜ 3์ฐจ์›์  ์œ„์น˜ ํŒŒ์•…์„ ์œ„ํ•œ ๋ฉ”์‰ฌ ๊ตฌ์กฐ์˜ 3D ๋ชจ๋ธ๋ง ๊ธฐ์ˆ  ๊ฐœ๋ฐœ ๋ฐ ์ž„์ƒ์  ์œ ์šฉ์„ฑ ํ‰๊ฐ€

    Get PDF
    Doctoral dissertation (Ph.D.) -- Graduate School of Seoul National University: College of Medicine, Department of Medicine, February 2022. Advisor: ๊น€ํฌ์ฐฌ.

    Background: 3D printing has already been adopted for various purposes in medicine and offers a way to convey the three-dimensional (3D) location of a tumor in the body. However, the high costs and lengthy production times required have limited its application.

    Objectives: The goal of the first study was to develop a new and less costly 3D modeling method, "mesh-type 3D modeling", to depict organ-tumor relations. The second study was designed to evaluate the clinical usefulness of a personalized mesh-type 3D-printed thyroid gland model for obtaining informed consent.

    Methods: For the mesh-type 3D modeling, coordinates were extracted at a specified distance interval from tomographic images and connected to create mesh-work replicas. Adjacent anatomical structures were depicted by variations in mesh density, with normal tissue and anatomical targets (i.e., tumors) shown in contrasting colors. For the clinical study, a randomized, controlled, prospective clinical trial (KCT0005069) was designed. A total of 53 patients undergoing thyroid surgery were randomly assigned to two groups: with or without a personalized 3D-printed model of their thyroid lesion when obtaining informed consent. A U-Net-based deep learning architecture and the mesh-type 3D modeling technique were used to fabricate the personalized 3D models.

    Results: To establish the mesh-type 3D modeling technique, an array of organ-solid tumor models was printed with a Fused Deposition Modeling (FDM) 3D printer at low cost (USD 0.05/cm3) and short production time (1.73 min/cm3). Compared with the specimens resected during surgery, the printed models were sufficient for visually distinguishing organ-tumor anatomy and adjacent tissues. In the subsequent prospective clinical study, the mean 3D printing time for the 53 patients' thyroid models was 258.9 min and the mean production cost was USD 4.23 per patient. All 3D models effectively conveyed the size, location, and anatomical relationship of the tumor with respect to the thyroid gland. The group provided with a personalized 3D-printed model during informed consent showed statistically significant improvement across all four categories assessed (general knowledge, benefits of surgery, risks of surgery, and satisfaction; all p < 0.05). All patients received their personalized 3D model after surgery and found it helpful for understanding the disease, the operation, and possible complications, as well as for enhancing their overall satisfaction.

    Conclusion: The personalized 3D-printed thyroid gland model may be an effective tool for improving a patient's understanding and satisfaction during the informed consent process. The newly devised mesh-type 3D modeling technique effectively visualized glandular size/contour and tumor location, readily approximating the surgical specimen; this methodology may facilitate anatomical modeling for personalized care and support patients' acquisition of medical knowledge during explanatory processes such as informed surgical consent.
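    As a generic sketch of the coordinate-extraction step described in the Methods (not the thesis implementation; the function name, the use of scikit-image, and the sampling interval are assumptions), the snippet below pulls contour points from one segmented tomographic slice at a fixed interval; points gathered from successive slices would then be connected into the printable mesh-work replica.

        import numpy as np
        from skimage import measure

        def slice_to_mesh_points(binary_slice, z, step=5):
            # Extract object contours from one segmented tomographic slice and keep
            # every `step`-th contour point, tagging each point with the slice height z.
            contours = measure.find_contours(binary_slice.astype(float), level=0.5)
            points = []
            for contour in contours:
                sampled = contour[::step]                       # (row, col) pairs
                points.append(np.column_stack([sampled[:, 1],   # x (column)
                                                sampled[:, 0],  # y (row)
                                                np.full(len(sampled), z)]))
            return np.vstack(points) if points else np.empty((0, 3))

        # Toy slice: a filled disc standing in for one segmented organ cross-section.
        yy, xx = np.mgrid[:64, :64]
        disc = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2
        print(slice_to_mesh_points(disc, z=0.0).shape)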
    • โ€ฆ