218 research outputs found

    λ‡Œμ˜ λ―ΈΒ·κ±°μ‹œμ  νŠΉμ„±μ„ μ΄μš©ν•œ 자기곡λͺ…μ˜μƒμ˜ λ”₯λŸ¬λ‹ 이미지 κ°„ λ³€ν™˜ 기법

    Get PDF
    Master's thesis (M.S.) -- Seoul National University Graduate School: College of Natural Sciences, Department of Brain and Cognitive Sciences, August 2023. Advisor: μ°¨μ§€μš± (Jiook Cha).
    The brain consists of the highly localized functions of several brain regions and the integration of these regions through neural connections. These neural connections change continuously at the system and synaptic levels to respond effectively to an ever-changing environment. One of the key factors enabling these dynamic interactions is the structural plasticity of the human brain at the macro and micro scales. Because the brain's macro- and micro-structures convey different but complementary information, considering both structures is critical to understanding the brain's structural plasticity and its connectivity during cognitive tasks. However, previous studies have not effectively addressed this issue. This study proposes Macro2Micro, a novel deep learning framework designed to generate high-quality diffusion tensor imaging (DTI) and tractography from structural MRI (sMRI). The work is premised on the hypothesis that micro-scale structural information can be inferred from macro-scale structure, enabling the generation of additional imaging modalities beneficial for disease diagnosis and research even when only one modality is initially acquired. This approach, unprecedented in the realm of neuroimaging, leverages the benefits of cross-modality image translation, offering significant time and cost savings.
    The Macro2Micro framework takes a 3D T1 volume, extracts 2D T1 slices, and processes them through a generative adversarial network (GAN) to produce 2D DTI fractional anisotropy (FA) slices and, subsequently, 2D tractography images. A key element of this process is the use of Octave Convolutions, which separate image features by frequency band. The framework was trained on the Adolescent Brain Cognitive Development (ABCD) dataset, with the training objective defined by an image pixel loss, a perceptual loss, a GAN loss, and a brain-focused patch GAN loss. The results not only showed quantitatively and qualitatively superior performance compared to other algorithms, but are also significant for neuroscience in that the model learned not merely the image distribution but also the biological characteristics of the brain's macroscopic and microscopic structures. Applying this image translation model as a data augmentation method could potentially address issues of data imbalance and scarcity. This research underscores the potential of multimodal imaging, specifically the combined use of T1, DTI, and tractography, in advancing disease modeling.
    Table of contents: Chapter 1. INTRODUCTION; Chapter 2. RELATED WORK; Chapter 3. METHOD (3.1 Architecture Overview; 3.2 Octave Convolution; 3.3 Networks; 3.4 Training Losses; 3.5 Image Quality Metrics; 3.6 Comparison of generated and real FA images in low-dimensional representation; 3.7 Prediction of biological and cognitive variables using predicted FA images; 3.8 Prediction of Tractography from FA images; 3.9 Experimental Settings); Chapter 4. RESULTS (4.1 Qualitative Evaluation; 4.2 Quantitative Evaluation; 4.3 Generated FA images by Macro2Micro can efficiently predict sex, ADHD, and intelligence; 4.4 Ablation Studies; 4.5 Effectiveness of Macro2Micro along the distance from the center of the brain; 4.6 FA Image Translation to Tractography); Chapter 5. DISCUSSION AND CONCLUSIONS; Bibliography; Abstract in Korean
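    The octave convolution mentioned above factors feature maps into a full-resolution high-frequency branch and a half-resolution low-frequency branch that exchange information at every layer. The following toy sketch is not the thesis's implementation; it uses 1Γ—1 kernels and plain NumPy purely to illustrate the inter-branch exchange:

    ```python
    import numpy as np

    def avg_pool2(x):
        """2x2 average pooling on a (C, H, W) feature map."""
        C, H, W = x.shape
        return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

    def upsample2(x):
        """Nearest-neighbour 2x upsampling on a (C, H, W) feature map."""
        return x.repeat(2, axis=1).repeat(2, axis=2)

    def octave_conv_1x1(x_h, x_l, W_hh, W_hl, W_lh, W_ll):
        """One octave convolution step with 1x1 kernels.

        x_h: high-frequency features (C_h, H, W), full resolution.
        x_l: low-frequency features (C_l, H/2, W/2), half resolution.
        W_ab: (C_out_b, C_a) channel-mixing matrix from branch a to branch b.
        """
        mix = lambda W, x: np.einsum('oc,chw->ohw', W, x)
        # High output: intra-branch term plus upsampled low-to-high term.
        y_h = mix(W_hh, x_h) + upsample2(mix(W_lh, x_l))
        # Low output: intra-branch term plus pooled high-to-low term.
        y_l = mix(W_ll, x_l) + mix(W_hl, avg_pool2(x_h))
        return y_h, y_l
    ```

    With real kΓ—k kernels the `mix` step becomes a spatial convolution, but the four-path exchange structure is the same.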

    A Deep Network for Explainable Prediction of Non-Imaging Phenotypes using Anatomical Multi-View Data

    Full text link
    Large datasets often contain multiple distinct feature sets, or views, that offer complementary information that can be exploited by multi-view learning methods to improve results. We investigate anatomical multi-view data, where each brain anatomical structure is described with multiple feature sets. In particular, we focus on sets of white matter microstructure and connectivity features from diffusion MRI, as well as sets of gray matter area and thickness features from structural MRI. We investigate machine learning methodology that applies multi-view approaches to improve the prediction of non-imaging phenotypes, including demographics (age), motor (strength), and cognition (picture vocabulary). We present an explainable multi-view network (EMV-Net) that can use different anatomical views to improve prediction performance. In this network, each individual anatomical view is processed by a view-specific feature extractor, and the extracted information from each view is fused using a learnable weight. This is followed by a wavelet transform-based module to obtain complementary information across views, which is then applied to calibrate the view-specific information. Additionally, the calibrator produces an attention-based calibration score to indicate the importance of anatomical structures for interpretation. Comment: 2023 The Medical Image Computing and Computer Assisted Intervention Society workshop
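    The learnable-weight fusion described for EMV-Net can be sketched minimally: per-view embeddings are combined by softmax-normalized scalar weights. This is a simplified stand-in for the paper's fusion step, with the wavelet-based calibration module omitted:

    ```python
    import numpy as np

    def softmax(z):
        """Numerically stable softmax over a 1-D array."""
        e = np.exp(z - z.max())
        return e / e.sum()

    def fuse_views(view_feats, view_logits):
        """Fuse per-view embeddings with learnable scalar weights.

        view_feats : (V, D) array, one D-dim embedding per anatomical view
                     (e.g. white-matter microstructure, gray-matter thickness).
        view_logits: (V,) learnable logits; softmax turns them into weights.
        Returns the (D,) weighted combination fed to a downstream predictor.
        """
        w = softmax(view_logits)  # (V,) non-negative weights summing to 1
        return w @ view_feats     # (D,) fused representation
    ```

    In training, `view_logits` would be updated by backpropagation along with the view-specific extractors.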

    HA-HI: Synergising fMRI and DTI through Hierarchical Alignments and Hierarchical Interactions for Mild Cognitive Impairment Diagnosis

    Full text link
    Early diagnosis of mild cognitive impairment (MCI) and subjective cognitive decline (SCD) utilizing multi-modal magnetic resonance imaging (MRI) is a pivotal area of research. While various regional and connectivity features from functional MRI (fMRI) and diffusion tensor imaging (DTI) have been employed to develop diagnosis models, most studies integrate these features without adequately addressing their alignment and interactions. This limits the potential to fully exploit the synergistic contributions of combined features and modalities. To address this gap, our study introduces a novel Hierarchical Alignments and Hierarchical Interactions (HA-HI) method for MCI and SCD classification, leveraging the combined strengths of fMRI and DTI. HA-HI efficiently learns significant MCI- or SCD-related regional and connectivity features by aligning various feature types and hierarchically maximizing their interactions. Furthermore, to enhance the interpretability of our approach, we have developed the Synergistic Activation Map (SAM) technique, revealing the critical brain regions and connections that are indicative of MCI/SCD. Comprehensive evaluations on the ADNI dataset and our self-collected data demonstrate that HA-HI outperforms other existing methods in diagnosing MCI and SCD, making it a potentially vital and interpretable tool for early detection. The implementation of this method is publicly accessible at https://github.com/ICI-BCI/Dual-MRI-HA-HI.git

    3D Deep Learning on Medical Images: A Review

    Full text link
    The rapid advancements in machine learning, graphics processing technologies, and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This growth was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, give a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical imaging analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field. Comment: 13 pages, 4 figures, 2 tables
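    At the core of the 3D CNNs surveyed here is the 3D convolution itself, which slides a volumetric kernel over a 3D image. A minimal single-channel, 'valid'-padding sketch in NumPy, loop-based for clarity rather than efficiency:

    ```python
    import numpy as np

    def conv3d_valid(vol, kernel):
        """Single-channel 3D convolution (cross-correlation), 'valid' padding.

        vol   : (D, H, W) volume, e.g. one MRI scan.
        kernel: (kd, kh, kw) weights.
        Returns a (D-kd+1, H-kh+1, W-kw+1) output volume.
        """
        kd, kh, kw = kernel.shape
        D, H, W = vol.shape
        out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
        for z in range(out.shape[0]):
            for y in range(out.shape[1]):
                for x in range(out.shape[2]):
                    # Elementwise product of the kernel with the local patch.
                    out[z, y, x] = np.sum(vol[z:z + kd, y:y + kh, x:x + kw] * kernel)
        return out
    ```

    Frameworks add channels, batching, padding, and strides on top of this core operation, but the volumetric sliding window is unchanged.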

    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    Full text link
    With the advent of convolutional neural networks (CNN), supervised learning methods are increasingly being used for whole brain segmentation. However, a large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize on test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI imaging parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image contrast invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts with only T1-weighted training data. The segmentations generated are highly accurate with state-of-the-art results (overall Dice overlap = 0.94), with a fast run time (β‰ˆ45 seconds), and consistent across a wide range of acquisition protocols. Comment: Typo in author name corrected. Greves -> Greve
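    The idea of an approximate pulse-sequence forward model can be illustrated with the idealized spin-echo signal equation, S = PD Β· (1 βˆ’ e^(βˆ’TR/T1)) Β· e^(βˆ’TE/T2): sampling TR and TE synthesizes images with different contrasts from a single set of tissue parameter maps. This is only a schematic stand-in for the paper's actual forward models, and the tissue values below are illustrative approximations:

    ```python
    import numpy as np

    def spin_echo_signal(pd, t1, t2, tr, te):
        """Idealized spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).

        pd, t1, t2: proton density and relaxation times, scalars or
                    voxelwise parameter maps (arrays), times in ms.
        tr, te    : repetition and echo times of the simulated sequence (ms).
        Varying tr/te yields different contrasts from one parameter set.
        """
        return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

    # Rough 1.5T tissue parameters (PD, T1 ms, T2 ms) -- illustrative only.
    TISSUE = {"wm": (0.7, 800.0, 80.0), "csf": (1.0, 4000.0, 2000.0)}
    ```

    Sweeping (tr, te) over a grid of plausible protocols, and rendering each one from the same label map, is the augmentation strategy the abstract describes in miniature.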

    Brain MRI-to-PET Synthesis using 3D Convolutional Attention Networks

    Full text link
    Accurate quantification of cerebral blood flow (CBF) is essential for the diagnosis and assessment of a wide range of neurological diseases. Positron emission tomography (PET) with radiolabeled water (15O-water) is considered the gold standard for the measurement of CBF in humans. PET imaging, however, is not widely available because of its prohibitive costs and use of short-lived radiopharmaceutical tracers that typically require onsite cyclotron production. Magnetic resonance imaging (MRI), in contrast, is more readily accessible and does not involve ionizing radiation. This study presents a convolutional encoder-decoder network with attention mechanisms to predict gold-standard 15O-water PET CBF from multi-sequence MRI scans, thereby eliminating the need for radioactive tracers. Inputs to the prediction model include several commonly used MRI sequences (T1-weighted, T2-FLAIR, and arterial spin labeling). The model was trained and validated using 5-fold cross-validation in a group of 126 subjects consisting of healthy controls and cerebrovascular disease patients, all of whom underwent simultaneous 15O-water PET/MRI. The results show that such a model can successfully synthesize high-quality PET CBF measurements (with an average SSIM of 0.924 and PSNR of 38.8 dB) and is more accurate compared to concurrent and previous PET synthesis methods. We also demonstrate the clinical significance of the proposed algorithm by evaluating the agreement for identifying the vascular territories with abnormally low CBF. Such methods may enable more widespread and accurate CBF evaluation in larger cohorts who cannot undergo PET imaging due to radiation concerns, lack of access, or logistic challenges. Comment: 19 pages, 14 figures
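    Of the two image-quality metrics reported above, PSNR has a simple closed form; a sketch follows (SSIM, the other metric, involves local luminance, contrast, and structure statistics and is omitted here):

    ```python
    import numpy as np

    def psnr(ref, img, data_range=1.0):
        """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE).

        ref, img  : arrays of the same shape (reference and test image).
        data_range: dynamic range of the pixel values (e.g. 1.0 or 255).
        """
        mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
        if mse == 0:
            return np.inf  # identical images
        return 10.0 * np.log10(data_range ** 2 / mse)
    ```

    Higher is better; a PSNR of 38.8 dB, as reported above, corresponds to a very small mean squared error relative to the image's dynamic range.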