218 research outputs found
A Deep-Learning Image-to-Image Translation Technique for Magnetic Resonance Images Using the Brain's Micro- and Macro-scale Characteristics
Thesis (M.S.) -- Seoul National University Graduate School: Dept. of Brain and Cognitive Sciences, College of Natural Sciences, August 2023. Advisor: Jiook Cha.
The brain consists of the highly localized functions of several brain regions and the integration of these regions through neural connections. These brain neural connections are constantly changing at the systemic and synaptic levels to effectively respond to the ever-changing environment. One of the key factors enabling these dynamic interactions is the structural plasticity of the human brain at the macro and micro scale. Because the brain's macro- and micro-structures convey different but complementary information, considering both structures is critical to understanding the brain's structural plasticity and connectivity during cognitive tasks. However, previous studies have not effectively considered this issue.
In this study, a novel deep learning framework, Macro2Micro, is proposed to generate high-quality Diffusion Tensor Imaging (DTI) and tractography from structural MRI (sMRI). The study is premised on the hypothesis that micro-scale structural information can be inferred from macro-scale structures, enabling the generation of different imaging modalities beneficial for disease diagnosis and research, even when only one modality is initially obtained. This approach, unprecedented in the realm of neuroimaging, leverages the benefits of cross-modality image translation, offering significant time and cost savings. The Macro2Micro framework utilizes 3D T1 to generate 2D T1 slices as input, which are then processed through a Generative Adversarial Network (GAN) to produce 2D DTI (FA) slices and subsequently 2D tractography. The key element of this process is the use of Octave Convolutions, which facilitate the analysis of connections between various scale MR modalities. The framework was trained using the Adolescent Brain Cognitive Development (ABCD) dataset, with training losses evaluated through Image Pixel loss, Perceptual loss, GAN loss, and brain-focused patch GAN loss.
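The Octave Convolution at the core of this pipeline splits feature maps into high- and low-frequency groups and exchanges information between the two resolutions. The following is a minimal NumPy sketch of that idea (1x1 kernels only; `octave_conv` and its helpers are illustrative assumptions, not the thesis's actual implementation):

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling over the spatial axes of a (C, H, W) array."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling over the spatial axes."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    """Pointwise convolution: mix channels with weight matrix w (out, in)."""
    return np.einsum('oc,chw->ohw', w, x)

def octave_conv(x_h, x_l, weights):
    """One Octave Convolution step with 1x1 kernels.

    x_h: high-frequency features at full resolution (C_h, H, W)
    x_l: low-frequency features at half resolution  (C_l, H/2, W/2)
    weights: dict with the four path matrices 'hh', 'hl', 'lh', 'll'.
    Information flows along four paths: H->H, L->L, and the
    cross-frequency exchanges H->L (pooled) and L->H (upsampled).
    """
    y_h = conv1x1(x_h, weights['hh']) + upsample2(conv1x1(x_l, weights['lh']))
    y_l = conv1x1(x_l, weights['ll']) + conv1x1(avg_pool2(x_h), weights['hl'])
    return y_h, y_l
```

The frequency separation is what the abstract refers to: low-frequency paths carry coarse (macro-scale) structure cheaply at half resolution, while high-frequency paths retain fine (micro-scale) detail.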
The results were superior to competing algorithms both quantitatively and qualitatively, and they carry neuroscientific significance: the model learned not only the image distribution but also the biological characteristics of the brain's macro- and micro-scale structures. Applying this image translation model as a data augmentation method could address issues of data imbalance and scarcity. This research underscores the potential of multimodal imaging, specifically the combined use of T1, DTI, and tractography, in advancing disease modeling.

Chapter 1. INTRODUCTION
Chapter 2. RELATED WORK
Chapter 3. METHOD
3.1. Architecture Overview
3.2. Octave Convolution
3.3. Networks
3.4. Training Losses
3.5. Image Quality Metrics
3.6. Comparison of generated and real FA images in low-dimensional representation
3.7. Prediction of biological and cognitive variables using predicted FA images
3.8. Prediction of Tractography from FA images
3.9. Experimental Settings
Chapter 4. RESULTS
4.1. Qualitative Evaluation
4.2. Quantitative Evaluation
4.3. Generated FA images by Macro2Micro can efficiently predict sex, ADHD, and intelligence
4.4. Ablation Studies
4.5. Effectiveness of Macro2Micro along the distance from the center of the brain
4.6. FA Image Translation to Tractography
Chapter 5. DISCUSSION AND CONCLUSIONS
Bibliography
Abstract in Korean
A Deep Network for Explainable Prediction of Non-Imaging Phenotypes using Anatomical Multi-View Data
Large datasets often contain multiple distinct feature sets, or views, that
offer complementary information that can be exploited by multi-view learning
methods to improve results. We investigate anatomical multi-view data, where
each brain anatomical structure is described with multiple feature sets. In
particular, we focus on sets of white matter microstructure and connectivity
features from diffusion MRI, as well as sets of gray matter area and thickness
features from structural MRI. We investigate machine learning methodology that
applies multi-view approaches to improve the prediction of non-imaging
phenotypes, including demographics (age), motor (strength), and cognition
(picture vocabulary). We present an explainable multi-view network (EMV-Net)
that can use different anatomical views to improve prediction performance. In
this network, each individual anatomical view is processed by a view-specific
feature extractor and the extracted information from each view is fused using a
learnable weight. This is followed by a wavelet transform-based module to
obtain complementary information across views which is then applied to
calibrate the view-specific information. Additionally, the calibrator produces
an attention-based calibration score to indicate anatomical structures'
importance for interpretation.
Comment: 2023 The Medical Image Computing and Computer Assisted Intervention
Society workshop
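The learnable-weight fusion step described above can be sketched as follows; this is a hypothetical NumPy illustration of weighted multi-view fusion (`fuse_views` and the softmax weighting are assumptions, not the EMV-Net code):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_views(views, logits):
    """Fuse per-view feature vectors with learnable scalar weights.

    views:  list of (D,) feature vectors, one per anatomical view
            (e.g. white-matter microstructure, connectivity,
            cortical area, cortical thickness).
    logits: (V,) unnormalized weights, learned during training.
    Returns the weighted sum of the views, shape (D,).
    """
    w = softmax(np.asarray(logits, dtype=float))
    return sum(wi * v for wi, v in zip(w, views))
```

During training the logits would be updated by backpropagation, so the network learns how much each anatomical view should contribute before the wavelet-based calibration stage.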
HA-HI: Synergising fMRI and DTI through Hierarchical Alignments and Hierarchical Interactions for Mild Cognitive Impairment Diagnosis
Early diagnosis of mild cognitive impairment (MCI) and subjective cognitive
decline (SCD) utilizing multi-modal magnetic resonance imaging (MRI) is a
pivotal area of research. While various regional and connectivity features from
functional MRI (fMRI) and diffusion tensor imaging (DTI) have been employed to
develop diagnosis models, most studies integrate these features without
adequately addressing their alignment and interactions. This limits the
potential to fully exploit the synergistic contributions of combined features
and modalities. To address this gap, our study introduces a novel Hierarchical
Alignments and Hierarchical Interactions (HA-HI) method for MCI and SCD
classification, leveraging the combined strengths of fMRI and DTI. HA-HI
efficiently learns significant MCI- or SCD- related regional and connectivity
features by aligning various feature types and hierarchically maximizing their
interactions. Furthermore, to enhance the interpretability of our approach, we
have developed the Synergistic Activation Map (SAM) technique, revealing the
critical brain regions and connections that are indicative of MCI/SCD.
Comprehensive evaluations on the ADNI dataset and our self-collected data
demonstrate that HA-HI outperforms other existing methods in diagnosing MCI and
SCD, making it a potentially vital and interpretable tool for early detection.
The implementation of this method is publicly accessible at
https://github.com/ICI-BCI/Dual-MRI-HA-HI.git
3D Deep Learning on Medical Images: A Review
The rapid advancements in machine learning and graphics processing
technologies, together with the availability of medical imaging data, have led
to a sharp increase in the use of deep learning models in the medical domain.
This growth was accelerated by advancements in convolutional neural network
(CNN) based architectures, which
were adopted by the medical imaging community to assist clinicians in disease
diagnosis. Since the grand success of AlexNet in 2012, CNNs have been
increasingly used in medical image analysis to improve the efficiency of human
clinicians. In recent years, three-dimensional (3D) CNNs have been employed for
analysis of medical images. In this paper, we trace the history of how the 3D
CNN was developed from its machine learning roots, give a brief mathematical
description of 3D CNN and the preprocessing steps required for medical images
before feeding them to 3D CNNs. We review the significant research in the field
of 3D medical imaging analysis using 3D CNNs (and its variants) in different
medical areas such as classification, segmentation, detection, and
localization. We conclude by discussing the challenges associated with the use
of 3D CNNs in the medical imaging domain (and the use of deep learning models,
in general) and possible future trends in the field.
Comment: 13 pages, 4 figures, 2 tables
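As a concrete illustration of the 3D convolution operation such networks build on, here is a minimal NumPy sketch (naive loops, single channel, valid padding, and cross-correlation as CNNs actually compute it; not code from the review itself):

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, CNN convention).

    volume: (D, H, W) single-channel volume, e.g. one MRI scan.
    kernel: (kd, kh, kw) filter.
    Returns a (D-kd+1, H-kh+1, W-kw+1) feature map.
    """
    kd, kh, kw = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Dot product of the kernel with one 3D patch of the volume.
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out
```

Production frameworks vectorize this heavily, but the triple loop makes the cubic growth in compute relative to 2D convolution easy to see, which is one of the challenges the review discusses.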
PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation
With the advent of convolutional neural networks (CNN), supervised learning
methods are increasingly being used for whole brain segmentation. However, a
large, manually annotated training dataset of labeled brain images required to
train such supervised methods is frequently difficult to obtain or create. In
addition, existing training datasets are generally acquired with a homogeneous
magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such
datasets are unable to generalize on test data with different acquisition
protocols. Modern neuroimaging studies and clinical trials are necessarily
multi-center initiatives with a wide variety of acquisition protocols. Despite
stringent protocol harmonization practices, it is very difficult to standardize
the gamut of MR imaging parameters across scanners, field strengths, receive
coils etc., that affect image contrast. In this paper we propose a CNN-based
segmentation algorithm that, in addition to being highly accurate and fast, is
also resilient to variation in the input acquisition. Our approach relies on
building approximate forward models of pulse sequences that produce a typical
test image. For a given pulse sequence, we use its forward model to generate
plausible, synthetic training examples that appear as if they were acquired in
a scanner with that pulse sequence. Sampling over a wide variety of pulse
sequences results in a wide variety of augmented training examples that help
build an image contrast invariant model. Our method trains a single CNN that
can segment input MRI images with acquisition parameters as disparate as
T1-weighted and T2-weighted contrasts with only T1-weighted training
data. The segmentations generated are highly accurate with state-of-the-art
results (overall Dice overlap), with a fast run time (~45
seconds), and consistent across a wide range of acquisition protocols.
Comment: Typo in author name corrected. Greves -> Greve
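A toy stand-in for such a pulse-sequence forward model is the ideal spin-echo signal equation, which maps tissue parameter maps plus sequence parameters to an image. The sketch below is a generic textbook illustration, not PSACNN's actual forward model:

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Ideal spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).

    pd, t1, t2: proton density and relaxation-time (ms) maps (arrays or scalars).
    tr, te:     repetition and echo times (ms) of the simulated sequence.
    Varying (tr, te) yields synthetic images with different contrasts
    from the same underlying tissue parameters, which is the essence of
    sampling over pulse sequences to augment training data.
    """
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)
```

For example, a short TR and short TE emphasize T1 contrast (white matter, with its shorter T1, appears brighter than gray matter), while a long TR and long TE emphasize T2 contrast.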
Brain MRI-to-PET Synthesis using 3D Convolutional Attention Networks
Accurate quantification of cerebral blood flow (CBF) is essential for the
diagnosis and assessment of a wide range of neurological diseases. Positron
emission tomography (PET) with radiolabeled water (15O-water) is considered the
gold-standard for the measurement of CBF in humans. PET imaging, however, is
not widely available because of its prohibitive costs and use of short-lived
radiopharmaceutical tracers that typically require onsite cyclotron production.
Magnetic resonance imaging (MRI), in contrast, is more readily accessible and
does not involve ionizing radiation. This study presents a convolutional
encoder-decoder network with attention mechanisms to predict gold-standard
15O-water PET CBF from multi-sequence MRI scans, thereby eliminating the need
for radioactive tracers. Inputs to the prediction model include several
commonly used MRI sequences (T1-weighted, T2-FLAIR, and arterial spin
labeling). The model was trained and validated using 5-fold cross-validation in
a group of 126 subjects consisting of healthy controls and cerebrovascular
disease patients, all of whom underwent simultaneous 15O-water PET/MRI. The
results show that such a model can successfully synthesize high-quality PET CBF
measurements (with an average SSIM of 0.924 and PSNR of 38.8 dB) and is more
accurate compared to concurrent and previous PET synthesis methods. We also
demonstrate the clinical significance of the proposed algorithm by evaluating
the agreement for identifying the vascular territories with abnormally low CBF.
Such methods may enable more widespread and accurate CBF evaluation in larger
cohorts who cannot undergo PET imaging due to radiation concerns, lack of
access, or logistic challenges.
Comment: 19 pages, 14 figures
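The PSNR figure quoted above (38.8 dB) can be computed from a reference and a synthesized image as follows; this is a minimal NumPy sketch, and the function name and `data_range` convention are assumptions, not the paper's evaluation code:

```python
import numpy as np

def psnr(reference, prediction, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.

    data_range is the maximum possible pixel value (1.0 for images
    normalized to [0, 1], 255 for 8-bit images).
    """
    reference = np.asarray(reference, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    mse = np.mean((reference - prediction) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher is better: halving the RMS error raises PSNR by about 6 dB, so differences of a few dB between synthesis methods are substantial.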
- …