Better Generalization of White Matter Tract Segmentation to Arbitrary Datasets with Scaled Residual Bootstrap
White matter (WM) tract segmentation is a crucial step for brain connectivity
studies. It is performed on diffusion magnetic resonance imaging (dMRI), and
deep neural networks (DNNs) have achieved promising segmentation accuracy.
Existing DNN-based methods use an annotated dataset for model training.
However, the performance of the trained model on a different test dataset may
not be optimal due to distribution shift, and it is desirable to design WM
tract segmentation approaches that allow better generalization of the
segmentation model to arbitrary test datasets. In this work, we propose a WM
tract segmentation approach that improves the generalization with scaled
residual bootstrap. The difference between dMRI scans in training and test
datasets is most noticeably caused by the different numbers of diffusion
gradients and noise levels. Since both of them lead to different
signal-to-noise ratios (SNRs) between the training and test data, we propose to
augment the training scans by adjusting the noise magnitude and develop an
adapted residual bootstrap strategy for the augmentation. To validate the
proposed approach, two dMRI datasets were used, and the experimental results
show that our method consistently improves the generalization of WM tract
segmentation under various settings.
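The noise-magnitude adjustment described above can be illustrated with a generic scaled residual bootstrap: fit a linear signal model per voxel, resample the fit residuals with replacement, and add them back after scaling to change the effective SNR. This is a minimal sketch under assumed inputs, not the paper's implementation; the design matrix, the linear model, and the scale factor are placeholders.

```python
import numpy as np

def scaled_residual_bootstrap(signals, design, scale, seed=None):
    """Augment dMRI signals by resampling scaled fit residuals.

    signals : (n_voxels, n_gradients) measured diffusion signals
    design  : (n_gradients, n_coeffs) design matrix of a linear signal
              model (a placeholder for whatever model is fitted per voxel)
    scale   : factor applied to the bootstrapped residuals; scale > 1
              amplifies noise, i.e. lowers the effective SNR
    """
    rng = np.random.default_rng(seed)
    # Least-squares fit of the linear model in every voxel at once
    coeffs, *_ = np.linalg.lstsq(design, signals.T, rcond=None)
    fitted = (design @ coeffs).T
    residuals = signals - fitted
    # Resample residuals with replacement along the gradient axis
    idx = rng.integers(0, residuals.shape[1], size=residuals.shape)
    resampled = np.take_along_axis(residuals, idx, axis=1)
    # Add the scaled bootstrap residuals back to the noise-free fit
    return fitted + scale * resampled

# Usage: create a lower-SNR copy of toy signals (100 voxels, 30 gradients)
rng = np.random.default_rng(0)
design = rng.normal(size=(30, 6))
signals = rng.normal(size=(100, 30)) + 5.0
augmented = scaled_residual_bootstrap(signals, design, scale=1.5, seed=0)
```

Because the bootstrap only perturbs the residual (noise) component, the fitted signal structure of each voxel is preserved while the SNR is varied across augmented copies.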
Positive-unlabeled learning for binary and multi-class cell detection in histopathology images with incomplete annotations
Cell detection in histopathology images is of great interest to clinical
practice and research, and convolutional neural networks (CNNs) have achieved
remarkable cell detection results. Typically, to train CNN-based cell detection
models, every positive instance in the training images needs to be annotated,
and instances that are not labeled as positive are considered negative samples.
However, manual cell annotation is complicated due to the large number and
diversity of cells, and it can be difficult to ensure the annotation of every
positive instance. In many cases, only incomplete annotations are available,
where some of the positive instances are annotated and the others are not, and
the classification loss term for negative samples in typical network training
becomes incorrect. In this work, to address this problem of incomplete
annotations, we propose to reformulate the training of the detection network as
a positive-unlabeled learning problem. Since the instances in unannotated
regions can be either positive or negative, they have unknown labels. Using the
samples with unknown labels and the positively labeled samples, we first derive
an approximation of the classification loss term corresponding to negative
samples for binary cell detection, and based on this approximation we further
extend the proposed framework to multi-class cell detection. For evaluation,
experiments were performed on four publicly available datasets. The
experimental results show that our method improves the performance of cell
detection in histopathology images given incomplete annotations for network
training.
Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) https://melba-journal.org/2022:027. arXiv admin note: text overlap with arXiv:2106.1591
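The core idea of estimating the negative-sample loss from unlabeled data can be sketched with a generic non-negative positive-unlabeled risk estimator in the style of Kiryo et al.; the logistic surrogate loss and the class prior `prior` here are illustrative assumptions, not the approximation the paper derives for cell detection.

```python
import numpy as np

def nn_pu_loss(scores, labels, prior):
    """Non-negative PU risk estimator (Kiryo et al. style sketch).

    scores : (N,) raw classifier outputs (higher = more positive)
    labels : (N,) 1 for annotated-positive samples, 0 for unlabeled
    prior  : assumed class prior P(y = 1), a hyperparameter
    """
    softplus = lambda z: np.logaddexp(0.0, z)  # logistic surrogate loss
    pos = scores[labels == 1]
    unl = scores[labels == 0]
    # Risk of labeled positives being classified as positive
    r_pos = softplus(-pos).mean()
    # Negative-sample risk estimated from unlabeled data, with the
    # contribution of hidden positives subtracted out
    r_neg = softplus(unl).mean() - prior * softplus(pos).mean()
    # Clamp at zero so the estimate cannot go negative and overfit
    return prior * r_pos + max(r_neg, 0.0)

# Usage: two annotated positives, three unlabeled samples
scores = np.array([2.0, 1.5, -0.5, 0.3, -1.2])
labels = np.array([1, 1, 0, 0, 0])
loss = nn_pu_loss(scores, labels, prior=0.4)
```

The clamp is what makes the estimator "non-negative": with incomplete annotations, the naive subtraction can drive the estimated negative risk below zero, which a flexible network would otherwise exploit.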
Estimation of Fiber Orientations Using Neighborhood Information
Data from diffusion magnetic resonance imaging (dMRI) can be used to
reconstruct fiber tracts, for example, in muscle and white matter. Estimation
of fiber orientations (FOs) is a crucial step in the reconstruction process and
these estimates can be corrupted by noise. In this paper, a new method called
Fiber Orientation Reconstruction using Neighborhood Information (FORNI) is
described and shown to reduce the effects of noise and improve FO estimation
performance by incorporating spatial consistency. FORNI uses a fixed tensor
basis to model the diffusion weighted signals, which has the advantage of
providing an explicit relationship between the basis vectors and the FOs. FO
spatial coherence is encouraged using weighted l1-norm regularization terms,
which contain the interaction of directional information between neighbor
voxels. Data fidelity is encouraged using a squared error between the observed
and reconstructed diffusion weighted signals. After appropriate weighting of
these competing objectives, the resulting objective function is minimized using
a block coordinate descent algorithm, and a straightforward parallelization
strategy is used to speed up processing. Experiments were performed on a
digital crossing phantom, ex vivo tongue dMRI data, and in vivo brain dMRI data
for both qualitative and quantitative evaluation. The results demonstrate that
FORNI improves the quality of FO estimation over other state-of-the-art algorithms.
Comment: Journal paper accepted in Medical Image Analysis. 35 pages and 16 figures.
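The per-voxel piece of the objective described above, a squared-error data-fidelity term plus weighted l1 regularization over a fixed basis, can be sketched with a toy proximal-gradient (ISTA) solver. This is purely illustrative: FORNI's neighbor-coupled weights and block coordinate descent over voxels are replaced here by fixed weights and a single-voxel loop, and the random basis stands in for its fixed tensor basis.

```python
import numpy as np

def fit_voxel_ista(signal, basis, weights, n_iter=200):
    """Minimize 0.5*||s - B f||^2 + sum_i w_i*|f_i| with f >= 0 via
    proximal gradient descent (ISTA). `basis` stands in for the fixed
    tensor basis and `weights` for the neighbor-informed l1 weights.
    """
    step = 1.0 / np.linalg.norm(basis, 2) ** 2  # 1 / Lipschitz constant
    f = np.zeros(basis.shape[1])
    for _ in range(n_iter):
        grad = basis.T @ (basis @ f - signal)   # data-fidelity gradient
        f = f - step * grad
        # Soft-threshold with per-direction weights; keep fractions >= 0
        f = np.maximum(f - step * weights, 0.0)
    return f

# Usage: recover a sparse 2-fiber mixture over a random toy basis
rng = np.random.default_rng(1)
basis = rng.normal(size=(30, 20))               # 30 gradients, 20 directions
true_f = np.zeros(20)
true_f[[3, 11]] = [1.0, 0.5]
signal = basis @ true_f + 0.01 * rng.normal(size=30)
f_hat = fit_voxel_ista(signal, basis, weights=0.05 * np.ones(20))
```

In the full method the weights would depend on the FOs of neighboring voxels, which is what encourages spatial coherence; here they are constant only to keep the sketch self-contained.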
Generating synthetic computed tomography for radiotherapy: SynthRAD2023 challenge report
Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering the electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast but lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground-truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
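The image-similarity side of such an evaluation can be illustrated with a simplified structural similarity index and mean absolute error between a reference CT and a synthetic CT. This is a sketch only: challenge-grade evaluations use sliding-window SSIM over a patient-body mask and, crucially, the dose-based metrics that this example does not cover.

```python
import numpy as np

def global_ssim(ct, sct, k1=0.01, k2=0.03):
    """Single-window (global-statistics) SSIM between a reference CT
    and a synthetic CT; real evaluations use sliding windows and a
    body mask, so this is a deliberate simplification.
    """
    data_range = ct.max() - ct.min()
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = ct.mean(), sct.mean()
    var_x, var_y = ct.var(), sct.var()
    cov = ((ct - mu_x) * (sct - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def mae(ct, sct):
    """Mean absolute error in HU (ideally restricted to a body mask)."""
    return np.abs(ct - sct).mean()

# Usage: a mildly corrupted sCT scores just below a perfect SSIM of 1
rng = np.random.default_rng(2)
ct = rng.normal(0.0, 300.0, size=(64, 64))      # toy HU-like intensities
sct = ct + rng.normal(0.0, 20.0, size=ct.shape)
similarity = global_ssim(ct, sct)
error = mae(ct, sct)
```

The challenge's finding that image similarity does not correlate with dose accuracy is exactly why a metric like this alone is insufficient: two sCTs with similar SSIM can yield very different dose distributions.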