40 research outputs found

    Open-source tool for Airway Segmentation in Computed Tomography using 2.5D Modified EfficientDet: Contribution to the ATM22 Challenge

    Airway segmentation in computed tomography images can be used to analyze pulmonary diseases; however, manual segmentation is labor-intensive and relies on expert knowledge. This manuscript details our contribution to MICCAI's 2022 Airway Tree Modelling challenge, a competition of fully automated methods for airway segmentation. We employed a previously developed deep learning architecture based on a modified EfficientDet (MEDSeg), trained from scratch for binary airway segmentation using the provided annotations. Our method achieved a Dice score of 90.72 in internal validation, 95.52 in external validation, and 93.49 in the final test phase, despite not being specifically designed or tuned for airway segmentation. Open-source code, a graphical user interface, and a pip package for predictions with our model and trained weights are available at https://github.com/MICLab-Unicamp/medseg
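    The "2.5D" in the title refers to feeding a 2D network stacks of adjacent axial slices as channels, so each prediction sees some through-plane context. The paper does not spell out the exact construction, but a common sketch of the idea (the `make_25d_inputs` helper below is a hypothetical illustration, NumPy only) looks like:

```python
import numpy as np

def make_25d_inputs(volume, n_adjacent=1):
    """Stack each axial slice with its neighbors as channels (2.5D input).

    volume: (Z, H, W) CT array; n_adjacent: slices taken on each side.
    Returns (Z, 2*n_adjacent + 1, H, W); edge slices are replicated.
    """
    z = volume.shape[0]
    padded = np.pad(volume, ((n_adjacent, n_adjacent), (0, 0), (0, 0)),
                    mode="edge")
    return np.stack([padded[i:i + 2 * n_adjacent + 1] for i in range(z)])

vol = np.random.rand(10, 64, 64).astype(np.float32)  # toy CT volume
x = make_25d_inputs(vol, n_adjacent=1)
print(x.shape)  # (10, 3, 64, 64): each slice paired with its two neighbors
```

    Each (3, H, W) stack can then be fed to a 2D segmentation network as a three-channel image.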

    Data-driven corpus callosum parcellation method through diffusion tensor imaging

    The corpus callosum (CC) is a set of neural fibers in the cerebral cortex, responsible for facilitating inter-hemispheric communication. Its structural characteristics are an essential element in studying both healthy subjects and patients diagnosed with neurodegenerative diseases. Due to its size, the CC is usually divided into smaller regions, a process known as parcellation. Since there are no visible landmarks inside the structure indicating its division, CC parcellation is a challenging task, and the methods proposed in the literature are geometric or atlas-based. This paper proposes an automatic data-driven CC parcellation method, based on diffusion data extracted from diffusion tensor imaging, that uses the watershed transform. Experiments compared the parcellation results of the proposed method with those of three other parcellation methods on a data set containing 150 images. Quantitative comparison using the Dice coefficient showed that the CC parcels given by the proposed method have a mean overlap higher than 0.9 for some parcels and lower than 0.6 for others. The poor overlap results were confirmed by the statistically significant differences obtained for diffusion metric values in each parcel when using different parcellation methods. The proposed method was also validated using CC tractography and was the only one to propose a non-geometric approach for CC parcellation, based only on the diffusion data of each subject analyzed
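    The Dice coefficient used for the quantitative comparison of parcels is straightforward to compute; a minimal NumPy sketch (the toy masks below are illustrative, not the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy parcels produced by two hypothetical parcellation methods
p1 = np.zeros((8, 8), bool); p1[2:6, 2:6] = True   # 16 pixels
p2 = np.zeros((8, 8), bool); p2[3:7, 2:6] = True   # 16 pixels, shifted by one row
print(dice(p1, p2))  # 2*12 / (16+16) = 0.75
```

    An overlap above 0.9 indicates near-identical parcel boundaries, while values below 0.6 (as reported for some parcels) indicate substantial disagreement between methods.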

    Automatic DTI-based parcellation of the corpus callosum through the watershed transform

    Parcellation of the corpus callosum (CC) in the midsagittal cross-section of the brain is of utmost importance for the study of diffusion properties within this structure. The complexity of this operation comes from the absence of macroscopic anatomical landmarks to help in dividing the CC into different callosal areas. In this paper we propose a completely automatic method for CC parcellation using diffusion tensor imaging (DTI). A dataset of 15 diffusion MRI volumes from normal subjects was used. For each subject, the midsagittal slice was automatically detected based on the fractional anisotropy (FA) map. Then, segmentation of the CC in the midsagittal slice was performed using the hierarchical watershed transform over a weighted FA map. Finally, parcellation of the CC was obtained through the application of the watershed transform from chosen markers. The parcellation results were consistent for fourteen of the fifteen subjects tested and were similar to those obtained from tractography-based methods. Tractography confirmed that the cortical regions associated with each obtained CC region were consistent with the literature. A completely automatic DTI-based parcellation method for the CC was designed and presented. It is not based on tractography, which makes it fast and computationally inexpensive. While most existing methods for CC parcellation determine an average behavior for the subjects based on population studies, the proposed method reflects the diffusion properties specific to each subject. Parcellation boundaries are found based on the diffusion properties within each individual CC, which makes the method more reliable and less affected by differences in size and shape among subjects
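    The core step, the watershed transform applied from chosen markers, can be illustrated with scikit-image on a toy "elevation" map; the synthetic map and marker positions below are illustrative assumptions, not the paper's actual weighted FA map:

```python
import numpy as np
from skimage.segmentation import watershed

# Toy stand-in for a weighted FA map of the midsagittal slice:
# a central ridge (column 16) separates two basins, mimicking a
# low-anisotropy boundary between two callosal regions.
_, cols = np.mgrid[0:16, 0:32]
elevation = -np.abs(cols - 16).astype(float)  # maximum along column 16

# chosen markers, one per desired parcel
markers = np.zeros((16, 32), dtype=int)
markers[8, 4] = 1    # "anterior" marker
markers[8, 28] = 2   # "posterior" marker

labels = watershed(elevation, markers)  # flood from markers; parcels meet at the ridge
print(np.unique(labels))  # [1 2]
```

    In the actual method the markers are chosen automatically and the elevation image is the weighted FA map, so the parcel boundaries follow each subject's own diffusion properties.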

    Web-based Platform For Collaborative Medical Imaging Research

    Medical imaging research depends basically on the availability of large image collections, image processing and analysis algorithms, hardware, and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread across research centers, and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available on the Internet through a web browser. Moreover, the processing history and all intermediate results are stored and displayed in automatically generated web pages for each object in the research project or clinical study. It requires no installation or configuration on the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.

    Watershed-based Segmentation of the Midsagittal Section of the Corpus Callosum in Diffusion MRI

    The corpus callosum (CC) is one of the most important white matter structures of the brain, interconnecting the two cerebral hemispheres. The CC is related to several diseases, including dyslexia, autism, multiple sclerosis, and lupus, which makes its study even more important. We propose here a new approach for fully automatic segmentation of the midsagittal section of the CC in magnetic resonance diffusion tensor images, including the automatic determination of the midsagittal slice of the brain. It uses the watershed transform and is performed on the fractional anisotropy (FA) map weighted by the projection of the principal eigenvector in the left-right direction. Experiments with real diffusion MRI data showed that the proposed method is able to quickly segment the CC and to determine the midsagittal slice without any user intervention. Since it is simple, fast, and does not require parameter settings, the proposed method is well suited for clinical applications
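    The "FA map weighted by the projection of the principal eigenvector in the left-right direction" can be computed directly from a field of diffusion tensors; a minimal NumPy sketch (the `weighted_fa` helper and the example tensor values are illustrative, not taken from the paper):

```python
import numpy as np

def weighted_fa(tensors):
    """FA weighted by |e1 · x̂| for an array of diffusion tensors.

    tensors: (..., 3, 3) symmetric diffusion tensors.
    """
    evals, evecs = np.linalg.eigh(tensors)       # eigenvalues in ascending order
    l1, l2, l3 = evals[..., 2], evals[..., 1], evals[..., 0]
    md = (l1 + l2 + l3) / 3.0                    # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))      # standard FA formula
    e1_x = np.abs(evecs[..., 0, 2])              # |x-component| of principal eigenvector
    return fa * e1_x

# a left-right oriented tensor: strongest diffusion along x
t = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(weighted_fa(t[None])[0])  # ≈ 0.799 (FA) × 1.0 (|e1_x|)
```

    The weighting suppresses anisotropic voxels whose fibers do not run left-right, which highlights the CC (whose fibers cross between hemispheres) against surrounding white matter.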

    Spectro-ViT: A Vision Transformer Model for GABA-edited MRS Reconstruction Using Spectrograms

    Purpose: To investigate the use of a Vision Transformer (ViT) to reconstruct/denoise GABA-edited magnetic resonance spectroscopy (MRS) from a quarter of the typically acquired number of transients using spectrograms. Theory and Methods: A quarter of the typically acquired transients collected in GABA-edited MRS scans are pre-processed and converted to a spectrogram image representation using the Short-Time Fourier Transform (STFT). This image representation of the data allows the adaptation of a pre-trained ViT for reconstructing GABA-edited MRS spectra (Spectro-ViT). The Spectro-ViT is fine-tuned and then tested using in vivo GABA-edited MRS data, and its performance is compared against other models in the literature using spectral quality metrics and estimated metabolite concentration values. Results: The Spectro-ViT model significantly outperformed all other models in four out of five quantitative metrics (mean squared error, shape score, GABA+/water fit error, and full width at half maximum). The estimated metabolite concentrations (GABA+/water, GABA+/Cr, and Glx/water) were consistent with those estimated from typical GABA-edited MRS scans reconstructed with the full number of typically collected transients. Conclusion: The proposed Spectro-ViT model achieved state-of-the-art results in reconstructing GABA-edited MRS, and the results indicate these scans could be up to four times faster
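    The spectrogram conversion via the STFT can be sketched with SciPy on a toy free-induction decay; the sampling rate, window length, and overlap below are illustrative assumptions, not the paper's acquisition or pre-processing parameters:

```python
import numpy as np
from scipy.signal import stft

fs = 2000.0                               # toy spectral width in Hz
t = np.arange(1024) / fs
# toy free-induction decay: decaying complex exponential plus noise
fid = np.exp(2j * np.pi * 150.0 * t) * np.exp(-t / 0.1)
fid = fid + 0.01 * (np.random.randn(1024) + 1j * np.random.randn(1024))

# Short-Time Fourier Transform → 2D time-frequency image
f, tau, Z = stft(fid, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
spectrogram = np.abs(Z)                   # magnitude image fed to the ViT
print(spectrogram.shape)                  # (frequencies, time segments)
```

    Representing each transient (or average of transients) as an image is what allows a vision model pre-trained on natural images to be repurposed for MRS reconstruction.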

    Advancing GABA-edited MRS Research through a Reconstruction Challenge

    Purpose: To create a benchmark for the comparison of machine learning-based Gamma-Aminobutyric Acid (GABA)-edited Magnetic Resonance Spectroscopy (MRS) reconstruction models using one quarter of the transients typically acquired during a complete scan. Methods: The Edited-MRS reconstruction challenge had three tracks with the purpose of evaluating machine learning models trained to reconstruct simulated (Track 1), homogeneous in vivo (Track 2), and heterogeneous in vivo (Track 3) GABA-edited MRS data. Four quantitative metrics were used to evaluate the results: mean squared error (MSE), signal-to-noise ratio (SNR), linewidth, and a shape score metric that we proposed. Challenge participants were given three months to create, train, and submit their models. Challenge organizers provided open access to a baseline U-Net model for initial comparison, as well as simulated data, in vivo data, and tutorials and guides for adding synthetic noise to the simulations. Results: The most successful approach for the Track 1 simulated data was a covariance matrix convolutional neural network model, while for the Track 2 and Track 3 in vivo data, a vision transformer model operating on a spectrogram representation of the data achieved the most success. Deep learning (DL) based reconstructions with reduced transients achieved equivalent or better SNR, linewidth, and fit error than conventional reconstructions with the full number of transients. However, some DL models also showed the ability to optimize the linewidth and SNR values without actually improving overall spectral quality, pointing to the need for more robust metrics. Conclusion: The Edited-MRS reconstruction challenge showed that the top-performing DL-based edited-MRS reconstruction pipelines can, with a reduced number of transients, obtain metrics equivalent to those of conventional reconstruction pipelines using the full number of transients. The proposed shape score metric was positively correlated with challenge track outcomes, indicating that it is well suited to evaluate spectral quality.
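    Two of the challenge metrics, MSE and SNR, are simple to compute; a hedged sketch follows (the SNR definition below is the common peak-over-noise-standard-deviation form, which may differ in detail from the challenge's implementation, and the regions of interest are toy values):

```python
import numpy as np

def mse(recon, target):
    """Mean squared error between a reconstructed and a target spectrum."""
    return float(np.mean((recon - target) ** 2))

def snr(spectrum, peak_region, noise_region):
    """Peak height divided by the standard deviation of a signal-free region."""
    peak = np.max(np.abs(spectrum[peak_region]))
    noise = np.std(spectrum[noise_region])
    return float(peak / noise)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, 512)        # toy noise floor
x[250:260] += np.hanning(10)          # synthetic metabolite peak
print(mse(x, x), snr(x, slice(240, 270), slice(0, 100)))
```

    The Conclusion's caveat follows directly from such definitions: a model can sharpen a peak or smooth the noise region, improving SNR and linewidth numerically, without the reconstructed spectrum actually matching the true one.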

    Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge

    Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of the performance of such methods is lacking. We organized a scientific challenge, the WMH Segmentation Challenge, in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation
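    The modified Hausdorff distance (95th percentile) used for ranking can be sketched over point sets with NumPy/SciPy; in practice the points would be the boundary voxels of the predicted and reference WMH masks, and the toy lesions below are illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two point sets.

    Taking the 95th percentile instead of the maximum makes the metric
    robust to a few outlier points on either boundary.
    """
    d = cdist(points_a, points_b)          # all pairwise distances
    a_to_b = d.min(axis=1)                 # each point in A to its nearest in B
    b_to_a = d.min(axis=0)                 # each point in B to its nearest in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

# boundary points of two toy lesions, one shifted by a single voxel
a = np.array([[i, 0] for i in range(10)], dtype=float)
b = a + [0.0, 1.0]
print(hd95(a, b))  # 1.0: every point is exactly one voxel from the other set
```

    Unlike the Dice coefficient, which is dominated by lesion volume, this boundary-distance metric penalizes contour errors even on large, mostly overlapping lesions.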