Advanced Brain Tumour Segmentation from MRI Images
Magnetic resonance imaging (MRI) is a widely used medical imaging technology for diagnosing various tissue abnormalities and detecting tumors. Active development in computerized medical image segmentation has played a vital role in scientific research, helping doctors reach decisions quickly and administer the necessary treatment. Brain tumor segmentation is an active research topic at the intersection of information technology and biomedical engineering, motivated by the need to assess tumor growth, evaluate treatment response, support computer-assisted surgery and radiation therapy planning, and develop tumor growth models. Computer-aided diagnostic systems are therefore valuable in medical practice, reducing the workload of doctors and providing accurate results. This chapter explains the causes and classification of brain tumors, the MRI scanning process and its operation, and different segmentation methodologies.
Are we using appropriate segmentation metrics? Identifying correlates of human expert perception for CNN training beyond rolling the DICE coefficient
In this study, we explore quantitative correlates of qualitative human expert perception. We discover that current quality metrics and loss functions used for biomedical image segmentation tasks correlate only moderately with segmentation quality assessment by experts, especially for small yet clinically relevant structures such as the enhancing tumor in brain glioma. We propose a method employing classical statistics and experimental psychology to create complementary compound loss functions for modern deep learning methods, towards achieving a better fit with human quality assessment. When training a CNN to delineate adult brain tumors in MR images, all four proposed loss candidates outperform the established baselines on the clinically important and hardest-to-segment enhancing tumor label, while maintaining performance on the other label channels.
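As an illustration of the kind of compound loss the abstract describes, a soft Dice term can be combined with binary cross-entropy. This is a hypothetical sketch, not the paper's proposed loss candidates; the `alpha` weighting and function names are assumptions:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Soft Dice overlap between a predicted probability map and a binary mask."""
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def compound_loss(pred, target, alpha=0.5, eps=1e-7):
    """Weighted sum of (1 - Dice) and binary cross-entropy.

    Combining an overlap term with a pixel-wise term is a common way to
    build compound segmentation losses; alpha = 0.5 is an arbitrary choice.
    """
    p = np.clip(pred.astype(np.float64).ravel(), eps, 1.0 - eps)
    t = target.astype(np.float64).ravel()
    bce = -(t * np.log(p) + (1.0 - t) * np.log(1.0 - p)).mean()
    dice_loss = 1.0 - dice_coefficient(pred, target, eps)
    return alpha * dice_loss + (1.0 - alpha) * bce
```

A perfect prediction drives both terms toward zero, while the Dice term stays sensitive to small structures (like enhancing tumor) that pixel-wise losses tend to under-weight.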
Case Studies on X-Ray Imaging, MRI and Nuclear Imaging
The field of medical imaging is an essential aspect of the medical sciences,
involving various forms of radiation to capture images of the internal tissues
and organs of the body. These images provide vital information for clinical
diagnosis, and in this chapter, we will explore the use of X-ray, MRI, and
nuclear imaging in detecting severe illnesses. However, manual evaluation and
storage of these images can be a challenging and time-consuming process. To
address this issue, artificial intelligence (AI)-based techniques, particularly
deep learning (DL), have become increasingly popular for systematic feature
extraction and classification from imaging modalities, thereby aiding doctors
in making rapid and accurate diagnoses. In this review study, we will focus on
how AI-based approaches, particularly the use of Convolutional Neural Networks
(CNN), can assist in disease detection through medical imaging technology. CNN
is a commonly used approach for image analysis due to its ability to extract
features from raw input images, and as such, will be the primary area of
discussion in this study.
Comment: 14 pages, 3 figures, 4 tables; chapter accepted for the
Springer book "Data-driven approaches to medical imaging
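The feature extraction that makes CNNs well suited to image analysis reduces to sliding learned kernels over raw pixel input. A minimal sketch of that core operation (illustrative only, not code from the chapter):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid'-mode 2D cross-correlation: the basic operation a CNN
    layer applies to extract local features from raw pixel input."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the local patch under the kernel.
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

For example, the kernel `[[-1, 1]]` responds only where neighboring pixel intensities differ, acting as a simple vertical-edge detector; stacks of such learned filters are what the CNN-based diagnostic pipelines in the chapter build on.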
STU-Net: Scalable and Transferable Medical Image Segmentation Models Empowered by Large-Scale Supervised Pre-training
Large-scale models pre-trained on large-scale datasets have profoundly
advanced the development of deep learning. However, the state-of-the-art models
for medical image segmentation are still small-scale, with their parameters
only in the tens of millions. Further scaling them up to higher orders of
magnitude is rarely explored. An overarching goal of exploring large-scale
models is to train them on large-scale medical segmentation datasets for better
transfer capacities. In this work, we design a series of Scalable and
Transferable U-Net (STU-Net) models, with parameter sizes ranging from 14
million to 1.4 billion. Notably, the 1.4B STU-Net is the largest medical image
segmentation model to date. Our STU-Net is based on the nnU-Net framework due
to its popularity and impressive performance. We first refine the default
convolutional blocks in nnU-Net to make them scalable. Then, we empirically
evaluate different scaling combinations of network depth and width, discovering
that it is optimal to scale model depth and width together. We train our
scalable STU-Net models on the large-scale TotalSegmentator dataset and find
that increasing model size brings a stronger performance gain. This observation
suggests that large models are promising for medical image segmentation.
Furthermore, we evaluate the transferability of our model on 14 downstream
datasets for direct inference and 3 datasets for further fine-tuning, covering
various modalities and segmentation targets. We observe good performance of our
pre-trained model in both direct inference and fine-tuning. The code and
pre-trained models are available at https://github.com/Ziyan-Huang/STU-Net.
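The abstract's finding that depth and width are best scaled together can be illustrated with a rough parameter count for a U-Net-style encoder. This is a hypothetical back-of-the-envelope sketch, not the STU-Net code; biases, normalization, and the decoder are omitted:

```python
def conv_stack_params(depth, base_width, kernel=3, in_ch=1):
    """Approximate 2D conv weight count for an encoder of `depth` stages
    whose channel width doubles each stage from `base_width` (a common
    U-Net encoder pattern)."""
    params, ch_in = 0, in_ch
    for stage in range(depth):
        ch_out = base_width * (2 ** stage)
        # A k x k conv layer has k * k * ch_in * ch_out weights.
        params += kernel * kernel * ch_in * ch_out
        ch_in = ch_out
    return params
```

Doubling the base width roughly quadruples the parameter count (weights grow with the product of input and output channels), while adding one stage adds a single new block; this asymmetry is one reason scaling depth and width jointly, as the paper reports, tends to balance capacity better than scaling either alone.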