Interactive Medical Image Segmentation using Deep Learning with Image-specific Fine-tuning
Convolutional neural networks (CNNs) have achieved state-of-the-art
performance for automatic medical image segmentation. However, they have not
demonstrated sufficiently accurate and robust results for clinical use. In
addition, they are limited by the lack of image-specific adaptation and the
lack of generalizability to previously unseen object classes. To address these
problems, we propose a novel deep learning-based framework for interactive
segmentation by incorporating CNNs into a bounding box and scribble-based
segmentation pipeline. We propose image-specific fine-tuning to make a CNN
model adaptive to a specific test image, which can be either unsupervised
(without additional user interactions) or supervised (with additional
scribbles). We also propose a weighted loss function considering network and
interaction-based uncertainty for the fine-tuning. We applied this framework to
two applications: 2D segmentation of multiple organs from fetal MR slices,
where only two types of these organs were annotated for training; and 3D
segmentation of brain tumor core (excluding edema) and whole brain tumor
(including edema) from different MR sequences, where only tumor cores in one MR
sequence were annotated for training. Experimental results show that 1) our
model is more robust to segment previously unseen objects than state-of-the-art
CNNs; 2) image-specific fine-tuning with the proposed weighted loss function
significantly improves segmentation accuracy; and 3) our method leads to
accurate results with fewer user interactions and less user time than
traditional interactive segmentation methods. (Comment: 11 pages, 11 figures)
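The uncertainty-weighted fine-tuning loss described above can be illustrated with a minimal sketch. The exact weighting scheme belongs to the paper; the binary cross-entropy form and the specific weight values below are our simplified stand-in, where user scribbles receive full weight and pixels the network is uncertain about are down-weighted:

```python
import numpy as np

def weighted_pixel_loss(probs, labels, weights):
    """Pixel-wise weighted binary cross-entropy: a sketch of an
    interaction/uncertainty-weighted fine-tuning loss (illustrative, not the
    authors' exact formulation)."""
    eps = 1e-7
    ce = -(labels * np.log(probs + eps) + (1.0 - labels) * np.log(1.0 - probs + eps))
    return float(np.sum(weights * ce) / np.sum(weights))

# toy example: three pixels; the last one is uncertain, so it is down-weighted
probs   = np.array([0.9, 0.2, 0.6])   # network foreground probabilities
labels  = np.array([1.0, 0.0, 1.0])   # (pseudo-)labels from scribbles/prediction
weights = np.array([1.0, 1.0, 0.3])   # low weight = high uncertainty
loss = weighted_pixel_loss(probs, labels, weights)
```

Down-weighting the uncertain pixel reduces its influence on the fine-tuning gradient, which is the intuition behind weighting the loss by confidence.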
Social-Group-Optimization based tumor evaluation tool for clinical brain MRI of Flair/diffusion-weighted modality
Brain tumor is one of the most severe diseases affecting the human community and is usually diagnosed with medical imaging procedures. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are the regularly used non-invasive methods to acquire images of brain abnormalities for medical study. Owing to its importance, a significant number of image assessment and decision-making procedures exist in the literature. This article proposes a two-stage image assessment tool to examine brain MR images acquired using the Flair and DW modalities. A combination of Social-Group-Optimization (SGO) and Shannon's-Entropy (SE) supported multi-thresholding is implemented to pre-process the input images. The image post-processing includes several procedures, such as Active Contour (AC), Watershed, and region-growing segmentation, to extract the tumor section. Finally, a classifier system is implemented using ANFIS to categorize the tumor under analysis as benign or malignant. Experimental investigation was executed using benchmark datasets, such as ISLES and BRATS, as well as clinical MR images obtained with the Flair/DW modalities. The outcome of this study confirms that AC offers enhanced results compared with the other segmentation procedures considered in this article. The ANFIS classifier obtained an accuracy of 94.51% on the ISLES and real clinical images used. (C) 2019 Nalecz Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences
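The entropy-supported multi-thresholding in the pre-processing stage can be sketched as follows. For simplicity we use a single threshold and a brute-force search as a stand-in for the SGO optimizer (the paper uses SGO and possibly multiple thresholds; the toy histogram is ours):

```python
import numpy as np

def shannon_entropy_objective(hist, t):
    """Kapur/Shannon-style entropy criterion for a single threshold t:
    the sum of the entropies of the two classes the threshold creates.
    SGO would search for the t maximizing this; here we brute-force it."""
    p = hist / hist.sum()
    def H(seg):
        w = seg.sum()
        if w == 0:
            return 0.0
        q = seg[seg > 0] / w          # normalized within-class probabilities
        return float(-(q * np.log(q)).sum())
    return H(p[:t]) + H(p[t:])

# toy bimodal gray-level histogram: a dark mode, an empty valley, a bright mode
hist = np.array([4.0, 4.0, 0.0, 0.0, 0.0, 0.0, 2.0, 2.0])
best_t = max(range(1, len(hist)), key=lambda t: shannon_entropy_objective(hist, t))
# best_t falls in/at the empty valley separating the two modes
```

A population-based optimizer such as SGO replaces the brute-force `max` when the search space (multiple thresholds over 256 gray levels) becomes too large to enumerate.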
Segmentation of Multisequence Medical Images Using Random Walks Algorithm and Rough Sets Theory
Accurate Magnetic Resonance (MR) image segmentation is a clinically challenging task. More often than not, one type of MR image is insufficient to provide complete information about a pathological tissue or a visual object from the image.
Automatic analysis of medical images for change detection in prostate cancer
Prostate cancer is the most common cancer and the second most common cause of cancer death in men in the UK. However, the risk the cancer poses to the patient can vary considerably, and the widespread use of prostate-specific antigen (PSA) screening has led to over-diagnosis and over-treatment of low-grade tumours. It is therefore important to be able to differentiate high-grade prostate cancer from slowly-growing, low-grade cancer. Many of these men with low-grade cancer are placed on active surveillance (AS), which involves constant monitoring and intervention for risk reclassification, relying increasingly on magnetic resonance imaging (MRI) to detect disease progression, in addition to TRUS-guided biopsies, which are the routine clinical standard. This results in a need for new tools to process these images. For this purpose, it is important to have a good TRUS-MR registration so that corresponding anatomy can be located accurately across the two modalities. Automatic segmentation of the prostate gland on both modalities reduces some of the challenges of the registration, such as patient motion, tissue deformation, and the duration of the procedure. This thesis focuses on the use of deep learning methods, specifically convolutional neural networks (CNNs), for prostate cancer management. Chapters 4 and 5 investigated the use of CNNs for both TRUS and MRI prostate gland segmentation, and reported high segmentation accuracies for both: Dice Similarity Coefficients (DSC) of 0.89 for TRUS segmentations and DSCs between 0.84 and 0.89 for MRI prostate gland segmentation using a range of networks. Chapter 5 also investigated the impact of these segmentation scores on more clinically relevant measures, such as MRI-TRUS registration errors and volume measures, showing that a statistically significant difference in DSCs did not lead to a statistically significant difference in the clinical measures derived from these segmentations.
The potential of these algorithms in commercial and clinical systems is summarised, and the use of the MRI prostate gland segmentation in the application of radiological prostate cancer progression prediction for AS patients is investigated and discussed in Chapter 8, which shows statistically significant improvements in accuracy when using spatial priors in the form of prostate segmentations (0.63 ± 0.16 vs. 0.82 ± 0.18 when comparing whole prostate MRI vs. only the prostate gland region, respectively).
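The Dice Similarity Coefficient reported throughout the thesis is a standard overlap measure between a predicted and a reference binary mask; a minimal sketch (the toy masks are ours):

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                     # both masks empty: define as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy 2x3 masks: prediction has one extra foreground pixel
pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 1, 0], [0, 0, 0]])
score = dice(pred, true)               # 2*2 / (3+2) = 0.8
```

A DSC of 0.89, as reported for the TRUS segmentations, therefore corresponds to a high but imperfect overlap between the automatic and reference prostate contours.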
Microscope Embedded Neurosurgical Training and Intraoperative System
In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted at obtaining minimally invasive and minimally traumatic treatments are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results.
In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from that of humans. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon, and it is a vital point.
The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of the spatula palpation of the LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery.
This architecture can also be adapted for intra-operative purposes. In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's ability for better intra-operative orientation by giving him a three-dimensional view and other information necessary for safe navigation inside the patient.
These last considerations served as motivation for the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed at our institute in a previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability.
Since both AR and VR share the same platform, the system can be referred to as Mixed Reality System for neurosurgery.
All the components are open source or at least based on a GPL license.
Brain Tumor Detection Based on a Novel and High-Quality Prediction of the Tumor Pixel Distributions
In this paper, we propose a system to detect brain tumors in 3D MRI brain scans of the FLAIR modality. It performs two functions: (a) predicting the gray-level and locational distributions of the pixels in the tumor regions, and (b) generating a tumor mask with pixel-wise precision. To facilitate 3D data analysis and processing, we introduce a 2D histogram presentation that captures both the gray-level distribution and the pixel-location distribution of a 3D object. In the proposed system, particular 2D histograms, in which tumor-related feature data become concentrated, are established by exploiting the left-right asymmetry of the brain structure. A modulation function is generated from the input data of each patient case and applied to the 2D histograms to attenuate the elements irrelevant to the tumor regions. The prediction of the tumor pixel distribution is carried out in three steps, on the axial, coronal and sagittal slice series, respectively. In each step, the prediction result helps to identify and remove tumor-free slices, increasing the tumor information density in the data passed to the next step. After the three-step removal, the 3D input is reduced to a minimum bounding box of the tumor region. This box is used to finalize the prediction and is then transformed into a 3D tumor mask by means of gray-level thresholding and low-pass-based morphological operations. The final prediction result is used to determine the critical threshold. The proposed system has been tested extensively on the data of more than one thousand patient cases from the BraTS 2018-2021 datasets. The test results demonstrate that the predicted 2D histograms have a high degree of similarity with the true ones. The system also delivers very good tumor detection results, comparable to those of state-of-the-art CNN systems with mono-modality inputs, at an extremely low computation cost and without any need for training.
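The two central ideas above — a 2D histogram over gray level and slice location, and the use of left-right asymmetry — can be sketched together. The synthetic volume, bin counts, and the simple left-minus-right subtraction below are our illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy "brain" volume (slices x rows x cols): symmetric background around
# gray level 100, plus a bright blob on the left side only (the "tumor")
vol = rng.normal(100, 5, (8, 32, 32))
vol[3:6, 10:16, 4:10] += 80            # tumor voxels end up around gray level 180

def gray_location_hist(v, bins=16):
    """2D histogram: one row per axial slice, columns are gray-level bins,
    so it jointly encodes gray-level and location (slice index) distributions."""
    return np.stack([np.histogram(s, bins=bins, range=(0, 256))[0] for s in v])

left  = gray_location_hist(vol[:, :, :16])   # left hemisphere half
right = gray_location_hist(vol[:, :, 16:])   # right hemisphere half
asym  = left - right   # symmetric background cancels; tumor bins stand out
```

In the difference histogram, the symmetric background largely cancels while the rows/columns corresponding to the tumor slices and gray levels remain strongly positive, which is the asymmetry cue the system concentrates its feature data on.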