Brain Tumor Segmentation on MRI Brain Images with Fuzzy Clustering and GVF Snake Model
Deformable or snake models are extensively used for medical image segmentation, particularly to locate tumor boundaries in brain MRI images. However, problems with initialization and poor convergence to boundary concavities have limited their usefulness: as a result, they tend to be attracted to the wrong image features. In this paper, we propose a method that combines a region-based fuzzy clustering technique, Enhanced Possibilistic Fuzzy C-Means (EPFCM), with the Gradient Vector Flow (GVF) snake model to segment the tumor region in MRI images. Region-based fuzzy clustering provides an initial segmentation of the tumor, and its result supplies the initial contour for the GVF snake model, which then determines the final contour of the exact tumor boundary. Evaluation on tumor MRI images shows that our method is more accurate and robust for brain tumor segmentation.
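The clustering stage of the pipeline above can be sketched in a few lines. Note this is plain fuzzy c-means, not the paper's Enhanced Possibilistic FCM (which adds possibilistic typicality terms), and it runs on a 1-D intensity vector purely to illustrate how soft memberships can seed an initial tumor region for the GVF snake:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on a 1-D array of voxel intensities.

    Illustrative stand-in for the paper's EPFCM: the membership map of the
    brightest cluster can be thresholded to give an initial tumor contour.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    # Random initial memberships, each row normalised to sum to 1.
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # (k, 1) cluster centers
        d = np.abs(x - centers.T) + 1e-12                # (n, k) point-center distances
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return u, centers.ravel()
```

In a full pipeline, the binary mask obtained by thresholding the tumor-cluster memberships would be traced to a polygon and passed as the initial snake to a GVF active-contour implementation.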
Level-set segmentation of brain tumors in magnetic resonance images
Master's thesis (Master of Engineering)
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in the medical imaging pipeline remains very challenging. One of the main reasons is the amount of variability encountered in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical image processing pipelines.
The variability of human anatomy makes it virtually impossible to build large datasets for each disease with the labels and annotations required for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects.
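The normative-learning principle described above — fit a model only to normal data, then flag samples the model reconstructs poorly — can be illustrated without a deep generative model. The sketch below uses linear PCA as a stand-in (the thesis itself uses deep generative models; the class name and interface here are hypothetical):

```python
import numpy as np

class NormativePCA:
    """Anomaly detector trained only on normal samples.

    Same principle as normative learning with generative models, reduced to
    linear PCA: learn a low-rank subspace of 'normal', then score new samples
    by how badly that subspace reconstructs them.
    """
    def __init__(self, n_components=2):
        self.k = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        # Principal axes of the normal training data.
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.k]
        return self

    def score(self, X):
        z = (X - self.mean_) @ self.components_.T      # project onto 'normal' subspace
        recon = z @ self.components_ + self.mean_      # reconstruct from the projection
        return np.linalg.norm(X - recon, axis=1)       # high error => likely anomalous
```

An anomaly threshold would typically be calibrated on held-out normal scores (e.g. a high percentile), mirroring how reconstruction-error thresholds are set for generative models.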
However, despite significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging pipeline introduces uncertainty that is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this variability and the resulting uncertainty are introduced during model training and how they affect final task performance. It is therefore important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. This work presents a thorough investigation of this issue by jointly analysing automated estimates of model uncertainty, inter-observer variability, and segmentation performance on lung CT scans.
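The two quantities this investigation relates can each be computed simply. A minimal sketch, assuming binary segmentation masks: inter-observer variability as mean pairwise Dice between annotators, and model uncertainty as the predictive entropy of stochastic forward passes (e.g. MC dropout). These are common formulations, not necessarily the exact estimators used in the thesis:

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def inter_observer_dice(masks):
    """Mean pairwise Dice across annotators: a simple agreement score
    (lower agreement = higher inter-observer variability)."""
    return float(np.mean([dice(a, b) for a, b in combinations(masks, 2)]))

def predictive_entropy(prob_samples):
    """Per-pixel entropy of the mean foreground probability over stochastic
    forward passes; a common model-uncertainty estimate."""
    p = np.clip(np.mean(prob_samples, axis=0), 1e-7, 1 - 1e-7)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))
```

Correlating maps like these against task performance is one way to operationalise the question posed above.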
Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey covers both conventional pattern recognition methods and deep learning-based methods, and is one of the first literature surveys in this specific research area.
Fusion of Higher Order Spectra and Texture Extraction Methods for Automated Stroke Severity Classification with MRI Images
This paper presents a scientific foundation for automated stroke severity classification. We have constructed and assessed a system which extracts diagnostically relevant information from Magnetic Resonance Imaging (MRI) images. The design was based on 267 brain images from individual subjects after stroke. They were labeled as either Lacunar Syndrome (LACS), Partial Anterior Circulation Syndrome (PACS), or Total Anterior Circulation Stroke (TACS). The labels indicate different physiological processes which manifest themselves in distinct image texture. The processing system was tasked with extracting texture information that could be used to classify a brain MRI image from a stroke survivor into either LACS, PACS, or TACS. We analyzed 6475 features that were obtained with Gray-Level Run Length Matrix (GLRLM), Higher Order Spectra (HOS), as well as a combination of Discrete Wavelet Transform (DWT) and Gray-Level Co-occurrence Matrix (GLCM) methods. The resulting features were ranked based on the p-value extracted with the Analysis Of Variance (ANOVA) algorithm. The ranked features were used to train and test four types of Support Vector Machine (SVM) classification algorithms according to the rules of 10-fold cross-validation. We found that SVM with a Radial Basis Function (RBF) kernel achieves: Accuracy (ACC) = 93.62%, Specificity (SPE) = 95.91%, Sensitivity (SEN) = 92.44%, and Dice score = 0.95. These results indicate that computer-aided stroke severity diagnosis support is possible. Such systems might lead to progress in stroke diagnosis by enabling healthcare professionals to improve diagnosis and management of stroke patients with the same resources.
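The ANOVA-based ranking step can be sketched directly. One caveat: the sketch ranks by the one-way ANOVA F statistic rather than the p-value; since every feature is tested on the same samples, the degrees of freedom are fixed, so descending F gives the same order as ascending p-value:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic for each feature (column of X) across
    the classes in y. Larger F = the feature separates the classes better."""
    X = np.asarray(X, dtype=float)
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_between = len(classes) - 1
    df_within = X.shape[0] - len(classes)
    return (ss_between / df_between) / (ss_within / df_within + 1e-12)

# Rank features, most discriminative first:
# order = np.argsort(anova_f_scores(X, y))[::-1]
```

The top-ranked features would then feed the 10-fold cross-validated SVM training described in the abstract.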
Segmentation d'images par étiquetage crédibiliste : Application à l'imagerie médicale par tomodensitométrie en cancérologie
In this paper, an image segmentation algorithm based on credal labelling is presented. The main contribution of this work lies in the way the images are modelled by belief functions in order to represent the uncertainty inherent in assigning a voxel to a class. For each voxel, the basic belief assignment is derived from intrinsic features of the regions in the image. To control the uncertainty in the labelling step, a decision threshold is progressively decreased through an iterative process until stabilization. The methodology is applied to the segmentation of volumes of interest in computed tomography (CT) images. The segmentation of the two lungs, the trachea, the main bronchi, and the spinal canal is carried out for patients who have undergone external radiotherapy treatment. The segmentation of a pathological lymph node is also presented and used for volume measurement in initial disease staging or therapeutic follow-up.
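The progressive-threshold idea can be sketched as follows. The function name, threshold schedule, and the assumption that the per-voxel belief masses are already normalised are all illustrative choices; the paper derives the masses from regional image features and its stopping rule may differ:

```python
import numpy as np

def progressive_labelling(masses, t0=0.9, step=0.05, t_min=0.5):
    """Label voxels in order of confidence, loosely following the paper's idea.

    masses: (n_voxels, n_classes) basic belief masses. A voxel is labelled
    only when its strongest mass exceeds the current threshold; the threshold
    is lowered step by step, so confident voxels are committed first and
    ambiguous ones later.
    """
    labels = np.full(masses.shape[0], -1)            # -1 = not yet labelled
    t = t0
    while (labels == -1).any() and t >= t_min:
        confident = (labels == -1) & (masses.max(axis=1) >= t)
        labels[confident] = masses[confident].argmax(axis=1)
        t -= step
    return labels                                     # remaining -1 = too uncertain
```

In the full algorithm, voxels committed at high thresholds could also update the regional features that drive the belief masses before the next iteration.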
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models, and that they can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches an accuracy of 98.54%, 94.25%, and 95.09% across those same environments.
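The attention mechanism — the "A" in ABiLSTM — pools the BiLSTM's per-frame hidden states into one activity representation by weighting the most relevant CSI frames. A minimal numpy sketch of one common scoring form (a learned dot-product score followed by softmax); the paper's exact attention formulation may differ, and `w` here is an assumed learned parameter:

```python
import numpy as np

def attention_pool(H, w, b=0.0):
    """Attention pooling over a sequence of hidden states.

    H: (T, d) hidden states, e.g. concatenated forward/backward BiLSTM
    outputs for T CSI frames. w: (d,) learned scoring vector. Returns the
    attention-weighted context vector and the weights themselves.
    """
    scores = H @ w + b                    # (T,) relevance score per time step
    a = np.exp(scores - scores.max())     # numerically stable softmax
    a /= a.sum()
    return a @ H, a                       # context vector (d,), weights (T,)
```

The context vector would then feed a dense softmax layer over the 12 activity classes; in CNN-ABiLSTM, a convolutional front end would extract local CSI features before the recurrent layers.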