31 research outputs found
Recommended from our members
Stimulation and measurement patterns versus prior information for fast 3D EIT: A breast screening case study
Imposing prior information is a typical strategy for stabilizing the numerical solution of inverse problems. For a given imaging system configuration, Picard's stability condition can be deployed as a practical measure of the system's performance against various priors and noise-contaminated measurements. Herein, we make extensive use of this measure to quantify the performance of impedance imaging systems under various injection patterns. We numerically demonstrate that varying the electrode distribution and number yields little improvement, if any, in the performance of the impedance imaging system. In contrast, using groups of electrodes in the 3D current injection process yields a step increase in performance. Numerical results on a female breast phantom reveal that the performance measure of the imaging system is 15% for a conventional combination of stimulation and prior information, 61% for groups of electrodes with the same prior, and 97% for groups of electrodes with a more accurate prior. Finally, since fewer electrodes are involved in the measurement process, fewer measurements are acquired; however, no compromise in the quality of the reconstructed images is observed.
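The discrete Picard condition behind such a performance measure can be sketched numerically: the SVD coefficients |u_i^T b| of the measurements should decay at least as fast as the singular values of the system matrix. The fraction of components for which this holds is a toy stand-in for the paper's measure; the function name and the relative decay test below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def picard_measure(A, b, tol=1e-12):
    """Fraction of SVD components whose measurement coefficients
    |u_i^T b| decay at least as fast as the singular values
    (a toy proxy for the discrete Picard condition)."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    coeffs = np.abs(U.T @ b)          # |u_i^T b|
    s = np.maximum(s, tol)            # guard against zero singular values
    ref = max(coeffs[0], tol)         # normalize by the first component
    usable = coeffs / ref <= s / s[0]
    return usable.mean()
```

With a well-behaved right-hand side every component is usable (measure 1.0); with coefficients that stagnate while the singular values decay, only a fraction is.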
Automating the processing of cDNA microarray images
This work is concerned with the development of an automatic image-processing tool for DNA microarray images. This paper proposes, implements and tests a new tool for cDNA image analysis. The DNA samples are imaged as thousands of circular objects (spots) on the microarray image, and the purpose of this tool is to correctly address their locations, segment the pixels belonging to each spot, and extract quality features for each spot. Techniques used for the addressing, segmentation and feature extraction of spots are described in detail. The results obtained with the proposed tool are systematically compared with those of conventional cDNA microarray analysis software tools.
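A minimal sketch of the addressing-and-segmentation pipeline described above: split the image into a regular grid of spot windows, then segment each window with a simple intensity threshold. The aligned grid and the mean threshold are simplifying assumptions; the actual tool uses more robust techniques.

```python
import numpy as np

def grid_spots(image, rows, cols):
    """Address spots by splitting the array image into a rows x cols
    grid of windows (assumes an aligned, evenly spaced layout)."""
    h, w = image.shape
    ys = np.array_split(np.arange(h), rows)
    xs = np.array_split(np.arange(w), cols)
    return [image[np.ix_(y, x)] for y in ys for x in xs]

def spot_features(window):
    """Segment spot pixels by a simple mean threshold and return
    (spot mean, background mean) as toy quality features."""
    mask = window > window.mean()
    return window[mask].mean(), window[~mask].mean()
```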
Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images
We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. During the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method, based on the information of the Hessian matrix, is introduced for the enhancement of the vascular structure. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity-map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
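The hysteresis-thresholding step can be illustrated in a few lines: pixels above a high threshold seed the result, and weaker responses are kept only if connected to a seed. This is a pure-NumPy sketch with 4-connectivity; the quantile choices used in the paper are not specified here.

```python
import numpy as np

def hysteresis(img, lo, hi):
    """Keep weak pixels (>= lo) only if 4-connected to a strong
    pixel (>= hi), by iteratively growing the strong set."""
    strong = img >= hi
    weak = img >= lo
    out = strong.copy()
    while True:
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # propagate down
        grown[:-1, :] |= out[1:, :]   # propagate up
        grown[:, 1:] |= out[:, :-1]   # propagate right
        grown[:, :-1] |= out[:, 1:]   # propagate left
        grown &= weak                 # never leave the weak set
        if np.array_equal(grown, out):
            return out
        out = grown
```

Isolated weak responses (typical background clutter) are discarded, while weak vessel segments attached to a strong response survive.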
Improved CTA Coronary Segmentation with a Volume-Specific Intensity Threshold
State-of-the-art CTA imaging equipment has increased clinicians' ability to make non-invasive diagnoses of coronary heart disease; however, effective interpretation of cardiac CTA becomes cumbersome due to the large amount of imaged data. Intensity-based background suppression is often used to enhance the coronary vasculature, but setting a fixed threshold to discriminate coronaries from fatty muscle can be misleading due to the non-homogeneous response of the contrast medium across CTA volumes. In this work, we propose a volume-specific model of the contrast medium in the coronary segmentation process to improve segmentation accuracy. The influence of the contrast medium in a CTA volume was modelled by approximating the intensity histogram of the descending aorta with a Gaussian distribution. A significant variation in the Gaussian mean across 12 CTA volumes validates the need for a volume-specific intensity threshold for accurate coronary segmentation. Moreover, the effectiveness of the adaptive intensity threshold is illustrated with qualitative and quantitative results.
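The volume-specific threshold described above can be sketched simply: fit a Gaussian to intensity samples from the descending aorta and derive a lower cut-off some distance below its mean. The k = 2 offset below is an illustrative assumption, not the paper's calibrated value.

```python
import numpy as np

def adaptive_threshold(aorta_hu, k=2.0):
    """Model contrast-enhanced blood intensities (e.g. sampled from
    the descending aorta) as a Gaussian and return a volume-specific
    lower threshold mu - k*sigma for coronary enhancement."""
    mu, sigma = aorta_hu.mean(), aorta_hu.std()
    return mu - k * sigma
```

Because mu and sigma are estimated per volume, the cut-off adapts automatically to stronger or weaker contrast uptake instead of relying on one fixed Hounsfield threshold.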
A Hybrid Energy Model for Region Based Curve Evolution - Application to CTA Coronary Segmentation
Background and Objective: State-of-the-art medical imaging techniques have enabled non-invasive imaging of the internal organs. However, high volumes of imaging data make manual interpretation and delineation of abnormalities cumbersome for clinicians. These challenges have driven intensive research into efficient medical image segmentation. In this work, we propose a hybrid region-based energy formulation for effective segmentation in computed tomography angiography (CTA) imagery.
Methods: The proposed hybrid energy couples an intensity-based local term with an efficient discontinuity-based global model of the image for optimal segmentation. The segmentation is achieved using a level set formulation due to its computational robustness. After validating the statistical significance of the hybrid energy, we applied the proposed model to the important clinical problem of 3D coronary segmentation. An improved seed detection method is used to initialize the level set evolution. Moreover, we employed an auto-correction feature that captures peripheries emerging during the curve evolution, for completeness of the coronary tree.
Results: We evaluated the segmentation accuracy of the proposed energy model against existing techniques in two stages. Qualitative and quantitative results demonstrate the effectiveness of the proposed framework, with consistent mean sensitivity and specificity of 80% across the CTA data. Moreover, a high degree of agreement with respect to inter-observer differences justifies the generalization of the proposed method.
Conclusions: The proposed method effectively segments the coronary tree from the CTA volume based on a hybrid image-based energy, which can improve clinicians' ability to detect arterial abnormalities.
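A minimal sketch of a hybrid region energy of this kind, assuming a piecewise-constant (Chan-Vese-style) intensity fit as the region term and a boundary-length penalty standing in for the global discontinuity term; the paper's actual formulation couples different local and global models.

```python
import numpy as np

def hybrid_energy(img, mask, mu=0.1):
    """Toy stand-in for a hybrid region energy: squared deviation
    from the mean inside and outside the contour (intensity term)
    plus a boundary-length penalty weighted by mu."""
    inside, outside = img[mask], img[~mask]
    fit = ((inside - inside.mean()) ** 2).sum() \
        + ((outside - outside.mean()) ** 2).sum()
    m = mask.astype(float)
    # total-variation-style boundary length of the binary mask
    length = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
    return fit + mu * length
```

A curve evolution scheme would move the contour to decrease this energy; a mask aligned with the true region boundary scores lower than a misaligned one.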
IoT-Enabled flood severity prediction via ensemble machine learning models
River flooding is a natural phenomenon that can have a devastating effect on human life and cause significant economic losses. There have been various approaches to studying river flooding; however, insufficient understanding of and limited knowledge about flooding conditions hinder the development of prevention and control measures for this natural phenomenon. This paper presents a new approach for predicting water level in association with flood severity using an ensemble model. Our approach leverages the latest developments in the Internet of Things (IoT) and machine learning for the automated analysis of flood data, which may help prevent natural disasters. Research outcomes indicate that ensemble learning provides a more reliable tool to predict flood severity levels. The experimental results indicate that an ensemble of a Long Short-Term Memory (LSTM) model and a random forest outperformed the individual models, with a sensitivity, specificity and accuracy of 71.4%, 85.9% and 81.13%, respectively.
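The ensemble combination step can be sketched independently of the underlying learners: given class-probability outputs from the two models (hypothetical arrays here; the paper's LSTM and random forest would supply them), a weighted soft vote selects the predicted severity class.

```python
import numpy as np

def ensemble_predict(p_lstm, p_rf, w=0.5):
    """Soft-voting ensemble: weighted average of two models'
    class-probability arrays of shape (n_samples, n_classes),
    followed by an argmax over classes. The equal weight w=0.5
    is an assumption, not the paper's tuned value."""
    p = w * p_lstm + (1 - w) * p_rf
    return p.argmax(axis=1)
```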
Brain Tumor Segmentation in Fluid-Attenuated Inversion Recovery Brain MRI using Residual Network Deep Learning Architectures
Early and accurate detection of brain tumors is vital to saving patients' lives. Brain tumors are generally diagnosed manually by a radiologist analyzing the patient's brain MRI scans, which is a time-consuming process. This motivated our investigation into automating the diagnosis to increase its speed and accuracy. In this study, we investigate the use of Residual Network deep learning architectures to diagnose and segment brain tumors. We propose a two-step method involving a tumor detection stage, using the ResNet50 architecture, and a tumor area segmentation stage, using the ResU-Net architecture. We adopt transfer learning on pre-trained models to get the best performance out of the approach, data augmentation to lessen the effect of data imbalance, and hyperparameter optimization to find the best set of training parameter values. Using a publicly available dataset as a testbed, we show that our approach achieves 84.3% performance, outperforming the state-of-the-art U-Net by 2% on the Dice coefficient metric.
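The Dice coefficient used as the evaluation metric above is straightforward to compute from two binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|), with eps guarding empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```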
Analysing the Impact of Global Demographic Characteristics over the COVID-19 Spread Using Class Rule Mining and Pattern Matching
Since the Coronavirus disease (COVID-19) outbreak in December 2019, studies have addressed diverse aspects of COVID-19, such as potential symptoms and predictive tools. However, limited work has been performed on modelling the complex associations between combined demographic attributes and the varying nature of COVID-19 infections across the globe. This study presents an intelligent approach to investigate the multi-dimensional associations between demographic attributes and COVID-19 global variations. We gather multiple demographic attributes and COVID-19 infection data (up to 8 January 2021) from reliable sources, which are then processed by intelligent algorithms to identify significant associations and patterns within the data. Statistical results and experts' reports indicate strong associations between COVID-19 severity levels across the globe and certain demographic attributes, e.g., female smokers, when combined with other attributes. The outcomes will aid the understanding of the dynamics of disease spread and its progression, which in turn may support policy makers, medical specialists and society in better understanding and effective management of the disease.
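The class-rule-mining idea can be sketched with standard support and confidence statistics over transaction-style records; the attribute names below are made up for illustration and are not the study's actual features.

```python
def rule_stats(records, antecedent, consequent):
    """Support and confidence of an association rule
    'antecedent items -> consequent item', where each record
    is a set of attribute-value items."""
    ante = [r for r in records if antecedent <= r]   # antecedent matches
    both = [r for r in ante if consequent in r]      # rule fully matches
    support = len(both) / len(records)
    confidence = len(both) / len(ante) if ante else 0.0
    return support, confidence
```

Rules whose support and confidence exceed chosen thresholds would be reported as significant demographic-severity associations.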
Political Arabic Articles Orientation Using Rough Set Theory with Sentiment Lexicon
Sentiment analysis is an emerging research field that can be integrated with other domains, including data mining, natural language processing and machine learning. In political articles, it is difficult to understand and summarise the overall views due to the diversity and size of social media information. A number of studies have been conducted in the area of sentiment analysis, especially using English texts, while the Arabic language has received less attention in the literature. In this study, we propose a detection model for the political orientation of articles in the Arabic language. We introduce the key assumptions of the model, present and discuss the obtained results, and highlight the issues that still need to be explored to further our understanding of subjective sentences. The main purpose of applying this new approach, based on Rough Set (RS) theory, is to increase the accuracy of the models in recognizing the orientation of articles. We present extensive simulation results, which demonstrate the superiority of the proposed model over other algorithms. It is shown that the performance of the proposed approach improves significantly when discriminating features are added. To summarize, the proposed approach demonstrates an accuracy of 85.483% when evaluating the orientation of political Arabic datasets, compared to 72.58% and 64.516% for the Support Vector Machine and Naïve Bayes methods, respectively.
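The core rough-set construction such a model relies on can be sketched in a few lines: given an indiscernibility partition of the documents (equivalence classes of items sharing the same feature values), a target class is bracketed by its lower approximation (classes certainly inside) and upper approximation (classes possibly inside).

```python
def approximations(partition, target):
    """Rough-set lower/upper approximations of a target set,
    given an indiscernibility partition (list of disjoint
    equivalence classes covering the universe)."""
    lower = set().union(*(c for c in partition if c <= target))  # c fully inside
    upper = set().union(*(c for c in partition if c & target))   # c overlaps
    return lower, upper
```

The gap between the two approximations is the boundary region; shrinking it by adding discriminating features is what improves classification certainty.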