56 research outputs found

    The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?

    This book is a reprint of the Special Issue entitled "The Artificial Intelligence in Digital Pathology and Digital Radiology: Where Are We?". Artificial intelligence is extending into both digital radiology and digital pathology and involves many scholars in the areas of biomedicine, technology, and bioethics. There is a particular need for scholars to focus both on the innovations in this field and on the problems hampering their integration into robust and effective processes within stable health care models. Many professionals involved in these fields of digital health were encouraged to contribute their experiences, and the book contains contributions from experts across different fields. Various aspects of integration in the health domain are addressed, with particular space dedicated to an overview of the challenges, opportunities, and problems in both radiology and pathology. Clinical deep dives are provided in cardiology, the histopathology of breast cancer, and colonoscopy. Dedicated studies are based on surveys that investigated students' and insiders' opinions, attitudes, and self-perception regarding the integration of artificial intelligence in this field.

    Strategies for neural networks in ballistocardiography with a view towards hardware implementation

    A thesis submitted for the degree of Doctor of Philosophy at the University of Luton. The work described in this thesis is based on the results of a clinical trial conducted by the research team at the Medical Informatics Unit of the University of Cambridge, which show that the ballistocardiogram (BCG) has prognostic value in detecting impaired left ventricular function before it becomes clinically overt as myocardial infarction leading to sudden death. The objective of this study is to develop and demonstrate a framework for realising an on-line BCG signal classification model in a portable device that would have the potential to detect pathological signs as early as possible for home health care. Two new on-line automatic BCG classification models for time-domain BCG classification are proposed. Both systems are based on a two-stage process: input feature extraction followed by a neural classifier. One system uses a principal component analysis neural network, and the other a discrete wavelet transform, to reduce the input dimensionality. Results of the classification, dimensionality reduction, and comparison are presented. It is indicated that the combined wavelet transform and MLP system has a more reliable performance than the combined neural networks system in situations where the data available to determine the network parameters is limited. Moreover, the wavelet transform requires no prior knowledge of the statistical distribution of data samples, and the computational complexity and training time are reduced. Overall, a methodology for realising an automatic BCG classification system for a portable instrument is presented. A fully parallel neural network design for a low-cost platform using field programmable gate arrays (Xilinx's XC4000 series) is explored. This addresses the potential speed requirements in the biomedical signal processing field. It also demonstrates a flexible hardware design approach so that an instrument's parameters can be updated as data expands with time. To reduce the hardware design complexity and to increase the system performance, a hybrid learning algorithm using random optimisation and the backpropagation rule is developed to achieve an efficient weight update mechanism in low weight precision learning. The simulation results show that the hybrid learning algorithm is effective in solving the network paralysis problem and that convergence is much faster than with the standard backpropagation rule. The hidden and output layer nodes have been mapped onto Xilinx FPGAs with automatic placement and routing tools. The static timing analysis results suggest that the proposed network implementation could achieve a performance of 2.7 billion connections per second.
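    The two-stage design described above (dimensionality reduction followed by a neural classifier) can be illustrated with a minimal sketch. The code below is not the thesis implementation: it uses synthetic BCG-like signals, PyWavelets for the discrete wavelet transform, and scikit-learn's MLPClassifier, and the wavelet, decomposition level, and network size are illustrative assumptions only.

```python
# Minimal sketch of a two-stage BCG classifier: DWT feature extraction + MLP.
# Synthetic signals stand in for real ballistocardiogram recordings.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_bcg(n_samples=200, length=256):
    """Generate toy 'normal' vs 'impaired' BCG-like signals (illustrative only)."""
    X, y = [], []
    for label in (0, 1):
        for _ in range(n_samples // 2):
            t = np.linspace(0, 1, length)
            freq = 5 if label == 0 else 3          # crude class difference
            sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(length)
            X.append(sig)
            y.append(label)
    return np.array(X), np.array(y)

def dwt_features(signal, wavelet="db4", level=4):
    """Reduce input dimensionality by keeping only the coarse approximation coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return coeffs[0]                                # approximation at the deepest level

X_raw, y = synthetic_bcg()
X = np.array([dwt_features(s) for s in X_raw])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```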

    Semi-automated techniques for the retrieval of dermatological condition in color skin images

    Dermatologists base the diagnosis of skin disease on visual assessment of the skin, so correct diagnosis is highly dependent on the observer's experience and visual perception. Moreover, the human vision system lacks accuracy, reproducibility, and quantification in the way it gathers information from an image, so there is a great need for computer-aided diagnosis. We propose a content-based image retrieval (CBIR) system to aid in the diagnosis of skin disease. First, after examining the skin images, pre-processing is performed. Second, we examine the visual features of the skin diseases classified in the database and select color, texture, and shape for characterizing a given skin disease. Third, feature extraction techniques for each visual feature are investigated. Fourth, similarity measures based on the extracted features are discussed. Last, after assessing single-feature performance, a distance-metric combination scheme is explored. The experimental data set is divided into two parts: a developmental data set used as an image library and an unlabeled independent test data set. Two sets of experiments are performed, in which the input image of the skin image retrieval algorithm is taken either from the developmental data set or from the independent test data set. The results are the top five candidates for the input query image, that is, five labeled images from the image library. Results are reported separately for the developmental data set and the independent test data set. Two evaluation methods, the standard precision vs. recall method and a self-developed scoring method, are applied, and the evaluation results obtained by both methods are given for each class of disease. Among all visual features, we found that the color feature played a dominant role in distinguishing different types of skin disease. Among all classes of images, the class with the best feature consistency achieved the best retrieval accuracy based on the evaluation results. For future research we recommend further work on the image collection protocol, color balancing, combining the feature metrics, improving texture characterization, and incorporating semantic assistance in the retrieval process.
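    The retrieval step described above (per-feature distances combined into a single metric, then the top five library candidates returned) can be sketched as follows. This is not the authors' implementation: the color-histogram and gradient-based texture features, the Euclidean distances, and the equal weighting in the combination scheme are placeholder assumptions.

```python
# Sketch of a CBIR query: combine per-feature distances and return the top-5 library images.
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram as a simple color feature (placeholder for the real features)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def texture_feature(image):
    """Crude texture descriptor: gradient-magnitude histogram (illustrative only)."""
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=16, range=(0, mag.max() + 1e-9))
    return hist / max(hist.sum(), 1)

def combined_distance(query, candidate, w_color=0.5, w_texture=0.5):
    """Weighted sum of per-feature Euclidean distances (the combination scheme is an assumption)."""
    d_color = np.linalg.norm(color_histogram(query) - color_histogram(candidate))
    d_texture = np.linalg.norm(texture_feature(query) - texture_feature(candidate))
    return w_color * d_color + w_texture * d_texture

def retrieve_top5(query, library):
    """Return indices of the five most similar library images."""
    distances = [combined_distance(query, img) for img in library]
    return np.argsort(distances)[:5]

# Usage with random stand-in images:
rng = np.random.default_rng(1)
library = [rng.integers(0, 256, size=(64, 64, 3)).astype(float) for _ in range(20)]
query = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
print("top-5 candidates:", retrieve_top5(query, library))
```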

    A Pharmaceutical Paradigm for Cardiovascular Composite Risk Assessment Using Novel Radiogenomics Risk Predictors in Precision Explainable Artificial Intelligence Framework: Clinical Trial Tool

    Background: Cardiovascular disease (CVD) is challenging to diagnose and treat since symptoms appear late during the progression of atherosclerosis. Conventional risk factors alone are not always sufficient to properly categorize at-risk patients, and clinical risk scores are inadequate in predicting cardiac events. Integrating genomic-based biomarkers (GBBM) found in plasma/serum samples with novel non-invasive radiomics-based biomarkers (RBBM) such as plaque area, plaque burden, and maximum plaque height can improve composite CVD risk prediction in the pharmaceutical paradigm. These biomarkers consider several pathways involved in the pathophysiology of atherosclerosis leading to CVD. Objective: This review proposes two hypotheses: (i) the composite biomarkers are strongly correlated and can be used to detect the severity of CVD/stroke precisely, and (ii) an explainable artificial intelligence (XAI)-based composite CVD/stroke risk model with survival analysis using deep learning (DL) can predict risk in a preventive, precision, and personalized (aiP3) framework benefiting the pharmaceutical paradigm. Method: The PRISMA search technique resulted in 214 studies assessing composite biomarkers using radiogenomics for CVD/stroke. The study presents an XAI model using AtheroEdge™ 4.0 to determine the risk of CVD/stroke in the pharmaceutical framework using the radiogenomics biomarkers. Conclusions: Our observations suggest that the composite CVD risk biomarkers using radiogenomics provide a new dimension to CVD/stroke risk assessment. The proposed review suggests a unique, unbiased, and explainable model based on AtheroEdge™ 4.0 that can predict the composite risk of CVD/stroke using radiogenomics in the pharmaceutical paradigm.

    Retinal vessel segmentation using textons

    Segmenting vessels from retinal images, like segmentation in many other medical image domains, is a challenging task, as there is no unified way to extract the vessels accurately. However, it is the most critical stage in the automatic assessment of various diseases (e.g. glaucoma, age-related macular degeneration, diabetic retinopathy, and cardiovascular diseases). Our research aims to investigate retinal image segmentation approaches based on textons, as they provide a compact description of texture that can be learnt from a training set. This thesis presents a brief review of those diseases, including their current situation, future trends, and the techniques used for their automatic diagnosis in routine clinical applications. The importance of retinal vessel segmentation in such applications is particularly emphasized. An extensive review of previous work on retinal vessel segmentation and salient texture analysis methods is presented. Five automatic retinal vessel segmentation methods are proposed in this thesis. The first method addresses the removal of pathological anomalies (drusen, exudates) prior to retinal vessel segmentation, which other researchers have identified as a common source of error. The results show that the modified method offers some improvement over a previously published method. The second, novel supervised segmentation method employs textons. We propose a new filter bank (MR11) that includes bar detectors for vascular feature extraction and other kernels to detect edges and photometric variations in the image. The k-means clustering algorithm is adopted for texton generation based on the vessel and non-vessel elements identified by ground truth. The third, improved supervised method is developed from the second: textons are generated by k-means clustering, and texton maps representing vessels are derived by back-projecting pixel clusters onto hand-labelled ground truth. A further step ensures that the best combinations of textons are represented in the map and subsequently used to identify vessels in the test set. Experimental results on two benchmark datasets show that our proposed method performs well compared to other published work and the results of human experts. A further test of our system on an independent set of optical fundus images verified its consistent performance. Statistical analysis of the experimental results also reveals that it is possible to train unified textons for retinal vessel segmentation. In the fourth method, a novel scheme using a Gabor filter bank for vessel feature extraction is proposed. The method is inspired by the human visual system, and machine learning is used to optimize the Gabor filter parameters. The experimental results demonstrate that our method significantly enhances the true positive rate while maintaining a level of specificity that is comparable with other approaches. Finally, we propose a new unsupervised texton-based retinal vessel segmentation method using the derivative of SIFT and multi-scale Gabor filters. The lack of sufficient quantities of hand-labelled ground truth and the high level of variability in ground-truth labels amongst experts provide the motivation for this approach. The evaluation results reveal that our unsupervised segmentation method is comparable with the best supervised methods and other state-of-the-art methods.
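    The texton generation step described above can be illustrated with a minimal sketch: per-pixel filter-bank responses are clustered with k-means, separately for vessel and non-vessel pixels identified by a ground-truth mask. This is not the thesis code; a small Gabor bank stands in for the proposed MR11 filter bank, and scikit-image, scikit-learn, and the chosen frequencies and cluster counts are illustrative assumptions.

```python
# Sketch of texton generation for vessel/non-vessel pixels:
# filter-bank responses per pixel, clustered with k-means into a texton dictionary.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def filter_bank_responses(image, frequencies=(0.1, 0.2), n_orientations=4):
    """Stack real Gabor responses per pixel (a stand-in for the MR11 bank in the thesis)."""
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, _ = gabor(image, frequency=f, theta=theta)
            responses.append(real)
    return np.stack(responses, axis=-1)            # shape: (H, W, n_filters)

def learn_textons(image, vessel_mask, k_per_class=5):
    """Cluster vessel and non-vessel pixel responses separately; return the texton dictionary."""
    resp = filter_bank_responses(image)
    vessel = resp[vessel_mask]
    background = resp[~vessel_mask]
    textons = []
    for pixels in (vessel, background):
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=0).fit(pixels)
        textons.append(km.cluster_centers_)
    return np.vstack(textons)                      # first k rows: vessel textons

# Usage with a random stand-in image and mask:
rng = np.random.default_rng(0)
image = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[:, 30:34] = True                              # fake "vessel" stripe
dictionary = learn_textons(image, mask)
print("texton dictionary shape:", dictionary.shape)
```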

    Measurement and analysis of breath sounds

    Existing breath sound measurement systems and possible new methods have been critically investigated. The frequency response of each part of the measurement system has been studied, with emphasis on the frequency response of acoustic sensors; in particular, a method to study a diaphragm-type air-coupler in contact use has been proposed. Two new methods of breath sound measurement have been studied: a laser Doppler vibrometer and mobile phones. It has been shown that both can find applications in breath sound measurement, although with some restrictions. A reliable automatic wheeze detection algorithm based on auditory modelling has been developed: the human auditory system is modelled as a bank of band-pass filters whose bandwidths are frequency dependent. Wheezes are treated as signals additive to normal breath sounds (the masker); a wheeze is therefore detectable when it rises above the masking threshold. This new algorithm has been validated using simulated and real data. It is superior to previous algorithms, being more reliable in detecting wheezes and less prone to error. Simulations of cardiorespiratory sounds and wheeze audibility tests have been developed. Simulated breath sounds can be used as a training tool as well as an evaluation method. These simulations have shown that, under certain circumstances, wheezes can be present yet inaudible, and it is postulated that this could also happen in real measurements. It has been shown that simulated sounds with predefined characteristics can be used as an objective method to evaluate automatic algorithms. Finally, the efficiency and necessity of heart sound reduction procedures have been investigated. Based on wavelet decomposition and selective synthesis, heart sounds can be reduced at the cost of unnatural-sounding breath sounds. Heart sound reduction is shown not to be necessary if a time-frequency representation is used, as heart sounds have a fixed pattern in the time-frequency plane.
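    The filter-bank idea behind the wheeze detector can be sketched loosely as follows: band energies from a bank of band-pass filters are compared against a background estimate standing in for the masking threshold. This is an illustration only, not the auditory-model algorithm developed in the thesis; the sampling rate, band edges, and decision margin are assumptions, and SciPy is used for the filtering.

```python
# Sketch of a filter-bank wheeze detector: band energies compared against a
# background ("masker") estimate, loosely mirroring the masking-threshold idea.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 8000  # sampling rate in Hz (assumed)

def band_energy(signal, low, high, fs=FS):
    """Energy of the signal in one band of the filter bank."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    return float(np.mean(filtered ** 2))

def detect_wheeze(signal, bands=((100, 300), (300, 600), (600, 1200)), margin=4.0):
    """Flag a wheeze if any band's energy exceeds `margin` times the median band energy.

    The median acts as a crude masker estimate; the real algorithm derives a
    psychoacoustic masking threshold instead.
    """
    energies = np.array([band_energy(signal, lo, hi) for lo, hi in bands])
    masker = np.median(energies)
    return bool(np.any(energies > margin * masker)), energies

# Usage: broadband noise (normal breath sound) plus a 400 Hz tone (wheeze-like).
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / FS)
breath = 0.5 * rng.standard_normal(t.size)
wheeze = breath + 1.5 * np.sin(2 * np.pi * 400 * t)
print("normal flagged:", detect_wheeze(breath)[0])
print("wheezy flagged:", detect_wheeze(wheeze)[0])
```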

    Segmentation, Super-resolution and Fusion for Digital Mammogram Classification

    Mammography is one of the most common and effective techniques used by radiologists for the early detection of breast cancer. Recently, computer-aided detection/diagnosis (CAD) has become a major research topic in medical imaging and has been widely applied in clinical situations. According to statistics, early detection of cancer can reduce mortality rates by 30% to 70%; detection and diagnosis at an early stage are therefore very important. CAD systems are designed primarily to assist radiologists in detecting and classifying abnormalities in medical scan images, but the main challenge hindering their wider deployment is the difficulty of achieving accuracy rates that help improve radiologists' performance. The detection and diagnosis of breast cancer face two main issues: the accuracy of the CAD system, and the radiologists' performance in reading and diagnosing mammograms. This thesis focuses on the accuracy of CAD systems. In particular, we investigated two main stages of CAD systems: pre-processing (enhancement and segmentation), and feature extraction and classification. Through this investigation, we make five main contributions to the field of automatic mammogram analysis. In automated mammogram analysis, image segmentation techniques are employed for breast boundary or region-of-interest (ROI) extraction. In most medio-lateral oblique (MLO) views of mammograms, the pectoral muscle represents a predominant density region, and it is important to detect and segment out this muscle region during pre-processing because it could bias the detection of breast cancer. An important reason for breast border extraction is that it limits the search zone for abnormalities to the breast region without undue influence from the background of the mammogram. Therefore, we propose a new scheme for breast border extraction, artifact removal, and removal of the annotations found in the background of mammograms. This was achieved using a local adaptive threshold that creates a binary mask for the images, followed by morphological operations. Furthermore, an adaptive algorithm is proposed to detect and remove the pectoral muscle automatically. Feature extraction is another important step of any image-based pattern classification system; the performance of the corresponding classification depends very much on how well the extracted features represent the object of interest. We investigated a range of texture feature sets such as the Local Binary Pattern Histogram (LBPH), the Histogram of Oriented Gradients (HOG) descriptor, and the Gray Level Co-occurrence Matrix (GLCM). We propose the use of multi-scale features based on wavelets and local binary patterns for mammogram classification: histograms of LBP codes are extracted from the original image as well as from the wavelet sub-bands and combined into a single feature set. Experimental results show that combining LBPH features obtained from the original image with LBPH features obtained from the wavelet domain increases the classification accuracy (sensitivity and specificity) compared with LBPH features extracted from the original image alone. The feature vector can be large for some feature extraction schemes and may contain redundant features that have a negative effect on classification accuracy; feature vector size reduction is therefore needed to achieve higher accuracy as well as efficiency (processing and storage). We reduced the size of the features by applying principal component analysis (PCA) to the feature set and retaining only a small number of components to represent the features. Experimental results showed improved mammogram classification accuracy with this small feature set compared with the original feature vector. We then investigated and propose the use of feature-level and decision-level fusion in mammogram classification. In feature-level fusion, two or more extracted feature sets of the same mammogram are concatenated into a single, larger fused feature vector to represent the mammogram. In decision-level fusion, the results of individual classifiers based on distinct features extracted from the same mammogram are combined into a single decision, with the final decision made by majority voting among the individual classifiers. Finally, we investigated the use of super-resolution as a pre-processing step to enhance the mammograms prior to extracting features. Preliminary experimental results indicate that using enhanced mammograms has a positive effect on the performance of the system. Overall, our combination of proposals outperforms several existing schemes published in the literature.
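    The multi-scale feature idea described above (LBP histograms computed from the original image and from its wavelet sub-bands, concatenated and then reduced with PCA) can be sketched as follows. This is not the thesis code: random stand-in ROIs replace real mammograms, and the wavelet, LBP parameters, and number of retained components are illustrative assumptions.

```python
# Sketch of the multi-scale feature idea: LBP histograms from the original
# mammogram and its wavelet sub-bands, concatenated and then reduced with PCA.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_histogram(image, n_points=8, radius=1):
    """Uniform LBP code histogram for one image or wavelet sub-band."""
    # Rescale to 8-bit so LBP thresholding behaves consistently across bands.
    band = image - image.min()
    band = (255 * band / max(band.max(), 1e-9)).astype(np.uint8)
    codes = local_binary_pattern(band, n_points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=n_points + 2, range=(0, n_points + 2))
    return hist / max(hist.sum(), 1)

def multiscale_lbp_features(image, wavelet="haar"):
    """Concatenate LBP histograms of the original image and its first-level sub-bands."""
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    parts = [lbp_histogram(band) for band in (image, ll, lh, hl, hh)]
    return np.concatenate(parts)

# Usage with random stand-in mammogram ROIs, followed by PCA reduction.
rng = np.random.default_rng(0)
rois = [rng.random((128, 128)) for _ in range(30)]
features = np.array([multiscale_lbp_features(r) for r in rois])

pca = PCA(n_components=10)                 # keep a small number of components
reduced = pca.fit_transform(features)
print("feature size before/after PCA:", features.shape[1], "->", reduced.shape[1])
```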

    Preface


    Automated Characterisation and Classification of Liver Lesions From CT Scans

    Cancer is a general term for a wide range of diseases that can affect any part of the body due to the rapid creation of abnormal cells that grow outside their normal boundaries. Liver cancer is one of the common diseases that cause the death of more than 600,000 people each year. Early detection is important for diagnosis and for reducing mortality. Examination of liver lesions is performed with various medical imaging modalities such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI). Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence in reducing the liver cancer death rate. Moreover, CAD systems can help physicians, as a second opinion, in characterising lesions and making the diagnostic decision. CAD systems have therefore become an important research area; in particular, they can provide diagnostic assistance to doctors and improve overall diagnostic accuracy. Traditional methods of characterising liver lesions and differentiating normal liver tissue from abnormal tissue are largely dependent on the radiologist's experience. CAD systems based on image processing and artificial intelligence techniques have thus gained a lot of attention, since they can provide constructive diagnostic suggestions to clinicians for decision making. Liver lesions are characterised in two ways: (1) using a content-based image retrieval (CBIR) approach to assist the radiologist in liver lesion characterisation, and (2) calculating high-level features that describe/characterise the liver lesion in a way that can be interpreted by humans, particularly radiologists and clinicians, based on hand-crafted/engineered computational features (low-level features) and a learning process. The research gap lies in deriving a high-level understanding and interpretation of medical image contents from low-level pixel analysis, based on mathematical processing and artificial intelligence methods. In our work, this gap is bridged by establishing a relation between image contents and medical meaning, in analogy to a radiologist's understanding. This thesis explores an automated system for the classification and characterisation of liver lesions in CT scans. Firstly, the liver is segmented automatically using anatomic medical knowledge, a histogram-based adaptive threshold, and morphological operations. The lesions and vessels are then extracted from the segmented liver by applying AFCM and a Gaussian mixture model through a region-growing process, respectively. Secondly, the proposed framework categorises the high-level features into two groups: the first group comprises high-level features extracted directly from the image contents (lesion location, lesion focality, calcified, scar, ...); the second group comprises high-level features inferred from the low-level features through a machine learning process to characterise the lesion (lesion density, lesion rim, lesion composition, lesion shape, ...). A novel multiple-ROI selection approach is proposed, in which regions are derived by generating an abnormality-level map based on the intensity difference and the proximity distance of each voxel with respect to normal liver tissue. The association between the low-level features, the high-level features, and the appropriate ROI is then derived by assessing the ability of each ROI to represent a set of lesion characteristics. Finally, a novel feature vector is built from the high-level features and fed into an SVM for lesion classification. In contrast with most existing research, which uses low-level features only, the use of high-level features and characterisation helps in interpreting and explaining the diagnostic decision. The methods are evaluated on a dataset containing 174 CT scans. The experimental results demonstrated the efficacy of the proposed framework in the characterisation and classification of liver lesions in CT scans: the achieved average accuracy was 95.56% for liver lesion characterisation, while the lesion classification accuracy was 97.1% for the entire dataset. The proposed framework provides a more robust and efficient lesion characterisation pipeline by turning low-level features into semantic features, and the use of high-level features (characterisation) helps in better interpretation of CT liver images. In addition, the difference-of-features approach using multiple ROIs was developed to capture lesion characteristics reliably, in contrast to the current research trend of extracting features from the lesion only without paying much attention to the relation between the lesion and the surrounding area. The design of the liver lesion characterisation framework is based on prior medical knowledge to obtain a better and clearer understanding of liver lesion characteristics in medical CT images.
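    The final classification step described above (a feature vector built from high-level lesion descriptors and fed into an SVM) can be sketched as follows. The descriptor names, category values, and labels below are invented for illustration and do not reproduce the thesis's actual feature set; scikit-learn's OneHotEncoder and SVC are used for the encoding and classification.

```python
# Sketch of the final step: encode high-level lesion descriptors into a feature
# vector and classify it with an SVM. Feature names and labels are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical high-level descriptors per lesion (categorical values).
FEATURES = ["density", "rim", "composition", "shape"]
CATEGORIES = {
    "density": ["hypodense", "isodense", "hyperdense"],
    "rim": ["absent", "present"],
    "composition": ["solid", "cystic", "mixed"],
    "shape": ["round", "oval", "irregular"],
}

rng = np.random.default_rng(0)
n_lesions = 60
X = np.array([[rng.choice(CATEGORIES[f]) for f in FEATURES] for _ in range(n_lesions)])
y = rng.integers(0, 2, size=n_lesions)     # 0 = benign, 1 = malignant (random stand-in labels)

# One-hot encode the categorical descriptors, then train an RBF-kernel SVM.
model = make_pipeline(OneHotEncoder(handle_unknown="ignore"), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```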