
    Grid Analysis of Radiological Data

    IGI-Global Medical Information Science Discoveries Research Award 2009. Grid technologies and infrastructures can contribute to harnessing the full power of computer-aided image analysis in clinical research and practice. Given the volume of data, the sensitivity of medical information, and the joint complexity of medical datasets and computations expected in clinical practice, the challenge is to fill the gap between the grid middleware and the requirements of clinical applications. This chapter reports on the goals, achievements, and lessons learned from the AGIR (Grid Analysis of Radiological Data) project. AGIR addresses this challenge through a combined approach. On the one hand, leveraging the grid middleware through core grid medical services (data management, responsiveness, compression, and workflows) targets the requirements of medical data processing applications. On the other hand, grid-enabling a panel of applications ranging from algorithmic research to clinical use cases both exploits and drives the development of the services.

    Application of Principal Component Analysis to advancing digital phenotyping of plant disease in the context of limited memory for training data storage

    Despite its widespread use as a highly efficient dimensionality reduction technique, limited research has examined the benefits of Principal Component Analysis (PCA)-based compression/reconstruction of image data for machine learning-based image classification performance and storage space optimization. To address this gap, we compared two Convolutional Neural Network-Random Forest (CNN-RF) guava leaf image classification models: one trained on the original guava leaf images that fit in a predefined amount of storage space, and one trained on the PCA compressed/reconstructed guava leaf images that fit in the same amount of storage space, using four criteria: Accuracy, F1-Score, Phi Coefficient, and the Fowlkes–Mallows index. Our approach achieved a 1:100 image compression ratio (99.00% image compression), considerably better than previous results achieved with other algorithms such as arithmetic coding (1:1.50), wavelet transform (90.00% image compression), and a combination of three transform-based techniques, Discrete Fourier (DFT), Discrete Wavelet (DWT) and Discrete Cosine (DCT) (1:22.50). From a subjective visual quality perspective, the PCA compressed/reconstructed guava leaf images showed almost no loss of image detail. Finally, the CNN-RF model trained on PCA compressed/reconstructed guava leaf images outperformed the CNN-RF model trained on the original guava leaf images, with increases of 0.10% in accuracy, 0.10 in F1-Score, 0.18 in Phi Coefficient, and 0.09 in the Fowlkes–Mallows index.
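    As a concrete illustration of the idea, the sketch below compresses and reconstructs a single image by keeping only its top-k principal components via an SVD; the random stand-in image, the choice of k, and the storage accounting are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def pca_compress(img: np.ndarray, k: int):
    """Keep only the top-k principal components of a 2-D image."""
    mean = img.mean(axis=0)                   # per-column mean
    U, S, Vt = np.linalg.svd(img - mean, full_matrices=False)
    return U[:, :k], S[:k], Vt[:k, :], mean   # the stored representation

def pca_reconstruct(U, S, Vt, mean):
    """Rebuild an approximation of the image from the stored components."""
    return U @ np.diag(S) @ Vt + mean

img = np.random.rand(256, 256)                # stand-in for a guava leaf image
U, S, Vt, mean = pca_compress(img, k=20)
approx = pca_reconstruct(U, S, Vt, mean)

stored = U.size + S.size + Vt.size + mean.size
print(f"compression ratio ~ 1:{img.size / stored:.1f}")  # ~1:6 for k=20 here
```

    Higher compression ratios follow from smaller k; the trade-off is reconstruction error, which the paper's four classification metrics quantify indirectly.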

    Perceptually lossless coding of medical images - from abstraction to reality

    This work explores a novel vision-model-based coding approach to encode medical images at a perceptually lossless quality within the framework of the JPEG 2000 coding engine. Perceptually lossless encoding offers the best of both worlds, delivering images free of visual distortions while providing significantly greater compression gains than its information-lossless counterparts. This is achieved through a visual pruning function, embedded with an advanced model of the human visual system, that accurately identifies and efficiently removes visually irrelevant or insignificant information. The approach maintains bit-stream compliance with the JPEG 2000 coding framework and is consequently compliant with the Digital Imaging and Communications in Medicine (DICOM) standard. The pruning function is equally applicable to other Discrete Wavelet Transform-based image coders, e.g., Set Partitioning in Hierarchical Trees. Further significant coding gains are obtained through an artificial edge segmentation algorithm and a novel arithmetic pruning algorithm. The coding effectiveness and qualitative consistency of the algorithm were evaluated through a double-blind subjective assessment with 31 medical experts, performed using a novel two-stage forced-choice protocol devised for medical experts, offering greater robustness and accuracy in measuring subjective responses. The assessment showed that no statistically significant differences were perceivable between the original images and the images encoded by the proposed coder.
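    The abstract's central mechanism, pruning wavelet coefficients that a vision model deems visually insignificant, can be sketched as follows; the flat per-subband threshold is a crude stand-in for the paper's far more sophisticated human-visual-system model, and PyWavelets is assumed available.

```python
import numpy as np
import pywt

def prune(img: np.ndarray, wavelet="bior4.4", levels=3, thresh=2.0):
    """DWT -> zero sub-threshold detail coefficients -> inverse DWT."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    pruned = [coeffs[0]]                      # keep the approximation band
    for detail in coeffs[1:]:
        # Zero the detail coefficients below the (assumed) visibility
        # threshold; long runs of zeros then cost almost nothing to encode.
        pruned.append(tuple(np.where(np.abs(d) < thresh, 0.0, d) for d in detail))
    return pywt.waverec2(pruned, wavelet)

img = np.random.rand(512, 512) * 255          # stand-in for a medical image
out = prune(img)
print(f"max pixel deviation: {np.abs(out - img).max():.2f}")
```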

    Visualization of large medical volume data

    Doctor of Philosophy (Ph.D.) thesis.

    Remote access computed tomography colonography

    This thesis presents a novel framework for remote access Computed Tomography Colonography (CTC). The proposed framework consists of several integrated components: medical image data delivery, 2D image processing, 3D visualisation, and feedback provision. Medical image data sets are notoriously large and preserving the integrity of the patient data is essential. This makes real-time delivery and visualisation a key challenge. The main contribution of this work is the development of an efficient, lossless compression scheme to minimise the size of the data to be transmitted, thereby alleviating transmission delays. The scheme utilises prior knowledge of anatomical information to divide the data into specific regions. An optimised compression method is then applied to each anatomical region. An evaluation of this compression technique shows that the proposed ‘divide and conquer’ approach significantly improves upon the level of compression achieved using more traditional global compression schemes. Another contribution of this work resides in the development of an improved volume rendering technique that provides real-time 3D visualisations of regions within CTC data sets. Unlike previous hardware acceleration methods, which rely on dedicated devices, this approach employs a series of software acceleration techniques based on the characteristic properties of CTC data. A quantitative and qualitative evaluation indicates that the proposed method achieves real-time performance on a low-cost PC platform without sacrificing any image quality. Fast data delivery and real-time volume rendering represent the key features that are required for remote access CTC. These features are ultimately combined with other relevant CTC functionality to create a comprehensive, high-performance CTC framework, which makes remote access CTC feasible, even in the case of standard Web clients with low-speed data connections.
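    A minimal sketch of the ‘divide and conquer’ idea follows: split the volume by an anatomical label mask and compress each region with whichever standard lossless codec does best. The stand-in segmentation and the codec choices are illustrative assumptions, not the thesis's actual region-optimised coders.

```python
import bz2
import zlib
import numpy as np

def compress_region(voxels: np.ndarray) -> bytes:
    """Losslessly compress one region, keeping the best of two codecs."""
    raw = voxels.tobytes()
    return min(zlib.compress(raw, 9), bz2.compress(raw, 9), key=len)

# A smooth stand-in volume and a crude two-region 'segmentation'.
volume = np.tile(np.arange(256, dtype=np.uint16), (64, 256, 1))
labels = np.zeros(volume.shape, dtype=np.uint8)   # 0 = background
labels[:, 64:192, 64:192] = 1                     # 1 = (assumed) colon region

streams = {int(r): compress_region(volume[labels == r]) for r in np.unique(labels)}
total = sum(len(s) for s in streams.values())
print(f"{volume.nbytes} bytes -> {total} bytes, losslessly")
```

    Because each region has more homogeneous statistics than the whole volume, a per-region codec choice can beat a single global scheme, which is the effect the thesis's evaluation quantifies.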

    Image quality assessment by overlapping task-specific and task-agnostic measures: application to prostate multiparametric MR images for cancer segmentation

    Image quality assessment (IQA) in medical imaging can be used to ensure that downstream clinical tasks can be reliably performed. Quantifying the impact of an image on a specific target task, also called task amenability, is needed. A task-specific IQA has recently been proposed, in which an image-amenability-predicting controller is learned simultaneously with a target task predictor. This allows the trained IQA controller to measure the impact an image has on the target task performance when the task is performed using the predictor, e.g., segmentation and classification neural networks in modern clinical applications. In this work, we propose an extension to this task-specific IQA approach by adding a task-agnostic IQA based on auto-encoding as the target task. Analysing the intersection between low-quality images, as deemed by both the task-specific and task-agnostic IQA, may help to differentiate the underlying factors behind poor target task performance. For example, common imaging artefacts may not adversely affect the target task, which would lead to a low task-agnostic quality and a high task-specific quality, whilst individual cases considered clinically challenging, which cannot be improved by better imaging equipment or protocols, are likely to result in a high task-agnostic quality but a low task-specific quality. We first describe a flexible reward shaping strategy that allows the weighting between task-agnostic and task-specific quality scoring to be adjusted. We then evaluate the proposed algorithm on a clinically challenging target task, prostate tumour segmentation on multiparametric magnetic resonance (mpMR) images from 850 patients. The proposed reward shaping strategy, with appropriately weighted task-specific and task-agnostic qualities, successfully identified samples that need re-acquisition due to a defective imaging process. Accepted for publication in the Journal of Machine Learning for Biomedical Imaging (MELBA), https://www.melba-journal.org
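    At its simplest, the reward shaping strategy reduces to a weighted blend of the two quality signals; the sketch below assumes illustrative reward definitions (segmentation Dice and negative auto-encoder reconstruction error) rather than the paper's exact formulation.

```python
def shaped_reward(task_reward: float, agnostic_reward: float, w: float) -> float:
    """w = 1 recovers purely task-specific IQA; w = 0 purely task-agnostic."""
    return w * task_reward + (1.0 - w) * agnostic_reward

dice = 0.82          # (assumed) task predictor performance on accepted images
recon_err = 0.07     # (assumed) auto-encoder reconstruction error
print(shaped_reward(dice, -recon_err, w=0.7))   # blend of the two signals
```

    Sweeping w between 0 and 1 is what lets the intersection analysis above separate artefact-driven low quality from clinically challenging cases.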

    Image quality assessment for machine learning tasks using meta-reinforcement learning

    In this paper, we consider image quality assessment (IQA) as a measure of how amenable images are to a given downstream task, i.e., their task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict task amenability; being itself parameterised by neural networks, the controller can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability of both IQA controllers and task predictors, so that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach on two clinical applications: ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
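    One way to picture the controller-predictor interaction is the selection loop sketched below, where the controller's amenability scores decide which images the task predictor trains on and held-out task performance becomes the controller's reward; the names, the fixed rejection ratio, and the commented-out helpers are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(100)                # controller's amenability scores
keep = np.argsort(scores)[-80:]         # reject the 20% least amenable images
# train_task_predictor(images[keep], labels[keep])  # hypothetical helper
# reward = validate_on_holdout(task_predictor)      # drives the controller update
print(f"kept {keep.size} of {scores.size} images for task-predictor training")
```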

    SPARSE RECOVERY BY NONCONVEX LIPSCHITZIAN MAPPINGS

    In recent years, the sparsity concept has attracted considerable attention in applied mathematics and computer science, especially in signal and image processing. The general framework of sparse representation is now a mature concept with a solid basis in relevant mathematical fields, such as probability, the geometry of Banach spaces, harmonic analysis, theory of computability, and information-based complexity. Together with theoretical and practical advancements, several numerical methods and algorithmic techniques have been developed to capture the complexity and the wide scope that the theory suggests. Sparse recovery relies on the fact that many signals can be represented sparsely, using only a few nonzero coefficients in a suitable basis or overcomplete dictionary. Unfortunately, this problem, also called ℓ0-norm minimization, is not only NP-hard, but also hard to approximate within an exponential factor of the optimal solution. Nevertheless, many heuristics for the problem have been proposed for many applications. This thesis provides new regularization methods for the sparse representation problem, with application to face recognition and ECG signal compression. The proposed methods are based on a fixed-point iteration scheme that combines nonconvex Lipschitzian-type mappings with canonical orthogonal projectors. The former are aimed at uniformly enhancing the sparseness level through shrinkage effects; the latter project back into the feasible space of solutions. In the second part of this thesis we study two applications in which sparseness has been successfully applied in recent areas of signal and image processing: the face recognition problem and the ECG signal compression problem.
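    A minimal sketch of such a fixed-point scheme follows, assuming firm thresholding as the nonconvex Lipschitzian-type shrinkage and the Moore-Penrose pseudoinverse for the orthogonal projection onto {x : Ax = y}; the thesis's actual mappings and parameters differ.

```python
import numpy as np

def firm_shrink(x, lam, mu):
    """Firm thresholding: zero below lam, identity above mu, graded between."""
    return np.where(np.abs(x) <= lam, 0.0,
                    np.where(np.abs(x) >= mu, x,
                             np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam)))

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))      # underdetermined system: 50 equations, 200 unknowns
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)     # a 5-sparse ground truth
y = A @ x_true

pinv = np.linalg.pinv(A)
x = pinv @ y                            # least-norm starting point
for _ in range(200):
    z = firm_shrink(x, lam=0.05, mu=0.5)   # sparsity-enhancing shrinkage
    x = z - pinv @ (A @ z - y)             # orthogonal projection onto Ax = y
print(f"nonzeros in the recovered x: {np.sum(np.abs(x) > 1e-3)}")
```

    The alternation mirrors the two ingredients named in the abstract: the shrinkage uniformly drives small coefficients toward zero, while the projector restores feasibility with respect to the measurements.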

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interest is in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and to improving power amplifier behavior.
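    As a small illustration of that last point, the sketch below uses a continuous wavelet transform to map a signal's time-frequency content and report its strongest localized response; PyWavelets is assumed, and the test signal and wavelet choice are arbitrary.

```python
import numpy as np
import pywt

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)
signal[400:420] += 3.0                  # a short burst hidden in the sine

# Continuous wavelet transform: rows are scales (frequencies), columns time.
coef, freqs = pywt.cwt(signal, scales=np.arange(1, 64), wavelet="morl",
                       sampling_period=t[1] - t[0])
scale_i, time_i = np.unravel_index(np.abs(coef).argmax(), coef.shape)
print(f"strongest response near t = {t[time_i]:.2f} s, ~{freqs[scale_i]:.1f} Hz")
```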

    Computer-Aided Diagnosis for Early Identification of Multi-Type Dementia using Deep Neural Networks

    With millions of people suffering from dementia worldwide, the global prevalence of this condition has a significant impact on the global economy. Its prevalence also negatively affects patients' lives and the physical and emotional states of their caregivers. Dementia can develop as a result of several risk factors, and it has many forms whose signs are sometimes similar. While there is currently no cure for dementia, effective early diagnosis is essential in managing it. Early diagnosis helps people find suitable therapies that reduce or even prevent further deterioration of cognitive abilities, and helps them take control of their conditions and plan for the future. Furthermore, it facilitates research efforts to understand the causes and signs of dementia. Early diagnosis is based on the classification of features extracted from three-dimensional brain images. The features have to accurately capture the main dementia-related anatomical variations of brain structures, such as hippocampus size, gray and white matter volumes, and brain volume. In recent years, numerous researchers have sought to develop new or improved Computer-Aided Diagnosis (CAD) technologies to accurately detect dementia. CAD approaches aim to assist radiologists in increasing the accuracy of the diagnosis and reducing false positives. However, there are a number of limitations and open issues in the state of the art that need to be addressed. The literature to date has focused on differentiating multiple stages of Alzheimer's disease severity, ignoring other dementia types that can be as devastating or even more so. Furthermore, the high dimensionality of neuroimages, as well as the complexity of dementia biomarkers, can hinder classification performance. Moreover, the augmentation of neuroimaging analysis with contextual information has received limited attention to date, due to the discrepancies and irregularities of the various forms of data. This work focuses on the need to differentiate between multiple types of dementia in early stages. The objective of this thesis is to automatically discriminate normal controls from patients with various types of dementia in the early phases of the disease. The thesis proposes a novel CAD approach, integrating a stacked sparse auto-encoder (SSAE) with a two-dimensional convolutional neural network (CNN), for early identification of multiple types of dementia based on discriminant features extracted from neuroimages and incorporated with contextual information. By applying the SSAE to intensities extracted from magnetic resonance (MR) neuroimages, the approach can reduce the high dimensionality of the neuroimages and learn important discriminative features for classification. This research also proposes to integrate features extracted from MR neuroimages with patients' contextual information through multi-classifier fusion to enhance the early prediction of various types of dementia. The effectiveness of the proposed method is evaluated on the OASIS dataset using five relevant performance metrics: accuracy, F1-score, sensitivity, specificity, and the precision-recall curve. Across a cohort of 4000 MR neuroimages (176 × 176), together with contextual information and clinical diagnoses of patients serving as the ground truth, the proposed CAD approach achieved an improved F-measure of 93% and an average area under the precision-recall curve of 94%. The proposed method provides a significant improvement in classification output, resulting in high and reproducible accuracy rates of 95%, with a sensitivity of 93% and a specificity of 88%.
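A compact PyTorch sketch of the two stages named above, an SSAE for dimensionality reduction and a 2-D CNN classifier, is given below; the layer sizes, the L1 sparsity penalty, and the four-class output are illustrative assumptions rather than the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class SSAE(nn.Module):
    """Stacked sparse auto-encoder used purely for dimensionality reduction."""
    def __init__(self, d_in=176 * 176, d_hidden=512, d_code=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                     nn.Linear(d_hidden, d_code), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(d_code, d_hidden), nn.ReLU(),
                                     nn.Linear(d_hidden, d_in))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

ssae = SSAE()
x = torch.rand(8, 176 * 176)                    # flattened MR slice intensities
recon, code = ssae(x)
# L1 penalty on the code as a simple stand-in for the usual KL sparsity term.
loss = nn.functional.mse_loss(recon, x) + 1e-3 * code.abs().mean()

cnn = nn.Sequential(                            # small 2-D CNN classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 4))             # 4 classes assumed: normal + 3 dementia types
logits = cnn(torch.rand(8, 1, 176, 176))
```

In the thesis's design, the contextual information enters through multi-classifier fusion; here that step is omitted, and the two networks are shown only to make the division of labour concrete.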