9 research outputs found

    Lung disease classification using GLCM and deep features from different deep learning architectures with principal component analysis

    Lung disease classification is an important stage in implementing a Computer Aided Diagnosis (CADx) system. CADx systems can aid doctors as a second rater to increase diagnostic accuracy for medical applications. They also have the potential to reduce waiting times and increase patient throughput when hospitals face high workloads. Conventional lung classification systems use textural features. However, textural features may not be sufficient to describe the properties of an image. Deep features are an emerging source of features that can address the weaknesses of textural features. The goal of this study is to propose a lung disease classification framework using deep features from five different deep networks and to compare its results with the conventional Gray-level Co-occurrence Matrix (GLCM). This study used a dataset of 81 diseased and 15 normal patients with five levels of High Resolution Computed Tomography (HRCT) slices. A comparison of five different deep learning networks, namely AlexNet, VGG16, VGG19, Res50, and Res101, with textural features from the GLCM was performed. This study used a K-fold validation protocol with K = 2, 3, 5, and 10. Five classifiers were also compared: Decision Tree, Support Vector Machine (SVM), Linear Discriminant Analysis, Regression, and k-nearest neighbor (k-NN). The use of PCA increased the classification accuracy from 92.01% to 97.40% with the k-NN classifier, using only 14 features instead of the initial 1000. With the SVM classifier, a maximum accuracy of 100% was achieved when the deep features from all five networks were used. Thus, deep features show promise for classifying diseased and normal lungs.
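    A minimal sketch of the kind of pipeline this abstract describes (features from a pretrained network, PCA reduction, k-NN classification). The specific network, preprocessing, and k value below are illustrative assumptions, not the authors' exact setup.

        import torch
        from torchvision import models, transforms
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        # Pretrained network used as a fixed feature extractor; its 1000-way output
        # matches the "initial 1000 features" mentioned in the abstract (assumption:
        # ResNet50 stands in for any of the five networks compared in the study).
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        net.eval()

        preprocess = transforms.Compose([
            transforms.ToTensor(),
            transforms.Resize((224, 224)),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        def deep_features(images):
            # images: list of HxWx3 uint8 arrays (e.g. HRCT slices mapped to RGB).
            with torch.no_grad():
                batch = torch.stack([preprocess(im) for im in images])
                return net(batch).numpy()                 # shape (N, 1000)

        # With X of shape (N, 1000) and labels y (0 = normal, 1 = diseased):
        # X_red = PCA(n_components=14).fit_transform(X)   # 14 components, as in the abstract
        # knn = KNeighborsClassifier(n_neighbors=5)        # k is an assumption
        # print(cross_val_score(knn, X_red, y, cv=10).mean())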

    Cloud-Based Benchmarking of Medical Image Analysis

    Medical imaging

    Local Geometric Transformations in Image Analysis

    The characterization of images by geometric features facilitates the precise analysis of the structures found in biological micrographs such as cells, proteins, or tissues. In this thesis, we study image representations that are adapted to local geometric transformations such as rotation, translation, and scaling, with a special emphasis on wavelet representations. In the first part of the thesis, our main interest is in the analysis of directional patterns and the estimation of their location and orientation. We explore steerable representations that correspond to the notion of rotation. In contrast to classical pattern-matching techniques, they require neither an a priori discretization of the angle nor matching the filter to the image at each discretized direction. Instead, it is sufficient to apply the filtering only once; the rotated filter for any arbitrary angle can then be determined by a systematic and linear transformation of the initial filter. We derive the Cramér-Rao bounds for steerable filters. They allow us to select the best harmonics for the design of steerable detectors and to identify their optimal radial profile. We propose several ways to construct optimal representations and to build powerful and effective detection schemes, in particular for junctions of coinciding branches with local orientations. The basic idea of local transformability and the general principles that we utilize to design steerable wavelets can be applied to other geometric transformations. Accordingly, in the second part, we extend our framework to other transformation groups, with a particular interest in scaling. To construct representations in tune with a notion of local scale, we identify the possible solutions for scalable functions and give specific criteria for their applicability to wavelet schemes. Finally, we propose discrete wavelet frames that approximate a continuous wavelet transform. Based on these results, we present novel wavelet-based image-analysis software that provides fast and automatic detection of circular patterns, combined with a precise estimation of their size.
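    The steering idea described above can be illustrated in its simplest first-order form: the filter rotated to an arbitrary angle is a fixed linear combination of two basis filters, so the image only needs to be filtered once per basis filter. The sketch below uses Gaussian derivatives as that basis; it is a toy example under assumed parameters, not the optimized harmonic detectors designed in the thesis.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def basis_responses(image, sigma=2.0):
            # Responses to the x- and y-derivatives of a Gaussian (the steerable basis).
            gx = gaussian_filter(image, sigma, order=(0, 1))   # derivative along x (columns)
            gy = gaussian_filter(image, sigma, order=(1, 0))   # derivative along y (rows)
            return gx, gy

        def steered_response(gx, gy, theta):
            # Response of the filter "steered" to angle theta, with no re-filtering:
            # a linear combination of the two precomputed basis responses.
            return np.cos(theta) * gx + np.sin(theta) * gy

        def dominant_orientation(gx, gy):
            # Angle that maximizes the first-order response at each pixel.
            return np.arctan2(gy, gx)

        # Example:
        # img = np.random.rand(128, 128)
        # gx, gy = basis_responses(img)
        # r45 = steered_response(gx, gy, np.deg2rad(45.0))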

    The impact of arterial input function determination variations on prostate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic modeling: a multicenter data analysis challenge, part II

    This multicenter study evaluated the effect of variations in arterial input function (AIF) determination on pharmacokinetic (PK) analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using the shutter-speed model (SSM). Data acquired from eleven prostate cancer patients were shared among nine centers. Each center used a site-specific method to measure the individual AIF from each data set and submitted the results to the managing center. These AIFs, their reference tissue-adjusted variants, and a literature population-averaged AIF were used by the managing center to perform SSM PK analysis to estimate Ktrans (volume transfer rate constant), ve (extravascular, extracellular volume fraction), kep (efflux rate constant), and τi (mean intracellular water lifetime). All other variables, including the definition of the tumor region of interest and precontrast T1 values, were kept the same to evaluate parameter variations caused by variations in only the AIF. Considerable PK parameter variations were observed, with within-subject coefficient of variation (wCV) values of 0.58, 0.27, 0.42, and 0.24 for Ktrans, ve, kep, and τi, respectively, using the unadjusted AIFs. Use of the reference tissue-adjusted AIFs reduced variations in Ktrans and ve (wCV = 0.50 and 0.10, respectively), but had smaller effects on kep and τi (wCV = 0.39 and 0.22, respectively). kep is less sensitive to AIF variation than Ktrans, suggesting it may be a more robust imaging biomarker of prostate microvasculature. With low sensitivity to AIF uncertainty, the SSM-unique τi parameter may have advantages over the conventional PK parameters in a longitudinal study.
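    As background, the conventional parameters above are linked by the standard compartmental relation kep = Ktrans / ve, and the within-subject coefficient of variation summarizes how much each subject's estimate varies across the nine centers. The sketch below assumes one common wCV definition (root mean square of each subject's SD/mean across centers); it is not necessarily the managing center's exact computation.

        import numpy as np

        def kep_from(ktrans, ve):
            # Efflux rate constant from the standard relation kep = Ktrans / ve.
            return ktrans / ve

        def wcv(estimates):
            # estimates: array of shape (n_subjects, n_centers) for one parameter.
            # Per-subject CV = (SD across centers) / (mean across centers); the wCV
            # returned here is the root mean square of those per-subject CVs.
            per_subject_cv = estimates.std(axis=1, ddof=1) / estimates.mean(axis=1)
            return float(np.sqrt(np.mean(per_subject_cv ** 2)))

        # Example with made-up numbers: Ktrans (min^-1) for 11 subjects x 9 centers
        # ktrans = np.random.lognormal(mean=-1.5, sigma=0.5, size=(11, 9))
        # print(wcv(ktrans))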

    3D Riesz-wavelet based Covariance descriptors for texture classification of lung nodule tissue in CT

    In this paper we present a novel technique for characterizing and classifying 3D textured volumes belonging to different lung tissue types in 3D CT images. We build a volume-based 3D descriptor, robust to changes of size, rigid spatial transformations, and texture variability, thanks to the integration of Riesz-wavelet features within a Covariance-based descriptor formulation. 3D Riesz features characterize the morphology of tissue density through their response to changes in intensity in CT images. These features are encoded in a Covariance-based descriptor formulation: this provides a compact and flexible representation, since it uses feature variations rather than the dense features themselves, and adds robustness to spatial changes. Furthermore, the symmetric positive-definite matrix form of these descriptors causes them to lie on a Riemannian manifold. Thus, descriptors can be compared with analytical measures, and accurate techniques from machine learning and clustering can be adapted to this domain. Additionally, we present a classification model following a "Bag of Covariance Descriptors" paradigm in order to distinguish three different nodule tissue types in CT: solid, ground-glass opacity, and healthy lung. The method is evaluated on an acquired dataset of 95 patients with ground truth manually delineated in 3D by radiation oncology specialists, and quantitative sensitivity and specificity values are presented.
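    As a rough illustration of the descriptor side of this pipeline (not of the Riesz-wavelet features themselves), the sketch below builds a covariance descriptor from an arbitrary per-voxel feature stack and compares two such symmetric positive-definite matrices with a log-Euclidean distance, one common choice of metric on that manifold. The regularization constant and feature dimensions are assumptions.

        import numpy as np
        from scipy.linalg import logm

        def covariance_descriptor(features):
            # features: array of shape (n_voxels, n_features) sampled inside a region.
            # Returns the (n_features, n_features) covariance matrix, lightly
            # regularized so it stays strictly positive definite.
            cov = np.cov(features, rowvar=False)
            return cov + 1e-6 * np.eye(cov.shape[0])

        def log_euclidean_distance(c1, c2):
            # Distance between SPD matrices via the matrix logarithm, which respects
            # the Riemannian structure mentioned in the abstract.
            return float(np.linalg.norm(logm(c1) - logm(c2), ord='fro'))

        # Example: two regions, each described by 5 per-voxel features
        # a = covariance_descriptor(np.random.rand(500, 5))
        # b = covariance_descriptor(np.random.rand(400, 5))
        # print(log_euclidean_distance(a, b))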