
    Texture features based microscopic image classification of liver cellular granuloma using artificial neural networks

    Automated classification of Schistosoma mansoni granulomatous microscopic images of mice liver using Artificial Intelligence (AI) technologies is a key issue for accurate diagnosis and treatment. In this paper, grey-difference statistical features, namely three Gray-Level Co-occurrence Matrix (GLCM) based features and fifteen Gray-Gradient Co-occurrence Matrix (GGCM) features, were calculated, and ten features were selected by correlative analysis for three-level cellular granuloma classification using a Scaled Conjugate Gradient Back-Propagation Neural Network (SCG-BPNN). Cross-entropy was then calculated to evaluate the proposed network with its sigmoid input and ten hidden layers. The results show that the SCG-BPNN with texture features achieves a higher recognition rate than classification using morphological features, such as shape, size, contour, thickness, and other geometry-based features. The proposed method also achieves a high accuracy of 87.2%, outperforming the Back-Propagation Neural Network (BPNN), Back-Propagation Hopfield Neural Network (BPHNN), and Convolutional Neural Network (CNN).
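The co-occurrence-based texture statistics named above can be sketched in a few lines of numpy. This is a minimal illustration of the GLCM idea only; the offsets, quantisation level, the helper name `glcm_contrast_energy`, and the two statistics shown are assumptions for the sketch, not the paper's actual eighteen-feature set.

```python
import numpy as np

def glcm_contrast_energy(img, levels=8):
    """Compute a horizontal-offset GLCM and two texture statistics.

    A minimal sketch of gray-level co-occurrence features; the paper's
    exact feature set, offsets, and quantisation are not specified here.
    """
    # Quantise to a small number of gray levels to keep the matrix dense.
    q = (img.astype(np.float64) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of gray levels one pixel apart horizontally.
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()                       # normalise to a joint probability
    i, j = np.indices(glcm.shape)
    contrast = np.sum((i - j) ** 2 * glcm)   # local intensity variation
    energy = np.sum(glcm ** 2)               # texture uniformity
    return contrast, energy

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_contrast_energy(patch))
```

A perfectly uniform patch yields zero contrast and maximal energy, which is why such statistics separate smooth tissue from granulomatous texture.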

    Classification of mice hepatic granuloma microscopic images based on a deep convolutional neural network

    Hepatic granuloma develops in the early stage of liver cirrhosis and can seriously injure liver health. At present, the assessment of medical microscopic images is necessary for various diseases, and exploiting artificial intelligence technology to assist pathologists in pre-diagnosis is the trend of future medical development. In this article, we classify mice liver microscopic images of three types (normal, granuloma-fibrosis 1, and granuloma-fibrosis 2) using convolutional neural networks (CNNs) and two conventional machine learning methods: the support vector machine (SVM) and random forest (RF). Because the dataset comprised only 30 mice liver microscopic images, the proposed work included a preprocessing stage to deal with the problem of insufficient image numbers: the original microscopic images were cropped into small patches, which were labeled and then disorderly recombined. In addition, recognizable texture features were extracted and selected using the gray-level co-occurrence matrix (GLCM), the local binary pattern (LBP), and the Pearson correlation coefficient (PCC), respectively. The results established a classification accuracy of 82.78% for the proposed CNN-based classifier on the three image types. The confusion matrix shows that the classification accuracies for the normal, granuloma-fibrosis 1, and granuloma-fibrosis 2 classes were 92.5%, 76.67%, and 79.17%, respectively. A comparative study of the proposed CNN-based classifier against the SVM and RF proved the superiority of the CNNs, showing promising performance for clinical cases.
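The patch-cropping and shuffling step described above can be sketched as follows. The patch size, stride, and the helper name `crop_patches` are illustrative assumptions; the paper's actual settings are not given in the abstract.

```python
import random
import numpy as np

def crop_patches(image, patch=128, stride=128):
    """Crop a large microscopic image into small, non-overlapping patches.

    A sketch of the data-augmentation idea for a small dataset; patch size
    and stride are illustrative, not the paper's settings.
    """
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]

img = np.zeros((512, 768), dtype=np.uint8)   # stand-in for one liver image
patches = crop_patches(img)
random.shuffle(patches)   # "disorderly recombination" of the labeled patches
print(len(patches))       # 4 rows x 6 columns = 24 patches from one image
```

Each original image thus contributes many training samples, which is the usual remedy when only a few dozen whole-slide images are available.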

    Deep-learning framework to detect lung abnormality - A study with chest X-Ray and lung CT scan images

    Lung abnormalities are highly risky conditions in humans. Early diagnosis of lung abnormalities is essential to reduce this risk by enabling quick and efficient treatment. This research work proposes a Deep-Learning (DL) framework to examine lung pneumonia and cancer, using two different DL techniques: (i) The first, named the modified AlexNet (MAN), classifies chest X-ray images into normal and pneumonia classes. In the MAN, classification is implemented with a Support Vector Machine (SVM), and its performance is compared against Softmax. Its performance is further validated against other pre-trained DL techniques, such as AlexNet, VGG16, VGG19, and ResNet50. (ii) The second DL technique implements a fusion of handcrafted and learned features in the MAN to improve classification accuracy during lung cancer assessment. This work employs serial fusion and Principal Component Analysis (PCA) based feature selection to enhance the feature vector. The performance of this DL framework is tested on benchmark lung cancer CT images from LIDC-IDRI, attaining a classification accuracy of 97.27%. (c) 2019 Elsevier B.V.
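The serial fusion and PCA step can be sketched in pure numpy. The feature dimensions, the number of retained components `k`, and the helper name `serial_fuse_pca` are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def serial_fuse_pca(handcrafted, learned, k=10):
    """Serially fuse handcrafted and CNN-derived features, then reduce with PCA.

    A minimal numpy sketch of the fusion idea; feature sizes and k are
    illustrative only.
    """
    fused = np.hstack([handcrafted, learned])   # serial fusion = concatenation
    centred = fused - fused.mean(axis=0)
    # PCA via SVD: project onto the top-k right singular vectors.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T

rng = np.random.default_rng(0)
hand = rng.normal(size=(40, 15))     # e.g. texture/shape features per scan
deep = rng.normal(size=(40, 256))    # e.g. features from a modified AlexNet
print(serial_fuse_pca(hand, deep).shape)   # (40, 10)
```

Concatenation keeps both feature families intact, while the PCA projection discards redundant directions before the final classifier sees the vector.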

    Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks

    Utilizing biomedical signals as a basis to calculate human affective states is an essential issue of affective computing (AC). With in-depth research on affective signals, the combination of multi-modal cognition and physiological indicators, the establishment of dynamic and complete databases, and the addition of high-tech innovative products have become recent trends in AC. This research aims to develop a deep gradient convolutional neural network (DGCNN) for classifying affect using eye-tracking signals. First, general signal-processing and pre-processing methods were applied, such as the Kalman filter, Hamming windowing, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Secondly, the eye-movement and tracking signals were converted into images. A convolutional neural network-based training structure was subsequently applied; the experimental dataset was acquired with an eye-tracking device by assigning four affective stimuli (nervous, calm, happy, and sad) to 16 participants. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbors (KNN) using the true positive rate (TPR) and false positive rate (FPR) as indices. Customized mini-batch, loss, learning-rate, and gradient definitions were also deployed for the training structure of the deep neural network. The predictive classification matrix showed the effectiveness of the proposed method for eye-movement and tracking signals, which achieves more than 87.2% accuracy. This research provides a feasible way to achieve more natural human-computer interaction through eye-movement and tracking signals and has potential application in the affective product design process.
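The signal-to-image conversion described above (Hamming windowing plus STFT) can be sketched with numpy alone. The window length, overlap, and the helper name `signal_to_image` are illustrative assumptions; the paper's actual parameters are not stated.

```python
import numpy as np

def signal_to_image(sig, nperseg=64, noverlap=32):
    """Convert a 1-D eye-tracking signal into a 2-D spectrogram "image".

    A numpy sketch of the windowing + STFT step; window length and
    overlap are illustrative only.
    """
    step = nperseg - noverlap
    win = np.hamming(nperseg)
    frames = [sig[i:i + nperseg] * win
              for i in range(0, len(sig) - nperseg + 1, step)]
    # Magnitude of the one-sided FFT of each windowed frame.
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T
    # Log-scale and normalise to [0, 1] so a CNN can consume it as an image.
    img = np.log1p(spec)
    return img / img.max()

t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 30 * t)       # stand-in for one gaze-coordinate trace
print(signal_to_image(sig).shape)      # (33, 15): frequency bins x time frames
```

The resulting frequency-by-time matrix is what the convolutional layers then treat as an ordinary grayscale image.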

    A Non-Invasive Follicular Thyroid Cancer Risk Prediction System Based on Deep Hybrid Multi-feature Fusion Network

    Objective: A non-invasive assessment of the risk of benign and malignant follicular thyroid cancer is invaluable in the choice of treatment options. The extraction and fusion of multidimensional features from ultrasound images of follicular thyroid cancer is decisive in improving the accuracy of identifying benign and malignant thyroid cancer. This paper presents a non-invasive preoperative benign and malignant risk assessment system for follicular thyroid cancer, based on the proposed deep feature extraction and fusion of ultrasound images. Methods: First, this study uses a convolutional neural network (CNN) to obtain a global feature map of the image, and fuses the global features with cropped local features to identify tumour images. Secondly, the tumour image is also passed through GoogLeNet and ResNet, respectively, to extract features and recognise the image. Finally, we employ an averaging algorithm to obtain the final recognition results. Results: The experimental results show that the method proposed in this study achieved 89.95% accuracy, 88.46% sensitivity, 91.30% specificity, and an AUC of 96.69% on the local dataset obtained from Peking University Shenzhen Hospital, all of which are far superior to other models. Conclusion: In this study, a non-invasive risk prediction system is proposed for ultrasound images of thyroid follicular tumours. We solve the problem of unbalanced sample distribution by means of an image enhancement algorithm. To obtain enough features to differentiate ultrasound images, a three-branch feature extraction network was designed, and a balance of sensitivity and specificity is ensured by an averaging algorithm.
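The averaging step that combines the three branches can be sketched as follows. The logits, batch size, and two-class setup are hypothetical stand-ins; only the idea of averaging per-branch class probabilities is taken from the abstract.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, numerically stabilised."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical logits from three branches (custom CNN, GoogLeNet, ResNet)
# for 4 ultrasound images over 2 classes (benign, malignant).
rng = np.random.default_rng(0)
branch_logits = [rng.normal(size=(4, 2)) for _ in range(3)]

# Averaging algorithm: mean of the per-branch class probabilities.
probs = np.mean([softmax(l) for l in branch_logits], axis=0)
pred = probs.argmax(axis=1)      # final benign/malignant decision per image
print(pred.shape)                # (4,)
```

Averaging probabilities rather than hard votes lets a confident branch outweigh two uncertain ones, which is one way an ensemble can balance sensitivity against specificity.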

    Simplified inverse filter tracked affective acoustic signals classification incorporating deep convolutional neural networks

    Facial expressions; verbal and behavioral cues, such as limb movements; and physiological features are vital channels of affective human interaction. Researchers have given machines the ability to recognize affective communication through the above modalities over the past decades. In addition to facial expressions, changes in sound level, strength, weakness, and turbulence also convey affect. Extracting affective feature parameters from acoustic signals has been widely applied in customer service, education, and the medical field. In this research, an improved AlexNet-based deep convolutional neural network (A-DCNN) is presented for acoustic signal recognition. Firstly, the signals were preprocessed using simplified inverse filter tracking (SIFT) and the short-time Fourier transform (STFT); Mel-frequency cepstral coefficients (MFCC) and waveform-based segmentation were then deployed to create the input for the deep neural network (DNN), a preprocessing approach widely applied for most neural networks. Secondly, acoustic signals were acquired from the public Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) affective speech audio system. Through acoustic signal preprocessing tools, the basic features of the sound signals were calculated and extracted. The proposed DNN based on the improved AlexNet achieves 95.88% accuracy in classifying eight affective states from acoustic signals. By comparison with linear classifiers, such as the decision table (DT) and Bayesian inference (BI), and with other deep neural networks, such as AlexNet+SVM and the recurrent convolutional neural network (R-CNN), the proposed method achieves high effectiveness in accuracy (A), sensitivity (S1), positive predictive value (PP), and F1-score (F1).
Affective recognition and classification of acoustic signals can potentially be applied in industrial product design by measuring consumers' affective responses to products: collecting relevant affective sound data to understand the popularity of a product and, furthermore, to improve the product design and increase market responsiveness.
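The evaluation indices named above (accuracy, sensitivity, positive predictive value, F1) all derive from a confusion matrix, and can be sketched as follows. The toy matrix and the helper name `per_class_metrics` are illustrative assumptions, not the paper's results.

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, sensitivity, positive predictive value, and F1 from a
    confusion matrix (rows = true class, columns = predicted class).

    A generic sketch of the evaluation indices; not the paper's numbers.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    sens = tp / cm.sum(axis=1)    # recall per class (sensitivity)
    ppv = tp / cm.sum(axis=0)     # precision per class (positive predictive value)
    f1 = 2 * sens * ppv / (sens + ppv)
    acc = tp.sum() / cm.sum()     # overall accuracy
    return acc, sens, ppv, f1

cm = [[45, 5], [3, 47]]           # toy 2-class confusion matrix
acc, sens, ppv, f1 = per_class_metrics(cm)
print(round(acc, 3))              # 0.92
```

Reporting per-class sensitivity alongside overall accuracy matters here because affective classes are rarely balanced in recorded speech corpora.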

    Spatially Resolved Immunometabolism to Understand Infectious Disease Progression

    Infectious diseases, including those of viral, bacterial, fungal, and parasitic origin, are often characterized by focal inflammation occurring in one or more distinct tissues. Tissue-specific outcomes of infection are also evident in many infectious diseases, suggesting that the local microenvironment may instruct complex and diverse innate and adaptive cellular responses, resulting in locally distinct molecular signatures. In turn, these molecular signatures may both drive and be responsive to local metabolic changes in immune as well as non-immune cells, ultimately shaping the outcome of infection. Given the spatial complexity of immune and inflammatory responses during infection, it is evident that understanding the spatial organization of transcripts, proteins, lipids, and metabolites is pivotal to delineating the underlying regulation of local immunity. Molecular imaging techniques like mass spectrometry imaging and spatially resolved, highly multiplexed immunohistochemistry and transcriptomics can define detailed metabolic signatures at the microenvironmental level. Moreover, a successful complementation of these two imaging techniques would allow multi-omics analyses of inflammatory microenvironments to facilitate understanding of disease pathogenesis and identify novel targets for therapeutic intervention. Here, we describe strategies for downstream data analysis of spatially resolved multi-omics data and, using leishmaniasis as an exemplar, describe how such analysis can be applied in a disease-specific context.

    Evaluation of PD-L1 expression in various formalin-fixed paraffin embedded tumour tissue samples using SP263, SP142 and QR1 antibody clones

    Background & objectives: Cancer cells can avoid immune destruction through the inhibitory ligand PD-L1. PD-1 is a cell-surface receptor belonging to the immunoglobulin family. Its ligand PD-L1 is expressed by tumour cells and stromal tumour-infiltrating lymphocytes (TIL). Methods: Forty-four cancer cases were included in this study (24 triple-negative breast cancer (TNBC), 10 non-small cell lung cancer (NSCLC), and 10 malignant melanoma cases). Three clones of monoclonal primary antibodies were compared: QR1 (Quartett), and SP142 and SP263 (Ventana). For visualization, the ultraView Universal DAB Detection Kit from Ventana was used on the Ventana BenchMark GX automated immunohistochemical staining platform. Results: Comparing the sensitivity of two different clones on the same tissue samples from TNBC, we found that the QR1 clone gave a higher percentage of positive cells than clone SP142, but the difference was not statistically significant. On the same tissue samples from malignant melanoma, the SP263 clone gave a higher percentage of positive cells than the QR1 clone, but again the difference was not statistically significant. On the same tissue samples from NSCLC, we found a higher percentage of positive cells using the QR1 clone in comparison with the SP142 clone, but once again the difference was not statistically significant. Conclusion: The three antibody clones from the two manufacturers, Ventana and Quartett, gave comparable results, with no statistically significant difference in staining intensity or percentage of positive tumour and/or immune cells. Therefore, different PD-L1 clones from different manufacturers can potentially be used to evaluate PD-L1 status in different tumour tissues.
Due to the serious implications of PD-L1 analysis for further treatment decisions in cancer patients, every antibody clone, staining protocol, and evaluation process should be carefully and meticulously validated.
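Clone-versus-clone comparisons of this kind are paired: the same tissue samples are scored with both antibodies. A paired test statistic over the per-case percentages can be sketched as below; the scores, the helper name `paired_t`, and the choice of a paired t-statistic are illustrative assumptions, since the abstract does not state which test was used.

```python
import numpy as np

def paired_t(x, y):
    """Paired t-statistic and degrees of freedom for per-case percentages of
    PD-L1-positive cells scored with two antibody clones on the same samples.

    A generic sketch of a paired comparison; the study's actual data and
    statistical test are not specified here.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical percentage-positive scores for the same cases, two clones.
qr1 = [30, 45, 10, 60, 25, 40]
sp142 = [28, 44, 12, 55, 24, 41]
t, df = paired_t(qr1, sp142)
print(df)   # 5 degrees of freedom for 6 paired cases
```

Using the paired differences, rather than treating the two clones as independent groups, is what makes such small case series (10 to 24 samples per tumour type) statistically tractable.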