
    Identifying potential circulating miRNA biomarkers for the diagnosis and prediction of ovarian cancer using machine-learning approach: application of Boruta

    Introduction: In gynecologic oncology, ovarian cancer poses a great clinical challenge. Because of the lack of typical symptoms and of effective biomarkers for noninvasive screening, most patients have advanced-stage disease by the time of diagnosis. MicroRNAs (miRNAs) are non-coding RNA molecules that have been linked to human cancers, but identifying diagnostic biomarkers that distinguish cancer from non-cancer samples remains difficult. Methods: Using Boruta, a random forest-based feature-selection method, we aimed to identify biomarkers associated with ovarian cancer from cancerous and non-cancer samples in the Gene Expression Omnibus (GEO) dataset GSE106817. Two independent GEO datasets, GSE113486 and GSE113740, served as external validation. We used five state-of-the-art machine-learning algorithms for classification: logistic regression, random forest, decision trees, artificial neural networks, and XGBoost. Results: Four models reached an AUC of 100% in GSE106817, three models achieved an AUC above 94% in GSE113740, and four models achieved an AUC above 94% in GSE113486. We identified 10 miRNAs that distinguish ovarian cancer cases from normal controls: hsa-miR-1290, hsa-miR-1233-5p, hsa-miR-1914-5p, hsa-miR-1469, hsa-miR-4675, hsa-miR-1228-5p, hsa-miR-3184-5p, hsa-miR-6784-5p, hsa-miR-6800-5p, and hsa-miR-5100. Our findings suggest that these miRNAs could serve as biomarkers for ovarian cancer screening and possible early intervention.
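
    The abstract outlines Boruta feature selection followed by standard classifiers. Below is a minimal Python sketch of that kind of pipeline using BorutaPy and scikit-learn; the expression matrix, labels, and parameter choices are placeholder assumptions for illustration, not the study's actual data or settings.

```python
# Minimal sketch: Boruta-based feature selection, then a classifier evaluated
# with cross-validated AUC. All data below are synthetic placeholders.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(42)
X = rng.rand(200, 300)                      # placeholder miRNA expression matrix
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic cancer / non-cancer labels

# Boruta wraps a random forest and keeps features that outperform their
# randomly permuted "shadow" copies.
rf = RandomForestClassifier(n_jobs=-1, max_depth=5, class_weight="balanced")
boruta = BorutaPy(rf, n_estimators="auto", random_state=42)
boruta.fit(X, y)

X_selected = X[:, boruta.support_]          # confirmed features only
print("features retained:", X_selected.shape[1])

# Any downstream classifier (logistic regression shown) is then trained on the
# reduced feature set and scored with cross-validated AUC.
auc = cross_val_score(LogisticRegression(max_iter=1000),
                      X_selected, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f" % auc.mean())
```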

    Breast-Lesion Characterization using Textural Features of Quantitative Ultrasound Parametric Maps

    © 2017 The Author(s). This study evaluated, for the first time, the efficacy of quantitative ultrasound (QUS) spectral parametric maps in conjunction with texture-analysis techniques to differentiate benign versus malignant breast lesions non-invasively. Ultrasound B-mode images and radiofrequency data were acquired from 78 patients with suspicious breast lesions. QUS spectral-analysis techniques were applied to the radiofrequency data to generate parametric maps of mid-band fit, spectral slope, spectral intercept, spacing among scatterers, average scatterer diameter, and average acoustic concentration. Texture-analysis techniques were then applied to derive imaging biomarkers consisting of the mean, contrast, correlation, energy, and homogeneity features of the parametric maps. These biomarkers were used to classify benign versus malignant lesions with leave-one-patient-out cross-validation. Results were compared with histopathology findings from biopsy specimens and radiology reports on MR images to evaluate the accuracy of the technique. Among the biomarkers investigated, one mean-value parameter and 14 textural features demonstrated statistically significant differences (p < 0.05) between the two lesion types. A hybrid biomarker developed using a stepwise feature-selection method could classify the lesions with a sensitivity of 96%, a specificity of 84%, and an AUC of 0.97. Findings from this study pave the way towards adapting novel QUS-based frameworks for breast cancer screening and rapid diagnosis in the clinic.
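
    As a rough illustration of the texture-analysis step described above, the sketch below computes the named features (mean, contrast, correlation, energy, homogeneity) from a single QUS parametric map using scikit-image GLCMs; the map itself and the GLCM settings are assumptions for demonstration only.

```python
# Minimal sketch: GLCM texture features from one QUS parametric map (here a
# synthetic stand-in for a mid-band-fit map within a lesion ROI).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
mbf_map = rng.normal(size=(64, 64))          # placeholder parametric map

# Rescale to 8-bit gray levels, since the GLCM works on integer intensities.
scaled = np.uint8(255 * (mbf_map - mbf_map.min()) / np.ptp(mbf_map))

glcm = graycomatrix(scaled, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {"mean": float(mbf_map.mean())}
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    features[prop] = float(graycoprops(glcm, prop).mean())
print(features)
```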

    Endoscopic image analysis using Deep Convolutional GAN and traditional data

    A major challenge in the medical field is that only limited annotated datasets are available for research, since medical image annotation requires substantial input from medical experts. Machine learning and deep learning produce strong results in image classification, but these techniques require large training datasets, which is a major concern for medical image processing. Another issue is class imbalance, which leads to the under-representation of some classes. Data augmentation has emerged as a good technique to deal with these challenges. In this work, we applied traditional data augmentation and a Generative Adversarial Network (GAN) to endoscopic esophagus images to increase the number of images in the training datasets. We then applied two deep learning models, ResNet50 and VGG16, to extract and represent the relevant cancer features. The results show that the accuracy of the models increases with data augmentation and GAN. GAN-based augmentation achieved the highest accuracy, 94% with VGG16, outperforming both the non-augmented training set and traditional data augmentation.
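
    For context, the sketch below shows a minimal version of the traditional-augmentation branch with VGG16 transfer learning in Keras; the `train/` and `val/` directories, hyperparameters, and binary class setup are hypothetical, and the GAN-based branch is not reproduced here.

```python
# Minimal sketch: classic data augmentation feeding a frozen VGG16 feature
# extractor with a small classification head. Directory paths are placeholders.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Traditional geometric/photometric augmentation to enlarge the training set.
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                               horizontal_flip=True, zoom_range=0.15)
val_gen = ImageDataGenerator(rescale=1.0 / 255)

train = train_gen.flow_from_directory("train/", target_size=(224, 224),
                                      batch_size=16, class_mode="binary")
val = val_gen.flow_from_directory("val/", target_size=(224, 224),
                                  batch_size=16, class_mode="binary")

# VGG16 convolutional base, frozen, used purely as a feature extractor.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([base,
                           layers.Flatten(),
                           layers.Dense(256, activation="relu"),
                           layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=10)
```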

    Automatic Esophageal Abnormality Detection and Classification

    Esophageal cancer is counted among the deadliest cancers worldwide, ranking sixth among all types of cancers. Early esophageal cancer typically causes no symptoms and mainly arises from overlooked or untreated premalignant abnormalities in the esophagus tube. Endoscopy is the main tool used for the detection of abnormalities, and the cell deformation stage is confirmed by taking biopsy samples. The process of detection and classification is considered challenging for several reasons: different types of abnormalities (including early cancer stages) can be located randomly throughout the esophagus tube, abnormal regions can have various sizes and appearances which makes them difficult to capture, and it is hard to discriminate the columnar mucosa from the metaplastic epithelium. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of automatically classifying and detecting different esophageal abnormalities is an ongoing field. This thesis aims to develop novel automated methods for the detection and classification of abnormal esophageal regions (precancerous and cancerous) from endoscopic images and videos. Firstly, the abnormality stage of esophageal cell deformation is classified from confocal laser endomicroscopy (CLE) images. CLE is an endoscopic tool that provides a digital pathology view of the esophagus cells. The classification is achieved by enhancing the internal features of the CLE image using a novel enhancement filter that utilizes fractional integration and differentiation. Different imaging features, including multi-scale pyramid rotation LBP (MP-RLBP), gray level co-occurrence matrices (GLCM), fractal analysis, fuzzy LBP, and maximally stable extremal regions (MSER), are calculated from the enhanced image to ensure a robust classification result. Support vector machine (SVM) and random forest (RF) classifiers are employed to classify each image into its pathology stage. Secondly, we propose an automatic detection method to locate abnormal regions in high-definition white light (HD-WLE) endoscopic images. We first investigate the performance of different deep learning detection methods on our dataset. Then we propose an approach that combines hand-designed Gabor features with extracted convolutional neural network features that are used by the Faster R-CNN to detect abnormal regions. Moreover, to further improve the detection performance, we propose a novel two-input network named GFD-Faster RCNN. The proposed method generates a Gabor fractal image from the original endoscopic image using Gabor filters. Features are then learned separately from the endoscopic image and the generated Gabor fractal image using the densely connected convolutional network to detect abnormal esophageal regions. Thirdly, we present a novel model to detect abnormal regions from endoscopic videos. We design a 3D Sequential DenseConvLstm network to extract spatiotemporal features from the input videos, which are utilized by a region proposal network and ROI pooling layer to detect abnormality regions in each frame throughout the video. Additionally, we suggest an FS-CRF post-processing method that incorporates a Conditional Random Field (CRF) on a frame-based level to recover missed abnormal regions in neighborhood frames within the same clip.
    The methods are evaluated on four datasets: (1) the CLE dataset used for the classification model; (2) the publicly available Kvasir dataset; (3) the MICCAI 2015 EndoVis challenge dataset, where datasets (2) and (3) are used to evaluate the detection model on endoscopic images; and (4) the Gastrointestinal Atlas dataset used to evaluate the video detection model. The experimental results are promising, and the different models have outperformed state-of-the-art methods.
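
    As an illustration of the first stage (handcrafted texture features with SVM and random forest classifiers), the sketch below uses uniform LBP histograms as a stand-in for the fuller feature set (MP-RLBP, GLCM, fractal analysis, fuzzy LBP, MSER); the images, labels, and parameters are synthetic placeholders.

```python
# Minimal sketch: LBP-histogram features from CLE-like frames, classified with
# SVM and random forest under cross-validation. All data are synthetic.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((60, 128, 128))        # placeholder grayscale CLE frames
labels = rng.integers(0, 3, 60)            # placeholder pathology stages

P, R = 8, 1                                # LBP neighbourhood parameters

def lbp_histogram(img):
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_histogram(im) for im in images])

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random forest", RandomForestClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.2f}")
```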

    Quantitative DWI as an Early Imaging Biomarker of the Response to Chemoradiation in Esophageal Cancer

    For patients diagnosed with stage IIa-IIb esophageal cancer, the current standard-of-care treatment is tri-modality therapy (TMT), in which neoadjuvant chemoradiation (nCRT) is followed by surgical resection. Histopathology of resected tumors reveals that a pathological complete response (pCR) is achieved in 20-30% of patients through nCRT alone. Because of the high mortality and morbidity associated with esophagectomy, it may be advantageous for patients exhibiting pCR from nCRT alone to be placed under observation rather than completing their TMT. A method for predicting response at an early time-point during nCRT is therefore highly desirable. Conventional methods such as endoscopic ultrasound, re-biopsy, and morphologic imaging are insufficient for this purpose. During nCRT, morphologic changes in tumors are often preceded by changes in tumor biology. Diffusion Weighted Imaging (DWI) is an MRI modality that is sensitive to the microscopic motion of water molecules in tissue. Quantitative DWI provides a measure of the cellular microenvironment, which is affected by cellularity, extra-cellular volume fraction, structure of the extracellular matrix, and cellular membranes. This work investigated whether changes in quantitative DWI can be used as an early imaging biomarker for predicting response to nCRT in esophageal cancer. DWI scans were performed on a small group of esophageal cancer patients (stages IIa to IIIb) before, at the interim of, and after completion of their nCRT. Quantitative diffusion parameter maps were estimated from the DWI scans using the following models of diffusion: mono-exponential, intra-voxel incoherent motion (IVIM), and kurtosis. Summary measures of the quantitative diffusion parameters were extracted from tumor voxels through volumetric contouring. These summary measures were retrospectively compared between histopathologically confirmed groups of pCR and non-pCR patients. The study found that the relative change in mean ADC could completely separate the pCR and non-pCR groups (AUC = 1) at a cutoff of 27.7%. Measurement by volume contouring was shown to be highly reproducible between readers. This pilot study demonstrates the promise of using DWI for organ-sparing approaches after nCRT in esophageal cancer.
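
    The mono-exponential model referred to above relates signal to b-value as S(b) = S0 * exp(-b * ADC). The sketch below shows the corresponding log-linear ADC fit and the relative-change criterion; the b-values and signal intensities are invented for illustration, and only the 27.7% cutoff comes from the study.

```python
# Minimal sketch: mono-exponential ADC estimation from DWI signals and the
# relative-change response criterion. Signals and b-values are illustrative.
import numpy as np

b_values = np.array([0.0, 200.0, 800.0])             # s/mm^2 (assumed protocol)

def fit_adc(signals):
    """Log-linear fit of S(b) = S0 * exp(-b * ADC); returns ADC in mm^2/s."""
    slope, _ = np.polyfit(b_values, np.log(signals), 1)
    return -slope

# Hypothetical mean tumour signals before treatment and at the interim scan.
adc_baseline = fit_adc(np.array([1000.0, 820.0, 450.0]))
adc_interim = fit_adc(np.array([1000.0, 780.0, 340.0]))

relative_change = 100.0 * (adc_interim - adc_baseline) / adc_baseline
predicted_pcr = relative_change > 27.7                # cutoff reported above
print(f"relative ADC change: {relative_change:.1f}% -> pCR predicted: {predicted_pcr}")
```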

    Esophageal Abnormality Detection Using DenseNet Based Faster R-CNN With Gabor Features

    Early detection of esophageal abnormalities can help prevent the progression of the disease into later stages. During esophagus examination, abnormalities are often overlooked because of their irregular shape, variable size, and complex surrounding area, which requires significant effort and experience. In this paper, a novel deep learning model based on the faster region-based convolutional neural network (Faster R-CNN) is presented to automatically detect abnormalities in the esophagus from endoscopic images. The proposed detection system is based on a combination of handcrafted Gabor features with CNN features. The densely connected convolutional network (DenseNet) architecture is adopted to extract the CNN features, strengthening feature propagation between layers and alleviating the vanishing gradient problem. To address the challenges of detecting abnormal complex regions, we propose fusing the extracted Gabor features with the CNN features through concatenation to enhance texture details in the detection stage. Our newly designed architecture is validated on two datasets (Kvasir and MICCAI 2015). On the Kvasir dataset, the results show an outstanding performance with a recall of 90.2%, a precision of 92.1%, and a mean average precision (mAP) of 75.9%. On the MICCAI 2015 dataset, the model surpasses the state-of-the-art performance with 95% recall, 91% precision, and an mAP of 84%. The experimental results demonstrate that the system is able to detect abnormalities in endoscopic images with good performance without any human intervention.
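
    To make the fusion idea concrete, the sketch below builds a small Gabor filter bank and concatenates simple response statistics with a placeholder CNN descriptor; the filter settings and the CNN feature vector are assumptions, not the paper's exact GFD-Faster R-CNN design.

```python
# Minimal sketch: fuse handcrafted Gabor response statistics with CNN features
# by concatenation. The frame and CNN descriptor are synthetic placeholders.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
frame = rng.random((256, 256))                   # placeholder endoscopic frame

# Small Gabor bank over four orientations; keep simple magnitude statistics.
gabor_feats = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, imag = gabor(frame, frequency=0.2, theta=theta)
    magnitude = np.hypot(real, imag)
    gabor_feats.extend([magnitude.mean(), magnitude.std()])
gabor_feats = np.asarray(gabor_feats)

# Placeholder for pooled CNN activations (e.g. from a DenseNet backbone).
cnn_feats = rng.random(1024)

# Fusion by concatenation before the detection / classification head.
fused = np.concatenate([cnn_feats, gabor_feats])
print("fused feature length:", fused.shape[0])
```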

    Chemotherapy-Response Monitoring of Breast Cancer Patients Using Quantitative Ultrasound-Based Intra-Tumour Heterogeneities

    © 2017 The Author(s). Anti-cancer therapies, including chemotherapy, aim to induce tumour cell death. Cell death introduces alterations in cell morphology and tissue micro-structures that cause measurable changes in tissue echogenicity. This study investigated the effectiveness of quantitative ultrasound (QUS) parametric imaging to characterize intra-tumour heterogeneity and monitor the pathological response of breast cancer to chemotherapy in a large cohort of patients (n = 100). Results demonstrated that QUS imaging can non-invasively monitor the pathological response and outcome of breast cancer patients to chemotherapy early after treatment initiation. Specifically, QUS biomarkers quantifying spatial heterogeneities in the size, concentration, and spacing of acoustic scatterers could predict treatment responses with cross-validated accuracies of 82 ± 0.7%, 86 ± 0.7%, and 85 ± 0.9% and areas under the receiver operating characteristic (ROC) curve of 0.75 ± 0.1, 0.80 ± 0.1, and 0.89 ± 0.1 at 1, 4, and 8 weeks after the start of treatment, respectively. The patients classified as responders and non-responders using QUS biomarkers demonstrated significantly different survival, in good agreement with clinical and pathological endpoints. These results form a basis for using early predictive information on survival-linked patient response to adapt standard anti-cancer treatments on an individual-patient basis.
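
    As a generic illustration of the cross-validated response prediction reported above, the sketch below scores a simple classifier on a hypothetical table of QUS heterogeneity features; the feature set, classifier choice, and fold count are assumptions rather than the study's pipeline.

```python
# Minimal sketch: cross-validated accuracy and ROC AUC for responder vs
# non-responder prediction from placeholder QUS heterogeneity features.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))      # placeholder heterogeneity features (one time-point)
y = rng.integers(0, 2, 100)        # placeholder responder / non-responder labels

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_validate(clf, X, y, cv=cv, scoring=("accuracy", "roc_auc"))

print("accuracy: %.2f +/- %.2f" % (scores["test_accuracy"].mean(),
                                   scores["test_accuracy"].std()))
print("AUC:      %.2f +/- %.2f" % (scores["test_roc_auc"].mean(),
                                   scores["test_roc_auc"].std()))
```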

    A Powerful Paradigm for Cardiovascular Risk Stratification Using Multiclass, Multi-Label, and Ensemble-Based Machine Learning Paradigms: A Narrative Review

    Background and Motivation: Cardiovascular disease (CVD) causes the highest mortality globally. With escalating healthcare costs, early non-invasive CVD risk assessment is vital. Conventional methods have shown poor performance compared with more recent and fast-evolving Artificial Intelligence (AI) methods. The proposed study reviews the three most recent paradigms for CVD risk assessment, namely multiclass, multi-label, and ensemble-based methods, in (i) office-based and (ii) stress-test laboratory settings. Methods: A total of 265 CVD-based studies were selected using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model. Given their popularity and recent development, the study analyzed the above three paradigms within machine learning (ML) frameworks. We comprehensively review these three methods using attributes such as architecture, applications, pros and cons, scientific validation, clinical evaluation, and AI risk-of-bias (RoB) in the CVD framework. These ML techniques were then extended to mobile and cloud-based infrastructure. Findings: The most popular biomarkers used were office-based and laboratory-based measurements, image-based phenotypes, and medication usage. Surrogate carotid scanning for coronary artery risk prediction has shown promising results. Ground truth (GT) selection for AI-based training, along with scientific and clinical validation, is very important for CVD stratification in order to avoid RoB. The most popular classification paradigm was observed to be multiclass, followed by ensemble and multi-label. The use of deep learning techniques in CVD risk stratification is at a very early stage of development. Mobile and cloud-based AI technologies are likely to be the future. Conclusions: AI-based methods for CVD risk assessment are the most promising and successful. The choice of GT is vital in AI-based models to prevent RoB. The amalgamation of image-based strategies with conventional risk factors provides the highest stability when using the three CVD paradigms in non-cloud and cloud-based frameworks.
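
    To clarify the terminology, the sketch below contrasts the three paradigms named in the review (multiclass, multi-label, ensemble) on a synthetic risk-factor table in scikit-learn; the features, labels, and model choices are placeholders, not examples drawn from the reviewed studies.

```python
# Minimal sketch: the same feature matrix handled as a multiclass problem, a
# multi-label problem, and with an ensemble classifier. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 8))                  # placeholder risk-factor matrix

# Multiclass: exactly one label per patient from ordered risk bins.
y_multiclass = rng.integers(0, 3, 300)         # low / medium / high risk
multiclass = LogisticRegression(max_iter=1000).fit(X, y_multiclass)

# Multi-label: several binary outcomes may be positive at the same time.
y_multilabel = rng.integers(0, 2, (300, 3))    # e.g. CAD, stroke, PAD flags
multilabel = MultiOutputClassifier(
    LogisticRegression(max_iter=1000)).fit(X, y_multilabel)

# Ensemble: combine heterogeneous base learners by majority vote.
ensemble = VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                             ("dt", DecisionTreeClassifier(max_depth=4)),
                             ("rf", RandomForestClassifier(n_estimators=100))])
ensemble.fit(X, y_multiclass)

print(multiclass.predict(X[:2]), multilabel.predict(X[:2]), ensemble.predict(X[:2]))
```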

    Machine Learning-Based Models for Prediction of Toxicity Outcomes in Radiotherapy

    In order to limit radiotherapy (RT)-related side effects, effective toxicity prediction and assessment schemes are essential. In recent years, the growing interest in artificial intelligence and machine learning (ML) within the scientific community has led to the implementation of innovative tools in RT. Several researchers have demonstrated the high performance of ML-based models in predicting toxicity, but the application of these approaches in the clinic is still lagging, partly due to their low interpretability. An overview of contemporary research is therefore needed to familiarize practitioners with common methods and strategies. Here, we present a review of ML-based models for predicting and classifying RT-induced complications from both a methodological and a clinical standpoint, focusing on the types of features considered, the ML methods used, and the main results achieved. Our work reviews published research across multiple cancer sites, including brain, breast, esophagus, gynecological, head and neck, liver, lung, and prostate cancers. The aim is to define the current state of the art and the main achievements within the field for both researchers and clinicians.