
    Dictionary learning LASSO for feature selection with application to hepatocellular carcinoma grading using contrast enhanced magnetic resonance imaging

    Introduction: The successful use of machine learning (ML) for medical diagnostic purposes has prompted myriad applications in cancer image analysis. Particularly for hepatocellular carcinoma (HCC) grading, there has been a surge of interest in ML-based selection of discriminative features from high-dimensional magnetic resonance imaging (MRI) radiomics data. As one of the most commonly used ML-based selection methods, the least absolute shrinkage and selection operator (LASSO) identifies essential features with high discriminative power based on a linear representation between input features and output labels. However, most LASSO methods directly explore the original training data rather than effectively exploiting the most informative features of radiomics data for HCC grading. To overcome this limitation, this study marks the first attempt to propose a feature selection method based on LASSO with dictionary learning, where a dictionary is learned from the training features, using the Fisher ratio to maximize the discriminative information in the features. Methods: This study proposes a LASSO method with dictionary learning to ensure the accuracy and discrimination of feature selection. Specifically, based on the Fisher ratio score, each radiomic feature is classified into two groups: a high-information group and a low-information group. Then, a dictionary is learned through an optimal mapping matrix to enhance the high-information part and suppress the low-discriminative information for the task of HCC grading. Finally, we select the most discriminative features according to the LASSO coefficients based on the learned dictionary. Results and discussion: The experimental results based on two classifiers (KNN and SVM) showed that the proposed method yielded accuracy gains, comparing favorably with five other state-of-the-practice feature selection methods.
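
    The core recipe described above, scoring features with the Fisher ratio, splitting them into high- and low-information groups, and then reading selected features off the LASSO coefficients, can be sketched in a few lines. The snippet below is a minimal illustration under assumed inputs (synthetic data in place of real radiomics features) and uses a simple per-group weighting as a crude stand-in for the paper's learned dictionary mapping; it is not the authors' implementation.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_raw = rng.normal(size=(80, 200))   # placeholder: 80 patients x 200 radiomic features
    y = rng.integers(0, 2, size=80)      # placeholder binary HCC grade labels

    def fisher_ratio(X, y):
        """Per-feature Fisher ratio: between-class variance over within-class variance."""
        scores = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            groups = [X[y == c, j] for c in np.unique(y)]
            means = np.array([g.mean() for g in groups])
            variances = np.array([g.var() for g in groups])
            scores[j] = means.var() / (variances.mean() + 1e-12)
        return scores

    X = StandardScaler().fit_transform(X_raw)
    scores = fisher_ratio(X, y)
    high_info = scores >= np.median(scores)        # split into high-/low-information groups

    # Emphasise high-information features before fitting LASSO (an illustrative
    # stand-in for the learned dictionary / optimal mapping matrix).
    X_weighted = X * np.where(high_info, 1.0, 0.1)

    lasso = Lasso(alpha=0.01).fit(X_weighted, y)
    selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
    print(f"{selected.size} features selected")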

    CT images-based 3D convolutional neural network to predict early recurrence of solitary hepatocellular carcinoma after radical hepatectomy

    PURPOSE: The high rate of recurrence of hepatocellular carcinoma (HCC) after radical hepatectomy is an important factor that affects the long-term survival of patients. This study aimed to develop a computed tomography (CT) images-based 3-dimensional (3D) convolutional neural network (CNN) for the preoperative prediction of early recurrence (ER) (≤2 years) after radical hepatectomy in patients with solitary HCC, and to compare the effects of segmentation sampling (SS) and non-segmentation sampling (NSS) on the prediction performance of the 3D-CNN. METHODS: Contrast-enhanced CT images of 220 HCC patients were used in this study (training group = 178; test group = 42). We used SS and NSS to select the volume of interest to train the SS-3D-CNN and NSS-3D-CNN separately. The prediction accuracy was evaluated using the test group. Finally, gradient-weighted class activation mappings (Grad-CAMs) were plotted to analyze the differences in prediction logic between the SS-3D-CNN and NSS-3D-CNN. RESULTS: The areas under the receiver operating characteristic curves (AUCs) of the SS-3D-CNN and NSS-3D-CNN in the training group were 0.824 (95% CI: 0.764-0.885) and 0.868 (95% CI: 0.815-0.921). The AUCs of the SS-3D-CNN and NSS-3D-CNN in the test group were 0.789 (95% CI: 0.637-0.941) and 0.560 (95% CI: 0.378-0.742). The SS-3D-CNN could stratify patients into low- and high-risk groups, with significant differences in recurrence-free survival (RFS) (P < .001), whereas the NSS-3D-CNN could not effectively stratify them in the test group. According to the Grad-CAMs, the NSS-3D-CNN, unlike the SS-3D-CNN, was clearly affected by interference from nearby tissues. CONCLUSION: The SS-3D-CNN may be of clinical use for identifying high-risk patients and formulating individualized treatment and follow-up strategies. SS was better than NSS in improving the performance of the 3D-CNN in our study.
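
    For readers unfamiliar with volumetric CNNs, the sketch below shows a minimal 3D CNN in PyTorch for binary early-recurrence prediction from a single-channel CT volume of interest. The architecture, input size, and names are illustrative assumptions, not the network described in the paper.

    import torch
    import torch.nn as nn

    class Recurrence3DCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(64, 2)   # early recurrence vs. no recurrence

        def forward(self, x):                    # x: (batch, 1, 64, 64, 64)
            return self.classifier(self.features(x).flatten(1))

    model = Recurrence3DCNN()
    volume = torch.randn(2, 1, 64, 64, 64)       # placeholder CT volumes of interest
    logits = model(volume)                       # (2, 2) class logits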

    Artificial Intelligence to Detect Papilledema from Ocular Fundus Photographs.

    BACKGROUND: Nonophthalmologist physicians do not confidently perform direct ophthalmoscopy. The use of artificial intelligence to detect papilledema and other optic-disk abnormalities from fundus photographs has not been well studied. METHODS: We trained, validated, and externally tested a deep-learning system to classify optic disks as being normal or having papilledema or other abnormalities from 15,846 retrospectively collected ocular fundus photographs that had been obtained with pharmacologic pupillary dilation and various digital cameras in persons from multiple ethnic populations. Of these photographs, 14,341 from 19 sites in 11 countries were used for training and validation, and 1505 photographs from 5 other sites were used for external testing. Performance at classifying the optic-disk appearance was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard of clinical diagnoses by neuro-ophthalmologists. RESULTS: The training and validation data sets from 6779 patients included 14,341 photographs: 9156 of normal disks, 2148 of disks with papilledema, and 3037 of disks with other abnormalities. The percentage classified as being normal ranged across sites from 9.8 to 100%; the percentage classified as having papilledema ranged across sites from zero to 59.5%. In the validation set, the system discriminated disks with papilledema from normal disks and disks with nonpapilledema abnormalities with an AUC of 0.99 (95% confidence interval [CI], 0.98 to 0.99) and normal from abnormal disks with an AUC of 0.99 (95% CI, 0.99 to 0.99). In the external-testing data set of 1505 photographs, the system had an AUC for the detection of papilledema of 0.96 (95% CI, 0.95 to 0.97), a sensitivity of 96.4% (95% CI, 93.9 to 98.3), and a specificity of 84.7% (95% CI, 82.3 to 87.1). CONCLUSIONS: A deep-learning system using fundus photographs with pharmacologically dilated pupils differentiated among optic disks with papilledema, normal disks, and disks with nonpapilledema abnormalities. (Funded by the Singapore National Medical Research Council and the SingHealth Duke-NUS Ophthalmology and Visual Sciences Academic Clinical Program.)
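
    The reported metrics (AUC, sensitivity, and specificity against a reference standard) can be computed for any binary decision, such as papilledema versus all other disks, with a few lines of scikit-learn. The snippet below is a minimal sketch on synthetic placeholder scores, not the study's evaluation code.

    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=500)        # 1 = papilledema per reference standard
    y_prob = np.clip(0.7 * y_true + rng.normal(0.3, 0.2, 500), 0, 1)  # placeholder scores

    auc = roc_auc_score(y_true, y_prob)
    y_pred = (y_prob >= 0.5).astype(int)         # operating point for sens/spec
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")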

    Medical Image Classification using Deep Learning Techniques and Uncertainty Quantification

    The emergence of medical image analysis using deep learning techniques has introduced multiple challenges in developing robust and trustworthy systems for automated grading and diagnosis. Several works have been presented to improve classification performance. However, these methods lack the diversity to capture different levels of contextual information among image regions, strategies to introduce diversity in learning through ensemble-based techniques, or uncertainty measures for the predictions generated by automated systems. Consequently, the presented methods provide sub-optimal results, which is not sufficient for clinical practice. To enhance classification performance and introduce trustworthiness, deep learning techniques and uncertainty quantification methods are required to provide diversity in contextual learning and an initial stage of explainability, respectively. This thesis aims to explore and develop novel deep learning techniques, supported by uncertainty quantification, for building actionable automated grading and diagnosis systems. More specifically, the thesis provides the following three main contributions. First, it introduces a novel entropy-based elastic ensemble of deep convolutional neural networks (DCNNs), termed 3E-Net, for classifying grades of invasive breast carcinoma microscopic images. 3E-Net is based on a patch-wise network for feature extraction and image-wise networks for final image classification, and uses an elastic ensemble based on Shannon entropy as an uncertainty quantification method for measuring the level of randomness in image predictions. As the second contribution, the thesis presents a novel multi-level context- and uncertainty-aware deep learning architecture, named MCUa, for the classification of breast cancer microscopic images. MCUa consists of multiple feature extractors and multi-level context-aware models in a dynamic ensemble fashion to learn the spatial dependencies among image patches and enhance learning diversity. The architecture also uses Monte Carlo (MC) dropout to measure the uncertainty of image predictions and to decide whether a prediction for an input image can be trusted, based on the generated uncertainty score. The third contribution introduces a novel model-agnostic method (AUQantO) that establishes an actionable strategy for optimising uncertainty quantification for deep learning architectures. AUQantO optimises a hyperparameter threshold, which is compared against uncertainty scores from Shannon entropy and MC dropout. The optimal threshold is obtained from single- and multi-objective functions, optimised using multiple optimisation methods. A comprehensive set of experiments was conducted using multiple medical imaging datasets and several novel evaluation metrics to demonstrate the effectiveness of the three contributions for clinical practice. First, the 3E-Net versions achieved accuracies of 96.15% and 99.50% on the invasive breast carcinoma dataset. The second contribution, MCUa, achieved an accuracy of 98.11% on the breast cancer histology image dataset. Lastly, AUQantO yielded significant performance improvements for state-of-the-art deep learning models, with average accuracy improvements of 1.76% and 2.02% on the breast cancer histology image dataset and of 5.67% and 4.24% on the skin cancer dataset, using the two uncertainty quantification techniques. AUQantO also demonstrated the ability to determine the optimal number of excluded images for a particular dataset.
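
    The two uncertainty signals used throughout the thesis, Shannon entropy of the softmax output and Monte Carlo (MC) dropout, combine naturally with a threshold rule of the kind AUQantO optimises. The sketch below illustrates that pattern on a placeholder classifier; the model, threshold value, and names are assumptions for illustration, not the thesis implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def shannon_entropy(probs):
        """Entropy of a (batch, classes) probability tensor."""
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    def mc_dropout_predict(model, x, n_samples=20):
        """Average softmax over stochastic forward passes with dropout active."""
        model.train()                      # keep dropout stochastic at inference
        with torch.no_grad():
            samples = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
        return samples.mean(dim=0)

    model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(),
                          nn.Dropout(0.5), nn.Linear(32, 4))   # placeholder classifier
    x = torch.randn(8, 1, 8, 8)                                # placeholder image batch

    probs = mc_dropout_predict(model, x)
    uncertainty = shannon_entropy(probs)
    threshold = 0.8                        # the hyperparameter AUQantO would optimise
    accepted = uncertainty < threshold     # uncertain cases are excluded / referred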

    Challenges and Opportunities of End-to-End Learning in Medical Image Classification

    The paradigm of end-to-end learning has revolutionized image recognition in recent years, but clinical adoption lags behind. Image-based computer-aided diagnosis systems are still largely built on highly engineered, domain-specific pipelines composed of independent rule-based models that mirror the subtasks of image classification: localization of suspicious regions, feature extraction, and decision making. The promise of superior decision making with end-to-end learning stems from removing domain-specific constraints of limited complexity and instead optimizing all system components simultaneously, directly on the raw data, and with respect to the final task. The reasons these advantages have not yet reached the clinic, i.e., the challenges in developing deep-learning-based diagnosis systems, are manifold. The fact that the generalization ability of learning algorithms depends on how well the available training data capture the true underlying data distribution proves to be a profound problem in medical applications: annotated datasets in this field are notoriously small, since annotation requires costly expert assessment, and pooling smaller datasets is often hindered by data-protection regulations and patient rights. Moreover, medical datasets exhibit drastically different characteristics with respect to imaging modalities, imaging protocols, or anisotropies, and the often ambiguous evidence in medical images can propagate into inconsistent or erroneous training annotations. While the shift of data distributions between the research environment and the real world reduces model robustness and is therefore currently regarded as the main obstacle to the clinical application of learning algorithms, this gap is often widened further by confounding factors such as hardware limitations or the granularity of the available annotations, which lead to discrepancies between the modeled task and the underlying clinical question. This thesis investigates the potential of end-to-end learning in clinical diagnosis systems and presents contributions to some of the key challenges that currently prevent broad clinical adoption. First, the last stage of the classification pipeline is examined: categorization into clinical pathologies. We demonstrate how replacing the current clinical standard of rule-based decisions with large-scale feature extraction followed by learning-based classifiers significantly improves breast cancer classification in MRI, achieving human-level performance; the approach is further demonstrated on cardiac diagnosis. Second, following the end-to-end learning paradigm, we replace the biophysical model used for image normalization in MRI, as well as the extraction of hand-crafted features, with a dedicated CNN architecture, and provide an in-depth analysis that reveals the hidden potential of learned image normalization and a complementary value of the learned features over the hand-crafted ones.
    While this approach operates on annotated regions and therefore relies on manual annotation, the third part incorporates the task of localizing these regions into the learning process to enable true end-to-end diagnosis from raw images. In doing so, we identify a largely neglected tension between evaluating models at clinically relevant scales on the one hand and optimizing for efficient training under data scarcity on the other. We present a deep learning model that helps resolve this trade-off, provide extensive experiments on three medical datasets as well as a series of toy experiments that examine the behavior under limited training data in detail, and publish a comprehensive framework that includes, among other things, the first 3D implementations of common object detection models. We identify further leverage points in existing end-to-end learning systems where domain knowledge can serve as a constraint to increase model robustness in medical image analysis, ultimately helping to pave the way toward application in clinical practice. To this end, we address the challenge of erroneous training annotations by replacing the classification component in end-to-end object detection with regression, which makes it possible to train models directly on the continuous scale of the underlying pathological processes and thus increases model robustness against erroneous training annotations. We further address the challenge of the input heterogeneity that trained models face when deployed at different clinical sites by proposing a model-based domain adaptation that allows the original training domain to be recovered from altered inputs, thereby ensuring robust generalization. Finally, we tackle the highly unsystematic, laborious, and subjective trial-and-error process of finding robust hyperparameters for a given task by distilling domain knowledge into a set of systematic rules that enable automated and robust configuration of deep learning models on a wide variety of medical datasets. In summary, the work presented here demonstrates the enormous potential of end-to-end learning algorithms compared with the clinical standard of multi-stage, highly engineered diagnosis pipelines, and presents approaches to some of the key challenges to broad adoption under real-world conditions, such as data scarcity, the discrepancy between the task addressed by the model and the underlying clinical question, ambiguities in training annotations, and shifts of data domains between clinical sites. These contributions can be seen as part of the overarching goal of automating medical image classification - an integral component of the transformation required to shape the future of healthcare.
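
    One of the leverage points described above, replacing the classification component with regression so that models train directly on the continuous scale of the underlying pathological process, can be illustrated with a tiny PyTorch head. The network, data, and loss below are placeholder assumptions for illustration, not the thesis models.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    backbone = nn.Sequential(nn.Flatten(), nn.Linear(256, 64), nn.ReLU())
    regression_head = nn.Linear(64, 1)     # continuous malignancy score in [0, 1]

    x = torch.randn(16, 1, 16, 16)         # placeholder image patches
    y = torch.rand(16, 1)                  # continuous targets, e.g. soft labels
                                           # derived from ambiguous annotations

    score = torch.sigmoid(regression_head(backbone(x)))
    loss = F.mse_loss(score, y)            # a smooth penalty on the grading scale is
    loss.backward()                        # more tolerant of noisy labels than hard classes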

    Case series of breast fillers and how things may go wrong: radiology point of view

    INTRODUCTION: Breast augmentation is a procedure chosen by women to address breast sagging due to breastfeeding or aging, as well as small breast size. Recent years have seen the emergence of a variety of injectable materials on the market as breast fillers. These injectable breast fillers have swiftly gained popularity among women, given the minimal invasiveness of the procedure, which avoids the need for daunting surgery. Many are unaware that the procedure may cause serious complications, while visualization of breast parenchyma infiltrated by these fillers is also substandard, posing diagnostic challenges. We present a case series of three patients with a prior history of hyaluronic acid and collagen breast injections. REPORT: The first patient is a 37-year-old lady who presented to casualty with worsening shortness of breath, non-productive cough, and central chest pain, associated with fever and chills, of 2 weeks' duration. The second patient is a 34-year-old lady who complained of cough, fever, and haemoptysis, associated with shortness of breath, of 1 week's duration. CT in these cases revealed non-thrombotic, wedge-shaped, peripheral air-space densities. The third patient is a 37-year-old female with right breast pain, swelling, and redness of 2 weeks' duration. A collagen breast injection performed 1 year earlier had impeded sonographic visualization of the breast parenchyma. Breast MRI showed multiple non-enhancing round and oval-shaped lesions exhibiting fat intensity. CONCLUSION: Radiologists should be familiar with the potential risks and hazards, as well as the imaging limitations, posed by breast fillers, such that MRI may be required as a problem-solving tool.

    Characterization of alar ligament on 3.0T MRI: a cross-sectional study in IIUM Medical Centre, Kuantan

    INTRODUCTION: The main purpose of the study is to compare the normal anatomy of the alar ligament on MRI between males and females. The specific objectives are to assess the prevalence of the alar ligament visualized on MRI, to describe its characteristics in terms of course, shape, and signal homogeneity, and to identify differences in alar ligament signal intensity between males and females. This study also aims to determine the association between respondents' height and alar ligament signal intensity and dimensions. MATERIALS & METHODS: 50 healthy volunteers were studied on a 3.0T Siemens Magnetom Spectra MR scanner using 2-mm proton density, T2, and fat-suppression sequences. The alar ligament was depicted in three planes, and its visualization and the variability of its course, shape, and signal intensity characteristics were determined. The alar ligament dimensions were also measured. RESULTS: The alar ligament was best depicted in the coronal plane, followed by the sagittal and axial planes. The orientation was laterally ascending in most subjects (60%), the shape was predominantly oval (54%), and 67% showed inhomogeneous signal. There was no significant difference in alar ligament signal intensity between male and female respondents. No significant association was found between respondents' height and alar ligament signal intensity or dimensions. CONCLUSION: On a 3.0T MR scanner, the alar ligament is best portrayed in the coronal plane, followed by the sagittal and axial planes. However, the considerable variability of the alar ligament depicted in our data shows that caution must be exercised when evaluating it, especially in the setting of injury.