170 research outputs found

    Liver Segmentation and Liver Cancer Detection Based on Deep Convolutional Neural Network: A Brief Bibliometric Survey

    Background: This study analyzes work on liver segmentation and liver cancer detection from the perspectives of machine learning, deep learning, and different image processing techniques, covering the years 2012 to 2020 and using several bibliometric analysis methods. Methods: Articles on the topic were obtained from Scopus, one of the most popular databases, for the year span 2012 to 2020. The Scopus analyzer facilitates analysis of the database by categories such as documents by source, year, and country. Analysis was also performed with different units of analysis, such as co-authorship, co-occurrence, and citation analysis, using VOSviewer version 1.6.15. Results: A total of 518 articles on liver segmentation and liver cancer published between 2012 and 2020 were obtained. The statistical and network analyses indicate that the most articles were published in 2020, with China the largest contributor, followed by the United States and India. Conclusions: The Scopus query yielded 518 articles, with English accounting for the largest share. Statistical analysis was performed on parameters such as authors, documents, country, and affiliation, and clearly indicates the potential of the topic; network analysis of these parameters was also performed. The results show considerable scope for further research with advanced algorithms from computer vision, deep learning, and machine learning.
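The "documents by year/country" tallies described above can be reproduced from any bibliographic export. A minimal sketch, assuming a list of records with illustrative `year` and `country` fields (not the actual Scopus export schema):

```python
# Hypothetical sketch: tallying a bibliographic export by year and country,
# mirroring the "documents by year/country" bibliometric counts above.
# The record fields and sample data are illustrative only.
from collections import Counter

records = [
    {"year": 2019, "country": "China"},
    {"year": 2020, "country": "China"},
    {"year": 2020, "country": "United States"},
    {"year": 2020, "country": "India"},
]

by_year = Counter(r["year"] for r in records)
by_country = Counter(r["country"] for r in records)

print(by_year.most_common(1))     # year with the most documents
print(by_country.most_common(3))  # top contributing countries
```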

    Artificial Intelligence in the Diagnosis of Hepatocellular Carcinoma: A Systematic Review.

    Hepatocellular carcinoma (HCC) ranks fifth among the most common malignancies and is the third most common cause of cancer-related death globally. Artificial intelligence (AI) is a rapidly growing field of interest. Following the PRISMA reporting guidelines, we conducted a systematic review to retrieve articles reporting the application of AI in HCC detection and characterization. A total of 27 articles were included and analyzed with our composite score for evaluating publication quality. The contingency table showed a statistically significant, steady improvement of the total quality score over the years (p = 0.004). Various AI methods were adopted in the included articles: 19 studied CT (41.30%), 20 studied US (43.47%), and 7 studied MRI (15.21%). No article discussed the use of artificial intelligence in PET or X-ray imaging. Our systematic approach shows that previous work in HCC detection and characterization has assessed the comparability of conventional interpretation with machine learning using US, CT, and MRI. The distribution of imaging techniques in our analysis reflects the usefulness and evolution of medical imaging for the diagnosis of HCC. Moreover, our results highlight an imminent need for data sharing in collaborative data repositories to minimize unnecessary repetition and wastage of resources.

    Deep learning for image-based liver analysis — A comprehensive review focusing on malignant lesions

    Deep learning-based methods, in particular convolutional neural networks and fully convolutional networks, are now widely used in the medical image analysis domain. This review focuses on deep learning-based analysis of focal liver lesions, with special interest in hepatocellular carcinoma and metastatic cancer, and of structures such as the parenchyma and the vascular system. We address several neural network architectures used for analyzing anatomical structures and lesions in the liver from various imaging modalities such as computed tomography, magnetic resonance imaging, and ultrasound. Image analysis tasks like segmentation, object detection, and classification for the liver, liver vessels, and liver lesions are discussed. Based on the qualitative search, 91 papers, including journal publications and conference proceedings, were selected for the survey. The reviewed papers are grouped into eight categories based on the methodologies used. Comparing the evaluation metrics, hybrid models performed better for both liver and lesion segmentation tasks, ensemble classifiers performed better for vessel segmentation tasks, and combined approaches performed better for both lesion classification and detection tasks. Performance was measured with the Dice score for segmentation and with accuracy for classification and detection, the most commonly used metrics for these tasks.
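The two metrics named above are easy to state concretely. A minimal sketch of the standard definitions (binary masks and label vectors are synthetic examples, not data from the review):

```python
# Minimal sketch of the evaluation metrics named above: the Dice score for
# segmentation masks and plain accuracy for classification/detection labels.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:            # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def accuracy(pred_labels, true_labels) -> float:
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)
    return float((pred_labels == true_labels).mean())

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```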

    AI-based volumetric analysis of liver metastasis burden in patients with neuroendocrine neoplasms (NEN)

    Background: Quantification of liver tumor load in patients with liver metastases from neuroendocrine neoplasms is essential for therapeutic management, but accurate measurement of three-dimensional (3D) volumes is time-consuming and difficult to achieve. Even though the common criteria for assessing treatment response have simplified the measurement of liver metastases, the workload of following up patients with neuroendocrine liver metastases (NELMs) remains heavy for radiologists because of the patients' increased morbidity and prolonged survival. Among the many imaging methods, gadoxetic acid (Gd-EOB)-enhanced magnetic resonance imaging (MRI) has shown the highest accuracy. Methods: 3D volumetric segmentations of NELMs and livers were performed manually in 278 Gd-EOB MRI scans from 118 patients. Eighty percent (222 scans) were randomly assigned to the training dataset and the remaining 20% (56 scans) to the internal validation dataset. An additional 33 patients from a different time period, who underwent Gd-EOB MRI at both baseline and 12-month follow-up examinations, were collected for external and clinical validation (n = 66). Model measurements (NELM volume; hepatic tumor load (HTL)) and the respective absolute (ΔabsNELM; ΔabsHTL) and relative changes (ΔrelNELM; ΔrelHTL) between baseline and follow-up imaging were correlated with multidisciplinary cancer conference (MCC) decisions (treatment success/failure). Three readers manually segmented each slice of the MRI images, independently and blinded to clinical data. All images were reviewed by a senior radiologist. Results: The model discriminated NELMs from liver with high accuracy in both internal and external validation (Matthews correlation coefficient (ϕ): 0.76/0.95 and 0.80/0.96, respectively). In the internal validation dataset, the group with higher NELM volume (> 16.17 cm3) showed a higher ϕ than the group with lower NELM volume (ϕ = 0.80 vs. 0.71; p = 0.0025).
    In the external validation dataset, all response variables (ΔabsNELM; ΔabsHTL; ΔrelNELM; ΔrelHTL) showed significant differences across the MCC decision groups (all p < 0.001). The AI model correctly detected the response trend based on ΔrelNELM and ΔrelHTL in all 33 MCC patients and showed optimal discrimination between treatment success and failure at +56.88% and +57.73%, respectively (AUC: 1.000; p < 0.001). Conclusions: The AI-based segmentation model performed well in the three-dimensional quantification of NELMs and HTL in Gd-EOB MRI, and its assessment of treatment response agreed well with the MCC decisions.
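Two quantities from this abstract can be sketched directly: the Matthews correlation coefficient (ϕ) used to score NELM-vs-liver agreement, and the relative tumor-load change used to call treatment response. The +56.88% cutoff is quoted from the abstract; the confusion counts and volumes below are made-up examples:

```python
# Sketch of the Matthews correlation coefficient (phi) and the relative
# volume change Δrel between baseline and follow-up, as used in the study
# above. All numeric inputs here are illustrative, not study data.
import math

def matthews_phi(tp: int, tn: int, fp: int, fn: int) -> float:
    """phi = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def relative_change(baseline_cm3: float, followup_cm3: float) -> float:
    """Relative NELM volume change in percent between two scans."""
    return 100.0 * (followup_cm3 - baseline_cm3) / baseline_cm3

phi = matthews_phi(tp=80, tn=90, fp=10, fn=20)
delta = relative_change(baseline_cm3=12.0, followup_cm3=21.0)  # +75%
print(round(phi, 3), delta > 56.88)  # delta exceeds the reported cutoff
```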

    AI in Medical Imaging Informatics: Current Challenges and Future Directions

    This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical image acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly with the 24th Medical Image Computing and Computer Assisted Intervention Conference, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation

    Image segmentation is a fundamental and challenging problem in computer vision, with applications spanning multiple areas such as medical imaging, remote sensing, and autonomous vehicles. Recently, convolutional neural networks (CNNs) have gained traction in the design of automated segmentation pipelines. Although CNN-based models are adept at learning abstract features from raw image data, their performance depends on the availability and size of suitable training datasets. Additionally, these models are often unable to capture the details of object boundaries and generalize poorly to unseen classes. In this thesis, we devise novel methodologies that address these issues and establish robust representation learning frameworks for fully automatic semantic segmentation in medical imaging and mainstream computer vision. In particular, our contributions include (1) state-of-the-art 2D and 3D image segmentation networks for computer vision and medical image analysis, (2) an end-to-end trainable image segmentation framework that unifies CNNs and active contour models with learnable parameters for fast and robust object delineation, (3) a novel approach for disentangling edge and texture processing in segmentation networks, and (4) a novel few-shot learning model for both supervised and semi-supervised settings, in which synergies between latent and image spaces are leveraged to learn to segment images from limited training data. Comment: PhD dissertation, UCLA, 202

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Applications of medical imaging have therefore become increasingly crucial in clinical oncology routines, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis contributes to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, it consists of six studies: the first two introduce novel methods for tumor segmentation, and the last four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy.
    The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework over the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on image inpainting is proposed to segment lung and head-and-neck tumors in images from single and multiple modalities. The proposed auto-inpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep-features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing learned deep features into radiomic features for boosting classification power. Study V focuses on early assessment of lung tumor response to treatment by proposing a novel, physiologically interpretable feature set. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict patients' overall survival status two years after the last treatment session.
    The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-and-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in methods developed for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
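The feature-fusion idea in Study IV (combining hand-crafted radiomic features with learned deep features before classification) can be sketched in a few lines. The tiny nearest-centroid classifier and all synthetic data below are stand-ins for illustration, not the thesis' actual models:

```python
# Hedged sketch of deep-feature/radiomics fusion: concatenate the two
# feature vectors per case, then classify. Data and classifier are
# synthetic stand-ins, not the thesis' method.
import numpy as np

rng = np.random.default_rng(0)
n = 20
radiomic = rng.normal(size=(n, 5))        # e.g. shape/texture descriptors
deep = rng.normal(size=(n, 8))            # e.g. CNN embedding per nodule
labels = np.array([0] * 10 + [1] * 10)    # benign vs. malignant

fused = np.concatenate([radiomic, deep], axis=1)  # (n, 13) fused vectors

def nearest_centroid_predict(X, y, query):
    """Assign the query to the class with the closest feature centroid."""
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return min(centroids, key=lambda c: np.linalg.norm(query - centroids[c]))

pred = nearest_centroid_predict(fused, labels, fused[0])
print(fused.shape, pred)
```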

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    A new biomarker combining multimodal MRI radiomics and clinical indicators for differentiating inverted papilloma from nasal polyp invaded the olfactory nerve possibly

    Background and purpose: Inverted papilloma (IP) and nasal polyp (NP), two benign lesions, are difficult to distinguish on MRI and clinically, especially when predicting whether the olfactory nerve is damaged, an important aspect of treatment and prognosis. We aimed to establish a new biomarker to distinguish IP from NP that may invade the olfactory nerve, and to analyze its diagnostic efficacy. Materials and methods: A total of 74 cases of IP and 55 cases of NP were collected. Eighty percent of the 129 patients were used as the training set (59 IP and 44 NP); the remainder formed the testing set. As a multimodal study (two MRI sequences and clinical indicators), preoperative MR images including T2-weighted magnetic resonance imaging (T2-WI) and contrast-enhanced T1-weighted magnetic resonance imaging (CE-T1WI) were collected. Radiomic features were extracted from the MR images, and the least absolute shrinkage and selection operator (LASSO) regression method was used to reduce redundancy and irrelevance. The radiomics model was then constructed from the rad-score formula. The area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the model were calculated. Finally, decision curve analysis (DCA) was used to evaluate the clinical practicability of the model. Results: There were significant differences in age, nasal bleeding, and hyposmia between the two lesions (p < 0.05). In total, 1,906 radiomic features were extracted from the T2-WI and CE-T1WI images; after feature selection, 12 key features were used to build the model. On the testing cohort, the AUC, sensitivity, specificity, and accuracy of the optimal model were 0.9121, 0.828, 0.9091, and 0.899, respectively. Conclusion: A new biomarker combining multimodal MRI radiomics and clinical indicators can effectively distinguish IP from NP that may invade the olfactory nerve, providing a valuable basis for individualized treatment decisions.
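The rad-score and AUC evaluation described above follow standard formulas. A minimal sketch, where the score is a weighted sum of LASSO-selected features and AUC uses the rank-based (Mann-Whitney) formulation; all weights and feature values are made up for demonstration:

```python
# Illustrative sketch: a linear "rad score" over selected radiomic features,
# and AUC as the probability that a positive case outscores a negative one.
# Weights and feature values are hypothetical, not from the study.
def rad_score(features, weights, intercept=0.0):
    """Linear rad score: intercept + sum(w_i * x_i)."""
    return intercept + sum(w * x for w, x in zip(weights, features))

def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: P(positive score > negative score), ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            wins += 1.0 if p > q else (0.5 if p == q else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

weights = [0.8, -0.3]                        # hypothetical LASSO coefficients
ip_scores = [rad_score(f, weights) for f in [(2.0, 0.5), (1.5, 0.2)]]
np_scores = [rad_score(f, weights) for f in [(0.4, 1.0), (0.2, 0.8)]]
print(auc(ip_scores, np_scores))  # 1.0: the two toy groups separate perfectly
```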