1,132 research outputs found

    AI-driven synthetic biology for non-small cell lung cancer drug effectiveness-cost analysis in intelligent assisted medical systems

    Get PDF
According to statistics covering 36 cancer types in 185 countries, lung cancer ranks first in both incidence and mortality, and non-small cell lung cancer (NSCLC) accounts for 85% of lung cancer cases (International Agency for Research on Cancer, 2018; Bray et al., 2018). In many developing countries in particular, limited medical resources and large populations seriously hinder the diagnosis and treatment of lung cancer patients. The 21st century is an era of life medicine, big data, and information technology, and synthetic biology is regarded as the driving force of natural-product innovation and research in this era. Building on research into NSCLC targeted drugs, and through the cross-fertilization of synthetic biology and artificial intelligence with a bioengineering mindset, we construct an artificial-intelligence-assisted medical system and propose a drug selection framework for the personalized treatment of NSCLC patients. While ensuring efficacy, the system treats the economic cost of targeted drugs as an auxiliary decision-making factor and predicts drug effectiveness-cost. Experiments show that our method can use the provided clinical data to screen drug treatment programs suited to a patient's condition and assist doctors in making an efficient diagnosis.
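As a toy illustration of the effectiveness-cost idea (not the paper's actual system), the following Python sketch filters candidate drugs by a predicted-efficacy floor and then ranks the survivors by cost; the drug names, prices, and the efficacy model are hypothetical placeholders.

```python
# Minimal sketch of effectiveness-cost ranking for targeted-therapy
# selection; drug names, prices, and the efficacy model are hypothetical.
from dataclasses import dataclass

@dataclass
class Drug:
    name: str
    monthly_cost: float  # hypothetical cost in USD

def predicted_efficacy(drug: Drug, patient: dict) -> float:
    """Stand-in for the AI model: returns a response score in [0, 1]."""
    # e.g., an EGFR-mutant patient responds better to an EGFR inhibitor
    return 0.8 if patient.get("EGFR_mutation") and "tinib" in drug.name else 0.4

def rank_drugs(drugs, patient, min_efficacy=0.5):
    """Keep drugs above an efficacy floor, then prefer cheaper ones."""
    scored = [(d, predicted_efficacy(d, patient)) for d in drugs]
    eligible = [(d, e) for d, e in scored if e >= min_efficacy]
    return sorted(eligible, key=lambda de: (de[0].monthly_cost, -de[1]))

drugs = [Drug("gefitinib", 2500.0), Drug("osimertinib", 15000.0)]
print(rank_drugs(drugs, {"EGFR_mutation": True}))
```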

    Breast cancer diagnosis using a hybrid genetic algorithm for feature selection based on mutual information

    Get PDF
Feature selection is the process of selecting a subset of relevant features (i.e., predictors) for use in the construction of predictive models. This paper proposes a hybrid feature selection approach to breast cancer diagnosis that combines a Genetic Algorithm (GA) with Mutual Information (MI) to select the combination of cancer predictors with maximal discriminative capability. The selected features are then input into a classifier to predict whether a patient has breast cancer. Using a publicly available breast cancer dataset, experiments were performed to evaluate the performance of the GA-MI approach with two machine learning classifiers, k-Nearest Neighbor (kNN) and Support Vector Machine (SVM), tuned using different distance measures and kernel functions, respectively. The results revealed that the proposed hybrid approach is highly accurate for predicting breast cancer and promising for predicting other cancers from clinical data.
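One plausible realization of such a GA-MI pipeline, sketched in Python on the public Wisconsin breast cancer dataset (not necessarily the dataset or GA configuration used in the paper): mutual information scores the relevance of each feature, a simple genetic algorithm searches over binary feature masks, and a kNN classifier evaluates the winning subset.

```python
# Hedged sketch of GA + mutual-information feature selection; the GA
# settings (population, generations, mutation rate) are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)  # relevance of each feature

def fitness(mask):
    # Reward total relevance (MI) of the chosen subset, penalize its size.
    if mask.sum() == 0:
        return -np.inf
    return mi[mask].sum() - 0.05 * mask.sum()

# Simple generational GA over binary feature masks.
pop = rng.random((40, X.shape[1])) < 0.3
for _ in range(50):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)][-20:]          # truncation selection
    cut = rng.integers(1, X.shape[1], size=20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cut)])    # one-point crossover
    kids ^= rng.random(kids.shape) < 0.02            # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
acc = cross_val_score(KNeighborsClassifier(), X[:, best], y, cv=5).mean()
print(f"{best.sum()} features selected, 5-fold kNN accuracy = {acc:.3f}")
```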

    Network biomarkers, interaction networks and dynamical network biomarkers in respiratory diseases

    Get PDF
Identification and validation of interaction networks and network biomarkers have become increasingly critical in the development of disease-specific biomarkers, which change functionally during disease development, progression, or treatment. This review covers the definition, significance, research, and potential applications of network biomarkers, interaction networks, and dynamical network biomarkers (DNB). Disease-specific interaction networks, network biomarkers, and DNB have great significance for understanding molecular pathogenesis, risk assessment, disease classification and monitoring, and the evaluation of therapeutic responses and toxicities. Protein-based DNB will provide more information to distinguish the normal from the pre-disease stage, which may enable early diagnosis. Clinical bioinformatics should be a key approach to the identification and validation of disease-specific biomarkers.
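For concreteness, the composite index commonly used in the DNB literature to score a candidate gene module rewards members that fluctuate strongly, correlate tightly with one another, and decouple from the rest of the network. A minimal Python sketch of that formulation, using random placeholder data rather than any dataset from this review:

```python
# Hedged sketch of the dynamical-network-biomarker (DNB) composite index:
# CI = (avg. member SD) * (avg. within-module |PCC|) / (avg. module-to-rest |PCC|).
import numpy as np

def dnb_composite_index(expr: np.ndarray, module: np.ndarray) -> float:
    """expr: genes x samples matrix; module: boolean mask of member genes."""
    inside, outside = expr[module], expr[~module]
    sd_in = inside.std(axis=1).mean()                  # average fluctuation
    r_in = np.abs(np.corrcoef(inside))                 # within-module |PCC|
    pcc_in = r_in[np.triu_indices_from(r_in, k=1)].mean()
    n_in = inside.shape[0]
    r_cross = np.abs(np.corrcoef(inside, outside)[:n_in, n_in:])
    pcc_out = r_cross.mean()                           # module-to-rest |PCC|
    return sd_in * pcc_in / pcc_out

expr = np.random.default_rng(1).normal(size=(100, 20))  # 100 genes, 20 samples
module = np.zeros(100, dtype=bool)
module[:5] = True                                       # candidate 5-gene module
print(dnb_composite_index(expr, module))
```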

    Artificial intelligence for imaging in immunotherapy

    Get PDF

    Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions

    Full text link
Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammogram, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
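As a minimal illustration of the transfer-learning pattern that recurs throughout the surveyed work (not any specific paper's model), the sketch below fine-tunes an ImageNet-pretrained ResNet-18 for a binary benign/malignant decision; the input batch is a random placeholder standing in for preprocessed mammogram patches.

```python
# Hedged transfer-learning sketch: swap the classification head of a
# pretrained ResNet and run one training step on placeholder data.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # benign vs. malignant head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: 8 RGB images at 224x224 with dummy labels; a real
# pipeline would load and preprocess an actual mammogram dataset here.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")
```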

AI-based volumetric analysis of liver metastasis burden in patients with neuroendocrine neoplasms (NEN)

    Get PDF
Background: Quantification of liver tumor load in patients with liver metastases from neuroendocrine neoplasms is essential for therapeutic management. However, accurate measurement of three-dimensional (3D) volumes is time-consuming and difficult to achieve. Even though common criteria for assessing treatment response have simplified the measurement of liver metastases, the workload of following up patients with neuroendocrine liver metastases (NELMs) remains heavy for radiologists because of these patients' increased morbidity and prolonged survival. Among the many imaging methods, gadoxetic acid (Gd-EOB)-enhanced magnetic resonance imaging (MRI) has shown the highest accuracy. Methods: 3D volumetric segmentations of NELMs and livers were performed manually in 278 Gd-EOB MRI scans from 118 patients. Eighty percent (222 scans) were randomly assigned to the training dataset and the remaining 20% (56 scans) to the internal validation dataset. An additional 33 patients from a different time period, who underwent Gd-EOB MRI at both baseline and 12-month follow-up examinations, were collected for external and clinical validation (n = 66 scans). The model's measurements (NELM volume; hepatic tumor load (HTL)) and the respective absolute (ΔabsNELM; ΔabsHTL) and relative changes (ΔrelNELM; ΔrelHTL) between baseline and follow-up imaging were correlated with multidisciplinary cancer conference (MCC) decisions (treatment success/failure). Three readers manually segmented the MRI images slice by slice, blinded to clinical data and independently of one another. All segmentations were reviewed by a senior radiologist. Results: The model segmented NELM and liver with high accuracy in both internal and external validation (Matthews correlation coefficient (ϕ): 0.76/0.95 and 0.80/0.96, respectively). In the internal validation dataset, the group with higher NELM volume (> 16.17 cm³) showed a higher ϕ than the group with lower NELM volume (ϕ = 0.80 vs. 0.71; p = 0.0025). In the external validation dataset, all response variables (ΔabsNELM; ΔabsHTL; ΔrelNELM; ΔrelHTL) differed significantly across MCC decision groups (all p < 0.001). The AI model correctly detected the response trend based on ΔrelNELM and ΔrelHTL in all 33 MCC patients and showed optimal discrimination between treatment success and failure at +56.88% and +57.73%, respectively (AUC: 1.000; p < 0.001). Conclusions: The AI-based segmentation model performed well in the three-dimensional quantification of NELMs and HTL in Gd-EOB MRI and showed good agreement with the MCC's assessment of treatment response.
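The volumetry underlying the reported response measures reduces to counting foreground voxels and comparing time points. A minimal sketch, with illustrative voxel spacing and random placeholder masks; the ~+57% cutoff mirrors the optimal discrimination threshold reported above:

```python
# Hedged sketch of segmentation-based volumetry and relative change.
import numpy as np

def volume_cm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Foreground voxel count times voxel volume, converted to cm³."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def relative_change(baseline_cm3: float, followup_cm3: float) -> float:
    """ΔrelNELM in percent between baseline and follow-up."""
    return 100.0 * (followup_cm3 - baseline_cm3) / baseline_cm3

rng = np.random.default_rng(0)
base = rng.random((64, 64, 32)) > 0.8     # placeholder baseline mask
follow = rng.random((64, 64, 32)) > 0.7   # placeholder follow-up mask
d_rel = relative_change(volume_cm3(base), volume_cm3(follow))
print(f"ΔrelNELM = {d_rel:+.1f}% ->", "failure" if d_rel > 57.0 else "success")
```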

    Artificial Intelligence Analysis of Gene Expression Predicted the Overall Survival of Mantle Cell Lymphoma and a Large Pan-Cancer Series

    Get PDF
Mantle cell lymphoma (MCL) is a subtype of mature B-cell non-Hodgkin lymphoma characterized by poor prognosis. First, we analyzed a series of 123 cases (GSE93291). An algorithm combining a multilayer perceptron artificial neural network, radial basis function networks, gene set enrichment analysis (GSEA), and conventional statistics correlated 20,862 genes with 28 MCL prognostic genes for dimensionality reduction, to predict the patients' overall survival and highlight new markers. As a result, 58 genes predicted survival with high accuracy (area under the curve = 0.9). Further reduction identified 10 genes: KIF18A, YBX3, PEMT, GCNA, and POGLUT3, associated with poor survival; and SELENOP, AMOTL2, IGFBP7, KCTD12, and ADGRG2, associated with favorable survival. Correlation with the proliferation index (Ki67) was also assessed. Interestingly, these genes, which are related to cell cycle, apoptosis, and metabolism, also predicted the survival of diffuse large B-cell lymphoma (GSE10846, n = 414) and of a pan-cancer series from The Cancer Genome Atlas (TCGA, n = 7,289) that included the most relevant cancers (lung, breast, colorectal, prostate, stomach, liver, etc.). Secondly, survival was predicted using 10 oncology panels (transcriptome, cancer progression and pathways, metabolic pathways, immuno-oncology, and host response), and TYMS was highlighted. Finally, among machine learning methods, the C5 decision tree and Bayesian network achieved the highest prediction accuracy, and correlation with the LLMPP MCL35 proliferation assay and with RGS1 was assessed. In conclusion, artificial intelligence analysis predicted the overall survival of MCL with high accuracy and highlighted genes that predicted the survival of a large pan-cancer series.
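A schematic version of this survival-prediction setup, with synthetic placeholder data rather than GSE93291: a small multilayer perceptron classifies overall-survival outcome from a reduced gene panel and is scored by AUC.

```python
# Hedged sketch: MLP survival classification from a reduced gene panel.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(123, 10))            # 123 cases x 10-gene signature
# Synthetic outcome: two genes drive survival, plus noise.
y = (X[:, 0] - X[:, 5] + rng.normal(scale=0.5, size=123) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.2f}")
```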

    Prostate Cancer Tissue Biomarkers

    Get PDF

    Multimodal Data Fusion and Quantitative Analysis for Medical Applications

    Get PDF
Medical big data is not only enormous in size but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and handwritten doctor notes). Multimodal data fusion is an emerging field that addresses this challenge, aiming to process and analyze complex, diverse, and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence of the data (such as the genetic essence underlying medical imaging and clinical symptoms). Multimodal data fusion thus benefits a wide range of quantitative medical applications, including personalized patient care, better treatment planning, and preventive public health. Although there has been extensive research on computational approaches to multimodal fusion, three major challenges remain in quantitative medical applications, summarized as feature-level, information-level, and knowledge-level fusion:

• Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional, small-sample multimodal medical datasets, which hinders the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension reduction algorithms are required to alleviate the "curse of dimensionality" and to satisfy the criteria for discovering interpretable, relevant, non-redundant, and generalizable multimodal biomarkers.

• Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion under label supervision, methods that explicitly explore inter-modal relationships in medical applications are lacking. Unsupervised multimodal learning can mine inter-modal relationships, reduce the need for labor-intensive labeled data, and explore potential undiscovered biomarkers; however, mining discriminative information without label supervision remains an open challenge. Furthermore, interpreting complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, which hinders the exploration of multimodal interactions in disease mechanisms.

• Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features of single lesions with either feature engineering or deep learning has been investigated in recent years, both approaches neglect inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet missing from current feature engineering and deep learning methods. Incorporating domain knowledge with the knowledge distilled from multi-focus regions is a further challenge in knowledge-level fusion.

To address these three challenges, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, the major contributions of this thesis are:

• To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant, and generalizable multimodal biomarkers from high-dimensional, small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria of representativeness, robustness, discriminability, and non-redundancy are enforced by consensus clustering, the Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and nomograms are employed to further enhance feature interpretability in machine learning models.

• To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework, based on canonical correlation analysis (CCA), for 1) cohesive multimodal fusion of medical imaging and non-imaging data and 2) interpretation of complex non-linear cross-modal associations (a minimal CCA sketch follows this list). Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning, by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module deciphers complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.

• To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework, based on persistent homology, for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Unlike conventional feature engineering and deep learning, the DTA framework explicitly quantifies inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct a dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is then tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into an Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable, clinically important factors.
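As a point of reference for the information-level contribution, classical CCA (the core on which the deep correlational framework builds, not the thesis's own loss functions) can be sketched in a few lines: two synthetic feature blocks sharing a latent signal are projected into a common space where their correlation is maximal.

```python
# Hedged sketch of information-level fusion via classical CCA; the
# imaging and clinical feature blocks are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                       # shared signal
X_imaging = latent @ rng.normal(size=(2, 30)) + 0.5 * rng.normal(size=(200, 30))
X_clinical = latent @ rng.normal(size=(2, 8)) + 0.5 * rng.normal(size=(200, 8))

cca = CCA(n_components=2).fit(X_imaging, X_clinical)
U, V = cca.transform(X_imaging, X_clinical)              # canonical variates
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: r = {r:.2f}")
fused = np.hstack([U, V])                                # fused representation
```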