
    Simple and Effective Visual Models for Gene Expression Cancer Diagnostics

    In the paper we show that diagnostic classes in cancer gene expression data sets, which most often include thousands of features (genes), may be effectively separated with simple two-dimensional plots such as scatterplots and RadViz graphs. The principal innovation proposed in the paper is a method called VizRank, which is able to score and identify the best among possibly millions of candidate projections for visualization. Compared to recently much-applied techniques in the field of cancer genomics, which include neural networks, support vector machines and various ensemble-based approaches, VizRank is fast and finds visualization models that can be easily examined and interpreted by domain experts. Our experiments on a number of gene expression data sets show that VizRank was always able to find data visualizations with a small number (two to seven) of genes and excellent class separation. In addition to providing grounds for gene expression cancer diagnosis, VizRank and its visualizations also identify small sets of relevant genes, uncover interesting gene interactions and point to outliers and potential misclassifications in cancer data sets.
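
    The core idea, scoring candidate low-dimensional projections by how well they separate the diagnostic classes, can be illustrated with a minimal sketch. This is not the authors' implementation (VizRank also scores RadViz projections and searches heuristically over millions of candidates); here we simply exhaust all two-gene scatterplots of a toy data set and rank them by cross-validated k-NN accuracy.

```python
# Minimal sketch of VizRank-style projection scoring (illustrative, not the
# original method): enumerate candidate two-gene scatterplot projections and
# rank them by how well a simple k-NN classifier separates the classes in 2D.
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def score_projection(X2, y, k=5):
    """Class-separation score of a 2D projection: mean CV accuracy of k-NN."""
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, X2, y, cv=5).mean()

def rank_two_gene_projections(X, y, top=10):
    """Score every two-gene scatterplot and return the best-separating pairs."""
    scored = []
    for i, j in combinations(range(X.shape[1]), 2):
        scored.append((score_projection(X[:, [i, j]], y), (i, j)))
    return sorted(scored, reverse=True)[:top]

# Toy usage with synthetic expression data (samples x genes).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = np.repeat([0, 1], 30)
X[y == 1, 3] += 2.0          # make genes 3 and 7 informative
X[y == 1, 7] -= 2.0
print(rank_two_gene_projections(X, y, top=3))
```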

    PaperRobot: Incremental Draft Generation of Scientific Ideas

    We present PaperRobot, which performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; (3) incrementally writing some key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate the conclusion and future work, and finally from the future work to generate a title for a follow-on paper. Turing Tests, where a biomedical domain expert is asked to compare a system output and a human-authored string, show that PaperRobot-generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively. Comment: 12 pages. Accepted by ACL 2019. Code and resources are available at https://github.com/EagleW/PaperRobo
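
    The "predict new links from the background KG" step can be pictured with a highly simplified sketch. PaperRobot itself combines graph attention with contextual text attention; the stand-in below only illustrates the general mechanics of scoring candidate (head, relation, tail) triples with learned embeddings, using a TransE-style distance. All entity and relation names, and the random embeddings, are made up for illustration.

```python
# Illustrative stand-in for KG link prediction (NOT PaperRobot's model):
# score candidate triples with TransE-style embeddings, head + relation ~ tail.
import numpy as np

rng = np.random.default_rng(42)
DIM = 16
entities = ["gene_A", "disease_B", "drug_C", "pathway_D"]   # hypothetical KG nodes
relations = ["treats", "regulates"]                          # hypothetical edge types

# Random vectors stand in for embeddings trained on the background KG.
ent_emb = {e: rng.normal(size=DIM) for e in entities}
rel_emb = {r: rng.normal(size=DIM) for r in relations}

def triple_score(head, relation, tail):
    """Higher score => more plausible link under the TransE assumption."""
    return -np.linalg.norm(ent_emb[head] + rel_emb[relation] - ent_emb[tail])

# Rank candidate tails for a query (head, relation) to propose "new" links.
candidates = [(t, triple_score("drug_C", "treats", t))
              for t in entities if t != "drug_C"]
print(sorted(candidates, key=lambda c: c[1], reverse=True))
```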

    Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives

    Artificial intelligence (AI) is the field of computer science that aims to build smart devices that perform tasks currently requiring human intelligence. Through machine learning (ML) and deep learning (DL), computers are taught to learn by example, much as human beings do naturally. AI is revolutionizing healthcare. Digital pathology is increasingly assisted by AI, helping researchers analyze larger data sets and provide faster and more accurate diagnoses of prostate cancer lesions. When applied to diagnostic imaging, AI has shown excellent accuracy in the detection of prostate lesions as well as in the prediction of patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires the fast, reliable and accurate computing power provided by machine learning algorithms. Radiotherapy is an essential part of prostate cancer treatment, and it is often difficult to predict its toxicity for patients. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects. These technologies could provide doctors with better insights on how to plan radiotherapy treatment. Extending the capabilities of surgical robots towards more autonomous tasks will allow them to use information from the surgical field, recognize issues and implement the proper actions without the need for human intervention.

    Artificial intelligence in histopathology image analysis for cancer precision medicine

    In recent years, there have been rapid advancements in the field of computational pathology. This has been enabled through the adoption of digital pathology workflows that generate digital images of histopathological slides, the publication of large data sets of these images and improvements in computing infrastructure. Objectives in computational pathology can be subdivided into two categories: first, the automation of routine workflows that would otherwise be performed by pathologists, and second, the addition of novel capabilities. This thesis focuses on the development, application, and evaluation of methods in this second category, specifically the prediction of gene expression from pathology images and the registration of pathology images among each other. In Study I, we developed a computationally efficient cluster-based technique to perform transcriptome-wide predictions of gene expression in prostate cancer from H&E-stained whole-slide images (WSIs). The suggested method outperforms several baseline methods and is non-inferior to single-gene CNN predictions, while reducing the computational cost by a factor of approximately 300. We included 15,586 protein-coding transcripts in the analysis and predicted their expression with different modelling approaches from the WSIs. In a cross-validation, 6,618 of these predictions were significantly associated with the RNA-seq expression estimates with FDR-adjusted p-values <0.001. Upon validation of these 6,618 expression predictions in a held-out test set, the association could be confirmed for 5,419 (81.9%). Furthermore, we demonstrated that it is feasible to predict the prognostic cell-cycle progression score with a Spearman correlation to the RNA-seq score of 0.527 [0.357, 0.665]. The objective of Study II is the investigation of attention layers in the context of multiple-instance learning for regression tasks, exemplified by a simulation study and gene expression prediction. We find that for gene expression prediction, the compared methods are not distinguishable regarding their performance, which indicates that attention mechanisms may not be superior to weakly supervised learning in this context. Study III describes the results of the ACROBAT 2022 WSI registration challenge, which we organised in conjunction with the MICCAI 2022 conference. Participating teams were ranked on the median 90th percentile of distances between registered and annotated target landmarks. Median 90th percentiles for the eight teams eligible for ranking on the test set of 303 WSI pairs ranged from 60.1 µm to 15,938.0 µm. The best-performing method therefore scored slightly below the 67.0 µm median 90th percentile of distances between the first and second annotators. Study IV describes the data set that we published to facilitate the ACROBAT challenge. The data set is available publicly through the Swedish National Data Service SND and consists of 4,212 WSIs from 1,153 breast cancer patients. Study V is an example of the application of WSI registration for computational pathology. In this study, we investigate the possibility of registering invasive cancer annotations from H&E to KI67 WSIs and then training cancer detection models. To this end, we compare the performance of models optimised with registered annotations to the performance of models optimised with annotations generated for the KI67 WSIs. The data set consists of 272 female breast cancer cases, including an internal test set of 54 cases. We find that in this test set the two models are not distinguishable in terms of performance, while there are small differences in model calibration.
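
    The ranking metric described for Study III is simple enough to sketch: per WSI pair, take the 90th percentile of Euclidean distances between registered and annotated target landmarks, then summarise a method by the median of these per-case values. The sketch below is only an illustration of that description; the official challenge evaluation may differ in details, and the toy coordinates are invented.

```python
# Sketch of an ACROBAT-style ranking metric: per-case 90th percentile of
# landmark distances, summarised across cases by the median (units follow the
# landmark coordinates, e.g. micrometres).
import numpy as np

def case_p90(registered_xy, target_xy):
    """90th percentile of landmark distances for a single WSI pair."""
    d = np.linalg.norm(np.asarray(registered_xy) - np.asarray(target_xy), axis=1)
    return np.percentile(d, 90)

def ranking_score(cases):
    """Median of the per-case 90th percentiles across all evaluated WSI pairs."""
    return float(np.median([case_p90(reg, tgt) for reg, tgt in cases]))

# Toy usage: two cases, each with a handful of (x, y) landmarks in µm.
rng = np.random.default_rng(1)
cases = []
for _ in range(2):
    tgt = rng.uniform(0, 10_000, size=(8, 2))
    reg = tgt + rng.normal(scale=40.0, size=tgt.shape)  # ~40 µm registration error
    cases.append((reg, tgt))
print(ranking_score(cases))
```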

    Rank discriminants for predicting phenotypes from RNA expression

    Statistical methods for analyzing large-scale biomolecular data are commonplace in computational biology. A notable example is phenotype prediction from gene expression data, for instance, detecting human cancers, differentiating subtypes and predicting clinical outcomes. Still, clinical applications remain scarce. One reason is that the complexity of the decision rules that emerge from standard statistical learning impedes biological understanding, in particular, any mechanistic interpretation. Here we explore decision rules for binary classification utilizing only the ordering of expression among several genes; the basic building blocks are then two-gene expression comparisons. The simplest example, just one comparison, is the TSP classifier, which has appeared in a variety of cancer-related discovery studies. Decision rules based on multiple comparisons can better accommodate class heterogeneity, and thereby increase accuracy, and might provide a link with biological mechanism. We consider a general framework ("rank-in-context") for designing discriminant functions, including a data-driven selection of the number and identity of the genes in the support ("context"). We then specialize to two examples: voting among several pairs and comparing the median expression in two groups of genes. Comprehensive experiments assess accuracy relative to other, more complex, methods, and reinforce earlier observations that simple classifiers are competitive. Comment: Published at http://dx.doi.org/10.1214/14-AOAS738 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
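
    The two-gene comparison building block is easy to make concrete. Below is a minimal sketch, not the paper's implementation, of a TSP-style rule and its voting extension: each selected pair votes for a class depending only on whether one gene's expression exceeds the other's, which makes the rule invariant to any monotone per-sample normalisation. The pair score used here is the standard |P(Xi < Xj | class 0) - P(Xi < Xj | class 1)| criterion; the toy data are synthetic.

```python
# Minimal sketch of a TSP / voting-among-pairs classifier (illustrative).
from itertools import combinations

import numpy as np

def pair_scores(X, y):
    """Score each gene pair by |P(Xi < Xj | y=0) - P(Xi < Xj | y=1)|."""
    scores = {}
    for i, j in combinations(range(X.shape[1]), 2):
        p0 = np.mean(X[y == 0, i] < X[y == 0, j])
        p1 = np.mean(X[y == 1, i] < X[y == 1, j])
        scores[(i, j)] = abs(p0 - p1)
    return scores

def vote_predict(X, pairs, class_if_less):
    """Majority vote over selected pairs; class_if_less[k] is the class a pair
    votes for when its first gene is expressed below its second."""
    votes = np.zeros(len(X))
    for (i, j), cls in zip(pairs, class_if_less):
        votes += np.where(X[:, i] < X[:, j], cls, 1 - cls)
    return (votes > len(pairs) / 2).astype(int)

# Toy usage: select and orient the single top-scoring pair (the TSP classifier).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))
y = np.repeat([0, 1], 40)
X[y == 1, 2] += 1.5                      # gene 2 up-regulated in class 1
scores = pair_scores(X, y)
i, j = max(scores, key=scores.get)
cls = int(np.mean(X[y == 1, i] < X[y == 1, j]) > np.mean(X[y == 0, i] < X[y == 0, j]))
print((i, j), (vote_predict(X, [(i, j)], [cls]) == y).mean())
```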

    Artificial intelligence in cancer imaging: Clinical challenges and applications

    Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.

    Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile

    While medical imaging and general pathology are routine in cancer diagnosis, genetic sequencing is not always accessible due to the strong phenotypic and genetic heterogeneity of human cancers. Image-genomics integrates medical imaging and genetics to provide a complementary approach to optimise cancer diagnosis by associating tumour imaging traits with clinical data, and has demonstrated its potential in identifying imaging surrogates for tumour biomarkers. However, existing image-genomics research has focused on quantifying tumour visual traits according to human understanding, which may not be optimal across different cancer types. The challenge hence lies in the extraction of optimised imaging representations in an objective, data-driven manner. Such an approach requires large volumes of annotated image data that are difficult to acquire. We propose a deep domain adaptation learning framework for associating image features with tumour genetic information, exploiting the ability of domain adaptation techniques to learn relevant image features from close knowledge domains. Our proposed framework leverages the current state-of-the-art in image object recognition to provide image features that encode subtle variations of tumour phenotypic characteristics via domain adaptation techniques. The proposed framework was evaluated against the current state-of-the-art in (i) tumour histopathology image classification and (ii) image-genomics associations. The proposed framework demonstrated improved accuracy of tumour classification, as well as providing additional data-derived representations of tumour phenotypic characteristics that exhibit strong image-genomics association. This thesis advances and indicates the potential of image-genomics research to reveal additional imaging surrogates for genetic biomarkers, which has the potential to facilitate cancer diagnosis.
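
    The general idea of adapting an object-recognition network from a "close" source domain to histopathology, then relating the learned image features to genetic data, can be sketched as follows. This is a minimal illustration under assumptions of our own (ImageNet-pretrained ResNet-18 backbone, a frozen feature extractor with a small trainable head, and a simple feature-vs-expression correlation), not the thesis' actual pipeline; the tensors used are random placeholders.

```python
# Illustrative transfer/domain-adaptation sketch (not the proposed framework):
# reuse an ImageNet-pretrained CNN for histopathology patches, train only a
# small task head, then correlate image features with gene-expression values.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on natural-image object recognition (downloads weights;
# requires torchvision >= 0.13 for the string weights API).
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()              # expose 512-d image features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False              # freeze source-domain weights

head = nn.Linear(512, 2)                 # small head adapted to tumour classes
optimiser = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(patches, labels):
    """One adaptation step on a batch of (N, 3, 224, 224) image patches."""
    with torch.no_grad():
        feats = backbone(patches)
    loss = criterion(head(feats), labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return feats, loss.item()

# Toy image-genomics association: correlate one deep feature with a gene's
# expression across samples (all data here are random placeholders).
feats, _ = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
expression = np.random.default_rng(0).normal(size=8)
corr = np.corrcoef(feats[:, 0].numpy(), expression)[0, 1]
print(f"feature-vs-expression correlation: {corr:.2f}")
```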