
    Computational models and approaches for lung cancer diagnosis

    The success of treatment of patients with cancer depends on establishing an accurate diagnosis. To this end, the aim of this study is to develop novel lung cancer diagnostic models. New algorithms are proposed to analyse biological data and extract knowledge that assists in achieving accurate diagnostic results.

    Learning predictive models from temporal three-way data using triclustering: applications in clinical data analysis

    Master's thesis, Ciência de Dados, Universidade de Lisboa, Faculdade de Ciências, 2020. Triclustering extends biclustering to the three-dimensional space, aiming to find coherent subspaces in three-way data (sets of objects described by subsets of features in a subset of contexts). When the context is time, the need to learn interesting temporal patterns and use them to learn effective and interpretable predictive models calls for new research methodologies for three-way data analysis. In this work, we propose two approaches to learn predictive models from three-way data: 1) a triclustering-based classifier (considering just temporal data) and 2) a mixture of biclustering (with static data) and triclustering (with temporal data). In the first approach, we find the best triclustering parameters to uncover the best triclusters (sets of objects with a coherent pattern along a set of time-points) and then use these patterns as features in a state-of-the-art classifier. In the case of temporal data, we propose to couple the classifier with a temporal triclustering approach.
With this aim, we devised a temporally constrained triclustering algorithm, termed TCtriCluster, to mine time-contiguous triclusters. In the second approach, we extended the triclustering-based classifier with a biclustering task, where biclusters are discovered in static data (data that do not change over time) and integrated with triclusters to improve performance and model explainability. The proposed methodologies were used to predict the need for non-invasive ventilation (NIV) in patients with Amyotrophic Lateral Sclerosis (ALS). In this case study, we learnt a general prognostic model from all patients' data, and specialized models after stratifying patients into Slow, Neutral, and Fast progressors. Our results show that, besides achieving comparable and sometimes superior results to a high-performing random forest classifier, our predictive models enhance prediction with the potential of model interpretability. Indeed, when using triclusters (and biclusters) as predictors, we promote the use of highly interpretable disease progression patterns. Furthermore, when used for prognostic prediction in ALS, our interpretable predictive models unravelled clinically relevant, group-specific disease progression patterns, helping clinicians to understand the high heterogeneity of ALS progression. Results further show that the temporal restriction is effective in improving the predictive performance of the models.
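The pattern-as-feature step described above can be sketched in a few lines: each discovered tricluster becomes one binary feature indicating whether a patient follows that pattern over the tricluster's time-points. The patient data, feature names, and pattern below are hypothetical toy values, not output of TCtriCluster:

```python
# Sketch: using discovered tricluster patterns as binary classifier features.
# The pattern and feature names are hypothetical, not TCtriCluster output.

def matches_tricluster(patient, tricluster):
    """True if the patient holds the tricluster's expected value for every
    listed feature at every listed time-point."""
    feats, times, expected = (tricluster["features"],
                              tricluster["times"],
                              tricluster["values"])
    return all(patient[t][f] == expected[f] for t in times for f in feats)

def patterns_to_features(patients, triclusters):
    """One binary feature per tricluster: does this patient match it?"""
    return [[int(matches_tricluster(p, tc)) for tc in triclusters]
            for p in patients]

# Toy temporal data: patient -> {time-point -> {feature -> discretised value}}
patients = [
    {0: {"FVC": "low", "ALSFRS": "low"}, 1: {"FVC": "low", "ALSFRS": "low"}},
    {0: {"FVC": "high", "ALSFRS": "low"}, 1: {"FVC": "high", "ALSFRS": "low"}},
]
tricluster = {"features": ["FVC"], "times": [0, 1], "values": {"FVC": "low"}}
X = patterns_to_features(patients, [tricluster])
```

The resulting binary matrix `X` can then be fed to any standard classifier, which is what makes the learned model readable as a set of disease progression patterns.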

    Evolutionary approaches for feature selection in biological data

    Data mining techniques have been used widely in many areas such as business, science, engineering and medicine. These techniques allow a vast amount of data to be explored in order to extract useful information. One of the foci in the health area is finding interesting biomarkers in biomedical data. High-throughput data generated from microarrays and mass spectrometry of biological samples are high-dimensional and small in sample size. Examples include DNA microarray datasets with up to 500,000 genes and mass spectrometry data with 300,000 m/z values. While the availability of such datasets can aid in the development of techniques/drugs to improve diagnosis and treatment of diseases, a major challenge is analysing them to extract useful and meaningful information. The aims of this project are: 1) to investigate and develop feature selection algorithms that incorporate various evolutionary strategies, 2) to use the developed algorithms to find the "most relevant" biomarkers contained in biological datasets, and 3) to evaluate the goodness of extracted feature subsets for relevance (examined in terms of existing biomedical domain knowledge and classification accuracy obtained using different classifiers). The project aims to generate good predictive models for classifying diseased samples from controls.
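A minimal sketch of the evolutionary strategy described above, under stated assumptions: bit-mask individuals encode feature subsets, and a toy fitness function (with `RELEVANT` as invented ground truth) stands in for the classifier accuracy a real run would use:

```python
import random

# Sketch of a genetic feature-selection loop. The fitness function is a toy
# stand-in: real fitness would be classifier accuracy on the chosen subset.
random.seed(0)
N_FEATURES = 20
RELEVANT = {2, 5, 11}  # assumed ground-truth biomarkers for the toy fitness

def fitness(mask):
    """Reward covering relevant features; penalise large subsets."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & RELEVANT) - 0.05 * len(chosen)

def mutate(mask, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in mask]

def evolve(pop_size=30, generations=60):
    # Seed with the empty subset so the best fitness never drops below zero.
    pop = [[0] * N_FEATURES]
    pop += [[random.randint(0, 1) for _ in range(N_FEATURES)]
            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the top half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
selected = sorted(i for i, bit in enumerate(best) if bit)
```

Crossover is omitted for brevity; the elitism step guarantees the best subset found is never lost between generations.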

    Investigating data mining techniques for extracting information from Alzheimer's disease data

    Data mining techniques have been used widely in many areas such as business, science, engineering and, more recently, clinical medicine. These techniques allow an enormous amount of high-dimensional data to be analysed for extraction of interesting information as well as the construction of predictive models. One of the foci in health-related research is Alzheimer's disease, which is currently incurable and whose diagnosis can only be confirmed after death via autopsy. Using multi-dimensional data and applications of data mining techniques, researchers hope to find biomarkers that will diagnose Alzheimer's disease as early as possible. The primary purpose of this research project is to investigate the application of data mining techniques for finding interesting biomarkers in a set of Alzheimer's disease related data. The findings from this project will help to analyse the data more effectively and contribute to methods of providing earlier diagnosis of the disease.

    Histopathological image analysis : a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Design of a multi-signature ensemble classifier predicting neuroblastoma patients' outcome

    Background: Neuroblastoma is the most common pediatric solid tumor of the sympathetic nervous system. Development of improved predictive tools for patient stratification is a crucial requirement for neuroblastoma therapy. Several studies utilized gene-expression-based signatures to stratify neuroblastoma patients and demonstrated a clear advantage of adding genomic analysis to risk assessment. There is little overlap among signatures, and merging their prognostic potential would be advantageous. Here, we describe a new strategy to merge published neuroblastoma-related gene signatures into a single, highly accurate, Multi-Signature Ensemble (MuSE) classifier of neuroblastoma (NB) patient outcome.

    Methods: Gene expression profiles of 182 neuroblastoma tumors, subdivided into three independent datasets, were used in the various phases of development and validation of the NB-MuSE-classifier. Thirty-three signatures were evaluated for patient outcome prediction using 22 classification algorithms each, generating 726 classifiers and prediction results. The best-performing algorithm for each signature was selected and validated on an independent dataset, and the 20 signatures performing with an accuracy >= 80% were retained.

    Results: We combined the 20 predictions associated with the corresponding signatures, through the selection of the best-performing algorithm, into a single outcome predictor. The best performance was obtained by the Decision Table algorithm, which produced the NB-MuSE-classifier characterized by an external validation accuracy of 94%. Kaplan-Meier curves and the log-rank test demonstrated that patients with good and poor outcome predictions by the NB-MuSE-classifier have significantly different survival (p < 0.0001). Survival curves constructed on subgroups of patients divided on the basis of known prognostic markers suggested an excellent stratification of localized and stage 4s tumors, but more data are needed to prove this point.

    Conclusions: The NB-MuSE-classifier is based on an ensemble approach that merges twenty heterogeneous, neuroblastoma-related gene signatures to blend their discriminating power, rather than numeric values, into a single, highly accurate patient outcome predictor. The novelty of our approach derives from the way the gene expression signatures are integrated, by optimally associating each with a single paradigm and ultimately integrating them into a single classifier. This model can be exported to other types of cancer and to diseases for which dedicated databases exist.
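The merging of per-signature predictions can be illustrated with a simple majority vote. Note that the paper itself selects a Decision Table algorithm as the combiner, so the vote below is only a stand-in for the ensemble idea, with hypothetical class labels:

```python
from collections import Counter

# Sketch of merging per-signature outcome predictions into one call.
# Majority voting stands in for the Decision Table combiner used in the
# paper; the signature classifiers and their labels are hypothetical.

def ensemble_predict(per_signature_predictions):
    """Return the outcome class predicted by most signature classifiers."""
    return Counter(per_signature_predictions).most_common(1)[0][0]

# Hypothetical predictions from three signature-specific classifiers
outcome = ensemble_predict(["good", "poor", "good"])
```

The key design point the abstract makes is that class predictions, not raw numeric scores, are merged, which lets heterogeneous signatures with incompatible scales contribute equally.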

    Doctor of Philosophy

    dissertation. In its report To Err is Human, the Institute of Medicine recommended the implementation of internal and external, voluntary and mandatory, automatic reporting systems to increase detection of adverse events. Knowledge Discovery in Databases (KDD) allows the detection of patterns and trends that would be hidden or less detectable if analyzed by conventional methods. The objective of this study was to examine novel KDD techniques used by other disciplines to create predictive models using healthcare data, and to validate the results through clinical domain expertise and performance measures. Patient records for the present study were extracted from the enterprise data warehouse (EDW) of Intermountain Healthcare. Patients with reported adverse events were identified from ICD9 codes. A clinical classification of the ICD9 codes was developed, and the clinical categories were analyzed for risk factors for adverse events, including adverse drug events. Pharmacy data were categorized and used for detection of drugs administered in temporal sequence with antidote drugs. Data sampling and data boosting algorithms were used as signal amplification techniques. Decision trees, Naïve Bayes, Canonical Correlation Analysis, and Sequence Analysis were used as machine learning algorithms. Performance measures of the classification algorithms demonstrated statistically significant improvement after the transformation of the dataset through KDD techniques, data boosting, and sampling. Domain expertise was applied to validate the clinical significance of the results. KDD methodologies were applied successfully to a complex clinical dataset, and their use was empirically proven effective on healthcare data through statistically significant measures and clinical validation. Although more research is required, we demonstrated the usefulness of KDD methodologies in knowledge extraction from complex clinical data.
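The drug-then-antidote temporal signal described above can be sketched as a window scan over administration records. The drug/antidote pairs and the 48-hour window below are illustrative assumptions, not the study's actual dictionary:

```python
# Sketch of flagging possible adverse drug events: a drug followed by its
# known antidote within a time window. The antidote dictionary and the
# 48-hour window are hypothetical choices for illustration.

ANTIDOTE_FOR = {"warfarin": "vitamin K", "heparin": "protamine"}
WINDOW_HOURS = 48

def flag_adverse_drug_events(administrations):
    """administrations: list of (hour, drug) tuples for one patient.
    Flag each drug whose antidote is given within WINDOW_HOURS after it."""
    flags = []
    for t1, drug in administrations:
        antidote = ANTIDOTE_FOR.get(drug)
        if antidote is None:
            continue
        for t2, other in administrations:
            if other == antidote and 0 < t2 - t1 <= WINDOW_HOURS:
                flags.append((drug, antidote, t2 - t1))
    return flags

# Toy record: warfarin at hour 0, its antidote 30 hours later
record = [(0, "warfarin"), (10, "aspirin"), (30, "vitamin K")]
flags = flag_adverse_drug_events(record)
```

In the study's pipeline, signals like these would then be amplified by sampling and boosting before training the classifiers.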

    Early Diagnosis for Dengue Disease Prediction Using Efficient Machine Learning Techniques Based on Clinical Data

    Dengue fever is a worldwide issue, especially in Yemen. Although early detection is critical to reducing dengue deaths, accurate dengue diagnosis requires a long time due to the numerous clinical examinations needed. This issue therefore necessitates the development of a new diagnostic schema. The objective of this work is to develop a model for the earlier diagnosis of dengue disease using Efficient Machine Learning Techniques (EMLT). This paper proposes prediction models for dengue disease based on EMLT, employing five efficient machine learning classifiers: K-Nearest Neighbor (KNN), Gradient Boosting Classifier (GBC), Extra Tree Classifier (ETC), eXtreme Gradient Boosting (XGB), and Light Gradient Boosting Machine (LightGBM). All classifiers are trained and tested on the dataset using both 10-fold cross-validation and holdout cross-validation. On the test set, all models were evaluated using several metrics: accuracy, F1-score, recall, precision, AUC, and operating time. Based on the findings, the ETC model achieved the highest accuracy under both holdout and 10-fold cross-validation, with 99.12% and 99.03%, respectively. The experimental results further indicate that classifier performance under holdout cross-validation outperforms that under 10-fold cross-validation. Accordingly, the proposed dengue prediction system demonstrates its efficacy and effectiveness in assisting doctors to accurately predict dengue disease.
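The two evaluation protocols compared above differ only in how the data are partitioned. A sketch of the index splits (single holdout versus 10-fold cross-validation) with assumed dataset size and test fraction:

```python
import random

# Sketch of the two evaluation protocols: one shuffled holdout split versus
# 10-fold cross-validation, shown purely as index partitions. The dataset
# size (100) and test fraction (20%) are assumed values for illustration.

def holdout_split(n, test_fraction=0.2, seed=0):
    """Single shuffled train/test split of indices 0..n-1."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * (1 - test_fraction))
    return idx[:cut], idx[cut:]

def k_fold_splits(n, k=10, seed=0):
    """k (train, test) index pairs; each sample is tested exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

train, test = holdout_split(100)
folds = k_fold_splits(100, k=10)
```

Holdout reports performance on one fixed 20% slice, while 10-fold averages ten such estimates; the abstract's observation that holdout scores run higher is consistent with holdout being a single, lower-variance-looking but less robust estimate.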

    Data mining of many-attribute data : investigating the interaction between feature selection strategy and statistical features of datasets

    In many datasets, there is a very large number of attributes (e.g. many thousands). Such datasets can cause many problems for machine learning methods. Various feature selection (FS) strategies have been developed to address these problems. The idea of an FS strategy is to reduce the number of features in a dataset (e.g. from many thousands to a few hundred) so that machine learning and/or statistical analysis can be done much more quickly and effectively. Obviously, FS strategies attempt to select the features that are most important, considering the machine learning task to be done. The work presented in this dissertation concerns the comparison between several popular feature selection strategies, and, in particular, investigation of the interaction between feature selection strategy and simple statistical features of the dataset. The basic hypothesis, not investigated before, is that the correct choice of FS strategy for a particular dataset should be based on a simple (at least) statistical analysis of the dataset. First, we examined the performance of several strategies on a selection of datasets. Strategies examined were: four widely-used FS strategies (Correlation, Relief F, Evolutionary Algorithm, no-feature-selection), several feature bias (FB) strategies (in which the machine learning method considers all features, but makes use of bias values suggested by the FB strategy), and also combinations of FS and FB strategies. The results showed us that FB methods displayed strong capability on some datasets and that combined strategies were also often successful. Examining these results, we noted that patterns of performance were not immediately understandable. This led to the above hypothesis (one of the main contributions of the thesis) that statistical features of the dataset are an important consideration when choosing an FS strategy. We then investigated this hypothesis with several further experiments. 
Analysis of the results revealed that a simple statistical feature of a dataset, which can be easily pre-calculated, has a clear relationship with the performance of certain FS methods, and a similar relationship with differences in performance between certain pairs of FS strategies. In particular, correlation-based feature selection (CFS) is a very widely used FS technique built on the hypothesis that good feature sets contain features that are highly correlated with the class, yet uncorrelated with each other. By analysing the outcome of several FS strategies on different artificial datasets, the experiments suggest that CFS is never the best choice for poorly correlated data. Finally, considering several methods, we suggest tentative guidelines for choosing an FS strategy based on simply calculated measures of the dataset.
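The CFS hypothesis above is operationalised by a merit score that rises with feature-class correlation and falls with feature-feature correlation. A sketch on toy data (the formula is the standard CFS merit; the data values are invented):

```python
from math import sqrt

# Sketch of the CFS merit heuristic: a subset scores highly when its features
# correlate with the class but not with each other. Toy data, invented values.

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

def cfs_merit(features, target):
    """merit = k*r_cf / sqrt(k + k*(k-1)*r_ff), using mean |correlations|:
    r_cf is mean feature-class, r_ff mean feature-feature correlation."""
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = (sum(abs(pearson(features[i], features[j])) for i, j in pairs)
            / len(pairs)) if pairs else 0.0
    return k * r_cf / sqrt(k + k * (k - 1) * r_ff)

y  = [0, 0, 1, 1, 1, 0]
f1 = [0.1, 0.2, 0.9, 0.8, 0.7, 0.3]   # strongly class-correlated
f3 = [0.5, 0.4, 0.6, 0.5, 0.4, 0.6]   # uncorrelated with the class
```

On this toy data, the merit of {f1} exceeds that of {f3}, and adding the irrelevant f3 to f1 lowers the merit, which is exactly the behaviour the hypothesis predicts.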

    Optimization Based Tumor Classification from Microarray Gene Expression Data

    An important use of data obtained from microarray measurements is the classification of tumor types with respect to genes that are either up- or down-regulated in specific cancer types. A number of algorithms have been proposed to obtain such classifications. These algorithms usually require parameter optimization to obtain accurate results depending on the type of data. Additionally, it is highly critical to find an optimal set of markers among those up- or down-regulated genes that can be clinically utilized to build assays for the diagnosis or to follow the progression of specific cancer types. In this paper, we employ a mixed integer programming based classification algorithm, the hyper-box enclosure (HBE) method, for the classification of some cancer types with a minimal set of predictor genes. This optimization-based method, a user-friendly and efficient classifier, may allow clinicians to diagnose and follow the progression of certain cancer types. We apply the HBE algorithm to some well-known data sets, such as leukemia, prostate cancer, diffuse large B-cell lymphoma (DLBCL), and small round blue cell tumors (SRBCT), to find predictor genes that can be utilized for diagnosis and prognosis in a robust manner with high accuracy. Our approach does not require any modification or parameter optimization for each data set. Additionally, the information gain attribute evaluator, the relief attribute evaluator, and correlation-based feature selection methods are employed for gene selection. The results are compared with those from other studies, and the biological roles of the selected genes in the corresponding cancer types are described. The performance of our algorithm was overall better than that of the other algorithms reported in the literature and the classifiers found in the WEKA data-mining package. Since it does not require parameter optimization and consistently achieves very high prediction rates on different types of data sets, the HBE method is an effective and consistent tool for cancer-type prediction with a small number of gene markers.
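The hyper-box idea can be illustrated without the mixed integer programme: below, each class simply gets the axis-aligned bounding box of its training points, with a nearest-box-centre fallback for points outside every box. This is a simplification for illustration, not the HBE optimization itself:

```python
# Sketch of classification by axis-aligned box enclosure. The real HBE
# method fits boxes via mixed integer programming; here each class's box is
# just the bounding box of its training points, for illustration only.

def fit_boxes(X, labels):
    """One (per-dimension min, per-dimension max) box per class."""
    boxes = {}
    for x, c in zip(X, labels):
        lo, hi = boxes.setdefault(c, ([*x], [*x]))
        for d, v in enumerate(x):
            lo[d] = min(lo[d], v)
            hi[d] = max(hi[d], v)
    return boxes

def predict(boxes, x):
    """Class whose box contains x; otherwise nearest box centre."""
    for c, (lo, hi) in boxes.items():
        if all(l <= v <= h for v, l, h in zip(x, lo, hi)):
            return c
    def sq_dist_to_centre(item):
        lo, hi = item[1]
        return sum((v - (l + h) / 2) ** 2 for v, l, h in zip(x, lo, hi))
    return min(boxes.items(), key=sq_dist_to_centre)[0]

# Toy two-gene expression values for two tumour types (hypothetical labels)
X_train = [(0.1, 0.2), (0.3, 0.1), (0.9, 0.8), (0.7, 0.9)]
y_train = ["ALL", "ALL", "AML", "AML"]
boxes = fit_boxes(X_train, y_train)
```

The appeal of this family of methods is interpretability: each box is a readable rule ("gene A between these bounds and gene B between those"), which is what makes the selected markers clinically usable.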