
    Feature Selection for Identification of Transcriptome and Clinical Biomarkers for Relapse in Colon Cancer

    This study attempts to find good predictive biomarkers for recurrence in colon cancer across two data sources, each with both mRNA and miRNA expression from frozen tumor samples. In total four datasets (two data sources, two data types) were examined: mRNA TCGA (n=446), miRNA TCGA (n=416), mRNA HDS (n=79), and miRNA HDS (n=128). The intersection of the feature spaces of the two data sources was used in the analysis so that models trained on one data source could be tested on the other. A set of wrapper and filter methods was applied to each dataset separately to perform feature selection, and from each model the k best features were selected, where k was taken from a list of set numbers between 2 and 250. A randomized grid search was used to optimize four classifiers over their hyperparameter space, with the feature selection method used as an additional hyperparameter. All models were trained with cross-validation and tested on the other data source to determine generalization. Most models failed to generalize to the other data source, showing clear signs of overfitting. Furthermore, there was next to no overlap between the features selected from one data source and the other, indicating that the underlying feature distributions differed between the two sources, which is shown to be the case in a few examples. The best-generalizing models were based on clinical information, and the second best on the combined feature space of mRNA and miRNA data. (Master's Thesis in Informatics, INF399, MAMN-PROG, MAMN-IN)
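    The pattern described (feature selection method and k treated as tunable hyperparameters, cross-validated training on one source, external testing on the other) can be sketched with scikit-learn; this is a hedged illustration on synthetic stand-in data, not the thesis's code, and every variable name in it is invented for the example:

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.svm import SVC

# Synthetic stand-ins for the two data sources, restricted to a shared feature space.
X_tcga, y_tcga = make_classification(n_samples=446, n_features=500, random_state=0)
X_hds, y_hds = make_classification(n_samples=79, n_features=500, random_state=1)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),  # filter-style feature selection
    ("clf", SVC()),                                 # stand-in for one of the four classifiers
])

# The selection method and k are tuned alongside the classifier's own hyperparameters.
param_distributions = {
    "select__score_func": [f_classif, mutual_info_classif],
    "select__k": [2, 10, 50, 100, 250],
    "clf__C": loguniform(1e-2, 1e2),
}

search = RandomizedSearchCV(pipe, param_distributions, n_iter=25, cv=5, random_state=0)
search.fit(X_tcga, y_tcga)             # cross-validated training on one source
print(search.score(X_hds, y_hds))      # external test on the other source
```

    Scoring on the held-out source, rather than on a random split of the training source, is what exposes the failure to generalize that the thesis reports.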

    Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis

    Background and Objectives: This paper examines the accuracy and efficiency (time complexity) of high-performance genetic-data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for cancer tissues to be expertly identified and classified in a rapid and timely manner, both to ensure fast detection of the disease and to expedite the drug discovery process. Methods: In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection and classification algorithms separately, and Phase Three examined the performance of their combination. Results: Phase One found that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset for feature selection (29 genes selected), and Phase Two found that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. Phase Three found that the combined use of PSO and SVM surpassed the other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). Conclusions: It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than applying the latter alone. This conclusion is important and significant to industry and society.
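    A minimal sketch of the Phase Three combination, assuming a standard binary PSO with a sigmoid transfer function and cross-validated SVM accuracy as the fitness; the data and all settings are synthetic stand-ins, not the paper's experimental setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the colon gene-expression dataset (62 samples in the original).
X, y = make_classification(n_samples=62, n_features=200, n_informative=20, random_state=0)

def fitness(mask):
    """Cross-validated SVM accuracy on the genes selected by a binary mask."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=5).mean()

n_particles, n_features, n_iters = 20, X.shape[1], 30
pos = (rng.random((n_particles, n_features)) < 0.1).astype(float)  # sparse initial masks
vel = rng.normal(0.0, 0.1, (n_particles, n_features))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_features))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                               # sigmoid transfer function
    pos = (rng.random((n_particles, n_features)) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("genes selected:", int(gbest.sum()), "CV accuracy:", round(fitness(gbest), 3))
```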

    GAdaboost: Accelerating adaboost feature selection with genetic algorithms

    Throughout recent years, Machine Learning has attracted attention due to the abundance of data, and devising techniques to reduce the dimensionality of data has been an ongoing effort. Object detection is one of the Machine Learning applications that suffers from this drawback. For example, one of the most famous object detection frameworks, the Viola-Jones Rapid Object Detector, suffers from a lengthy training process due to the vast search space, which can exceed 160,000 features for a 24x24 image. The Viola-Jones Rapid Object Detector also uses Adaboost, a brute-force method that must pass over the set of all possible features in order to train the classifiers. Consequently, ways of reducing the whole feature set to a smaller representative one, eliminating features that carry no relevant information, have been devised. The most commonly used technique for this is Feature Selection, with its three categories: filters, wrappers, and embedded methods. Feature Selection has proven its success in providing fast and accurate classifiers. Wrapper methods harness the power of evolutionary computing, most commonly Genetic Algorithms, to find a set of representative features, largely because of Genetic Algorithms' power in finding adequate solutions efficiently. In this thesis we propose GAdaboost: a Genetic Algorithm to accelerate the training procedure of the Viola-Jones Rapid Object Detector through Feature Selection. Specifically, we propose to limit the Adaboost search to a subset of the huge feature space, while evolving this subset with a Genetic Algorithm. Experiments demonstrate that our proposed GAdaboost is up to 3.7 times faster than Adaboost. We also demonstrate that the price of this speedup is a mere decrease (3%, 4%) in detection accuracy when tested on the FDDB benchmark face detection set and Caltech Web Faces, respectively.
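    The core idea, restricting the Adaboost search to a GA-evolved subset of the feature space, can be sketched as follows; this is an illustrative toy version on synthetic data, not the thesis's Haar-feature implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: the thesis's real search space is ~160,000 Haar features.
X, y = make_classification(n_samples=300, n_features=400, n_informative=15, random_state=0)
n_total, subset_size, pop_size, n_gens = X.shape[1], 40, 12, 15

def fitness(idx):
    """Cross-validated accuracy of AdaBoost restricted to the candidate subset."""
    return cross_val_score(AdaBoostClassifier(n_estimators=25), X[:, idx], y, cv=3).mean()

pop = [rng.choice(n_total, subset_size, replace=False) for _ in range(pop_size)]
for _ in range(n_gens):
    scores = np.array([fitness(ind) for ind in pop])
    parents = [pop[i] for i in scores.argsort()[-pop_size // 2:]]  # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        pool = np.union1d(parents[a], parents[b])                  # crossover: merge parents
        child = rng.permutation(pool)[:subset_size]
        mutate = rng.random(subset_size) < 0.05                    # mutation: random feature swap
        child[mutate] = rng.choice(n_total, mutate.sum())
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best subset CV accuracy:", round(fitness(best), 3))
```

    Because AdaBoost only ever sees `subset_size` candidate features per fitness evaluation instead of the full set, each boosting round is far cheaper, which is the source of the reported speedup.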

    Enhanced clustering analysis pipeline for performance analysis of parallel applications

    Clustering analysis is widely used to group similar data into the same cluster according to specific metrics. We can use cluster analysis to group the CPU bursts of a parallel application: the regions in each process between communication calls or calls to the parallel runtime. The resulting clusters are the different computational trends or phases that appear in the application. These clusters are useful for understanding the behavior of the computational part of the application and for focusing the analyses on those parts that present performance issues. Although density-based clustering algorithms are a powerful and efficient tool to summarize this type of information, their traditional user-guided clustering methodology has many shortcomings in dealing with the complexity of data, the diversity of data structures, the high dimensionality of data, and the dramatic increase in the amount of data. Consequently, the majority of DBSCAN-like algorithms struggle to handle high-dimensional and/or multi-density data, and they are sensitive to their hyper-parameter configuration. Furthermore, extracting insight from the obtained clusters remains an intuitive, manual task. To mitigate these weaknesses, we propose a new unified approach that replaces user-guided clustering with an automated clustering analysis pipeline, called the Enhanced Cluster Identification and Interpretation (ECII) pipeline. To build the pipeline, we propose novel techniques including Robust Independent Feature Selection, Feature Space Curvature Map, Organization Component Analysis, and hyper-parameter tuning, addressing feature selection, density homogenization, cluster interpretation, and model selection, respectively, which are the main components of our machine learning pipeline. This thesis contributes four new techniques to the Machine Learning field, with a particular use case in the Performance Analytics field. The first contribution is a novel unsupervised approach for feature selection on noisy data, called Robust Independent Feature Selection (RIFS). Specifically, we choose a feature subset that contains most of the underlying information, using the same criteria as Independent Component Analysis; simultaneously, the noise is separated as an independent component. The second contribution of the thesis is a parametric multilinear transformation method that homogenizes cluster densities while preserving the topological structure of the dataset, called Feature Space Curvature Map (FSCM). We present a new Gravitational Self-Organizing Map to model the feature-space curvature by plugging the concepts of gravity and the fabric of space into the Self-Organizing Map algorithm to mathematically describe the density structure of the data. To homogenize the cluster density, we introduce a novel mapping mechanism that projects the data from the non-Euclidean curved space to a new Euclidean flat space. The third contribution is a novel topology-based method to study potentially complex, high-dimensional categorized data by quantifying their shapes and extracting fine-grained insights from them to interpret the clustering result. We introduce our Organization Component Analysis (OCA) method for the automatic study of arbitrary cluster shapes without any assumption about the data distribution. Finally, to tune the DBSCAN hyper-parameters, we propose a new tuning mechanism that combines techniques from the machine learning and optimization domains, and we embed it in the ECII pipeline. Using this cluster analysis pipeline with the CPU burst data of a parallel application, we provide the developer/analyst with high-quality SPMD computation structure detection, with the added value of reflecting the fine grain of the computation regions.
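    The thesis's own tuning mechanism is not reproduced here, but the kind of automated DBSCAN hyper-parameter selection that replaces manual tuning can be illustrated generically, e.g. by scoring (eps, min_samples) candidates with the silhouette index; all settings below are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic multi-density data: clusters with different spreads.
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=[0.4, 0.6, 1.0, 1.4], random_state=0)

best = (None, -1.0)
for eps in np.linspace(0.2, 2.0, 10):
    for min_samples in (5, 10, 20):
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        mask = labels != -1                       # ignore noise points when scoring
        if mask.sum() > min_samples and len(set(labels[mask])) > 1:
            score = silhouette_score(X[mask], labels[mask])
            if score > best[1]:
                best = ((eps, min_samples), score)

print("best (eps, min_samples):", best[0], "silhouette:", round(best[1], 3))
```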

    GRAIMATTER Green Paper:Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)

    TREs are widely and increasingly used to support statistical analysis of sensitive data across a range of sectors (e.g., health, police, tax and education), as they enable secure and transparent research whilst protecting data confidentiality. There is an increasing desire from academia and industry to train AI models in TREs. The field of AI is developing quickly, with applications including spotting human errors, streamlining processes, task automation and decision support. These complex AI models require more information to describe and reproduce, increasing the possibility that sensitive personal data can be inferred from such descriptions. TREs do not have mature processes and controls against these risks. This is a complex topic, and it is unreasonable to expect all TREs to be aware of all risks or that TRE researchers have addressed these risks in AI-specific training. GRAIMATTER has developed a draft set of usable recommendations for TREs to guard against the additional risks when disclosing trained AI models from TREs. The development of these recommendations has been funded by the GRAIMATTER UKRI DARE UK sprint research project. This version of our recommendations was published at the end of the project in September 2022. During the course of the project, we identified many areas for future investigation to expand and test these recommendations in practice; therefore, we expect that this document will evolve over time. The GRAIMATTER DARE UK sprint project has also developed a minimum viable product (MVP), a suite of attack simulations that can be applied by TREs, available at https://github.com/AI-SDC/AI-SDC. If you would like to provide feedback or would like to learn more, please contact Smarti Reel ([email protected]) and Emily Jefferson ([email protected]). A summary of our recommendations for a general public audience can be found at DOI: 10.5281/zenodo.708951

    Dynamic survival prediction combining landmarking with a machine learning ensemble: Methodology and empirical comparison

    Dynamic prediction models provide predicted survival probabilities that can be updated over time for an individual as new measurements become available. Two techniques for dynamic survival prediction with longitudinal data dominate the statistical literature: joint modelling and landmarking. There is substantial interest in the use of machine learning methods for prediction; however, their use in the context of dynamic survival prediction has been limited. We show how landmarking can be combined with a machine learning ensemble, the Super Learner, which combines predictions from different machine learning and statistical algorithms with the goal of achieving improved performance. The proposed approach exploits discrete-time survival analysis techniques to enable the use of machine learning algorithms for binary outcomes. We discuss practical and statistical considerations involved in implementing the ensemble. The methods are illustrated and compared using longitudinal data from the UK Cystic Fibrosis Registry. Standard landmarking and the landmark Super Learner approach resulted in similar cross-validated predictive performance, in this case outperforming joint modelling.
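    The discrete-time device the paper exploits, expanding follow-up into person-period rows so that a binary classifier estimates the hazard, can be sketched as follows; the data, column names, and the single base learner are illustrative assumptions, not the Registry analysis:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "id": np.arange(n),
    "biomarker": rng.normal(size=n),        # covariate value at the landmark time
    "time": rng.integers(1, 6, size=n),     # discrete event/censoring interval (1..5)
    "event": rng.integers(0, 2, size=n),    # 1 = event observed, 0 = censored
})

# Person-period expansion: one row per subject per interval still at risk,
# with y = 1 only in the interval where the event occurs.
rows = []
for _, r in df.iterrows():
    for t in range(1, int(r["time"]) + 1):
        rows.append({"biomarker": r["biomarker"], "interval": t,
                     "y": int(t == r["time"] and r["event"] == 1)})
pp = pd.DataFrame(rows)

# Any binary classifier (here one stand-in Super Learner component) fits the hazard.
clf = RandomForestClassifier(random_state=0).fit(pp[["biomarker", "interval"]], pp["y"])
hazard = clf.predict_proba(pp[["biomarker", "interval"]])[:, 1]  # discrete-time hazard
# Survival to interval T is the product of (1 - hazard) over intervals 1..T.
```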

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various machine learning paradigms, such as supervised, unsupervised and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, k-means clustering, Q-learning, temporal difference, deep adversarial networks and more. It is hoped that the book will be interesting and useful both to those developing mathematical algorithms and applications in the domains of artificial intelligence and machine learning, and to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.

    Better Models for High-Stakes Tasks

    The intersection of machine learning and healthcare has the potential to transform medical diagnosis, treatment, and research. Machine learning models can analyze vast amounts of medical data and identify patterns that may be too complex for human analysis. However, one of the major challenges in this field is building trust between users and the model. Due to factors such as high false-alarm rates and the black-box nature of machine learning models, patients and medical professionals need to understand how a model arrives at its recommendations. In this work, we present several methods that aim to improve machine learning models in high-stakes environments like healthcare. Our work unifies two sub-fields of machine learning: explainable AI and uncertainty quantification. First, we develop a model-agnostic approach to deliver instance-level explanations using influence functions. Next, we show that these influence functions are fairly robust across domains. Then, we develop an efficient method that reduces model uncertainty while modeling data uncertainty via Bayesian Neural Networks. Finally, we show that, when combined, our methods deliver significant utility beyond traditional methods while retaining a high level of performance in a real-world deployment. Overall, the integration of uncertainty quantification and explainable AI can help overcome some of the major challenges of machine learning in healthcare. Together, they can provide healthcare professionals with powerful tools for improving patient outcomes and advancing medical research.
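    The dissertation's exact models are not shown here; as a generic illustration of the uncertainty side, the sketch below uses Monte Carlo dropout, a common approximation to Bayesian Neural Networks, to separate model (epistemic) uncertainty from a learned data (aleatoric) variance head. All sizes and names are assumptions:

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, d_in=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Dropout(0.2))
        self.mean = nn.Linear(64, 1)       # predictive mean
        self.log_var = nn.Linear(64, 1)    # per-input data (aleatoric) log-variance

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

model = MCDropoutNet()
model.train()                              # keep dropout active at inference time
x = torch.randn(5, 10)
with torch.no_grad():
    # Spread of predictions across dropout samples approximates model uncertainty.
    samples = torch.stack([model(x)[0] for _ in range(100)])
    epistemic = samples.var(dim=0)
    aleatoric = model(x)[1].exp()          # predicted data noise
print(epistemic.squeeze(), aleatoric.squeeze())
```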

    Prediction of lung tumor types based on protein attributes by machine learning algorithms


    Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection Within Fruit Juice Classification

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines Genetic Algorithms and Support Vector Machines is suggested, in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
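    A brief sketch of the hybrid described above, under assumptions: each GA individual is a bitmask over candidate variables, and its fitness is the cross-validated accuracy of an SVM trained on the selected variables; the data is a synthetic stand-in for the juice measurements:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the fruit-juice compositional variables.
X, y = make_classification(n_samples=120, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """SVM cross-validated accuracy acts as the GA's fitness function."""
    return 0.0 if mask.sum() == 0 else cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

pop = rng.random((16, X.shape[1])) < 0.5            # individuals are variable bitmasks
for _ in range(20):
    fits = np.array([fitness(m) for m in pop])

    def pick():                                      # binary tournament selection
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if fits[i] >= fits[j] else pop[j]

    elite = pop[fits.argmax()].copy()                # elitism: keep the best mask
    children = [elite]
    while len(children) < len(pop):
        a, b = pick(), pick()
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(X.shape[1]) < 0.02                 # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("variables selected:", int(best.sum()), "CV accuracy:", round(fitness(best), 3))
```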