
    Multimodal wavelet embedding representation for data combination (MaWERiC): integrating magnetic resonance imaging and spectroscopy for prostate cancer detection

    Recently, both Magnetic Resonance (MR) Imaging (MRI) and Spectroscopy (MRS) have emerged as promising tools for the detection of prostate cancer (CaP). However, due to the inherent dimensionality differences between MR imaging and spectral information, quantitative integration of T2-weighted MRI (T2w MRI) and MRS for improved CaP detection has been a major challenge. In this paper, we present a novel computerized decision support system called multimodal wavelet embedding representation for data combination (MaWERiC) that employs (i) wavelet theory to extract 171 Haar wavelet features from MRS and 54 Gabor features from T2w MRI, (ii) dimensionality reduction to individually project the wavelet features from MRS and T2w MRI into a common reduced eigenvector space, and (iii) a random forest classifier for automated prostate cancer detection on a per-voxel basis from combined 1.5 T in vivo MRI and MRS. A total of 36 1.5 T endorectal in vivo T2w MRI and MRS patient studies were evaluated per voxel by MaWERiC using a three-fold cross-validation approach over 25 iterations. Ground truth for evaluation was obtained from per-voxel annotations of prostate cancer by an expert radiologist, who compared each MRI section with the corresponding ex vivo whole-mount histology section on which disease extent had been mapped. Results suggest that the MaWERiC-based MRS-T2w meta-classifier (mean AUC, μ = 0.89 ± 0.02) significantly outperformed (i) a T2w MRI classifier using wavelet texture features (μ = 0.55 ± 0.02), (ii) an MRS classifier using metabolite ratios (μ = 0.77 ± 0.03), (iii) a decision-fusion classifier obtained by combining the individual T2w MRI and MRS classifier outputs (μ = 0.85 ± 0.03), and (iv) a data-combination method based on metabolic MRS and MR signal intensity features (μ = 0.66 ± 0.02).
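
    A minimal Python sketch of how such a fusion pipeline could be assembled is given below, using synthetic per-voxel feature matrices; PCA stands in for whatever embedding the paper actually uses, and all names, sizes, and hyperparameters are illustrative rather than taken from the study.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_voxels = 500
    X_mrs = rng.normal(size=(n_voxels, 171))   # placeholder for Haar wavelet features from MRS
    X_mri = rng.normal(size=(n_voxels, 54))    # placeholder for Gabor features from T2w MRI
    y = rng.integers(0, 2, size=n_voxels)      # per-voxel cancer / benign labels

    n_dims = 10                                # assumed common reduced dimensionality
    aucs = []
    for seed in range(25):                     # 25 cross-validation iterations
        cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
        for train, test in cv.split(X_mrs, y):
            # Project each modality into its own reduced eigenvector space,
            # fitted on training voxels only, then concatenate the embeddings.
            pca_mrs = PCA(n_components=n_dims).fit(X_mrs[train])
            pca_mri = PCA(n_components=n_dims).fit(X_mri[train])
            Z_train = np.hstack([pca_mrs.transform(X_mrs[train]),
                                 pca_mri.transform(X_mri[train])])
            Z_test = np.hstack([pca_mrs.transform(X_mrs[test]),
                                pca_mri.transform(X_mri[test])])
            clf = RandomForestClassifier(n_estimators=100, random_state=seed)
            clf.fit(Z_train, y[train])
            aucs.append(roc_auc_score(y[test], clf.predict_proba(Z_test)[:, 1]))

    print(f"mean AUC over folds: {np.mean(aucs):.2f}")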

    Predicting lifespan-extending chemical compounds for C. elegans with machine learning and biologically interpretable features

    Recently, there has been a growing interest in the development of pharmacological interventions targeting ageing, as well as in the use of machine learning for analysing ageing-related data. In this work, we use machine learning methods to analyse data from DrugAge, a database of chemical compounds (including drugs) that modulate lifespan in model organisms. To this end, we created four types of datasets for predicting whether or not a compound extends the lifespan of C. elegans (the most frequent model organism in DrugAge), using four different types of predictive biological features based on: compound-protein interactions, interactions between compounds and proteins encoded by ageing-related genes, and two types of terms annotated for proteins targeted by the compounds, namely Gene Ontology (GO) terms and physiology terms from WormBase's Phenotype Ontology. To analyse these datasets, we used a combination of feature selection methods in a data pre-processing phase and the well-established random forest algorithm for learning predictive models from the selected features. In addition, we interpreted the most important features in the two best models in light of the biology of ageing. One noteworthy feature was the GO term “Glutathione metabolic process”, which plays an important role in cellular redox homeostasis and detoxification. We also predicted the most promising novel compounds for extending lifespan from a list of previously unlabelled compounds; these include nitroprusside, which is used as an antihypertensive medication. Overall, our work opens avenues for future work in employing machine learning to predict novel life-extending compounds.
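
    The workflow described above can be illustrated with a small, hypothetical Python sketch: binary biological features per compound, a filter-based feature-selection step in pre-processing, a random forest classifier, and a ranking of unlabelled compounds by predicted probability. The feature matrix, the chi-squared selector, and all parameters are placeholders, not the paper's actual setup.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(1)
    n_compounds, n_features = 400, 2000
    X = rng.integers(0, 2, size=(n_compounds, n_features))   # compound-by-feature matrix
    y = rng.integers(0, 2, size=n_compounds)                  # 1 = extends C. elegans lifespan

    model = Pipeline([
        ("select", SelectKBest(chi2, k=200)),                 # feature selection in pre-processing
        ("forest", RandomForestClassifier(n_estimators=500, random_state=1)),
    ])
    print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

    # Rank previously unlabelled compounds by predicted probability of extending lifespan.
    model.fit(X, y)
    X_unlabelled = rng.integers(0, 2, size=(50, n_features))
    scores = model.predict_proba(X_unlabelled)[:, 1]
    print("top candidate indices:", np.argsort(scores)[::-1][:5])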

    Learning predictive models from massive, semantically disparate data

    Machine learning approaches offer some of the most successful techniques for constructing predictive models from data. However, applying such techniques in practice requires overcoming several challenges: the infeasibility of centralized access to data sets whose size often exceeds the memory available to the learner, the distributed nature of data, access restrictions, data fragmentation, semantic disparities between data sources, and data sources that evolve spatially or temporally (e.g. data streams and genomic data sources to which new data is submitted continuously). Learning using statistical queries and semantic correspondences that present a unified view of disparate data sources to the learner offers a powerful general framework for addressing some of these challenges. Against this background, this thesis describes (1) approaches for dealing with missing values in statistical-query-based algorithms for building predictors (Naïve Bayes and decision trees), together with techniques to minimize the number of required queries in such a setting; (2) sufficient-statistics-based algorithms for constructing and updating sequence classifiers; (3) reductions of several aspects of learning from semantically disparate data sources (such as (a) how errors in mappings affect the accuracy of the learned model and (b) how to choose an optimal mapping from among a set of alternative expert-supplied or automatically generated mappings) to the well-studied problems of domain adaptation and learning in the presence of noise; and (4) software for learning predictive models from semantically disparate data.
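
    As a rough illustration of the statistical-query setting described above, the following hypothetical Python sketch builds a Naïve Bayes classifier purely from count queries answered by a data source, i.e. from sufficient statistics rather than from direct access to individual records; the toy data and the predicate interface are invented for the example.

    from math import log

    records = [  # stand-in for a remote source the learner cannot download wholesale
        ({"fever": 1, "cough": 1}, "flu"),
        ({"fever": 0, "cough": 1}, "cold"),
        ({"fever": 1, "cough": 0}, "flu"),
        ({"fever": 0, "cough": 0}, "cold"),
    ]

    def count_query(predicate):
        # The only access the learner has: "how many records satisfy this predicate?"
        return sum(1 for x, y in records if predicate(x, y))

    classes = ["flu", "cold"]
    features = ["fever", "cough"]

    # Sufficient statistics for Naive Bayes: class counts and per-class feature counts.
    class_counts = {c: count_query(lambda x, y, c=c: y == c) for c in classes}
    feat_counts = {(c, f, v): count_query(lambda x, y, c=c, f=f, v=v: y == c and x[f] == v)
                   for c in classes for f in features for v in (0, 1)}

    def predict(x):
        n = sum(class_counts.values())
        scores = {}
        for c in classes:
            s = log((class_counts[c] + 1) / (n + len(classes)))     # smoothed class prior
            for f in features:
                s += log((feat_counts[(c, f, x[f])] + 1) / (class_counts[c] + 2))
            scores[c] = s
        return max(scores, key=scores.get)

    print(predict({"fever": 1, "cough": 1}))  # expected: "flu"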

    Novel feature selection methods for high dimensional data

    [Abstract] Feature selection is defined as the process of detecting the relevant features and discarding the irrelevant ones, with the goal of obtaining a smaller feature subset that adequately describes the given problem with minimal degradation of, or even an improvement in, performance. With the advent of high-dimensional datasets, in both samples and features, adequately identifying the relevant features has become indispensable in real-world scenarios. In this context, the available methods face a new challenge in terms of applicability and scalability, and new methods must be developed that take these particularities of high dimensionality into account. This thesis is devoted to research on feature selection and its application to real high-dimensional data. The first part of this work analyses existing feature selection methods, testing their suitability against different challenges in order to provide new results to feature selection researchers. To this end, the most popular techniques were applied to real problems, with the aim not only of improving performance but also of enabling their application in real time. Besides efficiency, scalability is also a critical aspect in large-scale applications: the effectiveness of feature selection methods may be significantly degraded, if not rendered entirely inapplicable, when the size of the data grows continuously, so the scalability of feature selection methods must also be analysed. After an in-depth analysis of existing feature selection methods, the second part of this thesis focuses on developing new techniques. Since most existing selection methods require discrete data, the first proposed approach combines a discretizer, a filter, and a classifier, obtaining promising results in different scenarios. In an attempt to introduce diversity, the second proposal uses an ensemble of filters instead of a single one, relieving the user of having to decide which technique is most suitable for a given problem. The third technique proposed in this thesis considers not only the relevance of features but also their associated cost, economic or in terms of execution time, leading to a general methodology for cost-based feature selection. Finally, several strategies are proposed to distribute and parallelize feature selection, since transforming a large-scale problem into several small-scale problems can improve processing time and, in some cases, classification accuracy.
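
    The cost-based idea in the third proposal can be sketched in Python as a simple filter that trades relevance against acquisition cost; mutual information, the lambda trade-off parameter, and all data here are illustrative assumptions rather than the thesis's actual formulation.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(2)
    n_samples, n_features = 300, 50
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)          # only a few features are truly relevant
    cost = rng.uniform(0.1, 1.0, size=n_features)    # per-feature cost (money or runtime)

    def cost_based_ranking(X, y, cost, lam=0.5, k=10):
        # Score = relevance (mutual information with the class) minus lambda * cost;
        # a larger lambda pushes the selection toward cheaper features.
        relevance = mutual_info_classif(X, y, random_state=0)
        score = relevance - lam * cost
        return np.argsort(score)[::-1][:k]

    selected = cost_based_ranking(X, y, cost, lam=0.3, k=10)
    print("selected features:", selected)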

    More than the sum of its parts – pattern mining, neural networks, and how they complement each other

    In this thesis we explore pattern mining and deep learning. Although these fields are often seen as orthogonal, we show that they complement each other and propose to combine them to gain from each other's strengths. We first show how to efficiently discover succinct and non-redundant sets of patterns that provide insight into data beyond conjunctive statements. We leverage the interpretability of such patterns to unveil how and which information flows through neural networks, as well as what characterizes their decisions. Conversely, we show how to combine continuous optimization with pattern discovery, proposing a neural network that directly encodes discrete patterns, which allows us to apply pattern mining at a scale orders of magnitude larger than previously possible. Large neural networks are, however, exceedingly expensive to train, for which 'lottery tickets' – small, well-trainable sub-networks in randomly initialized neural networks – offer a remedy. We identify theoretical limitations of strong tickets and overcome them by equipping these tickets with the property of universal approximation. To analyze whether limitations in ticket sparsity are algorithmic or fundamental, we propose a framework to plant and hide lottery tickets. With novel ticket benchmarks we then conclude that the limitation is likely algorithmic, encouraging further developments, for which our framework offers means to measure progress.
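
    The lottery-ticket procedure mentioned above can be illustrated with a toy Python sketch of iterative magnitude pruning with rewinding on a single linear model; the data, pruning schedule, and training loop are deliberately simplified and are not the methods evaluated in the thesis.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]                        # only three weights actually matter
    y = (X @ w_true > 0).astype(float)

    def train(w, mask, steps=300, lr=0.1):
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ (w * mask))))      # logistic predictions
            grad = X.T @ (p - y) / len(y)                # gradient of the log-loss
            w = w - lr * grad * mask                     # update only unpruned weights
        return w

    w_init = rng.normal(scale=0.1, size=20)
    mask = np.ones(20)
    for _ in range(3):                                   # iterative pruning rounds
        w = train(w_init.copy(), mask)
        alive = np.flatnonzero(mask)
        k = int(0.3 * len(alive))                        # prune 30% of surviving weights,
        prune = alive[np.argsort(np.abs(w[alive]))[:k]]  # smallest magnitudes first,
        mask[prune] = 0.0                                # then rewind the rest to w_init

    w_ticket = train(w_init.copy(), mask)                # retrain the sparse "ticket"
    acc = ((X @ (w_ticket * mask) > 0).astype(float) == y).mean()
    print(f"kept {int(mask.sum())}/20 weights, accuracy {acc:.2f}")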