436 research outputs found

    Making nonlinear manifold learning models interpretable: The manifold grand tour

    Get PDF
    Dimensionality reduction is required to produce visualisations of high-dimensional data. In this framework, one of the most straightforward approaches to visualising high-dimensional data is based on reducing complexity and applying linear projections while tumbling the projection axes in a defined sequence, which generates a Grand Tour of the data. We propose using smooth nonlinear topographic maps of the data distribution to guide the Grand Tour, increasing the effectiveness of this approach by prioritising the linear views of the data that are most consistent with the global data structure captured by these maps. A further consequence of this approach is that it enables direct visualisation of the topographic map in the projective spaces that best discern structure in the data. The experimental results on standard databases reported in this paper, using self-organising maps and generative topographic mapping, illustrate the practical value of the proposed approach. The main novelty of our proposal is a systematic way to guide the search for data views in the Grand Tour, selecting and prioritising some of them on the basis of nonlinear manifold models.
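    The guided search described above can be sketched compactly. The code below is a minimal illustration (not the authors' implementation), assuming the manifold model is available as a matrix of prototype vectors from a pre-trained SOM or GTM: random 2-D projection planes are generated as candidate grand-tour views and ranked by how much of the prototypes' variance they retain, so the views most consistent with the manifold model are shown first. All names and sizes are illustrative.

```python
# Minimal sketch (not the authors' implementation): guide a grand tour by
# scoring random 2-D projection planes against prototype vectors of a
# pre-trained topographic map (SOM/GTM codebook). The scoring criterion
# (retained prototype variance) and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_projection_plane(dim):
    """Orthonormal basis of a random 2-D plane in R^dim (one grand-tour frame)."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
    return q  # shape (dim, 2), columns orthonormal

def view_score(plane, prototypes):
    """Fraction of the prototypes' variance retained by the candidate view.
    Higher scores mean the linear view is more consistent with the manifold model."""
    centred = prototypes - prototypes.mean(axis=0)
    total = np.sum(centred ** 2)
    projected = centred @ plane
    return np.sum(projected ** 2) / total

# Toy stand-ins: high-dimensional data and a codebook of manifold prototypes
# (in practice these would come from a trained SOM or GTM).
data = rng.normal(size=(500, 10))
prototypes = rng.normal(size=(25, 10))

frames = [random_projection_plane(data.shape[1]) for _ in range(200)]
ranked = sorted(frames, key=lambda p: view_score(p, prototypes), reverse=True)

best = ranked[0]
view = data @ best            # 2-D coordinates of the data in the prioritised view
map_view = prototypes @ best  # the topographic map rendered in the same view
print(view.shape, map_view.shape, view_score(best, prototypes))
```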

    GUASOM: An Adaptive Visualization Tool for Unsupervised Clustering in Spectrophotometric Astronomical Surveys

    Get PDF
    We present an adaptive visualization tool for unsupervised classification of astronomical objects in a Big Data context such as the one found in the increasingly popular large spectrophotometric sky surveys. This tool is based on an artificial intelligence technique, Kohonen's self-organizing maps, and our goal is to facilitate the analysis work of the experts by means of domain-oriented visualizations, which would be impossible to achieve with a generic tool. We designed a client-server architecture that handles the data treatment and computational tasks to give responses as quickly as possible, and we used JavaScript Object Notation (JSON) to pack the data exchanged between server and client. We optimized, parallelized, and evenly distributed the necessary calculations over a cluster of machines. By applying our clustering tool to several databases, we demonstrated the main advantages of an unsupervised approach: the classification is not based on pre-established models, thus allowing the “natural classes” present in the sample to be discovered, and it is well suited to isolating atypical cases, with the important potential for discovery that this entails. The Gaia Utility for the Analysis of Self-Organizing Maps (GUASOM) is an analysis tool developed in the context of the Data Processing and Analysis Consortium (DPAC), which processes and analyzes the observations made by ESA's Gaia satellite (European Space Agency) and prepares the mission archive presented to the international community in successive periodic publications. Our tool is useful not only in the context of the Gaia mission, but also allows segmenting the information present in any other massive spectroscopic or spectrophotometric database.
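    As a rough illustration of the server-side data packing described above, the sketch below serialises SOM cell statistics as JSON for a visualization client. The field names, the 5x5 map size, and the toy spectra are assumptions for the example, not GUASOM's actual API or data model.

```python
# Hypothetical server-side packing step: SOM cell statistics serialised as
# JSON for a visualization client. Field names and map size are assumptions.
import json
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained SOM: a 5x5 grid of prototype spectra (8 bands each)
# and the best-matching-unit (BMU) index of every observed source.
prototypes = rng.random((5, 5, 8))
spectra = rng.random((1000, 8))

flat = prototypes.reshape(-1, prototypes.shape[-1])
bmu = np.argmin(((spectra[:, None, :] - flat[None, :, :]) ** 2).sum(-1), axis=1)

cells = []
for idx in range(flat.shape[0]):
    members = np.where(bmu == idx)[0]
    cells.append({
        "cell": [int(idx // 5), int(idx % 5)],      # grid coordinates
        "prototype": flat[idx].round(4).tolist(),   # representative spectrum
        "n_sources": int(members.size),             # hit count for the cell
        "source_ids": members[:10].tolist(),        # sample of member ids
    })

payload = json.dumps({"map_shape": [5, 5], "cells": cells})
print(len(payload), "bytes of JSON ready to send to the client")
```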

    How Many Dissimilarity/Kernel Self Organizing Map Variants Do We Need?

    Full text link
    In numerous applicative contexts, data are too rich and too complex to be represented by numerical vectors. A general approach to extending machine learning and data mining techniques to such data is to rely on a dissimilarity or on a kernel that measures how different or similar two objects are. This approach has been used to define several variants of the Self Organizing Map (SOM). This paper reviews those variants using a common set of notations in order to outline differences and similarities between them. It discusses the advantages and drawbacks of the variants, as well as the actual relevance of the dissimilarity/kernel SOM for practical applications.
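    One variant commonly discussed in this family, a batch "median SOM", can be sketched directly from a dissimilarity matrix: prototypes are constrained to be data objects, and each unit picks the object that minimises the neighbourhood-weighted sum of dissimilarities to the observations assigned to it. The grid size, neighbourhood width, and iteration count below are illustrative assumptions.

```python
# Minimal sketch of a batch 'median SOM' on dissimilarity data: prototypes
# are data indices, and only a pairwise dissimilarity matrix D is needed.
import numpy as np

rng = np.random.default_rng(2)

def median_som(D, grid=(3, 3), iters=10, sigma=1.0):
    n = D.shape[0]
    units = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    proto = rng.choice(n, size=len(units), replace=False)   # prototype = data index
    for _ in range(iters):
        bmu = np.argmin(D[:, proto], axis=1)                 # assign each object to a unit
        gd = ((units[:, None, :] - units[None, :, :]) ** 2).sum(-1)
        h = np.exp(-gd / (2 * sigma ** 2))                   # neighbourhood on the grid
        w = h[:, bmu]                                        # weights h(u, bmu(i)), shape (units, objects)
        # each unit picks the object minimising the weighted sum of dissimilarities
        proto = np.argmin(w @ D, axis=1)
    return proto, np.argmin(D[:, proto], axis=1)

# Toy dissimilarity matrix from random points (any symmetric dissimilarity works).
x = rng.normal(size=(60, 4))
D = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
prototypes, assignment = median_som(D)
print(prototypes, np.bincount(assignment, minlength=9))
```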

    Supervised learning of short and high-dimensional temporal sequences for life science measurements

    Full text link
    The analysis of physiological processes over time is often based on spectrometric or gene expression profiles with only a few time points but a large number of measured variables. The analysis of such temporal sequences is challenging, and only a few methods have been proposed. The information can be encoded time-independently, by means of classical expression differences at a single time point, or in expression profiles over time. Available methods are limited to unsupervised and semi-supervised settings, and the predictive variables can be identified only by means of wrapper or post-processing techniques, which is complicated by the small number of samples in such studies. Here, we present a supervised learning approach, termed Supervised Generative Topographic Mapping Through Time (SGTM-TT). It learns a supervised mapping of the temporal sequences onto a low-dimensional grid. We utilize a hidden Markov model (HMM) to account for the time domain and relevance learning to identify the feature dimensions most predictive over time. The learned mapping can be used to visualize the temporal sequences and to predict the class of a new sequence. The relevance learning permits the identification of discriminating masses or gene expressions and prunes dimensions which are unnecessary for the classification task or encode mainly noise. In this way we obtain a very efficient learning system for temporal sequences. The results indicate that simultaneous supervised learning and metric adaptation significantly improves the prediction accuracy for synthetic and real-life data in comparison to standard techniques. The discriminating features identified by relevance learning compare favorably with the results of alternative methods. Our method permits the visualization of the data on a low-dimensional grid, highlighting the observed temporal structure.
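    A heavily simplified sketch of the relevance-learning ingredient is given below: per-feature relevance weights rescale the distance between short temporal sequences and prototype sequences so that noisy dimensions are effectively pruned, and a new sequence is classified by its closest prototype. The HMM time model and the actual SGTM-TT training procedure are omitted; the prototypes, relevance profile, and data are toy stand-ins.

```python
# Simplified stand-in for the relevance-weighted metric: each feature
# dimension k carries a relevance lambda_k, and dimensions with near-zero
# relevance are effectively pruned from the distance.
import numpy as np

rng = np.random.default_rng(3)

def relevance_distance(seq, proto, lam):
    """Squared distance between two (time x features) sequences,
    with per-feature relevance weights lam (lam >= 0, sum(lam) == 1)."""
    diff = seq - proto
    return float(np.sum(lam * diff ** 2))

def predict(seq, prototypes, labels, lam):
    """Assign the class of the closest prototype sequence under the relevance metric."""
    d = [relevance_distance(seq, p, lam) for p in prototypes]
    return labels[int(np.argmin(d))]

# Toy setting: sequences with 5 time points and 40 measured variables,
# of which only the first 3 variables discriminate between classes.
T, F = 5, 40
lam = np.zeros(F)
lam[:3] = 1 / 3                               # relevance profile after 'training'
proto_a = np.zeros((T, F)); proto_a[:, :3] = 1.0
proto_b = np.zeros((T, F)); proto_b[:, :3] = -1.0
prototypes, labels = [proto_a, proto_b], ["responder", "non-responder"]

new_seq = proto_b + rng.normal(scale=0.5, size=(T, F))  # noisy unseen sequence
print(predict(new_seq, prototypes, labels, lam))
```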

    Advanced Statistical Machine Learning Methods for the Analysis of Neurophysiologic Data with Medical Application

    Get PDF
    Transcranial magnetic stimulation procedures use a magnetic field to carry a short-lasting electrical current pulse into the brain, where it stimulates neurons, particularly in superficial regions of the cerebral cortex. It is a powerful tool for estimating several parameters related to the intracortical excitability and inhibition of the motor cortex. The cortical silent period (CSP), evoked by magnetic stimulation, corresponds to the suppression of muscle activity for a short period after a muscle response to a magnetic stimulation. The duration of the CSP is paramount for assessing intracortical inhibition, and it is known to be correlated with the prognosis of stroke patients' motor ability. Current mechanisms to estimate the duration of the CSP are mostly based on the analysis of the raw electromyographic (EMG) signal, and they are very sensitive to the presence of noise. This master's thesis is devoted to the analysis of the EMG signal of stroke patients under rehabilitation. The use of advanced statistical machine learning techniques that behave robustly in the presence of noise allows us to accurately estimate signal parameters such as the CSP. The research reported in this thesis also provides first evidence of their applicability in other areas of neuroscience.
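    For contrast with the noise-robust models developed in the thesis, the sketch below shows the kind of simple threshold-based CSP estimate that operates directly on a rectified EMG envelope. The synthetic signal, sampling rate, and thresholds are illustrative assumptions, not the thesis's method.

```python
# Baseline threshold-based CSP estimate on a rectified, smoothed EMG envelope.
# Synthetic signal and all thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
fs = 2000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)                # 1 s of signal, stimulus at 0.3 s

# Toy EMG: tonic activity, a brief MEP after the stimulus, then a silent period.
emg = 0.3 * rng.normal(size=t.size)          # ongoing voluntary activity + noise
stim = int(0.3 * fs)
emg[stim:stim + 40] += 2.0 * np.sin(np.linspace(0, np.pi, 40))   # motor evoked potential
emg[stim + 40:stim + 40 + int(0.15 * fs)] *= 0.05                # ~150 ms silent period

# Rectify and smooth with a moving average to obtain an envelope.
win = int(0.01 * fs)
envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")

baseline = envelope[:stim].mean()
below = envelope[stim + 40:] < 0.25 * baseline       # suppression threshold (assumed)
onset = int(np.argmax(below))                        # first suppressed sample after the MEP
offset = onset + int(np.argmax(~below[onset:]))      # first sample where activity resumes
csp_ms = 1000 * (offset - onset) / fs
print(f"estimated cortical silent period: {csp_ms:.0f} ms")
```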

    A computational intelligence analysis of G protein-coupled receptor sequences for pharmacoproteomic applications

    Get PDF
    Arguably, drug research has contributed more to the progress of medicine during the past decades than any other scientific factor. One of the main areas of drug research is related to the analysis of proteins. The world of pharmacology is becoming increasingly dependent on advances in the fields of genomics and proteomics. This dependency brings about the challenge of finding robust methods to analyze the complex data they generate. Such a challenge invites us to go one step further than traditional statistics and resort to approaches under the conceptual umbrella of artificial intelligence, including machine learning (ML), statistical pattern recognition, and soft computing methods. Sound statistical principles are essential to trust the evidence base built through the use of such approaches. Statistical ML methods are thus at the core of the current thesis. More than 50% of currently available drugs target only four key protein families, of which almost 30% correspond to the G Protein-Coupled Receptor (GPCR) superfamily. This superfamily regulates the function of most cells in living organisms and is at the centre of the investigations reported in the current thesis. Not much is known about the 3D structure of these proteins. Fortunately, plenty of information regarding their amino acid sequences is readily available. The automatic grouping and classification of GPCRs into families, and of these into subtypes, based on sequence analysis may significantly contribute to ascertaining the pharmaceutically relevant properties of this protein superfamily. There is no biologically relevant manner of representing the symbolic sequences describing proteins using real-valued vectors. This does not preclude the possibility of analyzing them using principled methods, which may come, amongst others, from the field of statistical ML. In particular, kernel methods can be used for this purpose. Moreover, the visualization of high-dimensional protein sequence data can be a key exploratory tool for finding meaningful information that might otherwise be obscured by its intrinsic complexity. That is why the objective of the research described in this thesis is twofold: first, the design of adequate visualization-oriented, artificial intelligence-based methods for the analysis of GPCR sequence data, and second, the application of the developed methods to relevant pharmacoproteomic problems such as GPCR subtyping and alignment-free protein analysis.
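    As an example of the kind of kernel usable for alignment-free sequence analysis, the sketch below computes a cosine-normalised k-mer "spectrum" kernel directly on amino acid strings; the resulting Gram matrix can feed any kernel-based visualization or clustering method. The toy sequences and the choice k = 3 are illustrative assumptions, not the kernels developed in the thesis.

```python
# Normalised k-mer spectrum kernel on amino acid strings (alignment-free).
from collections import Counter
import numpy as np

def kmer_counts(seq, k=3):
    """Count all overlapping k-mers in an amino acid sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(a, b, k=3):
    """Inner product of k-mer count vectors, cosine-normalised."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    dot = sum(ca[m] * cb[m] for m in ca.keys() & cb.keys())
    na = np.sqrt(sum(v * v for v in ca.values()))
    nb = np.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy amino acid sequences standing in for GPCR sequences.
seqs = [
    "MNGTEGPNFYVPFSNKTGVVRSPFEAPQYYLAE",
    "MNGTEGPNFYVPFSNITGVVRSPFEYPQYYLAE",
    "MDVLSPGQGNNTTSPPAPFETGGNTTGISDVTV",
]
K = np.array([[spectrum_kernel(a, b) for b in seqs] for a in seqs])
print(np.round(K, 3))   # Gram matrix ready for kernel PCA, kernel SOM, etc.
```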