
    Hey there's DALILA: a DictionAry LearnIng LibrAry

    Dictionary learning and representation learning are machine learning methods for the decomposition, denoising, and reconstruction of data, with a wide range of applications such as text recognition, image processing, and the understanding of biological processes. In this work we present DALILA, a scientific Python library for regularised dictionary learning and regularised representation learning that allows prior knowledge, if available, to be imposed. Unlike other libraries available for this purpose, DALILA is flexible and modular, and it is designed to be easily extended for custom needs. Moreover, it is compliant with the most widespread machine learning Python library, which allows for straightforward usage and integration. We present the underlying theoretical aspects and discuss the library's strengths and implementation.
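
    As a rough illustration of the kind of regularised decomposition the library targets, the sketch below factorises a data matrix into a dictionary and sparse codes. It uses scikit-learn's DictionaryLearning purely as a stand-in estimator on simulated data; DALILA's own API is not reproduced here.

```python
# Illustrative sketch of regularised dictionary learning; scikit-learn's
# DictionaryLearning is used as a stand-in, not DALILA's own API.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))            # 100 samples, 30 features (simulated)

# Decompose X ~ C @ D: D holds dictionary atoms, C holds l1-penalised sparse codes.
dl = DictionaryLearning(n_components=10, alpha=1.0,
                        transform_algorithm="lasso_lars", transform_alpha=1.0,
                        max_iter=200, random_state=0)
C = dl.fit_transform(X)                       # representation, shape (100, 10)
D = dl.components_                            # dictionary, shape (10, 30)

X_hat = C @ D                                 # reconstruction
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```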

    Multi-task multiple kernel learning reveals relevant frequency bands for critical areas localization in focal epilepsy

    The localization of the epileptic zone in pharmacoresistant focal epileptic patients is a daunting task, typically performed by medical experts through visual inspection of highly sampled neural recordings. For a finer localization of the epileptogenic areas and a deeper understanding of the pathology, both the identification of pathogenic biomarkers and the automatic characterization of epileptic signals are desirable. In this work we present a data integration learning method based on a multi-level representation of stereo-electroencephalography recordings and multiple kernel learning. To the best of our knowledge, this is the first attempt to tackle both aspects simultaneously, as our approach is devised to classify critical vs. non-critical recordings while detecting the most discriminative frequency bands. The learning pipeline is applied to a data set of 18 patients, for a total of 2347 neural recordings analyzed by medical experts. Without any prior knowledge assumption, the data-driven method reveals that the most discriminative frequency bands for the localization of epileptic areas lie in the high-frequency spectrum (≥ 80 Hz), while showing high performance metric scores (mean balanced accuracy of 0.89 ± 0.03). These promising results may represent a starting point for the automatic search of clinical biomarkers of epileptogenicity.
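
    The sketch below conveys the general idea of combining per-band kernels into a single classifier. The band names, feature blocks, and uniform kernel weights are illustrative assumptions on simulated data, not the paper's learned multiple kernel combination.

```python
# Toy multiple-kernel setup: one RBF kernel per frequency-band feature block,
# combined into a single precomputed kernel for an SVM. Bands, features and
# uniform weights are simulated assumptions for illustration only.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
bands = {                                     # hypothetical per-band feature blocks
    "theta": rng.standard_normal((n, 8)),
    "gamma": rng.standard_normal((n, 8)),
    "ripple": rng.standard_normal((n, 8)),    # high-frequency (>= 80 Hz) band
}
y = rng.integers(0, 2, size=n)                # critical vs. non-critical labels

weights = {b: 1.0 / len(bands) for b in bands}       # uniform weights for illustration
K = sum(w * rbf_kernel(bands[b]) for b, w in weights.items())

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```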

    Secondary Somatic Mutations in G-Protein-Related Pathways and Mutation Signatures in Uveal Melanoma

    Background: Uveal melanoma (UM), a rare cancer of the eye, is characterized by initiating mutations in the genes G-protein subunit alpha Q (GNAQ), G-protein subunit alpha 11 (GNA11), cysteinyl leukotriene receptor 2 (CYSLTR2), and phospholipase C beta 4 (PLCB4), and by metastasis-promoting mutations in the genes splicing factor 3B1 (SF3B1), serine and arginine rich splicing factor 2 (SRSF2), and BRCA1-associated protein 1 (BAP1). Here, we tested the hypothesis that additional mutations, though occurring in only a few cases (“secondary drivers”), might influence tumor development. Methods: We analyzed all 4125 mutations detected in exome sequencing datasets, comprising a total of 139 UMs, and tested the enrichment of secondary drivers in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways that also contained the initiating mutations. We searched for additional mutations in the putative secondary driver gene protein tyrosine kinase 2 beta (PTK2B), and we developed new mutational signatures that explain the mutational pattern observed in UM. Results: Secondary drivers were significantly enriched in KEGG pathways that also contained GNAQ and GNA11, such as the calcium-signaling pathway. Many of the secondary drivers were known cancer driver genes and were strongly associated with metastasis and survival. We identified additional mutations in PTK2B. Sparse dictionary learning allowed for the identification of mutational signatures specific to UM. Conclusions: A considerable part of the rare mutations that occur in addition to known driver mutations are likely to affect tumor development and progression.
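
    As a toy illustration of signature extraction from a mutation catalogue, the sketch below factorises a simulated samples-by-contexts count matrix with a sparse non-negative factorisation, using scikit-learn's NMF as a stand-in for the sparse dictionary learning applied in the study.

```python
# Sketch of extracting mutational signatures from a catalogue of mutation counts
# (samples x 96 trinucleotide contexts). NMF with l1 penalties is a stand-in for
# the sparse dictionary learning named in the abstract; the data are simulated.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(139, 96)).astype(float)   # 139 tumours, 96 contexts

nmf = NMF(n_components=4, init="nndsvda", l1_ratio=1.0,
          alpha_W=0.1, alpha_H=0.1, max_iter=500, random_state=0)
exposures = nmf.fit_transform(counts)         # per-sample signature exposures
signatures = nmf.components_                  # signatures over the 96 contexts
print(exposures.shape, signatures.shape)      # (139, 4) (4, 96)
```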

    Missing Values in Multiple Joint Inference of Gaussian Graphical Models

    Real-world phenomena are often not fully measured or completely observable, giving rise to the so-called missing data problem. As a consequence, the need to develop ad hoc techniques that cope with this issue arises in many inference contexts. In this paper, we focus on the inference of Gaussian Graphical Models (GGMs) from multiple input datasets having complex relationships (e.g., multi-class or temporal). We propose a method that generalises state-of-the-art approaches to the inference of both multi-class and temporal GGMs while naturally dealing with two types of missing data: partial and latent. Synthetic experiments show that our performance is better than the state of the art. In particular, we compared results with single-network inference methods that suitably deal with missing data, and with multiple joint network inference methods coupled with standard pre-processing techniques (e.g., imputation). When dealing with fully observed datasets, our method analytically reduces to state-of-the-art approaches, providing a good alternative as our implementation reaches convergence in shorter or comparable time. Finally, we show that properly addressing the missing data problem in a multi-class real-world example allows us to discover interesting varying patterns.
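
    For context, the sketch below reproduces only the baseline strategy mentioned in the abstract: impute missing entries, then infer one GGM per class with the graphical lasso. The class structure and missingness pattern are simulated assumptions; the proposed joint inference method is not implemented here.

```python
# Baseline sketch: per-class Gaussian graphical model inference after simple
# mean imputation of missing entries. Simulated data; not the paper's joint method.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
classes = {c: rng.standard_normal((80, 10)) for c in ("A", "B")}   # two classes

precisions = {}
for name, X in classes.items():
    X = X.copy()
    mask = rng.random(X.shape) < 0.1          # drop ~10% of entries at random
    X[mask] = np.nan
    X_imp = SimpleImputer(strategy="mean").fit_transform(X)
    model = GraphicalLassoCV().fit(X_imp)
    precisions[name] = model.precision_       # sparse inverse covariance estimate

for name, P in precisions.items():
    n_edges = int((np.abs(P) > 1e-6).sum() - P.shape[0])
    print(name, "nonzero off-diagonal entries:", n_edges)
```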

    Where Do We Stand in Regularization for Life Science Studies?

    More and more biologists and bioinformaticians turn to machine learning to analyze large amounts of data. In this context, it is crucial to understand which data analysis pipeline is most suitable for achieving reliable results. This process may be challenging due to a variety of factors, the most crucial ones being the data type and the general goal of the analysis (e.g., explorative or predictive). Life science data sets require further consideration, as they often contain measures with a low signal-to-noise ratio, high-dimensional observations, and relatively few samples. In this complex setting, regularization, which can be defined as the introduction of additional information to solve an ill-posed problem, is the tool of choice for obtaining robust models. Different regularization practices may be used depending on both the characteristics of the data and the question asked, and different choices may lead to different results. In this article, we provide a comprehensive description of the impact and importance of regularization techniques in life science studies. In particular, we provide an intuition of what regularization is and of the different ways it can be implemented and exploited. We propose four general life science problems in which regularization is fundamental and should be exploited for robustness. For each of these large families of problems, we enumerate different techniques as well as examples and case studies. Lastly, we provide a unified view of how to approach each data type with various regularization techniques.
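
    A minimal, simulated example of why regularization matters in the few-samples, many-features regime described above: an unpenalised least-squares fit spreads weight over all features, while an l1-penalised (lasso) fit recovers a sparse model. The data and penalty value are illustrative assumptions.

```python
# Ill-posed regression (features >> samples) with and without l1 regularisation.
# Simulated data; the penalty strength is an arbitrary illustrative choice.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, p = 40, 200                                # far fewer samples than features
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                                # only 5 truly relevant features
y = X @ beta + 0.1 * rng.standard_normal(n)

ols = LinearRegression().fit(X, y)            # ill-posed: fits the noise
lasso = Lasso(alpha=0.1).fit(X, y)            # l1 regularisation enforces sparsity

print("OLS nonzero coefficients:  ", int((np.abs(ols.coef_) > 1e-8).sum()))
print("Lasso nonzero coefficients:", int((np.abs(lasso.coef_) > 1e-8).sum()))
```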