118 research outputs found

    On conditional random fields: applications, feature selection, parameter estimation and hierarchical modelling

    Get PDF
    There has been a growing interest in stochastic modelling and learning with complex data, whose elements are structured and interdependent. One of the most successful approaches to modelling data dependencies is graphical models, which combine graph theory and probability theory. This thesis focuses on a special type of graphical model known as Conditional Random Fields (CRFs) (Lafferty et al., 2001), in which the output state spaces, when conditioned on some observational input data, are represented by undirected graphical models. The contributions of this thesis involve both (a) broadening the current applicability of CRFs in the real world and (b) deepening the understanding of theoretical aspects of CRFs. On the application side, we empirically investigate the applications of CRFs in two real-world settings. The first application is in the novel domain of Vietnamese accent restoration, in which we need to restore the accents of an accent-less Vietnamese sentence. Experiments on half a million sentences of news articles show that the CRF-based approach is highly accurate. In the second application, we develop a new CRF-based movie recommendation system called Preference Network (PN). The PN jointly integrates various sources of domain knowledge into a large and densely connected Markov network. We obtained competitive results against well-established methods in the recommendation field. On the theory side, the thesis addresses three important theoretical issues of CRFs: feature selection, parameter estimation, and modelling recursive sequential data. These issues are all addressed under a general setting of partial supervision, in which training labels are not fully available. For feature selection, we introduce a novel learning algorithm called AdaBoost.CRF that incrementally selects features out of a large feature pool as learning proceeds. AdaBoost.CRF is an extension of the standard boosting methodology to structured and partially observed data. We demonstrate that AdaBoost.CRF is able to eliminate irrelevant features and, as a result, returns a very compact feature set without significant loss of accuracy. Parameter estimation of CRFs is generally intractable in arbitrary network structures. This thesis contributes to this area by proposing a learning method called AdaBoost.MRF (which stands for AdaBoosted Markov Random Forests). As learning proceeds, AdaBoost.MRF incrementally builds a tree ensemble (a forest) that covers the original network by selecting one best spanning tree at a time. As a result, we can approximately learn many rich classes of CRFs in linear time. The third theoretical contribution concerns modelling recursive, sequential data, in which each level of resolution is a Markov sequence and each state in the sequence is itself a Markov sequence at a finer grain. One of the key contributions of this thesis is the Hierarchical Conditional Random Field (HCRF), which is an extension of the currently popular sequential CRF and the recent semi-Markov CRF (Sarawagi and Cohen, 2004). Unlike previous CRF work, the HCRF does not assume any fixed graphical structure. Rather, it treats structure as an uncertain aspect and can estimate it automatically from the data. The HCRF is motivated by the Hierarchical Hidden Markov Model (HHMM) (Fine et al., 1998). Importantly, the thesis shows that the HHMM is a special case of the HCRF with slight modification, and the semi-Markov CRF is essentially a flat version of the HCRF.
Central to our contribution in the HCRF is a polynomial-time algorithm for learning and inference based on the Asymmetric Inside-Outside (AIO) family developed by Bui et al. (2004). Another important contribution is to extend the AIO family to address learning with missing data and inference under partially observed labels. We also derive methods to deal with practical concerns associated with the AIO family, including numerical overflow and cubic-time complexity. Finally, we demonstrate good performance of the HCRF against rivals on two applications: indoor video surveillance and noun-phrase chunking.
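For reference, the linear-chain CRF of Lafferty et al. (2001) that this work builds on defines the conditional distribution over a label sequence y given an observation sequence x as follows (a standard textbook form, not reproduced from the thesis):

```latex
p(y \mid x) = \frac{1}{Z(x)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, x, t) \Big),
\qquad
Z(x) = \sum_{y'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, x, t) \Big)
```

Here the f_k are feature functions and the lambda_k are the model parameters; feature selection chooses which f_k to include, and parameter estimation fits the lambda_k, which is what AdaBoost.CRF and AdaBoost.MRF address above.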

    Masked Language Model Scoring

    Full text link
    Pretrained masked language models (MLMs) require finetuning for most NLP tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood scores (PLLs), which are computed by masking tokens one by one. We show that PLLs outperform scores from autoregressive language models like GPT-2 in a variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an end-to-end LibriSpeech model's WER by 30% relative and adds up to +1.7 BLEU on state-of-the-art baselines for low-resource translation pairs, with further gains from domain adaptation. We attribute this success to PLL's unsupervised expression of linguistic acceptability without a left-to-right bias, greatly improving on scores from GPT-2 (+10 points on island effects, NPI licensing in BLiMP). One can finetune MLMs to give scores without masking, enabling computation in a single inference pass. In all, PLLs and their associated pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of pretrained MLMs; e.g., we use a single cross-lingual model to rescore translations in multiple languages. We release our library for language model scoring at https://github.com/awslabs/mlm-scoring. Comment: ACL 2020 camera-ready (presented July 2020).
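To make the PLL computation concrete, here is a minimal sketch assuming the Hugging Face transformers API and the roberta-base checkpoint (it is not the authors' mlm-scoring library): each token is masked in turn and the log-probability the MLM assigns to the true token at the masked position is summed.

```python
# Minimal pseudo-log-likelihood (PLL) scoring sketch with a masked language model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # any pretrained MLM (assumption for this sketch)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the special tokens at the start and end of the sequence.
    for pos in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, pos]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[pos]].item()  # log-prob of the true token
    return total

print(pseudo_log_likelihood("The cat sat on the mat."))
```

A pseudo-perplexity (PPPL) can then be obtained by exponentiating the negative PLL divided by the number of scored tokens.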

    Fast and robust hybrid framework for infant brain classification from structural MRI : a case study for early diagnosis of autism.

    Get PDF
    The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step towards this goal is to obtain an accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model, which serves to learn the visual appearance of the brain texture, and a geometric model, which preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images and adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results were evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: Dice coefficient, 95-percentile Hausdorff distance, and absolute volume difference. The proposed method was ranked first in terms of performance and speed.
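Two of the three evaluation metrics named above are straightforward to compute from binary segmentation masks; the following is a small illustrative sketch in NumPy on synthetic masks, not the challenge's official evaluation code.

```python
# Dice coefficient and absolute volume difference between two binary 3-D masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def absolute_volume_difference(pred: np.ndarray, truth: np.ndarray,
                               voxel_volume: float = 1.0) -> float:
    # Absolute difference in segmented volume, as a percentage of the ground truth.
    v_pred, v_truth = pred.sum() * voxel_volume, truth.sum() * voxel_volume
    return abs(v_pred - v_truth) / v_truth * 100.0

pred = np.random.rand(64, 64, 64) > 0.5   # synthetic masks for illustration only
truth = np.random.rand(64, 64, 64) > 0.5
print(dice_coefficient(pred, truth), absolute_volume_difference(pred, truth))
```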

    Mass spectrometry data mining for cancer detection

    Get PDF
    Early detection of cancer is crucial for successful intervention strategies. Mass spectrometry-based high-throughput proteomics is recognized as a major breakthrough in cancer detection. Many machine learning methods have been used to construct classifiers based on mass spectrometry data for discriminating between cancer stages, yet the classifiers so constructed generally lack biological interpretability. To better assist clinical use, a key step is to discover "biomarker signature profiles", i.e., combinations of a small number of protein biomarkers strongly discriminating between cancer states. This dissertation introduces two innovative algorithms to automatically search for a signature and to construct a high-performance signature-based classifier for cancer discrimination tasks based on mass spectrometry data, such as data acquired by MALDI or SELDI techniques. Our first algorithm assumes that homogeneous groups of mass spectra can be modeled by (unknown) Gibbs distributions to generate an optimal signature and an associated signature-based classifier by robust log-likelihood analysis; our second algorithm uses a stochastic optimization algorithm to search for two lists of biomarkers and then constructs a signature-based classifier. To support these two algorithms theoretically, this dissertation also studies the empirical probability distributions of mass spectrometry data and implements the actual fitting of Markov random fields to these high-dimensional distributions. We have validated our two signature discovery algorithms on several mass spectrometry datasets related to ovarian cancer and colorectal cancer patient groups. For these cancer discrimination tasks, our algorithms have yielded better classification performance than existing machine learning algorithms and, in addition, have generated more interpretable explicit signatures.
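As a rough, hypothetical illustration of what stochastic signature search can look like in practice (it is not the dissertation's Gibbs-distribution or stochastic optimization algorithm), the sketch below repeatedly samples small panels of candidate m/z features and keeps the panel with the best cross-validated classification score on synthetic data.

```python
# Toy stochastic search for a small biomarker "signature" of m/z features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))      # 100 spectra x 500 m/z intensity features (synthetic)
y = rng.integers(0, 2, size=100)     # cancer vs. control labels (synthetic)

best_score, best_signature = -np.inf, None
for _ in range(200):                 # stochastic search over candidate 5-feature panels
    signature = rng.choice(X.shape[1], size=5, replace=False)
    clf = LogisticRegression(max_iter=1000)
    score = cross_val_score(clf, X[:, signature], y, cv=5).mean()
    if score > best_score:
        best_score, best_signature = score, signature

print(sorted(best_signature.tolist()), best_score)
```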

    Generative models : a critical review

    Full text link
    In this thesis we introduce and motivate generative modeling as a central task for machine learning and provide a critical view of the algorithms which have been proposed for solving this task. We overview how generative modeling can be defined mathematically as trying to make an estimating distribution the same as an unknown ground-truth distribution. This can then be quantified in terms of the value of a statistical divergence between the two distributions. We outline the maximum likelihood approach and how it can be interpreted as minimizing KL-divergence. We explore a number of approaches in the maximum likelihood family, while discussing their limitations. Finally, we explore the alternative adversarial approach, which involves studying the differences between an estimating distribution and a real data distribution. We discuss how this approach can give rise to new divergences and methods that are necessary to make adversarial learning successful. We also discuss new evaluation metrics which are required by the adversarial approach. Chapter 2 shows that, by learning generative models of the hidden layers of a deep network, one can identify when the network is being run on data differing from the data seen during training. This allows us to study differences between free-running and teacher-forcing modes in recurrent networks. It also leads to improved robustness to adversarial attacks. Chapter 3 explores an iterative procedure for generation and inference in deep networks, which is inspired by the blocked Gibbs MCMC procedure for sampling from energy-based models. This achieves improved inpainting, generation, and inference by removing the requirement that the prior over the latent variables have a known distribution. Chapter 4 studies whether generative models can be improved by exploiting the knowledge learned by discriminative classification models. We studied this by augmenting autoencoders with additional losses defined on the hidden states of a fixed classifier. In practice we showed that this led to generative models with better focus on salient aspects of the data, and we also discuss limitations of this approach.
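The maximum-likelihood / KL connection mentioned above can be stated in one line (a standard derivation, included here only as a reminder):

```latex
D_{\mathrm{KL}}(p_{\mathrm{data}} \,\|\, p_\theta)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log p_{\mathrm{data}}(x) - \log p_\theta(x)\big]
  = -H(p_{\mathrm{data}}) - \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log p_\theta(x)\big]
```

Since the entropy term does not depend on theta, minimizing the KL divergence over theta is equivalent to maximizing the expected log-likelihood, which the average log-likelihood of the training data approximates.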

    Automatic signal and image-based assessments of spinal cord injury and treatments.

    Get PDF
    Spinal cord injury (SCI) is one of the most common sources of motor disability in humans and often deeply impacts the quality of life of individuals with severe, chronic SCI. In this dissertation, we have developed advanced engineering tools to address three distinct problems that researchers, clinicians and patients are facing in SCI research. In particular, we have proposed a fully automated stochastic framework to quantify the effects of SCI on muscle size and adipose tissue distribution in skeletal muscles by volumetric segmentation of 3-D MRI scans in individuals with chronic SCI as well as non-disabled individuals. We also developed a novel framework for robust and automatic activation detection, feature extraction and visualization of spinal cord epidural stimulation (scES) effects across a high number of scES parameters, to build individualized maps of the muscle recruitment patterns of scES. Finally, in the last part of this dissertation, we introduced an EMG time-frequency analysis framework that implements EMG spectral analysis and machine learning tools to characterize EMG patterns resulting in independent or assisted standing enabled by scES, and to identify the stimulation parameters that promote muscle activation patterns more effective for standing. The neurotechnological advancements proposed in this dissertation have greatly benefited SCI research by accelerating efforts to quantify the effects of SCI on muscle size and functionality, expanding knowledge of the neurophysiological mechanisms involved in re-enabling motor function with epidural stimulation and of the selection of stimulation parameters, and helping patients with complete paralysis achieve faster motor recovery.
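As a small, hypothetical illustration of the kind of EMG spectral feature extraction referred to above (not the dissertation's framework), the following sketch estimates a power spectrum with Welch's method and computes the signal's median frequency, assuming a synthetic signal sampled at 2 kHz.

```python
# Welch power spectral density and median frequency of an EMG-like signal.
import numpy as np
from scipy.signal import welch

fs = 2000.0                                            # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
emg = np.random.default_rng(0).normal(size=t.size)     # stand-in for a recorded EMG burst

freqs, psd = welch(emg, fs=fs, nperseg=1024)           # power spectral density estimate
cumulative = np.cumsum(psd) / np.sum(psd)
median_freq = freqs[np.searchsorted(cumulative, 0.5)]  # frequency splitting spectral power in half

print(f"median frequency: {median_freq:.1f} Hz")
```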

    Probabilistic learning: Sparsity and non-decomposable losses

    Get PDF

    Statistical inference from large-scale genomic data

    Get PDF
    This thesis explores the potential of statistical inference methodologies in their applications to functional genomics. In essence, it summarises algorithmic findings in this field, providing step-by-step analytical methodologies for deciphering biological knowledge from large-scale genomic data, mainly microarray gene expression time series. The thesis covers a range of topics in the investigation of complex multivariate genomic data. One focus is the use of clustering as a method of inference; another is cluster validation to extract meaningful biological information from the data. Information gained from the application of these techniques can then be used conjointly in the elucidation of gene regulatory networks, the ultimate goal of this type of analysis. First, a new tight clustering method for gene expression data is proposed to obtain tighter and potentially more informative gene clusters. Next, to fully utilise biological knowledge in cluster validation, a validity index is defined based on one of the most important ontologies within the bioinformatics community, the Gene Ontology. The method bridges a gap in the current literature, in the sense that it takes into account not only the variation of Gene Ontology categories in biological specificity and their significance to the gene clusters, but also the complex structure of the Gene Ontology. Finally, Bayesian probability is applied to making inferences from heterogeneous genomic data, integrated with the previous efforts in this thesis, with the aim of large-scale gene network inference. The proposed system incorporates a stochastic process to achieve robustness to noise, yet remains efficient enough for large-scale analysis. Ultimately, the solutions presented in this thesis serve as building blocks of an intelligent system for interpreting large-scale genomic data and understanding the functional organisation of the genome.
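To ground the clustering-and-validation workflow described above, here is a generic sketch that clusters synthetic gene expression time series with k-means and scores each partition with the silhouette index; the thesis instead proposes a tight clustering method and a Gene Ontology-based validity index, for which this standard pipeline is only a stand-in.

```python
# Cluster synthetic expression profiles and compare partitions with a generic validity index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
expression = rng.normal(size=(1000, 12))   # 1000 genes x 12 time points (synthetic)

for k in (4, 6, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expression)
    print(k, round(silhouette_score(expression, labels), 3))
```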