
    Modeling Dependencies in Natural Languages with Latent Variables

    In this thesis, we investigate the use of latent variables to model complex dependencies in natural languages. Traditional models, which have a fixed parameterization, often make strong independence assumptions that lead to poor performance. This problem is often addressed by incorporating additional dependencies into the model (e.g., using higher order N-grams for language modeling). These added dependencies can increase data sparsity and/or require expert knowledge, together with trial and error, in order to identify and incorporate the most important dependencies (as in lexicalized parsing models). Traditional models, when developed for a particular genre, domain, or language, are also often difficult to adapt to another. In contrast, previous work has shown that latent variable models, which automatically learn dependencies in a data-driven way, are able to flexibly adjust the number of parameters based on the type and the amount of training data available. We have created several different types of latent variable models for a diverse set of natural language processing applications, including novel models for part-of-speech tagging, language modeling, and machine translation, and an improved model for parsing. These models perform significantly better than traditional models. We have also created and evaluated three different methods for improving the performance of latent variable models. While these methods can be applied to any of our applications, we focus our experiments on parsing. The first method involves self-training, i.e., we train models using a combination of gold standard training data and a large amount of automatically labeled training data. We conclude from a series of experiments that the latent variable models benefit much more from self-training than conventional models, apparently due to their flexibility to adjust their model parameterization to learn more accurate models from the additional automatically labeled training data. 
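The self-training protocol described above (train on gold-standard data, automatically label additional data, retrain on the union) can be sketched as follows. The `train` and `label` helpers are hypothetical stand-ins for the thesis's actual latent variable parser and annotation pipeline, not real implementations:

```python
# Hedged sketch of generic self-training; the trainer and annotator
# below are toy placeholders, not the thesis's latent variable models.

def train(examples):
    """Stand-in trainer: here it simply memorizes labeled pairs."""
    return dict(examples)

def label(model, items, default="NN"):
    """Stand-in annotator: looks up each item, falling back to a default."""
    return [(x, model.get(x, default)) for x in items]

def self_train(gold, unlabeled):
    base = train(gold)              # 1. train on gold-standard data
    auto = label(base, unlabeled)   # 2. auto-label the extra data
    return train(gold + auto)       # 3. retrain on gold + auto-labeled

gold = [("a", "DT"), ("dog", "NN")]
model = self_train(gold, ["a", "cat"])
```

The key design point the thesis exploits is step 3: a latent variable model can grow its parameterization to absorb the extra automatically labeled data, whereas a fixed-parameterization model cannot.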
The second method takes advantage of the variability among latent variable models to combine multiple models for enhanced performance. We investigate several different training protocols for combining self-training with model combination. We conclude that these two techniques are complementary and can be effectively combined to train very high quality parsing models. The third method replaces the generative multinomial lexical model of latent variable grammars with a feature-rich log-linear lexical model, providing a principled solution that addresses data sparsity, handles out-of-vocabulary words, and exploits overlapping features during model induction. We conclude from experiments that the resulting grammars are able to effectively parse three different languages. This work contributes to natural language processing by creating flexible and effective latent variable models for several different languages. Our investigation of self-training, model combination, and log-linear models also provides insights into the effective application of these machine learning techniques to other disciplines.
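At its simplest, the model-combination idea can be illustrated by majority voting over the outputs of several independently trained models. This is a toy sketch, not the thesis's actual parse-combination procedure:

```python
# Minimal illustration of model combination by majority vote:
# each model emits one label per position; ties go to the most
# common label seen first by Counter.
from collections import Counter

def combine(predictions):
    """Majority vote over per-position predictions from several models."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

m1 = ["NP", "VP", "NP"]
m2 = ["NP", "VP", "PP"]
m3 = ["NP", "NP", "NP"]
voted = combine([m1, m2, m3])  # -> ["NP", "VP", "NP"]
```

The variability among latent variable models (due to different random initializations of the latent annotations) is exactly what makes such combination useful: models that err in different places can correct one another.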

    The hyperspace of the regions below of continuous maps is homeomorphic to c0

    For a compact metric space (X, d), we use ↓USC(X) and ↓C(X) to denote the families of the regions below of all upper semi-continuous maps and the regions below of all continuous maps from X to I = [0,1], respectively. In this paper, we consider the two spaces topologized as subspaces of the hyperspace Cld(X×I) consisting of all non-empty closed sets in X×I endowed with the Vietoris topology. We shall show that ↓C(X) is Baire if and only if the set of isolated points is dense in X, but ↓C(X) is not a Gδσ-set in ↓USC(X) unless X is finite. As the main result, we shall prove that if X is an infinite locally connected compact metric space, then (↓USC(X), ↓C(X)) ≈ (Q, c₀), where Q = [−1,1]^ω is the Hilbert cube and c₀ = {(xₙ) ∈ Q : limₙ→∞ xₙ = 0}.

    Image Segmentation Based on Intuitionistic Type-2 FCM Algorithm

    Because the standard fuzzy clustering algorithm uses only intensity information, its image segmentation accuracy is not high enough. We therefore propose a hybrid clustering algorithm that combines an intuitionistic fuzzy factor with local spatial information. Experimental results show that the proposed algorithm is superior to other methods in image segmentation accuracy and improves robustness.
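For context, here is a minimal NumPy sketch of the standard fuzzy c-means (FCM) baseline that the proposed intuitionistic type-2 variant extends, applied to 1-D pixel intensities. The intuitionistic fuzzy factor and the local spatial term from the paper are deliberately omitted; all parameter values are illustrative:

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on a 1-D intensity array x.

    c: number of clusters, m: fuzzifier (m > 1). Returns cluster
    centers and the (c, N) membership matrix whose columns sum to 1.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)     # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                    # normalized membership update
    return centers, u

pixels = np.array([0.0, 0.1, 0.05, 0.9, 0.95, 1.0])
centers, u = fcm(pixels)                      # centers near 0.05 and 0.95
```

The paper's contribution sits inside the membership update: the intuitionistic fuzzy factor reweights the distances, and the local spatial term pulls a pixel's membership toward those of its neighbors, which is what improves robustness to noise.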

    State Feedback H∞ Control

    A new state feedback H∞ control scheme for boiler-turbine power units is presented, based on an improved particle swarm optimization algorithm. First, the nonlinear system is transformed into a linear time-varying system; the H∞ control problem is then reduced to the solution of a Riccati equation. Because the performance of the H∞ controller depends on the selection of the matrix P, an improved particle swarm optimization (PSO) algorithm, which incorporates a differential evolution algorithm, is used to solve the Riccati equation. The key idea is that mutation and crossover are introduced when generating a new population, which improves population diversity, helps eliminate the stagnation caused by premature convergence, and speeds up convergence. Finally, real-time optimization of the controller parameters is realized. Theoretical analysis and simulation results show that the resulting state feedback H∞ controller ensures asymptotic stability of the system and achieves the dual objectives of stabilizing the system and suppressing disturbances. The system works well over a wide range of operating points.
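The hybrid optimizer described above (PSO velocity/position updates, with DE-style mutation and crossover injected to maintain population diversity) can be sketched as follows. The sphere function is a placeholder objective, not the paper's actual Riccati-equation residual, and all hyperparameter values are illustrative assumptions:

```python
import random

def pso_de(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5,
           F=0.5, CR=0.9, seed=1):
    """Hedged sketch: PSO with DE mutation/crossover to fight
    premature convergence. Minimizes f over R^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=f)[:]                       # global best
    for _ in range(iters):
        for i in range(n):
            # standard PSO velocity and position update
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            # DE-style mutation + crossover keeps the swarm diverse
            a, b, c = rng.sample(range(n), 3)
            trial = [X[a][d] + F * (X[b][d] - X[c][d])
                     if rng.random() < CR else X[i][d]
                     for d in range(dim)]
            if f(trial) < f(X[i]):             # greedy DE selection
                X[i] = trial
            if f(X[i]) < f(P[i]):              # update bests
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)       # placeholder objective
best = pso_de(sphere)                          # converges toward the origin
```

In the paper's setting, the objective would instead score candidate matrices P by how well they satisfy the Riccati equation; the DE step plays the role the abstract describes, restoring diversity when the swarm stagnates near a premature optimum.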