4 research outputs found

    Proceedings of the Fifth Meeting on Mathematics of Language: MOL5

    Learning from Partially Labeled Data: Unsupervised and Semi-supervised Learning on Graphs and Learning with Distribution Shifting

    This thesis focuses on two fundamental machine learning problems: unsupervised learning, where no label information is available, and semi-supervised learning, where a small amount of labeled data is given in addition to unlabeled data. These problems arise in many real-world applications, such as Web analysis and bioinformatics, where a large amount of data is available but little or no labeled data exists. Obtaining classification labels in these domains is usually quite difficult because it involves either manual labeling or physical experimentation. This thesis approaches these problems from two perspectives: graph-based and distribution-based.

    First, I investigate a series of graph-based learning algorithms that are able to exploit information embedded in different types of graph structures. These algorithms allow label information to be shared between nodes in the graph, ultimately communicating information globally to yield effective unsupervised and semi-supervised learning. In particular, I extend existing graph-based learning algorithms, currently based on undirected graphs, to more general graph types, including directed graphs, hypergraphs, and complex networks. These richer graph representations allow one to more naturally capture the intrinsic data relationships that exist, for example, in Web data, relational data, bioinformatics, and social networks. For each of these generalized graph structures I show how information propagation can be characterized by a distinct random walk model, and then use this characterization to develop new unsupervised and semi-supervised learning algorithms.

    Second, I investigate a more statistically oriented approach that explicitly models a learning scenario where the training and test examples come from different distributions. This is a difficult situation for standard statistical learning approaches, since they typically assume that the training and test distributions are similar, if not identical. To achieve good performance in this scenario, I utilize unlabeled data to correct the bias between the training and test distributions. A key idea is to produce resampling weights for bias correction by working directly in a feature space, bypassing the problem of explicit density estimation. The technique can easily be applied to many different supervised learning algorithms, automatically adapting their behavior to cope with distribution shift between training and test data.
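
    The abstract describes label propagation over generalized graph structures but, being an abstract, gives no update rule. As a minimal illustrative sketch (not code from the thesis), the following Python implements the classic label-propagation update on an undirected graph, the baseline case the abstract says is being extended: each node repeatedly mixes its neighbors' current label estimates through a symmetrically normalized random-walk operator. All function names and parameters here are hypothetical.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.99, n_iter=200):
    """Semi-supervised label propagation on an undirected graph.

    W     : (n, n) symmetric non-negative affinity matrix
    Y     : (n, c) one-hot rows for labeled nodes, zero rows otherwise
    alpha : in (0, 1); how strongly labels diffuse along edges
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt        # normalized propagation operator
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        # mix propagated neighbor labels with the original seed labels
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)                # hard class prediction per node
```

    The iteration converges to (1 - alpha)(I - alpha S)^{-1} Y, so label mass injected at the labeled nodes spreads along graph paths to every reachable node, which is the globally communicated label information the abstract refers to.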
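
    The distribution-shift part, producing resampling weights directly in a feature space while bypassing density estimation, matches the well-known kernel mean matching construction; the sketch below is one plausible reading, not necessarily the thesis's exact formulation. It assumes an RBF kernel and, for brevity, drops the usual constraint that the weights average to one, keeping only the box constraint; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kmm_weights(X_tr, X_te, gamma=1.0, B=10.0):
    """Resampling weights beta for the training points so that the
    beta-weighted training mean matches the test mean in feature
    space; no density of either distribution is ever estimated."""
    n_tr, n_te = len(X_tr), len(X_te)
    K = rbf_kernel(X_tr, X_tr, gamma)                        # (n_tr, n_tr)
    kappa = (n_tr / n_te) * rbf_kernel(X_tr, X_te, gamma).sum(axis=1)
    # minimize 0.5 * b'Kb - kappa'b  subject to  0 <= b_i <= B
    res = minimize(lambda b: 0.5 * b @ K @ b - kappa @ b,
                   x0=np.ones(n_tr),
                   jac=lambda b: K @ b - kappa,
                   bounds=[(0.0, B)] * n_tr,
                   method="L-BFGS-B")
    return res.x
```

    The returned weights can then be passed as per-example sample weights to most weighted learners (for instance, the sample_weight argument accepted by many scikit-learn estimators), which is how such a correction plugs into many different supervised learning algorithms.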

    Inclusional Theories in Declarative Programming

    No full text
    When studying specific deduction techniques and strategies for the operational semantics of logic programming languages, special emphasis was traditionally placed on the equality relation, due to its relevance in a variety of domains. More recently, however, emphasis has shifted to partial order relations, and specifically to inclusions, as a basis for several different specification frameworks. It is therefore attractive to work towards a logic programming language that deals efficiently with inclusions and that may be useful as a rapid prototyping tool. Term rewriting appears to be a suitable technique for theorem proving with inclusional theories, since it naturally applies to arbitrary (possibly non-symmetric) transitive relations, but it turns out to be impractical in general. Several restrictions therefore need to be placed on inclusional theories in order to improve the inference mechanism and to define efficient deduction strategies, which could be used in an operational semantics of an inclusio..
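
    To make the point about non-symmetric transitive relations concrete, here is a small hypothetical sketch, in Python rather than a logic programming language and not taken from the paper: ground inclusion facts are applied as one-directional rewrite steps, transitivity amounts to chaining steps into a path, and, unlike equational rewriting, no symmetric closure is ever taken.

```python
from collections import deque

def derivable(inclusions, s, t):
    """Check whether s <= t follows from ground inclusion facts.

    Each fact (a, b), meaning a <= b, is used left-to-right only:
    transitivity is path composition, and no symmetric closure is
    taken, since an inclusion, unlike equality, is not symmetric.
    """
    graph = {}
    for a, b in inclusions:
        graph.setdefault(a, set()).add(b)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:                 # reflexivity: s == t succeeds at once
            return True
        queue.extend(v for v in graph.get(u, ()) if v not in seen)
        seen.update(graph.get(u, ()))
    return False

facts = [("nat", "int"), ("int", "real")]
print(derivable(facts, "nat", "real"))   # True: nat <= int <= real
print(derivable(facts, "real", "nat"))   # False: no symmetric step exists
```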