
    Discovering structure without labels

    The scarcity of labels combined with an abundance of data makes unsupervised learning more attractive than ever. Without annotations, inductive biases must guide the identification of the most salient structure in the data. This thesis contributes to two aspects of unsupervised learning: clustering and dimensionality reduction, and falls into two parts. In the first part, we introduce Mod Shift, a clustering method for point data that uses a distance-based notion of attraction and repulsion to determine the number of clusters and the assignment of points to clusters. It iteratively moves points towards crisp clusters like Mean Shift, but also has close ties to the Multicut problem via its loss function. As a result, it connects signed graph partitioning to clustering in Euclidean space. The second part treats dimensionality reduction and, in particular, the prominent neighbor embedding methods UMAP and t-SNE. We analyze the details of UMAP's implementation and find its actual loss function, which differs drastically from the one usually stated. This discrepancy allows us to explain some typical artifacts in UMAP plots, such as the dataset-size-dependent tendency to produce overly crisp substructures. Contrary to common belief, we find that UMAP's high-dimensional similarities are not critical to its success. Based on UMAP's actual loss, we describe its precise connection to the other state-of-the-art visualization method, t-SNE. The key insight is a new, exact relation between two contrastive loss functions: negative sampling, employed by UMAP, and noise-contrastive estimation, which has been used to approximate t-SNE. As a result, we can explain why UMAP embeddings appear more compact than t-SNE plots: the attraction between neighbors is increased. Varying the attraction strength further, we obtain a spectrum of neighbor embedding methods that encompasses both UMAP- and t-SNE-like versions as special cases. Moving from more attraction to more repulsion shifts the focus of the embedding from continuous, global structure to more discrete, local structure of the data. Finally, we emphasize the link between contrastive neighbor embeddings and self-supervised contrastive learning, and show that different flavors of contrastive losses can work for both with few noise samples.
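
    As a rough illustration of the attraction-repulsion trade-off discussed in this abstract, the sketch below evaluates a simplified negative-sampling loss for a 2D embedding with an adjustable attraction weight (stronger attraction giving more UMAP-like, weaker attraction more t-SNE-like behavior). The Cauchy similarity kernel and all names (`similarity`, `negative_sampling_loss`, `attraction`) are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity(emb, i, j):
    """Low-dimensional Cauchy similarity q_ij = 1 / (1 + ||y_i - y_j||^2)."""
    d2 = np.sum((emb[i] - emb[j]) ** 2)
    return 1.0 / (1.0 + d2)

def negative_sampling_loss(emb, edges, n_points, attraction=1.0, n_neg=5):
    """Attract each neighbor edge, repel n_neg random negatives per edge."""
    loss = 0.0
    for i, j in edges:
        # Attractive term for the neighbor pair, scaled by the attraction weight.
        loss -= attraction * np.log(similarity(emb, i, j) + 1e-12)
        # Repulsive terms for randomly sampled non-neighbors.
        for k in rng.integers(0, n_points, size=n_neg):
            if k == i or k == j:
                continue
            loss -= np.log(1.0 - similarity(emb, i, k) + 1e-12)
    return loss / len(edges)

# Toy usage: 100 random 2D embedding points with a ring-shaped edge list.
n = 100
embedding = rng.normal(size=(n, 2))
edges = [(i, (i + 1) % n) for i in range(n)]
print(negative_sampling_loss(embedding, edges, n, attraction=4.0))
```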

    A Brief Introduction to Machine Learning for Engineers

    This monograph aims to provide an introduction to key concepts, algorithms, and theoretical results in machine learning. The treatment concentrates on probabilistic models for supervised and unsupervised learning problems. It introduces fundamental concepts and algorithms by building on first principles, while also exposing the reader to more advanced topics with extensive pointers to the literature, within a unified notation and mathematical framework. The material is organized according to clearly defined categories, such as discriminative and generative models, frequentist and Bayesian approaches, exact and approximate inference, as well as directed and undirected models. This monograph is meant as an entry point for researchers with a background in probability and linear algebra. Comment: This is an expanded and improved version of the original posting. Feedback is welcome.

    Regularization, Adaptation and Generalization of Neural Networks

    The ability to generalize to unseen data is one of the fundamental, desired properties of a learning system. This thesis reports different research efforts in improving the generalization properties of machine learning systems at different levels, focusing on neural networks for computer vision tasks. First, a novel regularization method is presented, Curriculum Dropout. It combines Curriculum Learning and Dropout, and shows better regularization effects than the original algorithm in a variety of tasks, without requiring substantially any additional implementation effort. While regularization methods are extremely powerful for generalizing better to unseen data from the same distribution as the training one, they are not very successful in mitigating the dataset bias issue. This problem consists in models learning the peculiarities of the training set and generalizing poorly to unseen domains. Unsupervised domain adaptation has been one of the main solutions to this problem. Two novel adaptation approaches are presented in this thesis. First, we introduce the DIFA algorithm, which combines domain invariance and feature augmentation to better adapt models to new domains by relying on adversarial training. Next, we propose an original procedure that exploits the "mode collapse" behavior of Generative Adversarial Networks. Finally, the general applicability of domain adaptation algorithms is questioned, due to the assumptions of knowing the target distribution a priori and being able to sample from it. A novel framework is presented to overcome these liabilities, where the goal is to generalize to unseen domains by relying only on data from a single source distribution. We face this problem through the lens of robust statistics, defining a worst-case formulation where the model parameters are optimized with respect to populations that lie within a fixed distance from the source domain on a semantic space.
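
    For intuition about the curriculum idea behind Curriculum Dropout mentioned above, the sketch below implements a dropout layer whose rate is annealed from roughly zero towards a target value as training progresses, so early training sees less-perturbed inputs. The exponential schedule and the names `CurriculumDropout`, `p_target`, and `gamma` are assumptions made for illustration, not the thesis code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CurriculumDropout(nn.Module):
    """Dropout whose rate is annealed from ~0 towards p_target during training."""

    def __init__(self, p_target=0.5, gamma=1e-3):
        super().__init__()
        self.p_target = p_target  # dropout probability approached in the limit
        self.gamma = gamma        # annealing speed of the schedule
        self.step = 0

    def current_p(self):
        # Exponential curriculum: near 0 early on, converging to p_target.
        return self.p_target * (1.0 - math.exp(-self.gamma * self.step))

    def forward(self, x):
        if self.training:
            self.step += 1
            return F.dropout(x, p=self.current_p(), training=True)
        return x

# Usage: drop-in replacement for nn.Dropout inside a network.
layer = CurriculumDropout(p_target=0.5, gamma=1e-3)
layer.train()
out = layer(torch.randn(8, 32))
```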

    A precise bare simulation approach to the minimization of some distances. Foundations

    In information theory -- as well as in the adjacent fields of statistics, machine learning, artificial intelligence, signal processing and pattern recognition -- many flexibilizations of the omnipresent Kullback-Leibler information distance (relative entropy) and of the closely related Shannon entropy have become frequently used tools. The main goal of this paper is to tackle the corresponding constrained minimization (respectively, maximization) problems by a newly developed dimension-free bare (pure) simulation method. Within our discrete setup of arbitrary dimension, almost no assumptions (such as convexity) on the set of constraints are needed, and our method is precise (i.e., it converges in the limit). As a side effect, we also derive an innovative way of constructing new useful distances/divergences. To illustrate the core of our approach, we present numerous examples. The potential for widespread applicability is indicated, too; in particular, we deliver many recent references for uses of the involved distances/divergences and entropies in various research fields (which may also serve as an interdisciplinary interface).
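
    As a generic illustration of the kind of problem this abstract targets (and not of the paper's bare-simulation method itself), the sketch below minimizes the Kullback-Leibler divergence D(q || p) over discrete distributions q subject to a simple moment constraint, using an off-the-shelf constrained optimizer. The constraint set, the reference distribution, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([0.1, 0.2, 0.3, 0.4])        # reference distribution
values = np.array([1.0, 2.0, 3.0, 4.0])   # support points
target_mean = 2.0                          # illustrative moment constraint on q

def kl(q, p):
    """Discrete Kullback-Leibler divergence D(q || p)."""
    q = np.clip(q, 1e-12, None)
    return float(np.sum(q * np.log(q / p)))

constraints = [
    {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},           # q is a distribution
    {"type": "eq", "fun": lambda q: q @ values - target_mean},  # mean constraint
]

res = minimize(kl, x0=np.full(4, 0.25), args=(p,),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints)
print(res.x, kl(res.x, p))
```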
