381 research outputs found

    A study of the classification of low-dimensional data with supervised manifold learning

    Supervised manifold learning methods learn data representations by preserving the geometric structure of the data while enhancing the separation between data samples from different classes. In this work, we propose a theoretical study of supervised manifold learning for classification. We consider nonlinear dimensionality reduction algorithms that yield linearly separable embeddings of training data and present generalization bounds for this type of algorithm. A necessary condition for satisfactory generalization performance is that the embedding allow the construction of a sufficiently regular interpolation function in relation to the separation margin of the embedding. We show that for supervised embeddings satisfying this condition, the classification error decays at an exponential rate with the number of training samples. Finally, we examine the separability of supervised nonlinear embeddings that aim to preserve the low-dimensional geometric structure of data based on graph representations. The proposed analysis is supported by experiments on several real data sets.
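    A minimal sketch of the kind of pipeline the abstract describes, under my own assumptions rather than the paper's algorithm: a graph-based, class-aware embedding in the spirit of supervised Laplacian eigenmaps, followed by a linear classifier to check that the embedding of the training data is (close to) linearly separable. The function name `supervised_embedding` and the parameters `k` and `alpha` are illustrative placeholders.

```python
# Illustrative sketch, not the paper's algorithm: a class-aware graph embedding
# followed by a linear-separability check on the training data.
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian
from sklearn.datasets import load_iris
from sklearn.neighbors import kneighbors_graph
from sklearn.svm import LinearSVC

def supervised_embedding(X, y, n_components=2, k=10, alpha=5.0):
    """Laplacian-eigenmaps-style embedding with class-aware edge weights:
    within-class edges are strengthened, between-class edges are weakened."""
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                              # symmetrise the kNN graph
    W = W.toarray()
    same = (y[:, None] == y[None, :]).astype(float)
    W = W * (alpha * same + (1.0 - same) / alpha)    # supervision re-weights edges
    L = laplacian(W, normed=True)
    # The smallest nontrivial eigenvectors of the graph Laplacian give the embedding.
    vals, vecs = eigh(L)
    return vecs[:, 1:n_components + 1]

X, y = load_iris(return_X_y=True)
Z = supervised_embedding(X, y, n_components=2)

# Check linear separability of the embedded training data with a linear SVM.
clf = LinearSVC(C=10.0).fit(Z, y)
print("training accuracy on the embedding:", clf.score(Z, y))
```

    In this sketch the supervision only re-weights graph edges; the exponential error decay discussed in the abstract concerns how the margin of such embeddings relates to the regularity of an interpolating classifier, which the snippet does not attempt to verify.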

    Deep Learning for Inverse Problems: Performance Characterizations, Learning Algorithms, and Applications

    Deep learning models have witnessed immense empirical success over the last decade. However, in spite of their widespread adoption, a profound understanding of the generalisation behaviour of these over-parameterised architectures is still missing. In this thesis, we provide one route towards such an understanding via a data-dependent characterisation of the generalisation capability of deep neural networks based on data representations. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients of the learning problem such as the complexity of the data space, the cardinality of the training set, and the Lipschitz properties of a deep neural network. We then specialise our analysis to a specific class of model-based regression problems, namely inverse problems. These problems often come with well-defined forward operators that map the variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in the deep learning approaches increasingly used to solve inverse problems. We offer a generalisation error bound that, apart from the other factors, depends on the Jacobian of the composition of the forward operator with the neural network. Motivated by our analysis, we then propose a `plug-and-play' regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We also provide a method that allows us to tightly upper bound the norms of the Jacobians of the relevant operators and that is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in the classical compressed sensing setting and in inverse problems of interest in biomedical imaging.
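    To make the `plug-and-play' regulariser idea concrete, here is an illustrative sketch, not the thesis implementation: the training loss is augmented with a stochastic (Hutchinson-style) estimate of the squared Frobenius norm of the Jacobian of the composition of the forward operator with the network, computed via vector-Jacobian products. The names `A` (forward operator), `net` (reconstruction network) and the weight `lam` are assumed placeholders.

```python
# Illustrative sketch (not the thesis implementation): a Jacobian-penalty
# regulariser for a learned inverse-problem solver. `A` (forward operator),
# `net` (reconstruction network) and `lam` are assumed placeholders.
import torch

def jacobian_penalty(net, A, y, n_probes=1):
    """Hutchinson-style estimate of ||J||_F^2, where J is the Jacobian of
    x -> A(net(x)) evaluated at the measurements y."""
    y = y.detach().requires_grad_(True)
    out = A(net(y))
    penalty = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(out)
        # vector-Jacobian product v^T J via reverse-mode autodiff
        (vjp,) = torch.autograd.grad(out, y, grad_outputs=v,
                                     create_graph=True, retain_graph=True)
        penalty = penalty + vjp.pow(2).sum() / n_probes
    return penalty

def training_step(net, A, y, x_true, optimiser, lam=1e-3):
    optimiser.zero_grad()
    x_hat = net(y)
    loss = torch.nn.functional.mse_loss(x_hat, x_true)    # data-fit term
    loss = loss + lam * jacobian_penalty(net, A, y)        # model-aware regulariser
    loss.backward()
    optimiser.step()
    return loss.item()
```

    Each probe costs one additional backward pass; the thesis additionally develops a tighter and cheaper way to upper bound such Jacobian norms, which this naive estimator does not reproduce.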

    Metric Learning with Lipschitz Continuous Functions

    Classification is a fundamental problem in the field of statistical machine learning. In classification, issues of nonlinear separability and multimodality are frequently encountered even in relatively small data sets. Distance-based classifiers, such as the nearest neighbour (NN) classifier, which classifies a new instance by computing distances between this instance and the training instances, have been found useful for dealing with nonlinear separability and multimodality. However, the performance of distance-based classifiers depends heavily on the underlying distance metric, so it is valuable to study metric learning, which enables algorithms to automatically learn a suitable metric from available data. In this thesis, I discuss the topic of metric learning with Lipschitz continuous functions. The classifiers are restricted to have certain Lipschitz continuity properties, so that performance guarantees for the classifiers, in the form of probably approximately correct (PAC) learning bounds, can be obtained. In Chapter 2, I propose a framework in which the metric is learned with a large-margin-ratio criterion. Both the inter-class margin and the intra-class dispersion are considered in the criterion, so as to enhance the generalisation ability of the classifiers. Several well-known metric learning algorithms can be shown to be special cases of the proposed framework. In Chapter 3, I suggest learning multiple local metrics to deal with multimodality problems. I define an intuitive distance with local metrics and influential regions, and subsequently propose a novel local metric learning method for distance-based classification. The key intuition is to partition the metric space into influential regions and a background region, and then restrict the effectiveness of each local metric to its related influential regions. In Chapter 4, metric learning with instance extraction (MLIE) is discussed. A major drawback of the NN classifier is that it needs to store all training instances, so it suffers from storage and computation problems. I therefore propose an algorithm to extract a small number of useful instances, which reduces storage costs as well as computation costs at test time. Furthermore, the proposed instance extraction method can be understood as an elegant way to perform local linear classification, i.e. to simultaneously learn the positions of local areas and the linear classifiers inside them. In Chapter 5, based on an algorithm-dependent PAC bound, another MLIE algorithm is proposed. Besides the Lipschitz continuity requirement with respect to the parameters, a Lipschitz continuity requirement on the gradient with respect to the parameters is also considered; smooth classifiers and smooth loss functions are therefore proposed in this chapter. The classifiers proposed in Chapters 2 and 3 have bounded values of lip(h, x), together with a PAC bound, where lip(h, x) denotes the Lipschitz constant of the classifier with respect to the input space X. The classifiers proposed in Chapter 4 enjoy a bounded Lipschitz constant with respect to the input space as well, together with a tighter PAC bound. In Chapter 5, to take the properties of the optimisation algorithm into account at the same time, an algorithm-dependent PAC bound based on Lipschitz smoothness is derived.
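    As a rough illustration of the Chapter 2 idea of learning a metric that accounts for both inter-class margin and intra-class dispersion, the sketch below is written under my own assumptions rather than as the thesis algorithm: it learns a linear map L for a Mahalanobis-style metric using an additive (hinge-margin plus dispersion) surrogate of the margin-ratio criterion; the data, dimensions and hyperparameters are arbitrary placeholders.

```python
# Rough illustration (not the thesis algorithm): learn a linear map L for a
# Mahalanobis-style metric d(a, b) = ||L a - L b||, shrinking intra-class
# dispersion while enforcing a hinge margin on inter-class pairs.
import torch

def metric_loss(L, X, y, margin=1.0):
    Z = X @ L.T                                    # embed all points with L
    d2 = torch.cdist(Z, Z).pow(2)                  # pairwise squared distances
    same = (y[:, None] == y[None, :]).float()      # same-class indicator
    eye = torch.eye(len(y))
    n_intra = (same - eye).sum().clamp_min(1.0)
    n_inter = (1 - same).sum().clamp_min(1.0)
    intra = (d2 * (same - eye)).sum() / n_intra    # intra-class dispersion
    inter = (torch.relu(margin - d2) * (1 - same)).sum() / n_inter  # margin hinge
    return intra + inter

# Toy usage with random data; dimensions and hyperparameters are arbitrary.
torch.manual_seed(0)
X = torch.randn(60, 5)
y = torch.randint(0, 3, (60,))
L = torch.eye(5, requires_grad=True)
opt = torch.optim.Adam([L], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    metric_loss(L, X, y).backward()
    opt.step()
```

    Constraining or regularising L (for instance, bounding its norm) is what keeps the resulting distance-based classifier within a Lipschitz class for which PAC-style guarantees of the kind studied in the thesis can be stated; the toy sketch omits such constraints.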