    Learning Local Metrics and Influential Regions for Classification

    The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data. To address the problem of multimodality, it is desirable to learn local metrics. In this short paper, we define a new, intuitive distance built from local metrics and influential regions, and subsequently propose a novel local metric learning method for distance-based classification. Our key intuition is to partition the metric space into influential regions and a background region, and then restrict the effect of each local metric to its associated influential regions. We learn the local metrics and influential regions to reduce the empirical hinge loss, and regularize the parameters on the basis of a resulting learning bound. Encouraging experimental results are obtained on a variety of popular public data sets.
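
    A minimal sketch of the kind of distance the abstract describes, assuming each influential region is a ball with a centre, a radius, and its own Mahalanobis matrix, and that point pairs not covered by any region fall back to a background (identity) metric; the membership rule below is an illustrative simplification, not the paper's exact definition:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def local_metric_distance(x, y, centers, radii, metrics):
    """If both points lie inside the same influential region (a ball of
    radius r around centre c), use that region's local metric M;
    otherwise fall back to the background (identity) metric."""
    for c, r, M in zip(centers, radii, metrics):
        if np.linalg.norm(x - c) <= r and np.linalg.norm(y - c) <= r:
            return mahalanobis_sq(x, y, M)
    return mahalanobis_sq(x, y, np.eye(len(x)))  # background region

# Toy usage: one influential region whose metric stretches the first axis.
centers, radii = [np.zeros(2)], [1.0]
metrics = [np.diag([4.0, 1.0])]
print(local_metric_distance(np.array([0.1, 0.2]), np.array([0.3, 0.2]),
                            centers, radii, metrics))
```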

    Beyond linear similarity function learning

    Being able to measure the similarity between two patterns is an underlying task in many machine learning and data mining applications. However, handcrafting an effective similarity function for a specific application is difficult and tedious. This observation has led to the emergence of similarity function learning as a topic in the machine learning community. It consists of designing algorithms that automatically learn a similarity function from a set of labeled data. In this thesis, we explore advanced similarity function concepts: local metrics, deep metric learning, and computing similarity under data uncertainty.

    Linear metric learning is a widely used methodology for learning a similarity function from a set of similar/dissimilar example pairs. Using a single linear metric may be too restrictive an assumption when handling heterogeneous datasets. Lately, local metric learning methods have been introduced to overcome this limitation. However, most such methods are subject to constraints that prevent their usage in many applications; for example, some require knowledge of all possible class labels during training. In this thesis, we present a novel local metric learning method that overcomes some limitations of previous approaches.

    Deep learning has become a major topic in machine learning. Over the last few years, it has been successfully applied to various machine learning tasks such as classification and regression. In this thesis, we illustrate how neural networks can be used to learn similarity functions that surpass linear and local metric learning methods.

    Often, similarity functions have to deal with noisy feature vectors, and in this context standard similarity learning methods may yield unsatisfactory performance. In this thesis, we propose a method that leverages additional information on the noise magnitude to outperform standard methods.
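
    The deep-metric-learning direction can be illustrated with a small sketch in which a nonlinear embedding network replaces the single linear map of linear metric learning and is trained with a contrastive-style pairwise loss. The network shapes and the loss are generic assumptions, not the thesis's actual architecture or objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W1, b1, W2, b2):
    """Tiny two-layer network: a nonlinear embedding in place of the
    single linear map used by classic linear metric learning."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def pair_loss(x1, x2, same, params, margin=1.0):
    """Contrastive-style loss on one pair: pull similar pairs together,
    push dissimilar pairs apart beyond the margin (a generic
    formulation, not the thesis's exact objective)."""
    d = np.linalg.norm(embed(x1, *params) - embed(x2, *params))
    return d**2 if same else max(0.0, margin - d)**2

# Toy usage with random parameters (training, e.g. by SGD, omitted).
params = (rng.normal(size=(5, 8)), np.zeros(8),
          rng.normal(size=(8, 3)), np.zeros(3))
x1, x2 = rng.normal(size=5), rng.normal(size=5)
print(pair_loss(x1, x2, same=True, params=params))
```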

    Learning local metrics and influential regions for classification

    The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data. To address the problem of multimodality, it is desirable to learn local metrics. In this short paper, we define a new, intuitive distance built from local metrics and influential regions, and subsequently propose a novel local metric learning algorithm called LMLIR for distance-based classification. Our key intuition is to partition the metric space into influential regions and a background region, and then restrict the effect of each local metric to its associated influential regions. We learn multiple local metrics and influential regions to reduce the empirical hinge loss, and regularize the parameters on the basis of a resulting learning bound. Encouraging experimental results are obtained on a variety of popular public data sets.
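
    In outline, the training recipe the abstract mentions (reduce the empirical hinge loss, regularise the parameters on the basis of a learning bound) might look as follows; the triplet form of the hinge loss and the Frobenius-norm regulariser are assumptions standing in for the paper's exact objective:

```python
import numpy as np

def triplet_hinge(d_target, d_impostor, margin=1.0):
    """Hinge loss on one triplet: the distance to an impostor should
    exceed the distance to a target neighbour by at least the margin."""
    return max(0.0, margin + d_target - d_impostor)

def objective(triplet_dists, metrics, lam=0.1):
    """Empirical hinge loss over triplets plus a Frobenius-norm penalty
    on the local metrics (an assumed stand-in for the norm-based
    regulariser suggested by the learning bound)."""
    loss = sum(triplet_hinge(dt, di) for dt, di in triplet_dists)
    reg = lam * sum(np.linalg.norm(M, 'fro')**2 for M in metrics)
    return loss / len(triplet_dists) + reg

# Toy usage: (target, impostor) distances for two triplets, one metric.
print(objective([(0.5, 2.0), (1.5, 1.2)], [np.eye(2)]))
```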

    Subspace Representations and Learning for Visual Recognition

    Pervasive and affordable sensor and storage technology enables the acquisition of an ever-rising amount of visual data. The ability to extract semantic information by interpreting, indexing and searching visual data is impacting domains such as surveillance, robotics, intelligence, human-computer interaction, navigation, healthcare, and several others. This further stimulates the investigation of automated extraction techniques that are more efficient and more robust against the many sources of noise affecting the already complex visual data that carries the semantic information of interest. We address the problem by designing novel visual data representations, based on learning data subspace decompositions that are invariant against noise while remaining informative for the task at hand. We use this guiding principle to tackle several visual recognition problems, including detection and recognition of human interactions from surveillance video, face recognition in unconstrained environments, and domain generalization for object recognition.

    By interpreting visual data with a simple additive noise model, we consider the subspaces spanned by the model portion (model subspace) and the noise portion (variation subspace). We observe that decomposing the variation subspace against the model subspace gives rise to the so-called parity subspace. Decomposing the model subspace against the variation subspace instead gives rise to what we name the invariant subspace. We extend the use of kernel techniques for the parity subspace. This enables modeling the highly non-linear temporal trajectories describing human behavior, and performing detection and recognition of human interactions. In addition, we introduce supervised low-rank matrix decomposition techniques for learning the invariant subspace for two other tasks: we learn invariant representations for face recognition from grossly corrupted images, and we learn object recognition classifiers that are invariant to the so-called domain bias.

    Extensive experiments using the benchmark datasets publicly available for each of the three tasks show that learning representations based on subspace decompositions invariant to the sources of noise leads to results comparable to or better than the state of the art.
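
    A toy sketch of the model/variation split in the simplest unsupervised case, assuming the rank-k model subspace is estimated by truncated SVD and the residual is treated as the variation (noise) subspace; the supervised low-rank decompositions described above are more elaborate:

```python
import numpy as np

def model_subspace(X, k):
    """Orthonormal basis of a rank-k 'model' subspace of the data
    matrix X (columns are samples), via truncated SVD; the residual
    spans the 'variation' (noise) subspace."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def project_onto_model(X, k):
    """Keep the model-subspace component of each sample, discarding
    the variation subspace."""
    B = model_subspace(X, k)
    return B @ (B.T @ X)

# Toy usage: a rank-2 signal observed under additive noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 50))
X = signal + 0.1 * rng.normal(size=(20, 50))
residual = X - project_onto_model(X, k=2)
print(np.linalg.norm(residual))  # energy left in the variation subspace
```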

    Metric Learning with Lipschitz Continuous Functions

    Classification is a fundamental problem in statistical machine learning. In classification, issues of nonlinear separability and multimodality are frequently encountered, even in relatively small data sets. Distance-based classifiers, such as the nearest neighbour (NN) classifier, which classifies a new instance by computing distances between that instance and the training instances, have been found useful for dealing with nonlinear separability and multimodality. However, the performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to study metric learning, which enables algorithms to automatically learn a suitable metric from available data. In this thesis, I discuss metric learning with Lipschitz continuous functions. The classifiers are restricted to have certain Lipschitz continuity properties, so that performance guarantees for the classifiers, in the form of probably approximately correct (PAC) learning bounds, can be obtained.

    In Chapter 2, I propose a framework in which the metric is learned under a large-margin-ratio criterion. Both the inter-class margin and the intra-class dispersion are considered in the criterion, so as to enhance the generalisation ability of the classifiers. Several well-known metric learning algorithms can be shown to be special cases of the proposed framework.

    In Chapter 3, I learn multiple local metrics to deal with multimodality. I define an intuitive distance with local metrics and influential regions, and subsequently propose a novel local metric learning method for distance-based classification. The key intuition is to partition the metric space into influential regions and a background region, and then restrict the effect of each local metric to its associated influential regions.

    In Chapter 4, metric learning with instance extraction (MLIE) is discussed. A major drawback of the NN classifier is that it must store all training instances, so it suffers from high storage and computation costs. I therefore propose an algorithm that extracts a small number of useful instances, reducing both storage costs and computation costs at test time. Furthermore, the proposed instance extraction method can be understood as an elegant way to perform local linear classification, i.e. to simultaneously learn the positions of local areas and the linear classifiers inside them.

    In Chapter 5, based on an algorithm-dependent PAC bound, another MLIE algorithm is proposed. Besides Lipschitz continuity with respect to the parameter, Lipschitz continuity of the gradient with respect to the parameter is also considered; accordingly, smooth classifiers and smooth loss functions are proposed in this chapter.

    The classifiers proposed in Chapters 2 and 3 have bounded values of lip_x(h) with a PAC bound, where lip_x(h) denotes the Lipschitz constant of the function with respect to the input space X. The classifiers proposed in Chapter 4 enjoy a bounded value of lip_θ(h) with a tighter PAC bound, where lip_θ(h) denotes the Lipschitz constant of the function with respect to the parameter space Θ. In Chapter 5, to take the properties of the optimisation algorithm into account simultaneously, an algorithm-dependent PAC bound based on Lipschitz smoothness is derived.
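
    A small sketch of the instance-extraction idea from Chapter 4, assuming the extracted instances are given (here, crude class means) rather than learned jointly with the metric as in the thesis; nearest-prototype classification under a Mahalanobis metric M needs only a handful of stored instances at test time:

```python
import numpy as np

def nn_predict(x, prototypes, labels, M):
    """Nearest-prototype classification under a Mahalanobis metric M:
    classifying against a few extracted instances instead of the whole
    training set cuts storage and test-time computation."""
    diffs = prototypes - x
    dists = np.einsum('ij,jk,ik->i', diffs, M, diffs)
    return labels[np.argmin(dists)]

# Toy usage: class means as hypothetical extracted instances.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, size=(50, 2))
X1 = rng.normal(loc=3.0, size=(50, 2))
prototypes = np.vstack([X0.mean(axis=0), X1.mean(axis=0)])
labels = np.array([0, 1])
print(nn_predict(np.array([2.8, 3.1]), prototypes, labels, np.eye(2)))
```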

    Large Margin Local Metric Learning
