
    The Metric Nearness Problem

    Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that are nonmetric due to measurement errors or otherwise. Metric data can be important in various settings, for example, in clustering, classification, metric-based indexing, query processing, and graph-theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: given a set of pairwise dissimilarities, find a "nearest" set of distances that satisfy the properties of a metric, principally the triangle inequality. For solving this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all-pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general-purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations of metric nearness.
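
    The abstract does not reproduce the algorithm, so the following is a minimal sketch of how triangle fixing can be realised for the ℓ2 (Frobenius) version of the problem: Dykstra-style cyclic projections onto the halfspace defined by each triangle inequality. The function name, the dense-matrix representation, and the fixed sweep count are our own choices, not the paper's.

        import itertools
        import numpy as np

        def metric_nearness_l2(D, n_sweeps=100):
            """Project a symmetric dissimilarity matrix onto the set of matrices
            satisfying all triangle inequalities, in the Frobenius norm, via
            Dykstra's cyclic projection method."""
            n = D.shape[0]
            M = D.astype(float).copy()
            # One constraint M[i,j] <= M[i,k] + M[k,j] per pair {i,j} and third point k.
            triples = [(i, j, k)
                       for i, j in itertools.combinations(range(n), 2)
                       for k in range(n) if k != i and k != j]
            corr = {t: 0.0 for t in triples}  # one Dykstra correction per constraint
            for _ in range(n_sweeps):
                for i, j, k in triples:
                    th = corr[(i, j, k)]
                    # Violation of the constraint after re-applying the correction.
                    viol = M[i, j] - M[i, k] - M[k, j] + 3.0 * th
                    step = max(viol, 0.0) / 3.0
                    # Project onto the halfspace; its normal has entries (+1, -1, -1),
                    # so the change is spread equally over the three edges.
                    M[i, j] += th - step
                    M[i, k] += step - th
                    M[k, j] += step - th
                    M[j, i], M[k, i], M[j, k] = M[i, j], M[i, k], M[k, j]
                    corr[(i, j, k)] = step
            return M

        # Example: d(0,2) = 3 exceeds d(0,1) + d(1,2) = 2; the nearest metric
        # moves the three edges to d(0,1) = d(1,2) = 4/3 and d(0,2) = 8/3.
        D = np.array([[0., 1., 3.], [1., 0., 1.], [3., 1., 0.]])
        print(metric_nearness_l2(D))

    Each sweep touches every one of the O(n^3) triangle constraints once, which is consistent with the linear-in-constraints costs reported in the related abstracts below.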

    Negative Confidence-Aware Weakly Supervised Binary Classification for Effective Review Helpfulness Classification

    The incompleteness of positive labels and the presence of many unlabelled instances are common problems in binary classification applications such as review helpfulness classification. Various studies from the classification literature treat all unlabelled instances as negative examples. However, a classification model that learns to classify binary instances with incomplete positive labels, while assuming all unlabelled data to be negative examples, will often be biased. In this work, we propose a novel Negative Confidence-aware Weakly Supervised approach (NCWS), which customises a binary classification loss function by discriminating between unlabelled examples with different negative confidences during the classifier's training. NCWS can identify and separate positive and negative instances effectively and without bias after its integration into various binary classifiers from the literature, including SVM-, CNN- and BERT-based classifiers. We use review helpfulness classification as a test case for examining the effectiveness of our NCWS approach. We thoroughly evaluate NCWS on three different datasets, namely one from Yelp (venue reviews) and two from Amazon (Kindle and Electronics reviews). Our results show that NCWS outperforms strong baselines from the literature, including an existing SVM-based approach (SVM-P), a positive and unlabelled learning-based approach (C-PU) and a positive confidence-based approach (P-conf), in addressing the classifier's bias problem. Moreover, we further examine the effectiveness of NCWS by using its classified helpful reviews in a state-of-the-art review-based venue recommendation model (DeepCoNN), and demonstrate the benefits of NCWS in enhancing venue recommendation effectiveness in comparison to the baselines.
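
    The abstract does not spell out the customised loss, so the sketch below is hypothetical: it only illustrates the general idea of weighting each unlabelled example's negative log-loss by an estimated negative confidence instead of treating every unlabelled instance as a hard negative. The function name, its arguments, and how neg_conf would be estimated are assumptions, not the paper's formulation.

        import numpy as np

        def negative_confidence_bce(scores, pos_mask, neg_conf, eps=1e-12):
            """Hypothetical negative-confidence-aware binary cross-entropy.

            scores   : predicted positive-class probabilities in (0, 1)
            pos_mask : 1 for labelled-positive instances, 0 for unlabelled ones
            neg_conf : estimated confidence in [0, 1] that each unlabelled
                       instance is truly negative (model-specific estimate)
            """
            p = np.clip(scores, eps, 1.0 - eps)
            pos_term = -pos_mask * np.log(p)                        # labelled positives
            neg_term = -(1 - pos_mask) * neg_conf * np.log(1 - p)   # soft negatives
            return (pos_term + neg_term).mean()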

    Modeling Data using Directional Distributions

    Traditionally, multivariate normal distributions have been the staple of data modeling in most domains. For some domains, the model they provide is inadequate or incorrect because it disregards the directional components of the data. We present a generative model that is suitable for directional data (as arises in text and gene-expression clustering). We use mixtures of von Mises-Fisher distributions to model our data, since the von Mises-Fisher distribution is the natural distribution for directional data. We derive an Expectation Maximization (EM) algorithm to find the maximum likelihood estimates of the parameters of our mixture model, and provide various experimental results to evaluate the "correctness" of our formulation. In this paper we also provide some of the mathematical background necessary to carry out the derivations and to gain insight for an implementation.
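
    As a companion to the abstract, the sketch below implements EM for a mixture of von Mises-Fisher distributions, assuming the rows of X are unit-normalised. The concentration update uses the common closed-form approximation kappa ~ (r*d - r^3)/(1 - r^2), where r is the mean resultant length, rather than an exact maximum-likelihood solve, and the initialisation details are our own choices.

        import numpy as np
        from scipy.special import ive, logsumexp

        def log_vmf(X, mu, kappa):
            """Log-density of a von Mises-Fisher distribution on the unit sphere."""
            d = X.shape[1]
            v = d / 2.0 - 1.0
            # log I_v(kappa) via the exponentially scaled Bessel function,
            # since iv(v, k) = ive(v, k) * exp(k); this avoids overflow.
            log_cd = v * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) \
                     - (np.log(ive(v, kappa)) + kappa)
            return log_cd + kappa * (X @ mu)

        def movmf_em(X, k, n_iter=50, seed=0):
            """EM for a mixture of k von Mises-Fisher distributions."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            mu = X[rng.choice(n, size=k, replace=False)]  # means seeded from data
            kappa = np.ones(k)
            logw = np.full(k, -np.log(k))                 # uniform mixing weights
            for _ in range(n_iter):
                # E-step: responsibilities, computed in the log domain.
                logp = np.stack([logw[h] + log_vmf(X, mu[h], kappa[h])
                                 for h in range(k)], axis=1)
                R = np.exp(logp - logsumexp(logp, axis=1, keepdims=True))
                # M-step: mixing weights, mean directions, concentrations.
                Nh = R.sum(axis=0)
                logw = np.log(Nh / n)
                S = X.T @ R                               # d x k weighted sums
                norms = np.linalg.norm(S, axis=0)
                mu = (S / norms).T
                rbar = np.clip(norms / Nh, 1e-8, 1 - 1e-8)
                kappa = (rbar * d - rbar ** 3) / (1 - rbar ** 2)
            return mu, kappa, np.exp(logw)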

    Nonnegative matrix approximation: algorithms and applications

    Low-dimensional data representations are crucial to numerous applications in machine learning, statistics, and signal processing. Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. NNMA has been used in a multitude of applications, though without commensurate theoretical development. In this report we describe generic methods for minimizing generalized divergences between the input and its low-rank approximant. Some of our general methods are even extensible to arbitrary convex penalties. Our methods yield efficient multiplicative iterative schemes for solving the proposed problems. We also consider interesting extensions such as the use of penalty functions, nonlinear relationships via "link" functions, weighted errors, and multi-factor approximations. We present some experiments as an illustration of our algorithms. For completeness, the report also includes a brief literature survey of the various algorithms and the applications of NNMA. Keywords: nonnegative matrix factorization, weighted approximation, Bregman divergence, multiplicative updates.
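
    For orientation, the multiplicative scheme being generalized is easy to state for the squared Frobenius error; a minimal sketch follows. The random initialisation and the eps guard against division by zero are our own choices, and the generalized-divergence updates in the report follow the same alternating pattern.

        import numpy as np

        def nnma_frobenius(V, r, n_iter=200, eps=1e-9, seed=0):
            """Classical multiplicative updates for min ||V - W @ H||_F^2
            subject to W >= 0 and H >= 0."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, r)) + eps
            H = rng.random((r, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H with W fixed
                W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
            return W, H

    Because each factor is multiplied by a nonnegative ratio, nonnegativity is preserved automatically, and each update does not increase the objective.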

    Generalized nonnegative matrix approximations with Bregman divergences

    Nonnegative matrix approximation (NNMA) is a recent technique for dimensionality reduction and data analysis that yields a parts-based, sparse nonnegative representation of nonnegative input data. NNMA has found a wide variety of applications, including text analysis, document clustering, face/image recognition, language modeling, speech processing and many others. Despite these numerous applications, algorithmic development for computing the NNMA factors has been relatively deficient. This paper makes algorithmic progress by modeling and solving (using multiplicative updates) new generalized NNMA problems that minimize Bregman divergences between the input matrix and its low-rank approximation. The multiplicative update formulae in the pioneering work of Lee and Seung [11] arise as a special case of our algorithms. In addition, the paper shows how to use penalty functions for incorporating constraints other than nonnegativity into the problem. Further, some interesting extensions to the use of "link" functions for modeling nonlinear relationships are also discussed.
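
    To make the special case concrete, the sketch below gives the classical multiplicative updates for the generalized KL (I-)divergence between V and WH, one of the Lee and Seung objectives the paper generalizes; the initialisation and eps guard are our own choices.

        import numpy as np

        def nnma_kl(V, r, n_iter=200, eps=1e-9, seed=0):
            """Multiplicative updates minimising the generalised KL divergence
            D(V || W @ H) = sum(V * log(V / (W @ H)) - V + W @ H), W, H >= 0."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, r)) + eps
            H = rng.random((r, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
                W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1) + eps)
            return W, H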

    Abstract

    Numerous problems in machine learning, data mining, databases and statistics involve pairwise dissimilarities amongst a set of objects. These dissimilarities can be represented as edge weights in a complete graph with the objects as the vertices. Often one desires the dissimilarities to satisfy the properties of a metric, especially the triangle inequality. Applications where metric data is important include clustering, metric-based indexing, classification, query processing, and approximation algorithms. In this paper we present algorithms for solving the Metric Nearness Problem: given a non-metric graph (dissimilarity matrix), find the "nearest" metric graph whose edge weights satisfy the triangle inequalities. The algorithms exploit the innate structure of the problem to solve it efficiently for nearness in ℓp norms. Empirically, the algorithms have time and storage requirements linear in the number of triangle constraints. The methods are easily parallelizable, enabling the solution of large problems.

    The Metric Nearness Problems with Applications

    Many practical applications in machine learning require pairwise distances among a set of objects. It is often desirable that these distance measurements satisfy the properties of a metric, especially the triangle inequality. Applications that could benefit from the metric property include data clustering and metric-based indexing of databases. In this paper, we present the metric nearness problem: given a dissimilarity matrix, find the "nearest" matrix of distances that satisfy the triangle inequalities. A weight matrix in the formulation captures the confidence in individual dissimilarity measures, including the case of altogether missing distances. For an important class of nearness measures, the problem can be attacked with convex optimization techniques. A pleasing aspect of this formulation is that we can compute globally optimal solutions. Experiments on some sample dissimilarity matrices are presented, including some from biology.
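
    One direct way to realise the convex formulation is with an off-the-shelf modeling tool. The sketch below uses CVXPY, which is our choice rather than necessarily the paper's solver; it only scales to small n, since the triangle inequalities number O(n^3), and a zero weight marks a missing measurement as described in the abstract.

        import itertools
        import numpy as np
        import cvxpy as cp

        def weighted_metric_nearness(D, W):
            """Weighted l2 metric nearness stated as an explicit convex program."""
            n = D.shape[0]
            X = cp.Variable((n, n), symmetric=True)
            constraints = [cp.diag(X) == 0, X >= 0]
            # All triangle inequalities X[i,j] <= X[i,k] + X[k,j].
            for i, j in itertools.combinations(range(n), 2):
                for k in range(n):
                    if k != i and k != j:
                        constraints.append(X[i, j] <= X[i, k] + X[k, j])
            objective = cp.Minimize(cp.sum_squares(cp.multiply(W, X - D)))
            cp.Problem(objective, constraints).solve()
            return X.value

    Because the objective is a weighted sum of squares over a polyhedral feasible set, the solver returns a globally optimal solution, matching the abstract's claim.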

    Abstract

    Various problems in machine learning, databases, and statistics involve pairwise distances among a set of objects. It is often desirable for these distances to satisfy the properties of a metric, especially the triangle inequality. Applications where metric data is useful include clustering, classification, metric-based indexing, and approximation algorithms for various graph problems. This paper presents the Metric Nearness Problem: given a dissimilarity matrix, find the "nearest" matrix of distances that satisfy the triangle inequalities. For ℓp nearness measures, this paper develops efficient triangle fixing algorithms that compute globally optimal solutions by exploiting the inherent structure of the problem. Empirically, the algorithms have time and storage costs that are linear in the number of triangle constraints. The methods can also be easily parallelized for additional speed.

    A New Projected Quasi-Newton Approach for the Non-negative Least Squares Problem

    Constrained least squares estimation lies at the heart of many applications in fields as diverse as statistics, psychometrics, signal processing, and machine learning. Nonnegativity requirements on the model variables are amongst the simplest constraints that arise naturally, and the corresponding least-squares problem is called Nonnegative Least Squares (NNLS). In this paper we present a new, efficient, and scalable quasi-Newton-type method for solving the NNLS problem, improving on several previous approaches and leading to a superlinearly convergent method. We show experimental results comparing our method to well-known methods for solving the NNLS problem. Our method significantly outperforms the other methods, especially as the problem size grows.
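
    The abstract does not reproduce the projected quasi-Newton method itself; as a reference point, the sketch below implements plain projected gradient descent for NNLS (the first-order scheme such methods accelerate with curvature information) and checks it against SciPy's classical Lawson-Hanson solver.

        import numpy as np
        from scipy.optimize import nnls

        def nnls_projected_gradient(A, b, n_iter=2000):
            """Projected gradient descent for min ||A @ x - b||_2 with x >= 0."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = np.maximum(x - grad / L, 0.0)  # gradient step, then project
            return x

        # Sanity check against SciPy's Lawson-Hanson reference implementation.
        rng = np.random.default_rng(0)
        A, b = rng.random((30, 10)), rng.random(30)
        x_pg = nnls_projected_gradient(A, b)
        x_lh, _ = nnls(A, b)
        print(np.max(np.abs(x_pg - x_lh)))  # should be small after enough iterations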