
    Computing the first eigenpair for problems with variable exponents

    We compute the first eigenpair for variable exponent eigenvalue problems. We compare the homogeneous definition of the first eigenvalue with previous nonhomogeneous notions in the literature. We highlight the symmetry breaking phenomenon.

    ℓ1-minimization method for link flow correction

    A computational method, based on ℓ1-minimization, is proposed for the problem of link flow correction, when the available traffic flow data on many links in a road network are inconsistent with respect to the flow conservation law. Without extra information, the problem is generally ill-posed when a large portion of the link sensors are unhealthy. It is possible, however, to correct the corrupted link flows accurately with the proposed method under a recoverability condition if there are only a few bad sensors located at certain links. We analytically identify the links that are robust to miscounts and relate them to the geometric structure of the traffic network by introducing the recoverability concept and an algorithm for computing it. The recoverability condition for corrupted links is simply that the associated recoverability is greater than 1. In a more realistic setting, besides the unhealthy link sensors, small measurement noise may be present at the other sensors. Under the same recoverability condition, our method is guaranteed to give an estimated traffic flow close to the ground-truth data and leads to a bound on the correction error. Both synthetic and real-world examples are provided to demonstrate the effectiveness of the proposed method.
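
    The correction step can be cast as a linear program: minimising ||x − y||_1 subject to the flow conservation constraints is a standard ℓ1 reformulation with slack variables. A minimal sketch on a hypothetical three-link path network (the network, the flows, and the bad-sensor placement are illustrative, not from the paper), using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy road network: a single path s -> a -> b -> t with three links.
# Flow conservation at the interior nodes a and b (x0 = x1, x1 = x2)
# is encoded by the node-link incidence constraints A x = 0.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

x_true = np.array([5.0, 5.0, 5.0])   # ground-truth link flows
y = np.array([5.0, 9.0, 5.0])        # measurements; sensor on link 1 is bad

# l1 correction: minimize ||x - y||_1 subject to A x = 0.
# Standard LP reformulation with slacks t: |x_i - y_i| <= t_i.
n = len(y)
c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of slacks
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])            # x - t <= y and -x - t <= -y
b_ub = np.concatenate([y, -y])
A_eq = np.hstack([A, np.zeros_like(A)])         # conservation acts on x only
b_eq = np.zeros(A.shape[0])
bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_hat = res.x[:n]   # corrected flows; recovers [5, 5, 5] on this toy example
```

    Here conservation forces a constant flow c on all three links, and minimising |c−5|+|c−9|+|c−5| gives c = 5, so the single corrupted measurement is corrected exactly.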

    An Adaptive Total Variation Algorithm for Computing the Balanced Cut of a Graph

    We propose an adaptive version of the total variation algorithm proposed in [3] for computing the balanced cut of a graph. The algorithm from [3] used a sequence of inner total variation minimizations to guarantee descent of the balanced cut energy as well as convergence of the algorithm. In practice the total variation minimization step is never solved exactly. Instead, an accuracy parameter is specified and the total variation minimization terminates once this level of accuracy is reached. The choice of this parameter can vastly impact both the computational time of the overall algorithm and the accuracy of the result. Moreover, since the total variation minimization step is not solved exactly, the algorithm is not guaranteed to be monotonic. In the present work we introduce a new adaptive stopping condition for the total variation minimization that guarantees monotonicity. This results in an algorithm that is monotonic in practice and is also significantly faster than previous, non-adaptive algorithms.
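
    The balanced-cut energy behind this class of algorithms can be expressed through the graph total variation of an indicator function: the TV of 1_A equals the cut value, and multiplying by the balancing term gives the ratio cut. A minimal brute-force sketch on a toy graph (the graph and the enumeration are illustrative, not the paper's adaptive algorithm):

```python
import itertools
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by a bridge (2,3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

def graph_tv(f):
    """Graph total variation: 0.5 * sum_ij w_ij |f_i - f_j|."""
    return 0.5 * (W * np.abs(f[:, None] - f[None, :])).sum()

def ratio_cut(A_set):
    """Balanced (ratio) cut energy of the partition (A, complement)."""
    f = np.zeros(n)
    f[list(A_set)] = 1.0          # TV of an indicator equals the cut value
    k = len(A_set)
    return graph_tv(f) * (1.0 / k + 1.0 / (n - k))

# Brute force over all nontrivial bipartitions (feasible only for tiny n;
# the continuous TV relaxation is what makes real instances tractable).
best = min((frozenset(s) for r in range(1, n)
            for s in itertools.combinations(range(n), r)), key=ratio_cut)
# The minimizer cuts the bridge: best is {0,1,2} (or its complement).
```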

    Multiclass Total Variation Clustering

    Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield unimpressive results for multiclass clustering tasks. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. The results greatly outperform previous total variation algorithms and compare well with state-of-the-art NMF approaches.

    Convergence of a Steepest Descent Algorithm for Ratio Cut Clustering

    Unsupervised clustering of scattered, noisy and high-dimensional data points is an important and difficult problem. Tight continuous relaxations of balanced cut problems have recently been shown to provide excellent clustering results. In this paper, we present an explicit-implicit gradient flow scheme for the relaxed ratio cut problem, and prove that the algorithm converges to a critical point of the energy. We also show the efficiency of the proposed algorithm on the two moons dataset.
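
    The two-moons experiment can be reproduced in spirit with plain NumPy. The sketch below builds the synthetic data and a k-nearest-neighbour graph, then applies the classical spectral relaxation (thresholding the second eigenvector of the graph Laplacian) rather than the paper's explicit-implicit gradient flow scheme; all parameters are illustrative:

```python
import numpy as np

# Synthetic "two moons": two interleaving half-circles with small noise.
rng = np.random.default_rng(0)
m = 100                                   # points per moon
t = np.linspace(0.0, np.pi, m)
upper = np.c_[np.cos(t), np.sin(t)]
lower = np.c_[1.0 - np.cos(t), 0.5 - np.sin(t)]
X = np.vstack([upper, lower]) + 0.03 * rng.standard_normal((2 * m, 2))
y = np.r_[np.zeros(m), np.ones(m)]        # ground-truth labels

# Symmetrized k-nearest-neighbour graph with Gaussian edge weights.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
k = 10
W = np.zeros_like(d2)
nn = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self at column 0
for i in range(2 * m):
    W[i, nn[i]] = np.exp(-d2[i, nn[i]] / (2 * 0.1 ** 2))
W = np.maximum(W, W.T)

# Spectral relaxation of the ratio cut: threshold the second eigenvector
# of the unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W
_, vecs = np.linalg.eigh(L)
f = vecs[:, 1]
labels = (f > np.median(f)).astype(float)
acc = max((labels == y).mean(), (labels != y).mean())
```

    With this noise level the kNN graph contains no edges between the two moons, so the spectral partition recovers them essentially perfectly; tighter TV-based relaxations are designed to keep this quality in harder, noisier regimes.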

    Learning parametrised regularisation functions via quotient minimisation

    We propose a novel strategy for the computation of adaptive regularisation functions. The general strategy consists of minimising the ratio of a parametrised regularisation function: the numerator contains the regulariser with a desirable training signal as its argument, whereas the denominator contains the same regulariser but with its argument being a training signal one wants to avoid. The rationale behind this is to adapt parametric regularisations to given training data that contain both wanted and unwanted outcomes. We discuss the numerical implementation of this minimisation problem for a specific parametrisation, and present preliminary numerical results which demonstrate that this approach is able to recover total variation as well as second-order total variation regularisation from suitable training data. MB and CBS acknowledge support from the EPSRC grant EP/M00483X/1 and the Leverhulme grant 'Breaking the non-convexity barrier'. GG acknowledges support from the Israel Science Foundation (grant No. 718/15) and by the Magnet program of the OCS, Israel Ministry of Economy, in the framework of the Omek Consortium. This is the author accepted manuscript. The final version is available from Wiley via http://dx.doi.org/10.1002/pamm.20161045
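
    As a rough sketch of the quotient principle (not the paper's parametrisation or numerical scheme), one can take a two-parameter family mixing first- and second-order differences and minimise the quotient by grid search over the mixing parameter. With a clean step as the wanted signal and noise as the unwanted one, the quotient selects plain total variation, which is small on the step and large on the noise; all signals and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Wanted training signal: a clean piecewise-constant step (TV-friendly).
u_good = np.zeros(n)
u_good[n // 2:] = 1.0
# Unwanted training signal: pure noise.
u_bad = rng.standard_normal(n)

def regulariser(u, theta):
    """Parametrised family: theta[0]*||D1 u||_1 + theta[1]*||D2 u||_1."""
    d1 = np.abs(np.diff(u)).sum()          # first-order (total variation)
    d2 = np.abs(np.diff(u, 2)).sum()       # second-order differences
    return theta[0] * d1 + theta[1] * d2

# Minimise the quotient R_theta(u_good) / R_theta(u_bad) over the
# nonnegative quarter-circle of parameters theta = (cos phi, sin phi).
phis = np.linspace(0.0, np.pi / 2, 91)
quotients = [regulariser(u_good, (np.cos(p), np.sin(p))) /
             regulariser(u_bad, (np.cos(p), np.sin(p))) for p in phis]
phi_best = phis[int(np.argmin(quotients))]
# phi_best = 0 here: the quotient picks the pure first-order regulariser.
```

    For this linear parametrisation the quotient is linear-fractional in theta, so the minimum sits at an endpoint of the arc; the paper's setting, with richer parametrisations, requires a genuine nonconvex quotient minimisation scheme.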

    Quasi-Likelihood and/or Robust Estimation in High Dimensions

    We consider the theory for the high-dimensional generalized linear model with the Lasso. After a short review of theoretical results in the literature, we present an extension of the oracle results to the case of quasi-likelihood loss. We prove bounds for the prediction error and ℓ1-error. The results are derived under fourth moment conditions on the error distribution. The case of robust loss is also given. We moreover show that under an irrepresentable condition, the ℓ1-penalized quasi-likelihood estimator has no false positives. Comment: Published at http://dx.doi.org/10.1214/12-STS397 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
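
    A minimal NumPy sketch of the ℓ1-penalized estimator in its simplest, Gaussian-loss form, solved by ISTA iterations (the quasi-likelihood and robust losses analysed in the paper are not reproduced, and the data and penalty level below are illustrative). With the penalty well above the noise level, the inactive coordinates are driven to (essentially) zero, the "no false positives" behaviour the abstract refers to:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    """l1-penalised least squares via proximal gradient (ISTA):
    minimise 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L, L = ||X||_2^2 / n
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]               # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.1)         # lam well above the noise scale
```

    The price of the zero false positives is shrinkage bias of order lam on the active coefficients, which is exactly the trade-off the oracle inequalities quantify.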

    Tight Continuous Relaxation of the Balanced k-Cut Problem

    Spectral clustering, as a relaxation of the normalized/ratio cut, has become one of the standard graph-based clustering methods. Existing methods for the computation of multiple clusters, corresponding to a balanced k-cut of the graph, are either based on greedy techniques or heuristics which have a weak connection to the original motivation of minimizing the normalized cut. In this paper we propose a new tight continuous relaxation for any balanced k-cut problem and show that a related, recently proposed relaxation is in most cases loose, leading to poor performance in practice. For the optimization of our tight continuous relaxation we propose a new algorithm for the difficult sum-of-ratios minimization problem which achieves monotonic descent. Extensive comparisons show that our method outperforms all existing approaches for ratio cut and other balanced k-cut criteria. Comment: Long version of paper accepted at NIPS 201