180 research outputs found

    Comparative study of Marlowe's and Chapman's Hero and Leander


    Bi-stochastic kernels via asymmetric affinity functions

    In this short letter we present the construction of a bi-stochastic kernel p for an arbitrary data set X, derived from an asymmetric affinity function α. The affinity function α measures the similarity between points in X and some reference set Y. Unlike other methods that construct bi-stochastic kernels via a convergent iteration process or by solving an optimization problem, the construction presented here is quite simple. Furthermore, it can be viewed through the lens of out-of-sample extensions, making it useful for massive data sets. Comment: 5 pages. v2: Expanded upon the first paragraph of subsection 2.1. v3: Minor changes and edits. v4: Edited comments and added DOI
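The flavour of such a construction can be illustrated in a few lines. Below is a minimal sketch, assuming a Gaussian affinity between X and a reference set Y and one plausible normalization (row-normalize the affinity, then symmetrize with a column correction); the exact formulas in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 2))   # data set of interest
Y = rng.random((5, 2))   # reference set

# asymmetric affinity alpha(x, y): Gaussian similarity between X and Y
alpha = np.exp(-((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

k = alpha / alpha.sum(axis=1, keepdims=True)  # row-stochastic, shape (8, 5)
c = k.mean(axis=0)                            # average mass each y receives
P = (k / c) @ k.T / len(X)                    # symmetric, unit row sums
```

Since P is symmetric with rows summing to one, it is bi-stochastic by construction, and evaluating new points only requires their affinities to the fixed reference set Y, which is the out-of-sample angle mentioned in the abstract.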

    A Sharpened Condition for Strict Log-Convexity of the Spectral Radius via the Bipartite Graph

    Friedland (1981) showed that for a nonnegative square matrix A, the spectral radius r(e^D A) is a log-convex functional over the real diagonal matrices D. He showed that for fully indecomposable A, log r(e^D A) is strictly convex over D_1, D_2 if and only if D_1 - D_2 != c I for any c \in R. Here the condition of full indecomposability is shown to be replaceable by the weaker condition that A and A'A be irreducible, which is the sharpest possible replacement condition. Irreducibility of both A and A'A is shown to be equivalent to irreducibility of A^2 and A'A, which is the condition for a number of strict inequalities on the spectral radius found in Cohen, Friedland, Kato, and Kelly (1982). Such 'two-fold irreducibility' is equivalent to joint irreducibility of A, A^2, A'A, and AA', or in combinatorial terms, equivalent to the directed graph of A being strongly connected and the simple bipartite graph of A being connected. Additional ancillary results are presented. Comment: 20 pages; v. 2: expanded exposition
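The log-convexity statement is easy to probe numerically. The sketch below checks, for a random entrywise-positive (hence fully indecomposable) A, that log r(e^D A) is strictly convex along the segment between two generic diagonals, and that it is affine in the direction cI; the random instance and tolerances are illustrative choices, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 0.1          # entrywise positive nonnegative matrix

def log_r(d):
    # log spectral radius of e^D A, with D = diag(d)
    M = np.exp(d)[:, None] * A
    return np.log(np.abs(np.linalg.eigvals(M)).max())

d1 = rng.normal(size=4)
d2 = rng.normal(size=4)
t = 0.3

lhs = log_r(t * d1 + (1 - t) * d2)
rhs = t * log_r(d1) + (1 - t) * log_r(d2)
strict_gap = rhs - lhs   # > 0: d1 - d2 is generically not a constant vector

# along the direction cI the functional is affine: r(e^{D+cI} A) = e^c r(e^D A)
affine_err = abs(log_r(d1 + 1.0) - (log_r(d1) + 1.0))
```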

    Gradient methods for problems with inexact model of the objective

    We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which includes as particular cases the inexact oracle [19] and the relative smoothness condition [43]. We analyze a gradient method which uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first is clustering by the electoral model introduced in [49]. The second is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third is approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments.
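As background for the optimal-transport application, entropic regularization leads to the classical Sinkhorn iterations. The sketch below is plain Sinkhorn, not the paper's Proximal Sinkhorn variant, and the marginals, cost matrix, and regularization strength are illustrative:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    # entropic-regularized optimal transport: rescale K = exp(-C/eps)
    # until its row/column marginals match a and b
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # approximate transport plan

a = np.full(4, 0.25)                    # source marginal
b = np.full(3, 1 / 3)                   # target marginal
C = np.abs(np.linspace(0, 1, 4)[:, None] - np.linspace(0, 1, 3)[None, :])
P = sinkhorn(a, b, C)
```

Each iteration is two matrix-vector products, which is what makes Sinkhorn-type schemes attractive for approximating transport distances at scale.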

    Auto-labelling of Markers in Optical Motion Capture by Permutation Learning

    Optical marker-based motion capture is a vital tool in applications such as motion and behavioural analysis, animation, and biomechanics. Labelling, that is, assigning optical markers to pre-defined positions on the body, is a time-consuming and labour-intensive post-processing part of current motion capture pipelines. The problem can be considered as a ranking process in which markers shuffled by an unknown permutation matrix are sorted to recover the correct order. In this paper, we present a framework for automatic marker labelling which first estimates a permutation matrix for each individual frame using a differentiable permutation learning model and then utilizes temporal consistency to identify and correct remaining labelling errors. Experiments conducted on the test data show the effectiveness of our framework.
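The "unknown permutation matrix" idea can be sketched with Sinkhorn normalization, a standard differentiable relaxation of permutation matrices; the synthetic score matrix below stands in for a learned model's outputs and is an assumption, not the paper's network:

```python
import numpy as np

def sinkhorn_normalize(S, iters=30):
    # alternately normalize rows and columns of exp(S), driving it
    # toward a doubly stochastic (soft permutation) matrix
    P = np.exp(S)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

rng = np.random.default_rng(0)
n = 6
perm = rng.permutation(n)                # unknown shuffle of the markers
scores = -np.ones((n, n))
scores[np.arange(n), perm] = 1.0         # model scores favour true labels
scores += 0.1 * rng.normal(size=(n, n))  # observation noise

P = sinkhorn_normalize(scores)
pred = P.argmax(axis=1)                  # hard label assignment per marker
```

The soft matrix P is what a differentiable pipeline would train through; the hard argmax (or a proper assignment solver) is only applied at inference time.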

    Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

    Building on recent progress at the intersection of combinatorial optimization and deep learning, we propose an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers. Combining heavily optimized combinatorial solvers with some improvements in architecture design, we advance the state of the art on deep graph matching benchmarks for keypoint correspondence. In addition, we highlight the conceptual advantages of incorporating solvers into deep learning architectures, such as the possibility of post-processing with a strong multi-graph matching solver or indifference to changes in the training setting. Finally, we propose two new challenging experimental setups. The code is available at https://github.com/martius-lab/blackbox-deep-graph-matching. Comment: ECCV 2020 conference paper
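The core trick behind "blackbox differentiation" of an unmodified solver, in the style of the line of work this paper builds on, is to call the solver a second time on costs perturbed by the incoming gradient and difference the two solutions. The toy argmin "solver", loss, and λ below are illustrative stand-ins, not the paper's graph matching setup:

```python
import numpy as np

def solver(w):
    # toy combinatorial solver: one-hot indicator of the minimum-cost item
    y = np.zeros_like(w)
    y[np.argmin(w)] = 1.0
    return y

def blackbox_grad(w, dL_dy, lam=5.0):
    # backward pass: perturb the costs with the incoming gradient,
    # re-solve, and difference the two discrete solutions
    y = solver(w)
    y_pert = solver(w + lam * dL_dy)
    return -(y - y_pert) / lam

w = np.array([1.0, 0.3, 2.0])       # costs; the solver currently picks item 1
target = np.array([0.0, 0.0, 1.0])  # we would like item 2 picked
dL_dy = solver(w) - target          # gradient of 0.5 * ||y - target||^2
g = blackbox_grad(w, dL_dy)
```

Descending along g raises the cost of the currently selected item and lowers that of the target, so gradient steps can flip the solver's discrete choice even though the solver itself is piecewise constant.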