
    Zero Shot Learning with the Isoperimetric Loss

    We introduce the isoperimetric loss as a regularization criterion for learning the map from a visual representation to a semantic embedding, to be used to transfer knowledge to unknown classes in a zero-shot learning setting. We use a pre-trained deep neural network model as a visual representation of image data, a Word2Vec embedding of class labels, and linear maps between the visual and semantic embedding spaces. However, the spaces themselves are not linear, and we postulate the sample embedding to be populated by noisy samples near otherwise smooth manifolds. We exploit the graph structure defined by the sample points to regularize the estimates of the manifolds by inferring the graph connectivity using a generalization of the isoperimetric inequalities from Riemannian geometry to graphs. Surprisingly, this regularization alone, paired with the simplest baseline model, outperforms the state-of-the-art among fully automated methods in zero-shot learning benchmarks such as AwA and CUB. This improvement is achieved solely by learning the structure of the underlying spaces by imposing regularity. Comment: Accepted to AAAI-2
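
    The abstract pairs a linear visual-to-semantic map with a graph-based regularizer. The sketch below is only a simplified illustration of that pipeline: it fits a linear map from placeholder CNN-style features to Word2Vec-style class embeddings, using a generic graph-Laplacian smoothness term as a stand-in for the isoperimetric loss, whose exact form is not reproduced here. The data, dimensions, neighbourhood size, and regularization weight are all made-up assumptions.

```python
import numpy as np

# Placeholder data: n samples with d-dim visual features and k-dim class embeddings.
rng = np.random.default_rng(0)
n, d, k = 200, 128, 50
X = rng.normal(size=(n, d))   # stand-in for pre-trained CNN features
S = rng.normal(size=(n, k))   # stand-in for Word2Vec embeddings of each sample's label

# k-NN graph over the visual features; its Laplacian encodes local manifold structure.
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
knn = np.argsort(dists, axis=1)[:, 1:6]          # 5 nearest neighbours, excluding self
W = np.zeros((n, n))
W[np.repeat(np.arange(n), 5), knn.ravel()] = 1.0
W = np.maximum(W, W.T)                           # symmetrise the adjacency
L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian

# Closed-form linear map V minimizing ||X V - S||^2 + lam * tr(V^T X^T L X V),
# a graph-smoothness surrogate for the isoperimetric regularizer in the abstract.
lam = 0.1
A = X.T @ X + lam * (X.T @ L @ X) + 1e-6 * np.eye(d)
V = np.linalg.solve(A, X.T @ S)

# Zero-shot prediction: map an unseen image, pick the nearest unseen-class embedding.
unseen_classes = rng.normal(size=(10, k))        # stand-in Word2Vec vectors of unseen labels
x_new = rng.normal(size=(1, d))
scores = (x_new @ V) @ unseen_classes.T
print("predicted unseen class:", int(scores.argmax()))
```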

    On the Smallest Eigenvalue of Grounded Laplacian Matrices

    We provide upper and lower bounds on the smallest eigenvalue of grounded Laplacian matrices (which are matrices obtained by removing certain rows and columns of the Laplacian matrix of a given graph). The gap between the upper and lower bounds depends on the ratio of the smallest and largest components of the eigenvector corresponding to the smallest eigenvalue of the grounded Laplacian. We provide a graph-theoretic bound on this ratio, and subsequently obtain a tight characterization of the smallest eigenvalue for certain classes of graphs. Specifically, for Erdős–Rényi random graphs, we show that when a (sufficiently small) set S of rows and columns is removed from the Laplacian, and the probability p of adding an edge is sufficiently large, the smallest eigenvalue of the grounded Laplacian asymptotically almost surely approaches |S|p. We also show that for random d-regular graphs with a single row and column removed, the smallest eigenvalue is Θ(d/n). Our bounds have applications to the study of the convergence rate in continuous-time and discrete-time consensus dynamics with stubborn or leader nodes.
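
    To make the object of study concrete, the sketch below builds the grounded Laplacian of one sampled Erdős–Rényi graph by deleting the rows and columns of a small grounded set S and computes its smallest eigenvalue, printing it next to the |S|p value the abstract refers to. The graph size, edge probability, and choice of S are arbitrary assumptions, and a single random sample is only indicative of the asymptotic statement, not a proof of it.

```python
import numpy as np

# Sample an Erdos-Renyi graph G(n, p) and form its Laplacian.
rng = np.random.default_rng(1)
n, p = 1000, 0.1
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

# Ground (remove) the rows and columns indexed by S.
S = [0, 1, 2]
keep = np.setdiff1d(np.arange(n), S)
L_grounded = L[np.ix_(keep, keep)]

# Compare the smallest eigenvalue with |S| p.
lam_min = np.linalg.eigvalsh(L_grounded)[0]  # eigenvalues returned in ascending order
print(f"smallest eigenvalue: {lam_min:.3f}   |S| p = {len(S) * p:.3f}")
```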

    Iterative solution of spatial network models by subspace decomposition

    We present and analyze a preconditioned conjugate gradient method (PCG) for solving spatial network problems. Primarily, we consider diffusion and structural mechanics simulations for fiber-based materials, but the methodology can be applied to a wide range of models fulfilling a set of abstract assumptions. The proposed method builds on a classical subspace decomposition into a coarse subspace, realized as the restriction of a finite element space to the nodes of the spatial network, and localized subspaces with support on mesh stars. The main contribution of this work is the convergence analysis of the proposed method. The analysis translates results from finite element theory, including interpolation bounds, to the spatial network setting. A convergence rate of the PCG algorithm, depending only on global bounds of the operator and on homogeneity, connectivity, and locality constants of the network, is established. The theoretical results are confirmed by several numerical experiments. Comment: Journal article draft, not peer-reviewed
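
    As a minimal illustration of the PCG setting only (not the paper's two-level subspace-decomposition preconditioner, which combines a coarse finite element space with local star patches), the sketch below solves a small chain-network diffusion system with SciPy's conjugate gradient and a simple Jacobi preconditioner. The network, right-hand side, and preconditioner choice are assumptions made for demonstration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# SPD system: Dirichlet Laplacian of a 1D chain network (a crude stand-in
# for a fiber-network diffusion model with fixed end nodes).
n = 1000
A = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
          [0, -1, 1], format="csr")
b = np.ones(n)

# Jacobi preconditioner M ~ A^{-1}, applied through a LinearOperator.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

# Preconditioned conjugate gradient solve; info == 0 signals convergence.
x, info = cg(A, b, M=M)
print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))
```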