
    Min-Rank Conjecture for Log-Depth Circuits

    A completion of an m-by-n matrix A with entries in {0,1,*} is obtained by setting all *-entries to constants 0 or 1. A system of semi-linear equations over GF(2) has the form Mx = f(x), where M is a completion of A and f: {0,1}^n --> {0,1}^m is an operator whose i-th coordinate can only depend on the variables corresponding to *-entries in the i-th row of A. We conjecture that no such system can have more than 2^{n - c \cdot mr(A)} solutions, where c > 0 is an absolute constant and mr(A) is the smallest rank over GF(2) of a completion of A. The conjecture is related to an old problem of proving super-linear lower bounds on the size of log-depth boolean circuits computing linear operators x --> Mx. It is also a generalization of a classical question about how much larger non-linear codes can be than linear ones. We prove some special cases of the conjecture and establish some structural properties of solution sets.
    Comment: 22 pages; to appear in J. Comput. Syst. Sci.
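    As a concrete handle on the quantity mr(A): the following is a minimal brute-force sketch (ours, not from the paper) that computes the min-rank of a small {0,1,*} matrix by enumerating all completions and taking the rank over GF(2). The helper names gf2_rank and min_rank are our own, and the enumeration is exponential in the number of *-entries, so it is only viable for toy instances.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy()
    rank = 0
    rows, cols = M.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate over GF(2)
        rank += 1
    return rank

def min_rank(A):
    """mr(A): minimum GF(2) rank over all completions of a {0,1,*} matrix.
    Exponential in the number of *-entries -- toy instances only."""
    stars = [(i, j) for i, row in enumerate(A) for j, v in enumerate(row) if v == '*']
    base = np.array([[0 if v == '*' else int(v) for v in row] for row in A],
                    dtype=np.uint8)
    best = min(base.shape)                    # trivial upper bound on the rank
    for bits in itertools.product([0, 1], repeat=len(stars)):
        M = base.copy()
        for (i, j), b in zip(stars, bits):
            M[i, j] = b
        best = min(best, gf2_rank(M))
    return best

# One free entry lets this matrix drop to rank 1 (set * to 0).
print(min_rank([['1', '0', '*'],
                ['1', '0', '0']]))            # -> 1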

    A functional-analytic theory of vertex (operator) algebras, I

    This paper is the first in a series of papers developing a functional-analytic theory of vertex (operator) algebras and their representations. For an arbitrary Z-graded finitely-generated vertex algebra (V, Y, 1) satisfying the standard grading-restriction axioms, a locally convex topological completion H of V is constructed. By the geometric interpretation of vertex (operator) algebras, there is a canonical linear map from the tensor product of V and V to the algebraic completion of V realizing linearly the conformal equivalence class of a genus-zero Riemann surface with analytically parametrized boundary, obtained by deleting two ordered disjoint disks from the unit disk and giving the obvious parametrizations to the boundary components. We extend this linear map to a linear map from the completed tensor product of H and H to H, and prove the continuity of the extension. For any finitely-generated C-graded V-module (W, Y_W) satisfying the standard grading-restriction axioms, the same method also gives a topological completion H^W of W and continuous extensions, from the completed tensor product of H and H^W to H^W, of the linear maps from the tensor product of V and W to the algebraic completion of W realizing linearly the above conformal equivalence classes of genus-zero Riemann surfaces with analytically parametrized boundaries.
    Comment: LaTeX file. 31 pages, 1 figure.
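    For readers who want the map in symbols: in standard vertex-algebra notation, the two-punctured-disk surface acts, up to the boundary-parametrization data, like a composition of vertex operators. The display below is a hedged paraphrase of the construction in that standard notation, not a formula quoted from the paper.

```latex
% Sketch in standard vertex-algebra notation (our paraphrase, not the paper's formula).
% \overline{V} = \prod_n V_{(n)} denotes the algebraic completion of the graded space V.
\[
  \nu_{z_1, z_2} \colon V \otimes V \longrightarrow \overline{V}, \qquad
  u \otimes v \longmapsto Y(u, z_1)\, Y(v, z_2)\, \mathbf{1},
  \qquad 0 < |z_2| < |z_1| < 1 .
\]
% The paper's analytic content is that \nu_{z_1, z_2} extends to a continuous map
% H \widehat{\otimes} H \to H on the locally convex completion H of V.
```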

    Fast Methods for Recovering Sparse Parameters in Linear Low Rank Models

    In this paper, we investigate the recovery of a sparse weight vector (parameter vector) from a set of noisy linear combinations. However, only partial information about the matrix representing the linear combinations is available. Assuming a low-rank structure for the matrix, one natural solution would be to first apply matrix completion to the data, and then to solve the resulting compressed sensing problem. In big data applications such as massive MIMO and medical data, the matrix completion step imposes a huge computational burden. Here, we propose to reduce the computational cost of the completion task by ignoring the columns corresponding to zero elements in the sparse vector. To this end, we employ a technique to initially approximate the support of the sparse vector. We further propose to unify the partial matrix completion and sparse vector recovery into an augmented four-step problem. Simulation results reveal that the augmented approach achieves the best performance, while both proposed methods outperform the natural two-step technique with substantially lower computational requirements.
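    The pipeline the abstract describes can be sketched in a few lines of NumPy. Everything below is an illustrative reconstruction, not the authors' code: soft_impute is a generic SVD soft-thresholding completion standing in for whatever routine the paper uses, estimate_support is one crude way to approximate the support (correlating zero-filled observed columns with the measurements), and all hyperparameters (tau, the 60% observation rate, problem sizes) are arbitrary toy choices. The point it illustrates is the cost saving: only the columns on the estimated support are completed.

```python
import numpy as np

def soft_impute(X, mask, tau=0.5, n_iter=200):
    """Generic SVD soft-thresholding matrix completion (SoftImpute-style)."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z_hat = (U * np.maximum(s - tau, 0.0)) @ Vt
        Z = np.where(mask, X, Z_hat)          # keep observed entries fixed
    return Z

def estimate_support(X, mask, y, k):
    """Crude support proxy (our assumption, not the paper's technique):
    correlate the zero-filled observed columns with y, keep the top k."""
    scores = np.abs(np.where(mask, X, 0.0).T @ y)
    return np.sort(np.argsort(scores)[-k:])

# Toy instance: low-rank A, k-sparse x, partially observed A.
rng = np.random.default_rng(0)
m, n, r, k = 60, 40, 3, 4
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)
mask = rng.random((m, n)) < 0.6               # 60% of entries observed

S = estimate_support(A, mask, y, k)           # approximate the support first
A_S = soft_impute(A[:, S], mask[:, S])        # complete only |S| << n columns
x_S, *_ = np.linalg.lstsq(A_S, y, rcond=None) # small dense solve on the support
print("estimated support:", S)
print("true support:     ", np.sort(np.nonzero(x)[0]))
```

    Completing an m-by-|S| submatrix instead of the full m-by-n matrix is where the claimed computational saving comes from; each SVD in the loop shrinks accordingly.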

    Image tag completion by local learning

    The problem of tag completion is to learn the missing tags of an image. In this paper, we propose to learn a tag scoring vector for each image by local linear learning. A local linear function is used in the neighborhood of each image to predict the tag scoring vectors of its neighboring images. We construct a unified objective function for the learning of both the tag scoring vectors and the local linear function parameters. In the objective, we require the learned tag scoring vectors to be consistent with the known tag associations of each image, and we also minimize the prediction error of each local linear function while reducing its complexity. The objective function is optimized by an alternating optimization strategy with gradient descent methods in an iterative algorithm. We compare the proposed algorithm against different state-of-the-art tag completion methods, and the results show its advantages.
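    A plausible instantiation of the alternating scheme (our reconstruction under assumptions, not the paper's objective verbatim): each image i keeps a tag scoring vector s_i, a local linear map W_i is refit on image i's k nearest neighbours by ridge regression, and a gradient step then pulls each s_j toward the known tags and toward the local predictions. All names, the squared-loss choice, and the hyperparameters are ours.

```python
import numpy as np

def tag_completion(X, T, M, k=5, lam=0.1, gamma=1.0, lr=0.05, n_iter=100):
    """Hypothetical sketch of local-learning tag completion.
    X: (n, d) image features; T: (n, m) known tag indicators;
    M: (n, m) 0/1 mask marking which tag entries are known."""
    n, d = X.shape
    m = T.shape[1]
    # k nearest neighbours by Euclidean distance (brute force, toy n only)
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)
    N = np.argsort(D, axis=1)[:, :k]
    S = T.astype(float).copy()                 # tag scoring vectors
    W = [np.zeros((m, d)) for _ in range(n)]   # local linear maps
    for _ in range(n_iter):
        # (1) fix S, refit each local map by ridge regression
        for i in range(n):
            Xi, Si = X[N[i]], S[N[i]]          # neighbour features / scores
            W[i] = np.linalg.solve(Xi.T @ Xi + lam * np.eye(d), Xi.T @ Si).T
        # (2) fix W, gradient step on the scoring vectors
        G = M * (S - T)                        # consistency with known tags
        for i in range(n):
            for j in N[i]:
                G[j] += gamma * (S[j] - X[j] @ W[i].T)
        S -= lr * G
    return S
```

    After convergence, each row of S can be thresholded or ranked to produce the completed tag list for the corresponding image; that read-out step is standard practice rather than anything specific to this paper.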