
    Probabilistic lower bounds on maximal determinants of binary matrices

    Let ${\mathcal D}(n)$ be the maximal determinant for $n \times n$ $\{\pm 1\}$-matrices, and ${\mathcal R}(n) = {\mathcal D}(n)/n^{n/2}$ be the ratio of ${\mathcal D}(n)$ to the Hadamard upper bound. Using the probabilistic method, we prove new lower bounds on ${\mathcal D}(n)$ and ${\mathcal R}(n)$ in terms of $d = n - h$, where $h$ is the order of a Hadamard matrix and $h$ is maximal subject to $h \le n$. For example, ${\mathcal R}(n) > (\pi e/2)^{-d/2}$ if $1 \le d \le 3$, and ${\mathcal R}(n) > (\pi e/2)^{-d/2}(1 - d^2(\pi/(2h))^{1/2})$ if $d > 3$. By a recent result of Livinskyi, $d^2/h^{1/2} \to 0$ as $n \to \infty$, so the second bound is close to $(\pi e/2)^{-d/2}$ for large $n$. Previous lower bounds tended to zero as $n \to \infty$ with $d$ fixed, except in the cases $d \in \{0, 1\}$. For $d \ge 2$, our bounds are better for all sufficiently large $n$. If the Hadamard conjecture is true, then $d \le 3$, so the first bound above shows that ${\mathcal R}(n)$ is bounded below by a positive constant $(\pi e/2)^{-3/2} > 0.1133$.
    Comment: 17 pages, 2 tables, 24 references. Shorter version of arXiv:1402.6817v4. Typos corrected in v2 and v3, new Lemma 7 in v4, updated references in v5, added Remark 2.8 and a reference in v6, updated references in v
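    To make the bounds concrete, here is a small Python sketch (my own illustration, not from the paper) that evaluates both bounds; the value of $h$ passed to the second bound is a stand-in for the largest Hadamard order $\le n$.

```python
import math

def first_bound(d):
    """R(n) > (pi*e/2)**(-d/2), stated in the abstract for 1 <= d <= 3."""
    return (math.pi * math.e / 2) ** (-d / 2)

def second_bound(d, h):
    """The bound for d > 3; h is the largest Hadamard order <= n."""
    return first_bound(d) * (1 - d**2 * math.sqrt(math.pi / (2 * h)))

for d in (1, 2, 3):
    print(d, first_bound(d))      # d = 3 gives ~0.11334 > 0.1133

print(second_bound(4, h=1024))    # h = 1024 is a Sylvester-construction order
```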

    On minors of maximal determinant matrices

    By an old result of Cohn (1965), a Hadamard matrix of order n has no proper Hadamard submatrices of order m > n/2. We generalise this result to maximal determinant submatrices of Hadamard matrices, and show that an interval of length asymptotically equal to n/2 is excluded from the allowable orders. We make a conjecture regarding a lower bound for sums of squares of minors of maximal determinant matrices, and give evidence in support of the conjecture. We give tables of the values taken by the minors of all maximal determinant matrices of orders up to and including 21 and make some observations on the data. Finally, we describe the algorithms that were used to compute the tables.
    Comment: 35 pages, 43 tables, added reference to Cohn in v
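    As a toy illustration of the objects being tabulated (not the paper's algorithm), the following sketch finds a maximal determinant $\{\pm 1\}$ matrix of order 3 by exhaustive search over all $2^9$ sign matrices and lists the values taken by its first minors.

```python
import itertools
import numpy as np

n = 3  # small enough for exhaustive search; D(3) = 4

best_det, best_M = 0, None
for entries in itertools.product((-1, 1), repeat=n * n):
    M = np.array(entries).reshape(n, n)
    d = round(np.linalg.det(M))
    if abs(d) > abs(best_det):
        best_det, best_M = d, M

print("maximal determinant:", abs(best_det))  # prints 4

# Values taken by the (n-1) x (n-1) minors of the matrix found.
minors = set()
for i in range(n):
    for j in range(n):
        sub = np.delete(np.delete(best_M, i, axis=0), j, axis=1)
        minors.add(round(np.linalg.det(sub)))
print("values taken by the first minors:", sorted(minors))
```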

    General lower bounds on maximal determinants of binary matrices

    We give general lower bounds on the maximal determinant of n×n {+1,-1}-matrices, both with and without the assumption of the Hadamard conjecture. Our bounds improve on earlier results of de Launey and Levin (2010) and, for certain congruence classes of

    Processing Succinct Matrices and Vectors

    We study the complexity of algorithmic problems for matrices that are represented by multi-terminal decision diagrams (MTDD). These are a variant of ordered decision diagrams, where the terminal nodes are labeled with arbitrary elements of a semiring (instead of 0 and 1). A simple example shows that the product of two MTDD-represented matrices cannot be represented by an MTDD of polynomial size. To overcome this deficiency, we extend MTDDs to MTDD_+ by allowing componentwise symbolic addition of variables (of the same dimension) in rules. It is shown that accessing an entry, equality checking, matrix multiplication, and other basic matrix operations can be solved in polynomial time for MTDD_+-represented matrices. On the other hand, testing whether the determinant of an MTDD-represented matrix vanishes is PSPACE-complete, and the same problem is NP-complete for MTDD_+-represented diagonal matrices. Computing a specific entry in a product of MTDD-represented matrices is #P-complete.
    Comment: An extended abstract of this paper will appear in the Proceedings of CSR 201
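    A minimal sketch of the quadrant-decomposition idea behind such diagrams (a simplification of the paper's grammar-based formalism, with integers standing in for semiring elements): each node denotes a $2^h \times 2^h$ matrix via four shared children, and entry access walks one quadrant per level, so it takes time linear in the diagram height even when the denoted matrix is exponentially large.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Leaf:
    value: int  # a semiring element; plain ints here for simplicity

@dataclass(frozen=True)
class Node:
    # The four quadrants of a 2^h x 2^h matrix; children may be shared.
    ul: "MTDD"
    ur: "MTDD"
    ll: "MTDD"
    lr: "MTDD"

MTDD = Union[Leaf, Node]

def entry(m: MTDD, h: int, i: int, j: int) -> int:
    """Return entry (i, j) of the 2^h x 2^h matrix represented by m.

    One quadrant choice per level: O(h) steps, i.e. polynomial in the
    diagram size even though the matrix itself has 4^h entries."""
    if isinstance(m, Leaf):
        return m.value
    half = 1 << (h - 1)
    quad = (m.ul, m.ur, m.ll, m.lr)[2 * (i >= half) + (j >= half)]
    return entry(quad, h - 1, i % half, j % half)

# A 4x4 block matrix whose diagonal blocks share one all-ones node:
# sharing is what keeps the diagram small as the matrix grows.
ones = Node(Leaf(1), Leaf(1), Leaf(1), Leaf(1))
zeros = Node(Leaf(0), Leaf(0), Leaf(0), Leaf(0))
m = Node(ones, zeros, zeros, ones)
print(entry(m, 2, 0, 1), entry(m, 2, 3, 3))  # -> 1 1
```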

    An extensive English language bibliography on graph theory and its applications, supplement 1

    Graph theory and its applications - bibliography, supplement

    Bolt: Accelerated Data Mining with Fast Vector Compression

    Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and up to 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.
    Comment: Research track paper at KDD 201
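    Bolt builds on product quantization; the sketch below shows the generic lookup-table trick such methods use for approximate dot products, not Bolt's actual encoding. Random codebooks stand in for ones trained with k-means, and the names are my own: per query, inner products against every centroid are precomputed once, after which each encoded vector costs only a handful of table lookups and additions.

```python
import numpy as np

rng = np.random.default_rng(0)

D, M, K = 64, 8, 16    # total dims, subspaces, centroids per subspace
d = D // M             # dims per subspace

# Toy codebooks: in practice these come from k-means on training data.
codebooks = rng.normal(size=(M, K, d))

def encode(x):
    """Quantize x: per subspace, the index of the nearest centroid."""
    x = x.reshape(M, 1, d)
    return np.argmin(((codebooks - x) ** 2).sum(-1), axis=1)  # shape (M,)

def dot_table(q):
    """Precompute <q_m, centroid> for every subspace/centroid pair."""
    return np.einsum('md,mkd->mk', q.reshape(M, d), codebooks)  # (M, K)

def approx_dot(table, code):
    """Approximate <q, x> as a sum of M table lookups (no multiplies)."""
    return table[np.arange(M), code].sum()

x, q = rng.normal(size=D), rng.normal(size=D)
print(approx_dot(dot_table(q), encode(x)), "vs exact", q @ x)
```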