Probabilistic lower bounds on maximal determinants of binary matrices
Let D(n) be the maximal determinant for n x n {+1,-1}-matrices, and
R(n) = D(n)/n^(n/2) be the ratio of D(n) to the Hadamard upper bound.
Using the probabilistic method, we prove new lower bounds on D(n) and R(n)
in terms of d = n - h, where h is the order of a Hadamard matrix and h is
maximal subject to h <= n. For example, R(n) > (pi*e/2)^(-d/2) if 1 <= d <= 3,
and R(n) > (pi*e/2)^(-d/2)(1 - d^2(pi/(2h))^(1/2)) if d > 3. By a recent
result of Livinskyi, d^2/h^(1/2) -> 0 as n -> infinity, so the second bound
is close to (pi*e/2)^(-d/2) for large n. Previous
lower bounds tended to zero as n -> infinity with d fixed, except in the
cases d in {0,1}. For d >= 2, our bounds are better for all
sufficiently large n. If the Hadamard conjecture is true, then d <= 3, so
the first bound above shows that R(n) is bounded below by a positive
constant (pi*e/2)^(-3/2) > 0.1133.
Comment: 17 pages, 2 tables, 24 references. Shorter version of
arXiv:1402.6817v4. Typos corrected in v2 and v3, new Lemma 7 in v4, updated
references in v5, added Remark 2.8 and a reference in v6, updated references
in v
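The positive constant in the Hadamard-conjecture case can be checked numerically. A minimal sketch, assuming the first bound takes the form R(n) > (pi*e/2)^(-d/2) with d <= 3 (as in Brent and Osborn's paper):

```python
import math

# Assumed bound form: R(n) > (pi*e/2)**(-d/2); under the Hadamard
# conjecture d <= 3, so the worst case d = 3 gives the constant below.
c = (math.pi * math.e / 2) ** (-3 / 2)
print(c)  # ~0.1133
```

This confirms the lower bound stays above 0.1133 for all n if the Hadamard conjecture holds.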
On minors of maximal determinant matrices
By an old result of Cohn (1965), a Hadamard matrix of order n has no proper
Hadamard submatrices of order m > n/2. We generalise this result to maximal
determinant submatrices of Hadamard matrices, and show that an interval of
length asymptotically equal to n/2 is excluded from the allowable orders. We
make a conjecture regarding a lower bound for sums of squares of minors of
maximal determinant matrices, and give evidence in support of the conjecture.
We give tables of the values taken by the minors of all maximal determinant
matrices of orders up to and including 21 and make some observations on the
data. Finally, we describe the algorithms that were used to compute the tables.
Comment: 35 pages, 43 tables, added reference to Cohn in v
General lower bounds on maximal determinants of binary matrices
We give general lower bounds on the maximal determinant of n×n {+1,-1}-matrices, both with and without the assumption of the Hadamard conjecture. Our bounds improve on earlier results of de Launey and Levin (2010) and, for certain congruence classes of
Processing Succinct Matrices and Vectors
We study the complexity of algorithmic problems for matrices that are
represented by multi-terminal decision diagrams (MTDD). These are a variant of
ordered decision diagrams, where the terminal nodes are labeled with arbitrary
elements of a semiring (instead of 0 and 1). A simple example shows that the
product of two MTDD-represented matrices cannot be represented by an MTDD of
polynomial size. To overcome this deficiency, we extend MTDDs to MTDD_+ by
allowing componentwise symbolic addition of variables (of the same dimension)
in rules. It is shown that accessing an entry, equality checking, matrix
multiplication, and other basic matrix operations can be solved in polynomial
time for MTDD_+-represented matrices. On the other hand, testing whether the
determinant of an MTDD-represented matrix vanishes is PSPACE-complete, and the
same problem is NP-complete for MTDD_+-represented diagonal matrices. Computing
a specific entry in a product of MTDD-represented matrices is #P-complete.
Comment: An extended abstract of this paper will appear in the Proceedings of
CSR 201
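The succinctness idea behind decision-diagram matrix representations can be sketched as a quad-tree with shared sub-blocks (a hypothetical simplification, not the paper's MTDD definition): constant blocks collapse to a single leaf, and reading an entry of a 2^k x 2^k matrix descends only k levels, matching the polynomial-time entry access mentioned above.

```python
# Hypothetical quad-tree sketch of MTDD-style sharing. A node holds
# four quadrants (tl, tr, bl, br); a leaf stands for a constant block
# of any size, which is where the compression comes from.

def leaf(val):
    return ('leaf', val)

def node(tl, tr, bl, br):
    return ('node', tl, tr, bl, br)

def entry(dd, k, i, j):
    """Return entry (i, j) of the 2^k x 2^k matrix rooted at dd
    in O(k) node steps."""
    if dd[0] == 'leaf':                      # constant block
        return dd[1]
    half = 1 << (k - 1)
    quad = dd[1 + 2 * (i >= half) + (j >= half)]
    return entry(quad, k - 1, i % half, j % half)

Z = leaf(0)                        # an all-zero block, shared everywhere
I2 = node(leaf(1), Z, Z, leaf(1))  # 2x2 identity
I4 = node(I2, Z, Z, I2)            # 4x4 identity, reusing the I2 block
print([entry(I4, 2, i, i) for i in range(4)])  # [1, 1, 1, 1]
print(entry(I4, 2, 1, 2))                      # 0
```

Because the two diagonal quadrants of I4 point at the same I2 node, the representation of the 2^k x 2^k identity needs only O(k) nodes rather than O(4^k) entries.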
An extensive English language bibliography on graph theory and its applications, supplement 1
Graph theory and its applications - bibliography, supplement
Bolt: Accelerated Data Mining with Fast Vector Compression
Vectors of data are at the heart of machine learning and data mining.
Recently, vector quantization methods have shown great promise in reducing both
the time and space costs of operating on vectors. We introduce a vector
quantization algorithm that can compress vectors over 12x faster than existing
techniques while also accelerating approximate vector operations such as
distance and dot product computations by up to 10x. Because it can encode over
2GB of vectors per second, it makes vector quantization cheap enough to employ
in many more circumstances. For example, using our technique to compute
approximate dot products in a nested loop can multiply matrices faster than a
state-of-the-art BLAS implementation, even when our algorithm must first
compress the matrices.
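The underlying trick can be sketched with plain product quantization (a simplified illustration with hand-picked codebooks, not Bolt's actual learned encoding): each vector is split into subvectors, each subvector is replaced by the index of its nearest centroid, and a dot product then costs one table lookup per subspace instead of one multiply per dimension.

```python
def encode(v, codebooks):
    """Quantize each subvector of v to the index of its nearest centroid."""
    d = len(v) // len(codebooks)
    return [min(range(len(cb)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(v[j*d:(j+1)*d], cb[k])))
            for j, cb in enumerate(codebooks)]

def dot_tables(q, codebooks):
    """Precompute dot products of each query subvector with every centroid."""
    d = len(q) // len(codebooks)
    return [[sum(a * b for a, b in zip(q[j*d:(j+1)*d], c)) for c in cb]
            for j, cb in enumerate(codebooks)]

def approx_dot(codes, tables):
    """Approximate <q, v> with one table lookup per subspace."""
    return sum(t[k] for t, k in zip(tables, codes))

# Toy codebooks (assumed, not learned): 2 subspaces, 2 centroids each
codebooks = [[(1, 0), (0, 1)], [(1, 1), (2, 0)]]
v = [0, 1, 2, 0]
q = [3, 4, 5, 6]
codes = encode(v, codebooks)
print(approx_dot(codes, dot_tables(q, codebooks)))  # 14, exact here
```

Here the subvectors of v happen to coincide with centroids, so the approximate dot product equals the exact one (q . v = 14); in general the lookup tables trade a small approximation error for the large speedups described above.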
In addition to showing the above speedups, we demonstrate that our approach
can accelerate nearest neighbor search and maximum inner product search by over
100x compared to floating point operations and up to 10x compared to other
vector quantization methods. Our approximate Euclidean distance and dot product
computations are not only faster than those of related algorithms with slower
encodings, but also faster than Hamming distance computations, which have
direct hardware support on the tested platforms. We also assess the errors of
our algorithm's approximate distances and dot products, and find that it is
competitive with existing, slower vector quantization algorithms.
Comment: Research track paper at KDD 201