Bounds for the minimum diameter of integral point sets
Geometrical objects with integral sides have attracted mathematicians for
ages. For example, the problem of proving or disproving the existence of a
perfect box, that is, a rectangular parallelepiped with all edges, face
diagonals and space diagonals of integer lengths, remains open. More generally,
an integral point set is a set of $n$ points in the $m$-dimensional Euclidean
space with pairwise integral distances,
where the largest occurring distance is called its diameter. From the
combinatorial point of view there is a natural interest in the determination of
the smallest possible diameter for given parameters $n$ and $m$. We
give some new upper bounds for the minimum diameter and some exact
values.
Comment: 8 pages, 7 figures; typos corrected
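To make the definition concrete, here is a minimal Python sketch (not taken from the paper; the sample points are an arbitrary 3-4-5 configuration) that checks whether a finite set of integer-coordinate points has pairwise integral distances and, if so, reports its diameter.

# Sketch: verify pairwise integral distances and report the diameter.
from itertools import combinations
from math import isqrt

def integral_diameter(points):
    """Return the diameter if all pairwise distances are integers, else None."""
    diameter = 0
    for p, q in combinations(points, 2):
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))  # squared Euclidean distance
        d = isqrt(d2)
        if d * d != d2:  # distance is irrational, so this is not an integral point set
            return None
        diameter = max(diameter, d)
    return diameter

# Example: a 3-4-5 right triangle is a planar integral point set of diameter 5.
print(integral_diameter([(0, 0), (3, 0), (0, 4)]))  # -> 5
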
Enumeration of integral tetrahedra
We determine the numbers of integral tetrahedra with diameter $d$ up to
isomorphism, for all $d$ up to a fixed bound, via computer enumeration. To this
end we give an algorithm that enumerates the integral tetrahedra with diameter
at most $d$, and an algorithm that can check the canonicity of a given
integral tetrahedron with at most 6 integer comparisons. For the number of
isomorphism classes of integral $4 \times 4$ matrices with diameter $d$
fulfilling the triangle inequalities we derive an exact formula.
Comment: 10 pages, 1 figure
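As an illustration of the objects being counted, the following brute-force sketch (illustration only; it is not the paper's enumeration algorithm and says nothing about its running time) counts integral tetrahedra with all edge lengths at most a small bound, up to isomorphism, using the face triangle inequalities plus the Cayley-Menger determinant as the realizability test and canonicalizing over the 24 vertex relabelings.

# Sketch: count integral tetrahedra with diameter <= max_diameter, up to isomorphism.
# Realizability in 3-space is tested via the Cayley-Menger determinant (288*V^2).
from itertools import product, permutations
import numpy as np

EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
FACES = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def cayley_menger(d):
    m = np.ones((5, 5))
    m[0, 0] = 0.0
    for i in range(4):
        m[i + 1, i + 1] = 0.0
        for j in range(i + 1, 4):
            m[i + 1, j + 1] = m[j + 1, i + 1] = d[(i, j)] ** 2
    return np.linalg.det(m)

def is_tetrahedron(d):
    for a, b, c in FACES:
        x, y, z = d[(a, b)], d[(a, c)], d[(b, c)]
        if x + y <= z or x + z <= y or y + z <= x:  # face violates a triangle inequality
            return False
    return round(cayley_menger(d)) > 0  # 288*V^2 is a positive integer for integer edges

def canonical(d):
    """Lexicographically smallest edge tuple over all 24 vertex relabelings."""
    return min(tuple(d[tuple(sorted((p[a], p[b])))] for a, b in EDGES)
               for p in permutations(range(4)))

def count_integral_tetrahedra(max_diameter):
    seen = set()
    for lengths in product(range(1, max_diameter + 1), repeat=6):
        d = dict(zip(EDGES, lengths))
        if is_tetrahedron(d):
            seen.add(canonical(d))
    return len(seen)

print(count_integral_tetrahedra(3))  # small bounds only; the search space grows like d^6
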
The Shape of the Level Sets of the First Eigenfunction of a Class of Two Dimensional Schr\"odinger Operators
We study the first Dirichlet eigenfunction of a class of Schr\"odinger
operators with a convex potential $V$ on a domain. We find two length scales
and an orientation of the domain which determine the shape of the level sets of
the eigenfunction. As an intermediate
step, we also establish bounds on the first eigenvalue in terms of the first
eigenvalue of an associated ordinary differential operator.
Comment: 56 pages, 3 figures
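As a toy illustration of the kind of object studied (a one-dimensional sketch, not the paper's two-dimensional setting or its construction; the interval and the convex potential 50(x - 0.3)^2 are arbitrary choices), the first Dirichlet eigenpair of -u'' + V(x)u can be approximated by finite differences.

# Sketch: first Dirichlet eigenpair of -u'' + V(x) u on (a, b) via finite differences.
import numpy as np

def first_dirichlet_eigenpair(V, a=0.0, b=1.0, n=400):
    x = np.linspace(a, b, n + 2)[1:-1]    # interior grid points
    h = (b - a) / (n + 1)
    main = 2.0 / h**2 + V(x)              # diagonal: discrete -d^2/dx^2 plus potential
    off = -np.ones(n - 1) / h**2          # off-diagonals of the second-difference matrix
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    evals, evecs = np.linalg.eigh(H)      # symmetric matrix; eigenvalues in ascending order
    return evals[0], x, evecs[:, 0]

lam, x, u = first_dirichlet_eigenpair(lambda x: 50.0 * (x - 0.3) ** 2)  # convex potential
print(f"first eigenvalue ~ {lam:.4f}")    # with V = 0 this would approach pi^2
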
A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity
We present a novel notion of complexity that interpolates between and
generalizes some classic existing complexity notions in learning theory: for
estimators like empirical risk minimization (ERM) with arbitrary bounded
losses, it is upper bounded in terms of data-independent Rademacher complexity;
for generalized Bayesian estimators, it is upper bounded by the data-dependent
information complexity (also known as stochastic or PAC-Bayesian complexity).
For
(penalized) ERM, the new complexity reduces to (generalized) normalized maximum
likelihood (NML) complexity, i.e. a minimax log-loss individual-sequence
regret. Our first main result bounds excess risk in terms of the new
complexity. Our second main result links the new complexity via Rademacher
complexity to entropy, thereby generalizing earlier results of Opper,
Haussler, Lugosi, and Cesa-Bianchi, who treated the log-loss case.
Together, these results recover optimal bounds for VC- and large (polynomial
entropy) classes, replacing localized Rademacher complexity by a simpler
analysis which almost completely separates the two aspects that determine the
achievable rates: 'easiness' (Bernstein) conditions and model complexity.
Comment: 38 pages
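As a worked toy instance of the minimax log-loss individual-sequence regret mentioned above (a standard textbook computation, not the paper's unified complexity measure), the Shtarkov / NML complexity of the Bernoulli model on n binary outcomes can be evaluated exactly.

# Sketch: Shtarkov (NML) complexity of the Bernoulli model,
# log of the sum over all x in {0,1}^n of the maximized likelihood max_theta p_theta(x),
# grouped by the number k of ones (the maximizer is theta = k/n).
from math import comb, log

def bernoulli_nml_complexity(n):
    total = 0.0
    for k in range(n + 1):
        theta = k / n
        ml = theta ** k * (1 - theta) ** (n - k)  # note: 0**0 == 1 in Python, as intended
        total += comb(n, k) * ml
    return log(total)

for n in (10, 100, 1000):
    print(n, round(bernoulli_nml_complexity(n), 4))
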
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the ones for
the Hyperlink graph use distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly-available as the Graph-Based Benchmark Suite
(GBBS).
Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 2018