Exploiting symmetries in SDP-relaxations for polynomial optimization
In this paper we study various approaches for exploiting symmetries in
polynomial optimization problems within the framework of semidefinite
programming relaxations. Our special focus is on constrained problems,
especially when the symmetric group is acting on the variables. In particular,
we investigate the concept of block decomposition within the framework of
constrained polynomial optimization problems, show how the degree principle for
the symmetric group can be computationally exploited, and also propose some
methods to efficiently compute in the geometric quotient.
Comment: (v3) Minor revision. To appear in Math. of Operations Research
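As a toy numerical illustration (not from the paper) of the degree principle mentioned above: roughly, a symmetric polynomial of low degree attains its extrema at points with few distinct coordinates. The sketch below minimizes a symmetric degree-4 polynomial in three variables over a grid and checks that restricting the search to points with at most 2 distinct coordinates recovers the same minimum; the polynomial and grid are arbitrary choices for illustration.

```python
import itertools

def f(x):
    # Symmetric degree-4 polynomial: sum_i (x_i^2 - 1)^2, minimum value 0.
    return sum((xi * xi - 1.0) ** 2 for xi in x)

grid = [i * 0.5 for i in range(-4, 5)]            # {-2.0, -1.5, ..., 2.0}
pts = list(itertools.product(grid, repeat=3))

full_min = min(f(p) for p in pts)
# Degree principle (degree 4): restrict to points with <= 2 distinct coords.
restr_min = min(f(p) for p in pts if len(set(p)) <= 2)

print(full_min, restr_min)  # both 0.0
```

The restricted search space is dramatically smaller, which is the computational point the abstract alludes to.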
On the classification of Kahler-Ricci solitons on Gorenstein del Pezzo surfaces
We give a classification of all pairs (X,v) of Gorenstein del Pezzo surfaces
X and vector fields v which are K-stable in the sense of Berman-Nystrom and
therefore are expected to admit a Kahler-Ricci soliton. Moreover, we provide
some new examples of Fano threefolds admitting a Kahler-Ricci soliton.
Comment: 21 pages, ancillary files containing calculations in SageMath; minor
correction
Model counting for complex data structures
We extend recent approaches for calculating the probability of program behaviors to allow model counting for complex data structures with numeric fields. We use symbolic execution with lazy initialization to compute the input structures leading to the occurrence of a target event, while keeping a symbolic representation of the constraints on the numeric data. Off-the-shelf model counting tools are used to count the solutions to the numeric constraints, and field bounds encoding data structure invariants are used to reduce the search space. The technique is implemented in the Symbolic PathFinder tool and evaluated on several complex data structures. Results show that the technique is much faster than an enumeration-based method using the Korat tool, and also highlight the benefits of using the field bounds to speed up the analysis.
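To make the counting idea concrete, here is a minimal Python sketch (the path condition, bounds, and count are hypothetical, not taken from the paper's benchmarks): a numeric constraint collected along one symbolic path is model-counted by enumeration, with field bounds restricting the domain.

```python
from itertools import product

# Hypothetical path condition over two numeric fields, as symbolic
# execution might collect it on the path to a target event:
#   node.value > 5  and  node.value + child.value < 20
def path_condition(v, c):
    return v > 5 and v + c < 20

# Field bounds (from a data-structure invariant) shrink the search space.
LO, HI = 0, 15

models = sum(1 for v, c in product(range(LO, HI + 1), repeat=2)
             if path_condition(v, c))
print(models)  # 95 models out of 16 * 16 = 256 candidate inputs
```

A real model counter (e.g., a #SMT or lattice-point tool) would count such solutions symbolically rather than by enumeration, which is what makes the approach scale.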
Polyhedra Circuits and Their Applications
To better compute the volume of and count the lattice points in geometric objects, we propose polyhedral circuits. Each polyhedral circuit characterizes a geometric region in R^d. Polyhedral circuits can represent a rich class of geometric objects, including all polyhedra and finite unions of polyhedra, and can also be used to approximate a large class of d-dimensional manifolds in R^d. Barvinok [3] developed polynomial-time algorithms to compute the volume of a rational polyhedron and to count the lattice points in a rational polyhedron in R^d for fixed dimension d. For fixed dimension d, let T_V(d,n) be the polynomial time (in n) to compute the volume of a rational polyhedron, T_L(d,n) the polynomial time to count the lattice points in a rational polyhedron, and T_I(d,n) the polynomial time to solve an integer linear program, where n is the total number of linear inequalities in the input polyhedra. We develop algorithms to count the lattice points in the region determined by a polyhedral circuit in O(n^d · r_d(n) · T_V(d,n)) time and to compute the volume of the region determined by a polyhedral circuit in O(n · r_d(n) · T_I(d,n) + r_d(n) · T_L(d,n)) time, where r_d(n) is the maximum number of atomic regions into which n hyperplanes partition R^d. Applications to the continuous polyhedra maximum coverage problem, the polyhedra maximum lattice coverage problem, the polyhedra (1−β)-lattice set cover problem, and the (1−β)-continuous polyhedra set cover problem are discussed. We also show the NP-hardness of the geometric versions of the maximum coverage and set cover problems when each set is represented as a union of polyhedra.
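As a hedged illustration of the underlying idea (not the paper's algorithm, which relies on Barvinok-style counting rather than enumeration), the sketch below treats a tiny "circuit" as an OR of two rational polyhedra in R^2, each given by linear inequalities Ax <= b, and counts the lattice points in the union by brute force over a bounding box. The polyhedra are arbitrary examples.

```python
from itertools import product

# Each polyhedron is a list of inequalities ((a1, a2), b) meaning a1*x + a2*y <= b.
P1 = [((1, 0), 3), ((-1, 0), 0), ((0, 1), 3), ((0, -1), 0)]   # square 0<=x<=3, 0<=y<=3
P2 = [((1, 1), 4), ((-1, 0), 0), ((0, -1), 0)]                # triangle x>=0, y>=0, x+y<=4

def inside(poly, p):
    return all(a[0] * p[0] + a[1] * p[1] <= b for a, b in poly)

# An OR gate of the circuit: the region is the union P1 ∪ P2.
def region(p):
    return inside(P1, p) or inside(P2, p)

count = sum(1 for p in product(range(-1, 6), repeat=2) if region(p))
print(count)  # |P1| + |P2| - |P1 ∩ P2| = 16 + 15 - 13 = 18 lattice points
```

Brute force is exponential in d; the point of the paper's construction is to get running times polynomial in n for fixed d.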
No imminent quantum supremacy by boson sampling
It is predicted that quantum computers will dramatically outperform their
conventional counterparts. However, large-scale universal quantum computers are
yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to
the platform of photons in linear optics, which has sparked interest as a rapid
way to demonstrate this quantum supremacy. Photon statistics are governed by
intractable matrix functions known as permanents, which suggests that sampling
from the distribution obtained by injecting photons into a linear-optical
network could be solved more quickly by a photonic experiment than by a
classical computer. The contrast between the apparently awesome challenge faced
by any classical sampling algorithm and the apparently near-term experimental
resources required for a large boson sampling experiment has raised
expectations that quantum supremacy by boson sampling is on the horizon. Here
we present classical boson sampling algorithms and theoretical analyses of
prospects for scaling boson sampling experiments, showing that near-term
quantum supremacy via boson sampling is unlikely. While the largest boson
sampling experiments reported so far are with 5 photons, our classical
algorithm, based on Metropolised independence sampling (MIS), allowed the boson
sampling problem to be solved for 30 photons with standard computing hardware.
We argue that the impact of experimental photon losses means that demonstrating
quantum supremacy by boson sampling would require a step change in technology.
Comment: 25 pages, 9 figures. Comments welcome
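The "intractable matrix functions" in question are permanents. For context (this is standard background, not the paper's MIS sampler), the fastest general exact approach is Ryser's inclusion-exclusion formula, which is still exponential in n; a naive implementation:

```python
from itertools import combinations

def permanent(M):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula.

    perm(M) = sum over nonempty column subsets S of
              (-1)^(n - |S|) * prod_i sum_{j in S} M[i][j]
    Exponential time, which is why exact boson sampling is classically hard.
    """
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

# Sanity checks: perm of all-ones 3x3 is 3! = 6; perm of identity is 1.
print(permanent([[1, 1, 1]] * 3))       # 6
print(permanent([[1, 0], [0, 1]]))      # 1
```

Approximate samplers such as the MIS approach described above avoid paying this exponential cost per output sample in full, which is what lets the authors reach 30 photons classically.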
Mixed-radix Naccache–Stern encryption
In this work, we explore a combinatorial optimization problem stemming from the Naccache–Stern cryptosystem. We show that solving this problem results in bandwidth improvements, and suggest a polynomial-time approximation algorithm to find an optimal solution. Our work suggests that using optimal radix encoding results in an asymptotic 50% increase in bandwidth.
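For background, mixed-radix encoding itself is straightforward; the optimization problem the paper addresses is choosing the bases well. A generic encode/decode sketch (the bases here are arbitrary illustrative values, not the primes of the actual Naccache–Stern construction):

```python
# Hypothetical digit bases; capacity is their product (3*5*7*11 = 1155).
BASES = [3, 5, 7, 11]

def encode(n, bases):
    """Write n in mixed radix: n = d0 + d1*b0 + d2*b0*b1 + ..."""
    digits = []
    for b in bases:
        n, d = divmod(n, b)
        digits.append(d)
    assert n == 0, "n exceeds the capacity of the given bases"
    return digits

def decode(digits, bases):
    n, weight = 0, 1
    for d, b in zip(digits, bases):
        n += d * weight
        weight *= b
    return n

print(encode(1000, BASES))              # [1, 3, 3, 9]
print(decode([1, 3, 3, 9], BASES))      # 1000
```

In the cryptosystem, each digit is carried by a prime power in the ciphertext, so larger per-digit capacity for the same primes translates directly into the bandwidth gains the abstract claims.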
Efficient Algorithms to Test Digital Convexity
A set S ⊂ Z^d is digital convex if conv(S) ∩ Z^d = S, where conv(S) denotes the convex hull of S. In this paper, we consider the algorithmic problem of testing whether a given set S of n lattice points is digital convex. Although convex hull computation requires Ω(n log n) time even for dimension d = 2, we provide an algorithm for testing the digital convexity of S ⊂ Z^2 in O(n + h log r) time, where h is the number of edges of the convex hull and r is the diameter of S. This main result is obtained by proving that if S is digital convex, then the well-known quickhull algorithm computes the convex hull of S in linear time. In fixed dimension d, we present the first polynomial algorithm to test digital convexity, as well as a simpler and more practical algorithm whose running time may not be polynomial in n for certain inputs.
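A brute-force reference implementation of the definition (nothing like the paper's O(n + h log r) algorithm, but handy for testing) computes the convex hull of S ⊂ Z^2 and checks that every lattice point of the hull belongs to S:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_hull(h, p):
    # p lies in the CCW convex polygon h iff it is left of (or on) every edge.
    return all(cross(h[i], h[(i + 1) % len(h)], p) >= 0 for i in range(len(h)))

def digital_convex(S):
    """Test conv(S) ∩ Z^2 = S by enumerating the bounding box of S."""
    h = hull(S)
    xs = [p[0] for p in S]
    ys = [p[1] for p in S]
    lattice = {(x, y) for x in range(min(xs), max(xs) + 1)
                      for y in range(min(ys), max(ys) + 1) if in_hull(h, (x, y))}
    return lattice == set(S)

# The filled triangle is digital convex; its three corners alone are not,
# since conv of the corners contains lattice points missing from S.
print(digital_convex([(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]))  # True
print(digital_convex([(0, 0), (2, 0), (0, 2)]))                          # False
```

The bounding-box enumeration can be exponential in the coordinate size, which is exactly the gap the paper's algorithms close.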