Rigid abelian groups and the probabilistic method
The construction of torsion-free abelian groups with prescribed endomorphism
rings, starting with Corner's seminal work, is a well-studied subject in the
theory of abelian groups. Usually these constructions work by adding elements
from a (topological) completion in order to get rid of (kill) unwanted
homomorphisms. The critical part is to actually prove that every unwanted
homomorphism can be killed by adding a suitable element. We will demonstrate
that some of those constructions can be significantly simplified by choosing
the elements at random. As a result, the endomorphism ring will be almost
surely prescribed, i.e., prescribed with probability one.
Comment: 12 pages, submitted to the special volume of Contemporary Mathematics
for the proceedings of the conference Group and Model Theory, 201
A polyhedral approach to computing border bases
Border bases can be considered the natural extension of Gr\"obner bases, and
they offer several advantages. Unfortunately, to date the classical border
basis algorithm relies on (degree-compatible) term orderings and implicitly on
reduced Gr\"obner bases. We adapt the classical border basis algorithm to allow
for calculating border bases for arbitrary degree-compatible order ideals,
which is \emph{independent} of term orderings. Moreover, the algorithm also
supports calculating degree-compatible order ideals with \emph{preference} on
contained elements, even though finding a preferred order ideal is NP-hard.
Effectively, we retain degree-compatibility only to successively extend our
computation degree by degree. The adaptation is based on our polyhedral
characterization: order ideals that support a border basis correspond
one-to-one to integral points of the order ideal polytope. This establishes a
crucial connection between the ideal and the combinatorial structure of the
associated factor spaces.
The matching polytope does not admit fully-polynomial size relaxation schemes
The groundbreaking work of Rothvo{\ss} [arxiv:1311.2369] established that
every linear program expressing the matching polytope has an exponential number
of inequalities (formally, the matching polytope has exponential extension
complexity). We generalize this result by deriving strong bounds on the
polyhedral inapproximability of the matching polytope: for fixed
$0 < \varepsilon < 1$, every polyhedral $(1 + \varepsilon/n)$-approximation
requires an exponential number of inequalities, where $n$ is the number of
vertices. This is sharp given the well-known $(1 + \varepsilon/n)$-approximation
of size $O(n^{1/\varepsilon})$ provided by the odd-sets of size up to
$1/\varepsilon$. Thus matching is the first problem in $P$ whose natural
linear encoding does not admit a fully polynomial-size relaxation scheme (the
polyhedral equivalent of an FPTAS), which provides a sharp separation from the
polynomial-size relaxation scheme obtained, e.g., via the constant-sized
odd-sets mentioned above.
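For context, the odd-set relaxation the abstract refers to comes from Edmonds' classical linear description of the matching polytope; the inequalities below are standard and recalled here for reference, not stated in the abstract itself:

```latex
% Edmonds' description of the matching polytope of a graph G = (V, E):
x_e \ge 0 \quad (e \in E), \qquad
\sum_{e \in \delta(v)} x_e \le 1 \quad (v \in V), \qquad
\sum_{e \in E[S]} x_e \le \frac{|S| - 1}{2} \quad (S \subseteq V,\ |S| \text{ odd}).
```

Keeping only the odd-set inequalities for sets $S$ up to a fixed size yields the polynomial-size relaxation discussed above.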
Our approach reuses ideas from Rothvo{\ss} [arxiv:1311.2369]; however, the
main lower bounding technique is different. While the original proof is based
on the hyperplane separation bound (also called the rectangle corruption
bound), we employ the information-theoretic notion of common information as
introduced in Braun and Pokutta [http://eccc.hpi-web.de/report/2013/056/],
which allows us to analyze perturbations of slack matrices. It turns out that
the high extension complexity of the matching polytope stems from the same
source of hardness as for the correlation polytope: a direct sum structure.
Comment: 21 pages, 3 figures
The Frank-Wolfe algorithm: a short introduction
In this paper we provide an introduction to the Frank-Wolfe algorithm, a
method for smooth convex optimization in the presence of (relatively)
complicated constraints. We will present the algorithm, introduce key concepts,
and establish important baseline results, such as primal and dual convergence.
We will also discuss some of its properties and present a new adaptive
step-size strategy as well as applications.
Comment: Introductory article for the Jahresbericht der Deutschen
Mathematiker-Vereinigung
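The algorithm introduced in this article can be sketched in a few lines. The version below is a minimal illustration with the classical step size $\gamma_t = 2/(t+2)$, not the adaptive strategy the paper proposes; the example constraint set (the probability simplex) and all function names are assumptions for illustration:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=100):
    """Minimize a smooth convex f over a compact convex set C.

    grad: gradient oracle of f
    lmo:  linear minimization oracle, lmo(g) = argmin_{v in C} <g, v>
    x0:   feasible starting point
    """
    x = x0
    for t in range(steps):
        g = grad(x)
        v = lmo(g)                # vertex minimizing the linearization
        gamma = 2.0 / (t + 2.0)   # classical (agnostic) step size
        x = x + gamma * (v - x)   # convex combination: stays feasible
    return x

# Illustrative example: minimize ||x - b||^2 over the probability simplex,
# where the LMO simply returns the best vertex (a unit coordinate vector).
b = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - b)

def lmo(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

x = frank_wolfe(grad, lmo, np.array([1.0, 0.0, 0.0]), steps=500)
```

Note that each iterate is a convex combination of simplex vertices, so feasibility is maintained without any projection step, which is the key appeal of the method for complicated constraint sets.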