Algebra in Computational Complexity
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called "chasm at depth 4" suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model, and these are tied to central questions regarding the power of randomness in computation. Representation theory has emerged as an important tool in three separate lines of work: the "Geometric Complexity Theory" approach to P vs. NP and circuit lower bounds, the effort to resolve the complexity of matrix multiplication, and a framework for constructing locally testable codes. Coding theory has seen several algebraic innovations in recent years, including multiplicity codes and new lower bounds. This seminar brought together researchers who are using a diverse array of algebraic methods in a variety of settings, and it played an important role in educating a diverse community about the latest techniques, spurring further progress.
Algebraic Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test
are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
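Polynomial identity testing, mentioned above as a central problem tied to the power of randomness, admits a simple one-sided-error randomized test whose analysis rests on the Schwartz-Zippel lemma. The following Python sketch is illustrative only (the black-box interface, sample-set size, and trial count are choices made here, not taken from the abstracts):

```python
import random

def is_identically_zero(poly, num_vars, degree, trials=30):
    """Randomized polynomial identity test via the Schwartz-Zippel lemma.

    `poly` is a black box mapping a point to the polynomial's value.
    A nonzero polynomial of total degree <= `degree` vanishes at a
    uniformly random point of S^num_vars with probability <= degree/|S|.
    """
    sample_set_size = max(100 * degree, 100)   # |S| >> degree => small error
    for _ in range(trials):
        point = tuple(random.randrange(sample_set_size)
                      for _ in range(num_vars))
        if poly(point) != 0:
            return False   # a nonzero evaluation certifies non-identity
    return True            # identically zero with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; xy - y is not.
p = lambda v: (v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)
q = lambda v: v[0] * v[1] - v[1]
```

Note the one-sided error: a nonzero evaluation is conclusive, while a run of zero evaluations is only probabilistic evidence. Derandomizing this test for algebraic circuits is exactly the open problem the abstracts allude to.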
Algebraic Approaches to State Complexity of Regular Operations
The state complexity of operations on regular languages is an active area of research in
theoretical computer science. Through connections with algebra, particularly the theory
of semigroups and monoids, many problems in this area can be simplified or completely
reduced to combinatorial problems. We describe various algebraic techniques for attacking
state complexity problems. We present a general method for constructing witness languages
for operations -- languages that attain the worst-case state complexity when used as the
argument(s) of the operation. Our construction is based on full transformation monoids,
which contain all functions from a finite set into itself. When a witness for an operation is
known, determining the state complexity essentially becomes a counting problem.
These counting problems, however, are not necessarily easy, and the witness languages
produced by this method are not ideal in the sense that they have extremely large alphabets.
We thus investigate some commonly used operations in detail, and look for algebraic
techniques to simplify the combinatorial side of state complexity problems and to simplify
the search for small-alphabet witnesses. For Boolean operations (e.g., union, intersection,
difference) we show that these combinatorial problems can be solved easily in special cases
by studying the subgroup of permutations in the syntactic monoid of a witness candidate.
If the subgroup of permutations is known to have some strong transitivity property, such as
primitivity or 2-transitivity, we can draw conclusions about the worst-case state complexity
when this language is used in a Boolean operation. For the operations of concatenation
and Kleene star (an iterated version of concatenation), we describe a “construction set”
method to simplify state complexity lower-bound proofs, and determine some algebraic
conditions under which this method can be applied. For the reversal operation, we show
that the state complexity of the reverse of a language is closely related to the syntactic
monoid of the language, and use this fact to investigate a generalized version of the reversal
state complexity problem.
After describing our techniques, we demonstrate them by applying them to some classical
state complexity problems. We obtain complex generalizations of the classical results
that would be difficult to prove without the machinery we develop.
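The counting step described above — determining how many states of a construction are reachable and pairwise distinguishable — can be brute-forced for small instances. Below is a hedged Python sketch using the classical witnesses for intersection (a unary a-counter mod m and a b-counter mod n); the product construction and Moore-style partition refinement are standard techniques, not code from the thesis:

```python
def reachable(alphabet, delta, start):
    """BFS over the transition function: states reachable from `start`."""
    seen, frontier = {start}, [start]
    while frontier:
        q = frontier.pop()
        for a in alphabet:
            r = delta(q, a)
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return seen

def state_complexity(alphabet, delta, start, accepting):
    """Count reachable states modulo Myhill-Nerode equivalence
    (Moore-style partition refinement)."""
    live = reachable(alphabet, delta, start)
    part = {q: int(q in accepting) for q in live}
    while True:
        sig = {q: (part[q], tuple(part[delta(q, a)] for a in alphabet))
               for q in live}
        label = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {q: label[sig[q]] for q in live}
        if new == part:
            return len(set(part.values()))
        part = new

# Classical witnesses for intersection: L1 counts a's mod m, L2 counts b's mod n.
m, n = 3, 4
alphabet = ['a', 'b']
d1 = lambda q, c: (q + 1) % m if c == 'a' else q
d2 = lambda q, c: (q + 1) % n if c == 'b' else q
dp = lambda q, c: (d1(q[0], c), d2(q[1], c))   # product automaton for L1 ∩ L2
print(state_complexity(alphabet, dp, (0, 0), {(0, 0)}))  # prints 12 = m*n
```

For m = 3 and n = 4 all twelve product states are reachable and pairwise distinguishable, matching the classical mn worst-case bound for intersection; the algebraic techniques in the thesis exist precisely to avoid such case-by-case counting at scale.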
Combinatorial complexity in o-minimal geometry
In this paper we prove tight bounds on the combinatorial and topological
complexity of sets defined in terms of definable sets belonging to some
fixed definable family of sets in an o-minimal structure. This generalizes the
combinatorial parts of similar bounds known in the case of semi-algebraic and
semi-Pfaffian sets, and as a result vastly increases the applicability of
results on combinatorial and topological complexity of arrangements studied in
discrete and computational geometry. As a sample application, we extend a
Ramsey-type theorem due to Alon et al., originally proved for semi-algebraic
sets of fixed description complexity, to this more general setting.
Comment: 25 pages. Revised version. To appear in the Proc. London Math. Soc.
Random Sampling in Computational Algebra: Helly Numbers and Violator Spaces
This paper transfers a randomized algorithm, originally used in geometric
optimization, to computational problems in commutative algebra. We show that
Clarkson's sampling algorithm can be applied to two problems in computational
algebra: solving large-scale polynomial systems and finding small generating
sets of graded ideals. The cornerstone of our work is showing that the theory
of violator spaces of G\"artner et al.\ applies to polynomial ideal problems.
To show this, one utilizes a Helly-type result for algebraic varieties. The
resulting algorithms have expected runtime linear in the number of input
polynomials, making the ideas interesting for handling systems with very large
numbers of polynomials, but whose rank in the vector space of polynomials is
small (e.g., when the number of variables and the degree are constant).
Comment: Minor edits, added two references; results unchanged.
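Clarkson's reweighted sampling scheme can be illustrated on a toy violator space of combinatorial dimension 2: the smallest interval enclosing a set of points on the line, whose basis is just its two extreme points. This is an illustrative sketch under that assumption, not the paper's algorithm; the sample size and reweighting threshold follow the usual Clarkson-style analysis:

```python
import random

def interval_basis(points):
    """Basis of the smallest enclosing interval: its two endpoints."""
    return (min(points), max(points))

def violates(p, basis):
    """A point violates the basis if it falls outside the interval."""
    lo, hi = basis
    return p < lo or p > hi

def clarkson_sample(points, dim=2, seed=0):
    """Clarkson-style reweighted random sampling.

    Solve the problem on a small random sample; any constraint that
    violates the sample's solution has its weight doubled, so true
    basis elements quickly come to dominate the sampling distribution.
    """
    rng = random.Random(seed)
    weights = {p: 1 for p in points}
    sample_size = min(6 * dim * dim, len(points))
    while True:
        pool = list(weights)
        sample = rng.choices(pool,
                             weights=[weights[p] for p in pool],
                             k=sample_size)
        basis = interval_basis(sample)
        violators = [p for p in pool if violates(p, basis)]
        if not violators:
            return basis   # the sample's basis satisfies every constraint
        # reweight only when violators are light, as in Clarkson's analysis
        if sum(weights[p] for p in violators) <= sum(weights.values()) / (3 * dim):
            for p in violators:
                weights[p] *= 2

print(clarkson_sample(list(range(100))))  # → (0, 99)
```

Each round touches every constraint only through the cheap violation test, which is the structural reason the expected runtime is linear in the number of constraints for fixed combinatorial dimension — the property the paper exploits for large polynomial systems.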