Geometrical Ambiguity of Pair Statistics. I. Point Configurations
Point configurations have been widely used as model systems in condensed matter physics, materials science, and biology. Statistical descriptors such as the n-body distribution functions g_n are usually employed to characterize point configurations, among which the most extensively used is the pair distribution function g_2(r). An intriguing inverse problem of practical
importance that has been receiving considerable attention is the degree to
which a point configuration can be reconstructed from the pair distribution
function of a target configuration. Although it is known that the pair-distance
information contained in g_2(r) is in general insufficient to uniquely determine
a point configuration, this concept does not seem to be widely appreciated and
general claims of uniqueness of the reconstructions using pair information have
been made based on numerical studies. In this paper, we introduce the idea of the distance space, called the D space. The pair distances of a specific point configuration are then represented by a single point in the D space. We derive the conditions on the pair distances that can be associated with a point configuration, which are equivalent to the realizability conditions of the pair distribution function g_2(r). Moreover, we
derive the conditions on the pair distances that can be assembled into distinct configurations. These conditions define a bounded region in the D space. By explicitly constructing a variety of degenerate point configurations using the D space, we show that pair information is indeed insufficient to uniquely determine the configuration in general. We also discuss several important problems in statistical physics based on the D space.
Comment: 28 pages, 8 figures
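The abstract's central point, that distinct configurations can share the same pair-distance information, can be checked directly on a classical one-dimensional homometric pair. The two six-point configurations below are a standard example from the literature on homometric sets; the helper name is mine, and this is only an illustrative check, not the paper's construction.

```python
from itertools import combinations

def pair_distances(points):
    """Sorted multiset of pairwise distances of a 1-D point configuration."""
    return sorted(abs(p - q) for p, q in combinations(points, 2))

# Two non-congruent 1-D configurations with identical pair distances
# (a classical homometric pair).
A = [0, 1, 4, 10, 12, 17]
B = [0, 1, 8, 11, 13, 17]

assert pair_distances(A) == pair_distances(B)
# Both span [0, 17], so the only congruences to rule out are the identity
# translation and the reflection p -> 17 - p; B is neither image of A.
assert B != A
assert B != sorted(17 - p for p in A)
```

Since all 15 pairwise distances agree while the configurations are not related by translation or reflection, the pair-distance multiset alone cannot single out one configuration.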
Algebraic Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test
are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
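The polynomial identity testing mentioned above is a good example of the interplay between algebra and randomness: by the Schwartz-Zippel lemma, two distinct polynomials of total degree at most d agree at a uniformly random point of F_p with probability at most d/p. A minimal sketch of the resulting randomized test follows; the function and parameter names are my own illustration, not from the seminar report.

```python
import random

def pit_equal(p, q, num_vars, field_prime=2**61 - 1, trials=20):
    """Randomized identity test for two polynomial black boxes over F_p.

    Distinct polynomials of degree <= d agree at a random point with
    probability <= d/p (Schwartz-Zippel), so repeated agreement at
    random points is strong evidence that p and q are identical.
    """
    for _ in range(trials):
        xs = [random.randrange(field_prime) for _ in range(num_vars)]
        if p(*xs) % field_prime != q(*xs) % field_prime:
            return False  # a single disagreement certifies p != q
    return True

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial...
f = lambda x, y: (x + y) ** 2
g = lambda x, y: x * x + 2 * x * y + y * y
assert pit_equal(f, g, 2)
# ...while x^2 + y^2 misses the cross term and is caught immediately.
h = lambda x, y: x * x + y * y
assert not pit_equal(f, h, 2)
```

Derandomizing this test, i.e. replacing the random points by a deterministically constructed small set, is precisely one of the central open questions alluded to in the abstract.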
Black Box Absolute Reconstruction for Sums of Powers of Linear Forms
We study the decomposition of multivariate polynomials as sums of powers of linear forms. We give a randomized algorithm for the following problem: If a homogeneous polynomial f ∈ K[x_1,...,x_n] (where K ⊆ ℂ) of degree d is given as a blackbox, decide whether it can be written as a linear combination of d-th powers of linearly independent complex linear forms. The main novel features of the algorithm are:
- For d = 3, we improve the running time of the algorithm in [Pascal Koiran and Mateusz Skomra, 2021] by a factor of n. The price to be paid for this improvement is that the algorithm now has two-sided error.
- For d > 3, we provide the first randomized blackbox algorithm for this problem that runs in time poly(n,d) (in an algebraic model where only arithmetic operations and equality tests are allowed). Previous algorithms for this problem [Kayal, 2011] as well as most of the existing reconstruction algorithms for other classes appeal to a polynomial factorization subroutine. This requires extraction of complex polynomial roots at unit cost and in standard models such as the unit-cost RAM or the Turing machine this approach does not yield polynomial time algorithms.
- For d > 3, when f has rational coefficients (i.e. K = ℚ), the running time of the blackbox algorithm is polynomial in n, d and the maximal bit size of any coefficient of f. This yields the first algorithm for this problem over ℂ with polynomial running time in the bit model of computation. These results are true even when we replace ℂ by ℝ. We view the problem as a tensor decomposition problem and use linear algebraic methods such as checking the simultaneous diagonalisability of the slices of a tensor. The number of such slices is exponential in d. But surprisingly, we show that after a random change of variables, computing just 3 special slices is enough. We also show that our approach can be extended to the computation of the actual decomposition. In forthcoming work we plan to extend these results to overcomplete decompositions, i.e., decompositions in more than n powers of linear forms.
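The slice idea can be made concrete in the smallest nontrivial case. For a binary cubic f = l_1^3 + l_2^3 with linearly independent l_1, l_2, the 2x2 slices of the symmetric tensor of f are simultaneously diagonalizable by congruence, so S^{-1} S_0 (for an invertible combination S of slices) must be a diagonalizable matrix. The sketch below checks exactly that for n = 2, d = 3 with exact rational arithmetic; it is only a toy illustration of the simultaneous-diagonalisability idea, not the paper's algorithm (which handles general n and d, uses a random change of variables, and needs only 3 special slices). All function names and the fixed slice combination are my own assumptions.

```python
from fractions import Fraction as F

def slices(a, b, c, d):
    # Slices T[:, :, k] of the symmetric 2x2x2 tensor of the binary cubic
    # f = a*x^3 + b*x^2*y + c*x*y^2 + d*y^3 (mixed coefficients divided
    # by their multinomial counts, e.g. T[0][0][1] = b/3).
    S0 = [[F(a), F(b, 3)], [F(b, 3), F(c, 3)]]
    S1 = [[F(b, 3), F(c, 3)], [F(c, 3), F(d)]]
    return S0, S1

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def diagonalizable(M):
    # A 2x2 matrix is diagonalizable iff its eigenvalues are distinct
    # (nonzero discriminant of the characteristic polynomial) or it is
    # already a scalar matrix.
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if tr * tr - 4 * det != 0:
        return True
    lam = tr / 2
    return M == [[lam, F(0)], [F(0), lam]]

def looks_decomposable(a, b, c, d):
    # Heuristic slice test: for f = l1^3 + l2^3 with independent l1, l2,
    # the slices are simultaneously congruent to diagonal matrices, so
    # S^{-1} S0 must be diagonalizable for an invertible combination S.
    S0, S1 = slices(a, b, c, d)
    S = [[S0[i][j] + S1[i][j] for j in range(2)] for i in range(2)]
    if S[0][0] * S[1][1] - S[0][1] * S[1][0] == 0:
        raise ValueError("degenerate slice combination; re-randomize")
    return diagonalizable(mul(inv(S), S0))

assert looks_decomposable(1, 0, 0, 1)      # x^3 + y^3: sum of independent cubes
assert not looks_decomposable(0, 1, 0, 0)  # x^2*y: not diagonalizable
```

Here x^3 + y^3 passes because S^{-1}S_0 has two distinct eigenvalues, while for x^2 y the matrix S^{-1}S_0 is a non-trivial Jordan block and the test correctly rejects it.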