A Purely Functional Computer Algebra System Embedded in Haskell
We demonstrate how methods in functional programming can be used to implement
a computer algebra system. As a proof of concept, we present the
computational-algebra package. It is a computer algebra system implemented as
an embedded domain-specific language in Haskell, a purely functional
programming language. Utilising methods in functional programming and prominent
features of Haskell, this library achieves safety, composability, and
correctness at the same time. To demonstrate the advantages of our approach, we
have implemented advanced Gr\"{o}bner basis algorithms, such as Faug\`{e}re's
F_4 and F_5, in a composable way.
Comment: 16 pages, Accepted to CASC 201
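The core idea of representing polynomials as data and building algebra on top of them can be sketched in a few lines. The following is an illustrative Python sketch of that data-structure idea only, not the actual Haskell API of the computational-algebra package: multivariate polynomials as dictionaries mapping exponent tuples to rational coefficients.

```python
# Illustrative sketch: multivariate polynomials as dicts mapping
# exponent tuples (e.g. (2, 1) for x^2*y) to rational coefficients.
# This mirrors the *idea* of a CAS core data structure; it is not the
# API of the computational-algebra Haskell package.
from fractions import Fraction

def poly_add(p, q):
    """Add two polynomials represented as {exponent tuple: coefficient}."""
    r = dict(p)
    for mono, c in q.items():
        r[mono] = r.get(mono, Fraction(0)) + c
        if r[mono] == 0:
            del r[mono]  # keep the representation canonical
    return r

def poly_mul(p, q):
    """Multiply two polynomials; exponent tuples add component-wise."""
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            mono = tuple(a + b for a, b in zip(m1, m2))
            r[mono] = r.get(mono, Fraction(0)) + c1 * c2
            if r[mono] == 0:
                del r[mono]
    return r

# (x + y) * (x - y) == x^2 - y^2, with exponent tuples (x_deg, y_deg)
x_plus_y  = {(1, 0): Fraction(1), (0, 1): Fraction(1)}
x_minus_y = {(1, 0): Fraction(1), (0, 1): Fraction(-1)}
product = poly_mul(x_plus_y, x_minus_y)
```

Canonicalising the dictionary (dropping zero coefficients) is what makes equality of polynomials a plain dictionary comparison, which is the kind of correctness property the embedded-DSL approach enforces with types instead.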
Computational Arithmetic Geometry I: Sentences Nearly in the Polynomial Hierarchy
We consider the average-case complexity of some otherwise undecidable or open
Diophantine problems. More precisely, consider the following: (I) Given a
polynomial f in Z[v,x,y], decide the sentence \exists v \forall x \exists y
f(v,x,y)=0, with all three quantifiers ranging over N (or Z). (II) Given
polynomials f_1,...,f_m in Z[x_1,...,x_n] with m>=n, decide if there is a
rational solution to f_1=...=f_m=0. We show that, for almost all inputs,
problem (I) can be done within coNP. The decidability of problem (I), over N
and Z, was previously unknown. We also show that the Generalized Riemann
Hypothesis (GRH) implies that, for almost all inputs, problem (II) can be done
within the complexity class PP^{NP^NP}, i.e., within the third level of the
polynomial hierarchy. The decidability of problem (II), even in the case m=n=2,
remains open in general.
Along the way, we prove results relating polynomial system solving over C, Q,
and Z/pZ. We also prove a result on Galois groups associated to sparse
polynomial systems which may be of independent interest. A practical
observation is that the aforementioned Diophantine problems should perhaps be
avoided in the construction of crypto-systems.
Comment: Slight revision of final journal version of an extended abstract which appeared in STOC 1999. This version includes significant corrections and improvements to various asymptotic bounds. Needs cjour.cls to compile
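For intuition about the shape of problem (I), the sentence "exists v forall x exists y: f(v,x,y) = 0" becomes checkable by brute force once the quantifier ranges are truncated to a finite box; the paper's difficulty lies entirely in the unbounded case. A hypothetical sketch (the polynomials below are illustrative, not from the paper):

```python
# Brute-force check of  exists v forall x exists y: f(v, x, y) = 0
# over a *truncated* range {0, ..., bound} of N. This only illustrates
# the quantifier structure of problem (I); over all of N no finite
# search of this kind suffices.

def holds_truncated(f, bound):
    """True iff some v <= bound works for every x <= bound
    with some witness y <= bound."""
    rng = range(bound + 1)
    return any(
        all(any(f(v, x, y) == 0 for y in rng) for x in rng)
        for v in rng
    )

# Example: f(v, x, y) = v + x - y. Taking v = 0 and y = x gives a
# witness for every x, so the truncated sentence holds.
f = lambda v, x, y: v + x - y
```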
A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs
A (k x l)-birthday repetition of a two-prover game G is a game in which the two
provers are sent random sets of questions from G of sizes k and l respectively.
These two sets are sampled independently and uniformly among all sets of
questions of those particular sizes. We prove the following birthday repetition
theorem: when G satisfies some mild conditions, the value of the repeated game
decreases exponentially in Omega(kl/n), where n is the total number of
questions. Our result positively resolves an open question posed by Aaronson,
Impagliazzo and Moshkovitz (CCC 2014).
As an application of our birthday repetition theorem, we obtain new
fine-grained hardness of approximation results for dense CSPs. Specifically, we
establish a tight trade-off between running time and approximation ratio for
dense CSPs by showing conditional lower bounds, integrality gaps and
approximation algorithms. In particular, for any sufficiently large i and for
every epsilon > 0, we show the following results:
- We exhibit an O(q^{1/i})-approximation algorithm for dense Max 2-CSPs
with alphabet size q via an O_epsilon(i)-level Sherali-Adams relaxation.
- Through our birthday repetition theorem, we obtain an integrality gap of
q^{1/i} for the Omega_epsilon(i)-level Lasserre relaxation for fully-dense Max
2-CSP.
- Assuming that there is a constant epsilon > 0 such that Max 3SAT cannot
be approximated to within (1 - epsilon) of the optimal in sub-exponential
time, our birthday repetition theorem implies that any algorithm that
approximates fully-dense Max 2-CSP to within a q^{1/i} factor takes
(nq)^{Omega(i)} time, almost tightly matching the algorithmic
result based on the Sherali-Adams relaxation.
Comment: 45 pages
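The "birthday" in the theorem's name comes from the birthday paradox: two independent random question sets of sizes k and l drawn from n questions intersect with high probability once kl is comparable to n, and the probability of missing decays roughly like exp(-kl/n), the same scaling as in the theorem. A quick sketch (the values of n, k, l are illustrative):

```python
# Probability that two independent uniform random sets, of sizes k and l,
# drawn from n questions are disjoint: C(n - k, l) / C(n, l).
# By the birthday paradox this decays roughly like exp(-kl/n), the same
# kl/n scaling that appears in the birthday repetition theorem.
from math import comb

def disjoint_probability(n, k, l):
    """P(two random k- and l-subsets of an n-set share no element)."""
    return comb(n - k, l) / comb(n, l)

p = disjoint_probability(n=1000, k=50, l=50)  # here kl/n = 2.5
```

With kl/n = 2.5 the disjointness probability is already down near exp(-2.5), i.e. the two provers' question sets collide most of the time.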
Complexity of C_k-Coloring in Hereditary Classes of Graphs
For a graph F, a graph G is F-free if it does not contain an induced subgraph isomorphic to F. For two graphs G and H, an H-coloring of G is a mapping f: V(G) -> V(H) such that for every edge uv in E(G) it holds that f(u)f(v) in E(H). We are interested in the complexity of the problem H-Coloring, which asks for the existence of an H-coloring of an input graph G. In particular, we consider H-Coloring of F-free graphs, where F is a fixed graph and H is an odd cycle of length at least 5. This problem is closely related to the well-known open problem of determining the complexity of 3-Coloring of P_t-free graphs.
We show that for every odd k >= 5 the C_k-Coloring problem, even in the precoloring-extension variant, can be solved in polynomial time in P_9-free graphs. On the other hand, we prove that the extension version of C_k-Coloring is NP-complete for F-free graphs whenever some component of F is not a subgraph of a subdivided claw.
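An H-coloring as defined above is exactly a graph homomorphism from G to H, and for toy instances its existence can be decided by exhaustive search over all vertex maps. A minimal sketch (the C_5 example is illustrative; the abstract's algorithms are of course far more refined than brute force):

```python
# Brute-force H-coloring: search for f: V(G) -> V(H) such that every
# edge uv of G maps to an edge f(u)f(v) of H. Exponential in |V(G)|,
# so only a toy illustration of the problem the abstract studies.
from itertools import product

def h_coloring(g_vertices, g_edges, h_vertices, h_edges):
    """Return a homomorphism G -> H as a dict, or None if none exists."""
    h_adj = {frozenset(e) for e in h_edges}
    for images in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, images))
        if all(frozenset((f[u], f[v])) in h_adj for u, v in g_edges):
            return f
    return None

# C_5-coloring: the 5-cycle maps onto C_5 (the identity works), while a
# triangle admits no C_5-coloring, since C_5 has no closed walk of
# odd length shorter than 5.
c5_vertices = [0, 1, 2, 3, 4]
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```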
On the Topological Characterization of Near Force-Free Magnetic Fields, and the work of late-onset visually-impaired Topologists
The Giroux correspondence and the notion of a near force-free magnetic field
are used to topologically characterize near force-free magnetic fields which
describe a variety of physical processes, including plasma equilibrium. As a
byproduct, the topological characterization of force-free magnetic fields
associated with current-carrying links, as conjectured by Crager and Kotiuga,
is shown to be necessary and conditions for sufficiency are given. Along the
way a paradox is exposed: The seemingly unintuitive mathematical tools, often
associated to higher dimensional topology, have their origins in three
dimensional contexts but in the hands of late-onset visually impaired
topologists. This paradox was previously exposed in the context of algorithms
for the visualization of three-dimensional magnetic fields. For this reason,
the paper concludes by developing connections between mathematics and cognitive
science in this specific context.
Comment: 20 pages, no figures, a paper which was presented at a conference in honor of the 60th birthdays of Alberto Valli and Paolo Secci. The current preprint is from December 2014; it has been submitted to an AIMS journal
On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A Quantitative Analysis
Despite the linearity of its encoding, compressed sensing may be used to
provide a limited form of data protection when random encoding matrices are
used to produce sets of low-dimensional measurements (ciphertexts). In this
paper we quantify by theoretical means the resistance of the least complex form
of this kind of encoding against known-plaintext attacks. For both standard
compressed sensing with antipodal random matrices and recent multiclass
encryption schemes based on it, we show how the number of candidate encoding
matrices that match a typical plaintext-ciphertext pair is so large that the
search for the true encoding matrix is inconclusive. Such results on the practical
ineffectiveness of known-plaintext attacks underlie the fact that even
closely-related signal recovery under encoding matrix uncertainty is doomed to
fail.
Practical attacks are then exemplified by applying compressed sensing with
antipodal random matrices as a multiclass encryption scheme to signals such as
images and electrocardiographic tracks, showing that the extracted information
on the true encoding matrix from a plaintext-ciphertext pair leads to no
significant signal recovery quality increase. This theoretical and empirical
evidence clarifies that, although not perfectly secure, both standard
compressed sensing and multiclass encryption schemes feature a noteworthy level
of security against known-plaintext attacks, therefore increasing their appeal
as a negligible-cost encryption method for resource-limited sensing applications.
Comment: IEEE Transactions on Information Forensics and Security, accepted for publication. Article in press
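The counting argument behind the known-plaintext analysis can be illustrated at toy scale: enumerate every antipodal (+1/-1) encoding matrix and count how many are consistent with a single plaintext/ciphertext pair. The dimensions and the deliberately symmetric plaintext below are illustrative, far smaller than any real compressed sensing deployment, chosen so the ambiguity is visible by exhaustive enumeration:

```python
# Toy known-plaintext attack on compressed-sensing-style encoding with
# an antipodal (+1/-1) matrix A, where the ciphertext is y = A x.
# An m x n matrix has 2^(m*n) candidates; we count how many are
# consistent with one known pair (x, y). Purely illustrative sizes.
from itertools import product

def matching_matrices(x, y):
    """Count antipodal matrices A (m rows of n signs) with A x = y."""
    m, n = len(y), len(x)
    count = 0
    for rows in product(product((-1, 1), repeat=n), repeat=m):
        if all(sum(a * b for a, b in zip(row, x)) == y[i]
               for i, row in enumerate(rows)):
            count += 1
    return count

# A length-4 plaintext compressed to 2 measurements by a (hypothetical)
# true matrix [[1, 1, 1, -1], [1, -1, 1, 1]]:
x = (1, 1, 1, 1)
y = (1 + 1 + 1 - 1, 1 - 1 + 1 + 1)   # = (2, 2)
candidates = matching_matrices(x, y)
```

Even at this tiny scale the pair (x, y) is matched by 16 distinct matrices (each measurement row can place its single -1 in any of 4 positions), so the observed pair does not pin down the true encoding; the paper quantifies how this ambiguity explodes at realistic dimensions.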
Worst-Case Scenarios for Greedy, Centrality-Based Network Protection Strategies
The task of allocating preventative resources to a computer network in order
to protect against the spread of viruses is addressed. Virus spreading dynamics
are described by a linearized SIS model and protection is framed by an
optimization problem which maximizes the rate at which a virus in the network
is contained given finite resources. One approach to problems of this type
involves greedy heuristics which allocate all resources to the nodes with the
largest centrality measures. We address the worst-case performance of such greedy
algorithms by constructing networks for which these greedy allocations are
arbitrarily inefficient. An example application is presented in which such a
worst case network might arise naturally and our results are verified
numerically by leveraging recent results which allow the exact optimal solution
to be computed via geometric programming.
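The greedy heuristic under discussion can be sketched as: compute a centrality score for every node, then spend the entire protection budget on the top-ranked nodes. A minimal sketch using degree centrality (the graph and budget are illustrative; the abstract's point is precisely that such allocations can be made arbitrarily inefficient on adversarially constructed networks):

```python
# Greedy, centrality-based protection: rank nodes by degree centrality
# and allocate the entire budget to the top-k nodes. This is the style
# of heuristic whose worst case the abstract analyzes; the graph and
# budget below are illustrative only.

def degree_centrality(edges):
    """Degree of each node in an undirected edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def greedy_allocation(edges, budget):
    """Pick the `budget` highest-degree nodes (ties broken by label)."""
    deg = degree_centrality(edges)
    ranked = sorted(deg, key=lambda v: (-deg[v], v))
    return ranked[:budget]

# Star graph: hub 0 dominates every degree ranking, so greedy
# allocation with budget 1 protects the hub.
star = [(0, i) for i in range(1, 6)]
protected = greedy_allocation(star, budget=1)
```

On a star the hub really is the right choice; the paper's contribution is exhibiting networks where this same ranking-and-spending rule performs arbitrarily worse than the optimum computed by geometric programming.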