Computational linear algebra over finite fields
We present algorithms for the efficient solution of linear algebra problems over finite fields.
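One of the core primitives in this setting is solving a linear system over a prime field GF(p). The following is a minimal sketch, not the paper's algorithms: a textbook Gauss-Jordan elimination with all arithmetic reduced modulo p (production libraries instead use delayed reduction and cache-friendly blocking).

```python
def solve_mod_p(A, b, p):
    # Solve A x = b over GF(p), p prime, by Gauss-Jordan elimination.
    # A is an n x n list of integer rows, b a length-n list of integers.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        # find a row with an invertible pivot in this column
        piv = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)          # modular inverse (Python 3.8+)
        M[col] = [x * inv % p for x in M[col]]  # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] % p != 0:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]
```

For instance, `solve_mod_p([[2, 1], [1, 3]], [5, 6], 7)` returns `[6, 0]`, since 2·6 + 0 ≡ 5 and 6 + 3·0 ≡ 6 (mod 7).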
Counting Solutions of a Polynomial System Locally and Exactly
We propose a symbolic-numeric algorithm to count the number of solutions of a
polynomial system within a local region. More specifically, given a
zero-dimensional system $f_1 = \cdots = f_n = 0$, with
$f_i \in \mathbb{Z}[x_1, \ldots, x_n]$, and a polydisc
$\Delta \subset \mathbb{C}^n$, our method aims to certify the existence
of $k$ solutions (counted with multiplicity) within the polydisc.
In case of success, it yields the correct result under guarantee. Otherwise,
no information is given. However, we show that our algorithm always succeeds if
$\Delta$ is sufficiently small and well-isolating for a $k$-fold
solution $\mathbf{z}$ of the system.
Our analysis of the algorithm further yields a bound on the size of the
polydisc for which our algorithm succeeds under guarantee. This bound depends
on local parameters such as the size and multiplicity of $\mathbf{z}$ as well
as the distances between $\mathbf{z}$ and all other solutions. Efficiency of
our method stems from the fact that we reduce the problem of counting the roots
in $\Delta$ of the original system to the problem of solving a
truncated system of degree $k$. In particular, if the multiplicity of
$\mathbf{z}$ is small compared to the total degrees of the polynomials $f_i$,
our method considerably improves upon known complete and certified methods.
For the special case of a bivariate system, we report on an implementation of
our algorithm, and show experimentally that our algorithm leads to a
significant improvement when integrated as an inclusion predicate into an
elimination method.
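The certified symbolic-numeric method described above is not reproduced here, but the underlying idea of counting roots (with multiplicity) inside a disc can be illustrated, for a single univariate polynomial, by a purely numerical and uncertified application of the argument principle: the contour integral of $p'/p$ around the disc boundary, divided by $2\pi i$, equals the number of enclosed roots.

```python
import numpy as np

def winding_count(p, dp, center, radius, n=4096):
    # Heuristic root count for p inside |z - center| < radius via the
    # argument principle: (1 / 2 pi i) * contour integral of p'(z)/p(z).
    # No certification; p must not vanish on the circle itself.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)                    # circle samples
    dz = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / n)   # quadrature weights
    integral = np.sum(dp(z) / p(z) * dz)
    return int(round(integral.imag / (2.0 * np.pi)))
```

Note the count is with multiplicity: for $p(z) = (z-1)^2$ and a small disc around 1, the routine returns 2, mirroring the $k$-fold-solution setting of the abstract.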
Simplicial blowups and discrete normal surfaces in simpcomp
simpcomp is an extension to GAP, the well-known system for computational
discrete algebra. It allows the user to work with simplicial complexes. In the
latest version, support for simplicial blowups and discrete normal surfaces was
added, both features unique to simpcomp. Furthermore, new functions for
constructing certain infinite series of triangulations have been implemented,
and interfaces to other software packages have been improved compared to
previous versions.
In Memory of Vladimir Gerdt
This memorial article is dedicated to the memory of the head of the Center for Computational Methods in Applied Mathematics of RUDN, Professor V.P. Gerdt, whose passing was a great loss to the scientific center and the entire computer algebra community. The article provides biographical information about V.P. Gerdt and describes his contribution to the development of computer algebra in Russia and worldwide. It closes with the author's personal memories of V.P. Gerdt.
Fast in-place accumulated bilinear formulae
Bilinear operations are ubiquitous in computer science, and in particular in
computer algebra and symbolic computation. One of the most fundamental
arithmetic operations is multiplication, and when applied to, e.g.,
polynomials or matrices, its result is a bilinear function of its inputs. In
terms of arithmetic operations, many sub-quadratic (resp. sub-cubic) algorithms
have been developed for these tasks. But these fast algorithms come at the expense
of (potentially large) extra temporary space to perform the computation. On the
contrary, classical quadratic (resp. cubic) algorithms, when computed
sequentially, quite often require very few (constant) extra registers. Further
work then proposed simultaneously ``fast'' and ``in-place'' algorithms for
both matrix and polynomial operations. We here propose algorithms to extend the
latter line of work to accumulated algorithms arising from a bilinear formula.
Indeed, one of the main ingredients of the latter line of work is to use the
(free) space of the output as intermediate storage. When the result has to be
accumulated, i.e., if the output is also part of the input, this free space
thus does not even exist. To be able to design accumulated in-place algorithms,
we thus relax the in-place model to allow algorithms to also modify their
input, and therefore to use it as intermediate storage, provided
that it is restored to its initial state after completion of the
procedure. This is in fact a natural possibility in many programming
environments. Furthermore, this restoration allows for recursive combinations
of such procedures, as the (non-concurrent) recursive calls will not mess up
the state of their callers. We propose here a generic technique transforming
any bilinear algorithm into an in-place algorithm under this model. This then
directly applies to polynomial and matrix multiplication algorithms, including
fast ones.
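The relaxed in-place model can be made concrete with a deliberately tiny toy routine (not the paper's general transformation): it accumulates into the output with O(1) extra space by borrowing one of its inputs as scratch storage, then restoring that input before returning, so recursive callers see it unchanged.

```python
def fma_scaled(c, a, b):
    # Toy accumulated operation in the relaxed in-place model:
    # compute c[i] += 2 * a[i] * b[i] with O(1) extra space.
    for i in range(len(a)):
        a[i] *= 2        # borrow the input a as intermediate storage
    for i in range(len(c)):
        c[i] += a[i] * b[i]
    for i in range(len(a)):
        a[i] //= 2       # restore a to its initial state before returning
```

After `a = [1, 2]; b = [3, 4]; c = [10, 0]; fma_scaled(c, a, b)`, the output holds `c == [16, 16]` while `a == [1, 2]` again, even though `a` was modified mid-computation.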
Near NP-Completeness for Detecting p-adic Rational Roots in One Variable
We show that deciding whether a sparse univariate polynomial has a p-adic
rational root can be done in NP for most inputs. We also prove a
polynomial-time upper bound for trinomials with suitably generic p-adic Newton
polygon. We thus improve the best previous complexity upper bound of EXPTIME.
We also prove an unconditional complexity lower bound of NP-hardness with
respect to randomized reductions for general univariate polynomials. The best
previous lower bound assumed an unproved hypothesis on the distribution of
primes in arithmetic progression. We also discuss how our results complement
analogous results over the real numbers.
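The p-adic Newton polygon invoked above is a standard tool, and a small sketch (all names here are illustrative, not from the paper) shows how it is computed: plot the points $(i, v_p(a_i))$ for a polynomial $\sum a_i x^i$ and take the slopes of the lower convex hull; a slope $-s$ corresponds to roots of valuation $s$ in the algebraic closure of $\mathbb{Q}_p$, and an integer valuation is necessary (though not sufficient) for a root in $\mathbb{Q}_p$ itself.

```python
from fractions import Fraction

def p_adic_valuation(n, p):
    # v_p(n) for a nonzero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon_slopes(coeffs, p):
    # coeffs[i] is the coefficient of x^i. Returns the slopes of the lower
    # convex hull of the points (i, v_p(a_i)), computed by a monotone-chain
    # sweep (the points are already sorted by x-coordinate).
    pts = [(i, p_adic_valuation(c, p)) for i, c in enumerate(coeffs) if c != 0]
    hull = []
    for x, y in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point unless it makes a strict right turn
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append((x, y))
    return [Fraction(hull[k + 1][1] - hull[k][1], hull[k + 1][0] - hull[k][0])
            for k in range(len(hull) - 1)]
```

For example, $x - 3$ over $\mathbb{Q}_3$ gives the single slope $-1$ (its root 3 has valuation 1), while $x^3 - 3$ gives slope $-1/3$, immediately ruling out any root in $\mathbb{Q}_3$.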
HPC-GAP: engineering a 21st-century high-performance computer algebra system
Symbolic computation has underpinned a number of key advances in Mathematics and Computer Science. Applications are typically large and potentially highly parallel, making them good candidates for parallel execution at a variety of scales from multi-core to high-performance computing systems. However, much existing work on parallel computing is based around numeric rather than symbolic computations. In particular, symbolic computing presents particular problems in terms of varying granularity and irregular task sizes that do not match conventional approaches to parallelisation. It also presents problems in terms of the structure of the algorithms and data.
This paper describes a new implementation of the free open-source GAP computational algebra system that places parallelism at the heart of the design, dealing with the key scalability and cross-platform portability problems. We provide three system layers that deal with the three most important classes of hardware: individual shared-memory multi-core nodes, mid-scale distributed clusters of (multi-core) nodes, and full-blown HPC systems comprising large-scale tightly-connected networks of multi-core nodes. This requires us to develop new cross-layer programming abstractions in the form of new domain-specific skeletons that allow us to seamlessly target different hardware levels. Our results show that, using our approach, we can achieve good scalability and speedups for two realistic exemplars, on high-performance systems comprising up to 32,000 cores, as well as on ubiquitous multi-core systems and distributed clusters. The work reported here paves the way towards full-scale exploitation of symbolic computation by high-performance computing systems, and we demonstrate the potential with two major case studies.
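The irregular-granularity problem the abstract highlights can be illustrated generically (this is Python, not GAP or the paper's skeletons): a task farm that hands out work items dynamically, so that one coarse task does not stall many fine ones, which is the kind of pattern a domain-specific skeleton abstracts over.

```python
from concurrent.futures import ThreadPoolExecutor

def count_divisors(n):
    # Deliberately irregular work: cost grows with n, so task sizes
    # vary wildly across inputs, as in symbolic computation workloads.
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def task_farm(inputs, workers=4):
    # A minimal task-farm "skeleton": the pool schedules tasks dynamically
    # (chunksize=1), keeping all workers busy despite uneven task sizes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(count_divisors, inputs, chunksize=1))
```

For example, `task_farm([1, 6, 12], workers=2)` returns `[1, 4, 6]`; results come back in input order even though tasks finish at different times.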