Development of symbolic algorithms for certain algebraic processes
This study investigates the problem of computing the exact greatest common divisor of two polynomials relative to an orthogonal basis, defined over the rational number field. The main objective of the study is to design and implement an effective and efficient symbolic algorithm for the general class of dense polynomials, given the rational numbers defining the terms of their basis. From a general algorithm using the comrade matrix approach, nonmodular and modular techniques are prescribed. If the coefficients of the generalized polynomials are multiprecision integers, multiprecision arithmetic is required to construct the comrade matrix and the corresponding system's coefficient matrix. In addition, applying the nonmodular elimination technique to this coefficient matrix makes extensive use of multiprecision rational number operations. The modular technique is employed to minimize the complexity of such computations. A divisor test algorithm that enables the detection of an unlucky reduction is a crucial device for an effective implementation of the modular technique. Since a bound on the true solution is not known a priori, the test is devised and carefully incorporated into the modular algorithm. The results show that the modular algorithm performs best for the class of relatively prime polynomials. The empirical computing time results show that the modular algorithm is markedly superior to the nonmodular algorithms for sufficiently dense Legendre basis polynomials with a small GCD solution. For dense Legendre basis polynomials with a large GCD solution, the modular algorithm is significantly superior to the nonmodular algorithms at higher polynomial degrees. To support more definitive conclusions, the computing time functions of the algorithms presented in this report have been worked out. Further investigations are also suggested.
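For contrast with the modular approach, the following is a minimal sketch (not the thesis's algorithm, and in the ordinary power basis rather than an orthogonal basis) of nonmodular polynomial GCD computation over the rationals. The growth of the `Fraction` numerators and denominators during the successive divisions is precisely the multiprecision rational-arithmetic cost the modular technique is designed to avoid. Function names and the dense coefficient-list representation are illustrative.

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Divide polynomial a by b (coefficient lists, highest degree first,
    over Fraction); return (quotient, remainder)."""
    a, q = a[:], []
    while len(a) >= len(b):
        c = a[0] / b[0]
        q.append(c)
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)                # leading coefficient is now exactly zero
    return q, a

def poly_gcd(a, b):
    """Monic GCD of two nonzero polynomials over Q via the Euclidean
    algorithm; every step uses exact rational arithmetic."""
    while b:
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:  # strip leading zeros of the remainder
            r.pop(0)
        a, b = b, r
    return [c / a[0] for c in a]  # normalize to a monic polynomial
```

For example, applied to (x - 1)(x + 2) and (x - 1)(x + 3), `poly_gcd` returns the monic coefficient list for x - 1.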
On the Factor Refinement Principle and its Implementation on Multicore Architectures
The factor refinement principle turns a partial factorization of integers (or polynomials) into a more complete factorization represented by basis elements and exponents, with basis elements that are pairwise coprime.
This refinement technique has many applications, such as simplifying systems of polynomial inequations and, more generally, speeding up certain algebraic algorithms by eliminating redundant expressions that may occur during intermediate computations.
Successive GCD computations and divisions are used to accomplish this task until all the basis elements are pairwise coprime. Moreover, square-free factorization (the first step of many factorization algorithms) is used to remove repeated factors from each input element. Differentiation, division and GCD operations are required to complete this pre-processing step. Both factor refinement and square-free factorization often rely on plain (quadratic) algorithms for multiplication but can be substantially improved with asymptotically fast multiplication on sufficiently large inputs.
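The refinement loop described above, successive GCD computations and exact divisions until the basis is pairwise coprime, can be sketched for integers as follows. This is a plain serial version, not the paper's asymptotically fast or parallel variant, and the function name and data layout are illustrative.

```python
from math import gcd

def factor_refine(nums):
    """Refine integers > 1 into a pairwise-coprime basis with exponents,
    so that the product of base**exp over the basis equals the product
    of the inputs. Uses only gcd computations and exact divisions."""
    basis = {}                       # base -> exponent, pairwise coprime
    work = [(n, 1) for n in nums]    # pending (element, exponent) pairs
    while work:
        n, e = work.pop()
        if n == 1:
            continue
        for b in list(basis):
            g = gcd(n, b)
            if g > 1:                # shared factor: split both entries
                eb = basis.pop(b)
                if b // g > 1:
                    work.append((b // g, eb))
                work.append((g, eb + e))
                if n // g > 1:
                    work.append((n // g, e))
                break
        else:                        # coprime to every basis element
            basis[n] = basis.get(n, 0) + e
    return sorted(basis.items())
```

For example, `[12, 45]` refines to the pairwise-coprime basis 3^3 · 4 · 5: the shared factor 3 is extracted, while 4 and 5 are left unsplit because nothing else shares a factor with them.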
In this work, we review the working principles and complexity estimates of factor refinement, both with plain arithmetic and with asymptotically fast arithmetic. Following this review, we design, analyze and implement parallel adaptations of these factor refinement algorithms. We consider several optimization techniques, such as data locality analysis and the balancing of subproblems, to fully exploit modern multicore architectures. The Cilk++ implementation of our parallel algorithm, based on the augment refinement principle of Bach, Driscoll and Shallit, achieves linear speedup on input data of sufficiently large size.
On the factorization of polynomials over algebraic fields
SIGLE. Available from British Library Document Supply Centre (BLDSC), DSC:DX86869, United Kingdom.
The Design and Implementation of a High-Performance Polynomial System Solver
This thesis examines the algorithmic and practical challenges of solving systems of polynomial equations. We discuss the design and implementation of triangular decomposition to solve polynomial systems exactly by means of symbolic computation.
Incremental triangular decomposition solves one equation from the input list of polynomials at a time. Each step may produce several different components (points, curves, surfaces, etc.) of the solution set. Independent components imply that the solving process may proceed on each component concurrently. This so-called component-level parallelism is a theoretical and practical challenge characterized by irregular parallelism. Parallelism is not an algorithmic property but rather a geometrical property of the particular input system’s solution set.
Despite these challenges, we have effectively applied parallel computing to triangular decomposition through the layering and cooperation of many parallel code regions. This parallel computing is supported by our generic object-oriented framework based on the dynamic multithreading paradigm. Meanwhile, the required polynomial algebra is supported by an object-oriented framework for algebraic types which allows type safety and mathematical correctness to be determined at compile-time.
Our software is implemented in C/C++, and we have extensively tested the implementation for correctness and performance on over 3000 polynomial systems that have arisen in practice.
The parallel framework has been re-used in the implementation of Hensel factorization as a parallel pipeline to compute roots of a polynomial with multivariate power series coefficients. Hensel factorization is one step toward computing the non-trivial limit points of quasi-components.
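As a toy numeric illustration of what a triangular set encodes (the thesis computes such decompositions exactly and symbolically), solving a zero-dimensional triangular set proceeds one variable at a time by back-substitution. The example set below is hypothetical and restricted to quadratics so that stdlib complex arithmetic suffices.

```python
import cmath

def quad_roots(a, b, c):
    """Both complex roots of a*t^2 + b*t + c, with a != 0."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

# A triangular set in the variable order x, then y:
#   T1(x)    = x^2 - 2
#   T2(x, y) = y^2 - x
# Back-substitution: solve T1 for x, substitute each root into T2,
# then solve T2 for y.
solutions = []
for x in quad_roots(1, 0, -2):       # roots of T1
    for y in quad_roots(1, 0, -x):   # roots of T2 with x substituted
        solutions.append((x, y))
```

Each of the four (x, y) pairs satisfies both equations; independent branches like the two x-roots here are what component-level parallelism can, in principle, process concurrently.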
Feasible arithmetic computations: Valiant's hypothesis
An account of Valiant's theory of p-computable versus p-definable polynomials, an arithmetic analogue of the Boolean theory of P versus NP, is presented, with detailed proofs of Valiant's central results.
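Valiant's dichotomy is commonly illustrated by two polynomial families (a standard statement, not drawn from this particular account): the determinant, which is p-computable, and the permanent, which is complete for the p-definable polynomials. The two differ only by the sign factor:

```latex
\det X = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} x_{i,\sigma(i)},
\qquad
\operatorname{per} X = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)}
```

Valiant's hypothesis asserts that the permanent family is not p-computable, the arithmetic analogue of P ≠ NP.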
Monitoring and Assessment of Environmental Quality in Coastal Ecosystems
Coastal ecosystems are dynamic, complex, and often fragile transition environments between land and oceans. They are exclusive habitats for a broad range of living organisms, functioning as havens for biodiversity and providing several important ecological services that link terrestrial, freshwater, and marine environments. Humans living in coastal zones have been strongly dependent on these ecosystems as a source of food, physical protection against storms and the advancing sea, and a range of activities that generate economic income. Nevertheless, the intensification of human activities in coastal areas over recent decades, together with ongoing global climatic change and coastal erosion, has had detrimental impacts on these environments. Maintaining the structural and functional integrity of these environments, and recovering an ecological balance or mitigating disturbances in systems under the influence of such stressors, are complex tasks, only possible through the implementation of monitoring programs and by assessing their environmental quality. In this book, distinct approaches to environmental quality monitoring and assessment of coastal environments are presented, focused on abiotic and biotic compartments and using tools that span levels of ecological organization from the sub-organismal to the ecosystem level.
Numerical issues and computational problems in algebraic control theory
The work of this thesis concerns computational issues arising from various fields of Algebraic Control Theory. Efficient algorithms covering the following classes of problems are developed.
(i) Exterior Algebra Computations: For given matrices [Please see formulas inside thesis], algorithms achieving the computation of [Please see formulas inside thesis] are formulated. An algorithm for the evaluation of Plücker matrices is also proposed. Most of these algorithms are used in the development of a unifying numerical algorithm for the solution of the Determinantal Assignment Problem.
(ii) Numerical Techniques for handling nongeneric computations: Several numerical tools for the diagnosis of certain properties in an "almost" sense, and procedures ensuring the termination of algorithms, are developed.
(iii) Evaluation of the Greatest Common Divisor of polynomials: A new numerical algorithm for the evaluation of the greatest common divisor of any set of polynomials is formulated.
(iv) Almost Zero Computations: Algorithms achieving the evaluation of the prime almost zero of a polynomial set and the computation of the zero radius are given. Useful comments on achieving improved bounds for the zero-trapping region are also presented.
Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code
Volume 2 of this 3-volume technical memorandum contains detailed documentation of the GLAS fourth order general circulation model. It contains the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A variable name dictionary for the scalar code and the code listings are also provided.