An introspective algorithm for the integer determinant
We present an algorithm computing the determinant of an integer matrix A. The
algorithm is introspective in the sense that it uses several distinct
algorithms that run in a concurrent manner. During the course of the algorithm
partial results coming from distinct methods can be combined. Then, depending
on the current running time of each method, the algorithm can emphasize a
particular variant. With the use of very fast modular routines for linear
algebra, our implementation is an order of magnitude faster than other existing
implementations. Moreover, we prove that the expected complexity of our
algorithm is only O(n^3 log^{2.5}(n ||A||)) bit operations in the dense case
and O(Omega n^{1.5} log^2(n ||A||) + n^{2.5}log^3(n||A||)) in the sparse case,
where ||A|| is the largest entry in absolute value of the matrix and Omega is
the cost of matrix-vector multiplication in the case of a sparse matrix.
Comment: Published in Transgressive Computing 2006, Granada, Spain (2006)
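One of the methods such an introspective scheme can run concurrently is the classical Chinese-remainder determinant: compute det(A) modulo several primes and reconstruct the integer result. The sketch below is a didactic reconstruction in Python, not the paper's implementation; it assumes the product of the primes exceeds twice a bound on |det(A)| (e.g. the Hadamard bound).

```python
def det_mod_p(A, p):
    """Determinant of an integer matrix A over Z/pZ by Gaussian elimination."""
    n = len(A)
    M = [[a % p for a in row] for row in A]
    det = 1
    for k in range(n):
        pivot = next((i for i in range(k, n) if M[i][k]), None)
        if pivot is None:
            return 0                      # det(A) is 0 modulo p
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            det = -det % p                # a row swap flips the sign
        det = det * M[k][k] % p
        inv = pow(M[k][k], -1, p)
        for i in range(k + 1, n):
            f = M[i][k] * inv % p
            for j in range(k, n):
                M[i][j] = (M[i][j] - f * M[k][j]) % p
    return det

def det_crt(A, primes):
    """Recover det(A) from its modular images by incremental Chinese
    remaindering; assumes prod(primes) > 2 * |det(A)|."""
    m, r = 1, 0
    for p in primes:
        rp = det_mod_p(A, p)
        t = (rp - r) * pow(m, -1, p) % p  # lift r mod m to r mod m*p
        r, m = r + m * t, m * p
    return r if r <= m // 2 else r - m    # symmetric representative
```

A real implementation would pick enough word-sized primes from the Hadamard bound; the introspective algorithm of the abstract additionally races several such variants and combines their partial results.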
Towards an exact adaptive algorithm for the determinant of a rational matrix
In this paper we propose several strategies for the exact computation of the
determinant of a rational matrix. First, we use the Chinese Remainder
Theorem and rational reconstruction to recover the rational determinant
from its modular images. Then we show a preconditioning for the determinant
which allows us to skip the rational reconstruction process and reconstruct an
integer result. We compare those approaches with matrix preconditioning, which
lets us treat integer instead of rational matrices and thus apply
integer determinant algorithms to the rational determinant problem.
In particular, we discuss the applicability of the adaptive determinant
algorithm of [9] and compare it with the integer Chinese Remaindering scheme.
We present an analysis of the complexity of the strategies and evaluate their
experimental performance on numerous examples. This experience allows us to
develop an adaptive strategy which chooses the best solution at run time,
depending on matrix properties. All strategies have been implemented in the
LinBox linear algebra library
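The rational reconstruction step mentioned above can be sketched with Wang's half-extended Euclidean algorithm: given the image r of p/q modulo m, it recovers p and q whenever both are below sqrt(m/2). This is a minimal illustration, not the paper's code, and it omits the final gcd check a robust version needs.

```python
from math import isqrt

def rational_reconstruct(r, m):
    """Recover (p, q) with p/q ≡ r (mod m) and |p|, q <= sqrt(m/2), if any."""
    bound = isqrt(m // 2)
    r0, r1 = m, r % m
    t0, t1 = 0, 1
    while r1 > bound:            # run Euclid until the remainder is small;
        q = r0 // r1             # invariant: r_i ≡ t_i * r (mod m)
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        return None              # no reconstruction within the bound
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1                # numerator, denominator
```

The determinant preconditioning of the abstract avoids this step entirely by arranging for an integer result, so that plain Chinese remaindering suffices.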
Generic design of Chinese remaindering schemes
We propose a generic design for Chinese remainder algorithms. A Chinese
remainder computation consists of reconstructing an integer value from its
residues modulo pairwise coprime integers. We also propose an efficient linear data
structure, a radix ladder, for the intermediate storage and computations. Our
design is structured into three main modules: a black box residue computation
in charge of computing each residue; a Chinese remaindering controller in
charge of launching the computation and of the termination decision; an integer
builder in charge of the reconstruction computation. We then show that this
design enables many different forms of Chinese remaindering (e.g.
deterministic, early-terminated, distributed), easy comparisons between
these forms, and user-transparent parallelism at different granularities
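The three-module split can be made concrete in a toy sketch: a black-box `residue` callable, an incremental integer builder, and a controller that decides early termination once the symmetric lift stops changing over a few consecutive primes. The names and the stabilization heuristic are ours, not the paper's API, and the radix-ladder storage is not modeled.

```python
def crt_early_terminate(residue, primes, stable=2):
    """Reconstruct an integer from residue(p) over successive primes,
    stopping once `stable` extra primes leave the result unchanged."""
    m, r = 1, 0
    last, unchanged = None, 0
    for p in primes:
        rp = residue(p)                       # black-box residue computation
        t = (rp - r) * pow(m, -1, p) % p      # integer builder: incremental CRT
        r, m = r + m * t, m * p
        lifted = r if r <= m // 2 else r - m  # symmetric representative
        unchanged = unchanged + 1 if lifted == last else 0
        last = lifted
        if unchanged >= stable:               # controller: termination decision
            break
    return last
```

Early termination trades certainty for speed: the answer is correct with high probability once it stabilizes, whereas the deterministic form keeps adding primes until a proven bound is exceeded.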
Faster Geometric Algorithms via Dynamic Determinant Computation
The computation of determinants or their signs is the core procedure in many
important geometric algorithms, such as convex hull, volume and point location.
As the dimension of the computation space grows, a higher percentage of the
total computation time is consumed by these computations. In this paper we
study the sequences of determinants that appear in geometric algorithms. The
computation of a single determinant is accelerated by using the information
from the previous computations in that sequence.
We propose two dynamic determinant algorithms with quadratic arithmetic
complexity when employed in convex hull and volume computations, and with
linear arithmetic complexity when used in point location problems. We implement
the proposed algorithms and perform an extensive experimental analysis. On one
hand, our analysis serves as a performance study of state-of-the-art
determinant algorithms and implementations. On the other hand, we demonstrate
the superiority of our methods over state-of-the-art implementations of
determinant and geometric algorithms. Our experimental results include 20-fold
and 78-fold speed-ups in volume and point location computations in dimensions 6
and 11, respectively.
Comment: 29 pages, 8 figures, 3 tables
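The core trick behind such dynamic determinant algorithms is the rank-one update: by the matrix determinant lemma, det(A + u v^T) = det(A) (1 + v^T A^{-1} u), and the Sherman-Morrison formula updates A^{-1} at the same quadratic cost. A minimal exact-arithmetic sketch, our own illustration using Python fractions rather than the paper's optimized arithmetic:

```python
from fractions import Fraction

def det_update(Ainv, detA, u, v):
    """From A^{-1} and det(A), return (A + u v^T)^{-1} and det(A + u v^T)
    in O(n^2) arithmetic operations. Pass Fraction entries for exactness."""
    n = len(Ainv)
    Au = [sum(Ainv[i][j] * u[j] for j in range(n)) for i in range(n)]  # A^{-1} u
    vA = [sum(v[i] * Ainv[i][j] for i in range(n)) for j in range(n)]  # v^T A^{-1}
    denom = 1 + sum(v[i] * Au[i] for i in range(n))
    if denom == 0:
        return None, 0                        # the updated matrix is singular
    new_inv = [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
               for i in range(n)]             # Sherman-Morrison
    return new_inv, detA * denom              # matrix determinant lemma
```

In a convex hull computation, each new determinant in the sequence differs from a previous one by such a low-rank change, which is what makes quadratic (rather than cubic) amortized cost possible.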
Adaptive Triangular System Solving
Large-scale applications and software systems are
getting increasingly complex. To deal with this complexity, those
systems must manage themselves in accordance with high-level guidance
from humans. Adaptive and hybrid algorithms enable this
self-management of resources and structured inputs.
In this talk, we first propose a classification of the different
notions of adaptivity. For us, an algorithm is adaptive (or a
poly-algorithm) when there is a choice at a high level between at
least two distinct algorithms, each of which could solve the same
problem. The choice is strategic, not tactical. It is motivated by
improving the performance of the execution, depending on both
input/output data and computing resources.
Then we propose a new adaptive algorithm for the exact simultaneous
solution of several triangular systems over finite fields. Solving
such systems is, for instance, one of the two main operations in block
Gaussian elimination. For solving triangular systems over finite
fields, the block algorithm reduces to matrix multiplication and
achieves the best known algebraic complexity. Exact matrix
multiplication, together with matrix factorizations, over finite
fields can now be performed at the speed of the highly optimized
numerical BLAS routines. This has been established by the FFLAS and
FFPACK libraries. In this talk we propose several practical variants
for solving these systems: a pure recursive version, a reduction to the
numerical dtrsm routine, and a delayed modular reduction. Then
a cascading scheme is proposed to merge these variants into an
adaptive sequential algorithm.
We then propose a parallelization of this solver. The adaptive
sequential algorithm is not the best parallel algorithm since its
recursion induces a dependency. A better parallel algorithm would be
to first invert the matrix and then to multiply this inverse by the
right hand side. Unfortunately the latter requires more total
operations than the adaptive algorithm. We thus propose a coupling of
the sequential and parallel algorithms in order to get the
best performance on any number of processors. The resulting cascade
is then an adaptation to resources.
This shows that the same process has been used both for adaptation to
data and to resources. We thus propose a generic framework for the
automatic adaptation of algorithms using recursive cascading
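A compact way to picture the cascade is a solver that back-substitutes below a size threshold and otherwise splits the triangular system in halves, reducing the work to matrix multiplication. This toy Python version over Z/pZ is our illustration; FFLAS/FFPACK reach BLAS speed by mapping the block products to numerical routines with delayed modular reduction, which plain Python cannot reproduce.

```python
def matmul_mod(A, B, p):
    """Classical dense product over Z/pZ (stand-in for a fast matmul)."""
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
            for row in A]

def trsm_mod(U, B, p, threshold=8):
    """Solve U X = B over Z/pZ for an upper triangular invertible U."""
    n, ncols = len(U), len(B[0])
    if n <= threshold:                       # base case: back-substitution
        X = [[0] * ncols for _ in range(n)]
        for i in range(n - 1, -1, -1):
            inv = pow(U[i][i], -1, p)
            for j in range(ncols):
                s = sum(U[i][k] * X[k][j] for k in range(i + 1, n))
                X[i][j] = (B[i][j] - s) * inv % p
        return X
    k = n // 2                               # recursive case: 2x2 block split
    U11 = [row[:k] for row in U[:k]]
    U12 = [row[k:] for row in U[:k]]
    U22 = [row[k:] for row in U[k:]]
    X2 = trsm_mod(U22, B[k:], p, threshold)          # U22 X2 = B2
    C = matmul_mod(U12, X2, p)
    B1 = [[(B[i][j] - C[i][j]) % p for j in range(ncols)] for i in range(k)]
    return trsm_mod(U11, B1, p, threshold) + X2      # U11 X1 = B1 - U12 X2
```

The adaptive aspect lies in the threshold and in the choice among the variants; the value 8 here is an illustrative placeholder, not a tuned crossover point.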
Modern Approaches to Exact Diagonalization and Selected Configuration Interaction with the Adaptive Sampling CI Method.
Recent advances in selected configuration interaction (CI) methods have made them competitive with the most accurate techniques available, creating an increasingly powerful tool for solving quantum Hamiltonians. In this work, we build on recent advances from the adaptive sampling configuration interaction (ASCI) algorithm. We show that a useful paradigm for generating efficient selected CI/exact diagonalization algorithms is driven by fast sorting algorithms, much in the same way iterative diagonalization is based on the paradigm of matrix-vector multiplication. We present several new algorithms for all parts of performing a selected CI, including a new ASCI search, dynamic bit masking, fast orbital rotations, fast diagonal matrix elements, and residue arrays. The ASCI search algorithm can be used in several different modes, including an integral-driven search and a coefficient-driven search. The algorithms presented here are fast and scalable, and we find that, because they are built on fast sorting algorithms, they are more efficient than all other approaches we considered. After introducing these techniques, we present ASCI results for a wide range of systems and basis sets to demonstrate the types of simulations that can be practically treated at the full-CI level with modern methods and hardware, presenting double- and triple-ζ benchmark data for the G1 data set. The largest of these calculations is Si2H6, a simulation of 34 electrons in 152 orbitals. We also present preliminary results for fast deterministic perturbation theory simulations that use hash functions to maintain high efficiency when treating large basis sets
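The sorting paradigm can be caricatured in a few lines: contributions to candidate determinants, keyed by their occupation bitstrings, are sorted so that duplicates become adjacent and can be merged in one linear pass. This is our cartoon of the idea, not the ASCI code; the keys and values below are made up.

```python
def merge_contributions(pairs):
    """pairs: (bitstring_key, contribution) tuples, possibly with repeated
    keys. Returns one (key, summed contribution) per unique determinant."""
    pairs = sorted(pairs, key=lambda kv: kv[0])  # the sort does the heavy lifting
    merged = []
    for key, c in pairs:
        if merged and merged[-1][0] == key:
            merged[-1][1] += c                   # duplicate determinant: accumulate
        else:
            merged.append([key, c])
    return [tuple(kv) for kv in merged]
```

Because generating excitations from many reference determinants produces the same candidate repeatedly, this sort-and-merge pass replaces the hash-table lookups of older selected CI schemes with cache-friendly linear scans.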
High-dimensional polytopes defined by oracles: algorithms, computations and applications
The processing and analysis of high-dimensional geometric data plays a
fundamental role in several disciplines of science and engineering. Over the
last decades many successful geometric algorithms have been developed in 2 and
3 dimensions; however, in most cases their performance in higher dimensions is
poor. This behavior is commonly called the curse of dimensionality. Two
solution frameworks adopted to overcome this difficulty are the exploitation of
special structure in the data, such as sparsity or low intrinsic dimension, and
the design of approximation algorithms; in this thesis we study problems within
these frameworks. The main research area of this thesis is discrete and
computational geometry and its connections to branches of computer science and
applied mathematics such as polytope theory, algorithm implementation,
randomized geometric algorithms, computational algebraic geometry and
optimization. The fundamental geometric objects of our study are polytopes;
their essential properties are convexity and the fact that they are defined by
an oracle in a high-dimensional space. The contribution of this thesis is
threefold. First, the design and analysis of geometric algorithms for problems
concerning high-dimensional convex polytopes, such as convex hull and volume
computation, and their applications to computational algebraic geometry and
optimization. Second, the establishment of combinatorial characterization
results for essential polytope families. Third, the implementation and
experimental analysis of the proposed algorithms and methods. The developed
software is open-source, publicly available, and builds on and extends
state-of-the-art geometric and algebraic software libraries such as CGAL and
polymake
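"Defined by an oracle" means the algorithm never sees an explicit vertex or facet list; it can only query a black box about the polytope. A minimal membership oracle for an H-polytope {x : Ax <= b}, with the unit cube as a made-up example:

```python
from fractions import Fraction  # exact rational input keeps the predicate error-free

def membership_oracle(A, b):
    """Return a callable deciding whether a point lies in {x : A x <= b}."""
    def inside(x):
        return all(sum(a * xi for a, xi in zip(row, x)) <= bi
                   for row, bi in zip(A, b))
    return inside

# The unit cube [0, 1]^3 as six halfspaces: x_i <= 1 and -x_i <= 0.
cube = membership_oracle(
    [[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1]],
    [1, 1, 1, 0, 0, 0])
```

Randomized volume and sampling algorithms in this setting interact with the polytope only through such oracles (membership, or boundary and separation variants), which is what keeps them usable as the dimension grows.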
Geobase Information System Impacts on Space Image Formats
As Geobase Information Systems increase in number, size and complexity, the format compatibility of satellite remote sensing data becomes increasingly important. Because of the vast and continually increasing quantity of data available from remote sensing systems, the utility of these data depends increasingly on the degree to which their formats facilitate, or hinder, their incorporation into Geobase Information Systems. Merging satellite data into a geobase system requires that both have a compatible geographic referencing system. Greater acceptance of satellite data by the user community will be facilitated if the data are in a form which most readily corresponds to existing geobase data structures. The conference addressed a number of specific topics and made recommendations