An a posteriori verification method for generalized real-symmetric eigenvalue problems in large-scale electronic state calculations
An a posteriori verification method is proposed for the generalized
real-symmetric eigenvalue problem and is applied to densely clustered
eigenvalue problems in large-scale electronic state calculations. The proposed
method is realized by a two-stage process in which the approximate solution is
computed by existing numerical libraries and is then verified in a moderate
computational time. The procedure returns intervals, each containing one exact
eigenvalue. Test calculations were carried out for organic
device materials, and the verification method confirms that all exact
eigenvalues are well separated in the obtained intervals. This verification
method will be integrated into EigenKernel (https://github.com/eigenkernel/),
which is middleware for various parallel solvers for the generalized eigenvalue
problem. Such an a posteriori verification method will be important in future
computational science.
Comment: 15 pages, 7 figures
A Note on Solving Problem 7 of the SIAM 100-Digit Challenge Using C-XSC
C-XSC is a powerful C++ class library which simplifies the development
of self-verifying numerical software. But C-XSC is not only a development tool; it also provides many predefined, highly accurate routines to compute reliable bounds for the solutions of standard numerical problems.
In this note we discuss the use of a reliable linear system solver to compute the solution of Problem 7 of the SIAM 100-Digit Challenge. To get the result, we have to solve a 20 000 × 20 000 system of linear equations using interval computations. To perform this task, we ran our software on the advanced Linux cluster engine ALiCEnext located at the University of Wuppertal and on the high-performance computer HP XC6000 at the computing center of the University of Karlsruhe.
The main purpose of this note is to demonstrate the strengths and weaknesses of our approach to solving linear interval systems with a large dense system matrix using C-XSC, and to get feedback from other research groups all over the world concerned with this topic. We are very interested in comparisons of different methods/algorithms, timings, memory consumption, and different hardware/software
environments. It should be easy to adapt our main routine (see Section 3 below) to other programming languages and different computing environments. Changing just one variable allows the generation of arbitrarily large system matrices, making it easy to do sound (reproducible and comparable) timings and to check the largest system size that can be handled successfully by a specific package/environment.
A Modified Staggered Correction Arithmetic with Enhanced Accuracy and Very Wide Exponent Range
A so-called staggered precision arithmetic is a special kind of
a multiple precision arithmetic based on the underlying
floating point data format (typically IEEE double format)
and fast floating point operations as well as exact dot product computations.
Due to floating point limitations it is not an arbitrary precision arithmetic.
However, it typically allows computations using several hundred mantissa digits.
A set of new modified staggered arithmetics for real and
complex data as well as for real interval and
complex interval data with very wide exponent range is presented.
Some applications show
the increased accuracy of computed results compared to ordinary staggered
interval computations. The very wide exponent range of the new arithmetic
operations allows computations far beyond the IEEE data formats.
The new arithmetics would be extremely fast if an exact dot product were
available in hardware (the fused accumulate and add instruction is only
one step in this direction).
Extension of the C-XSC Library with Scalar Products with Selectable Accuracy
The C++ class library C-XSC for scientific computing has been
extended with the possibility to compute scalar products with selectable accuracy in version 2.3.0. In previous versions, scalar products have always
been computed exactly with the help of the so-called long accumulator. Additionally, optimized floating-point implementations of matrix and vector operations using BLAS routines were added in C-XSC version 2.4.0. In this article
the algorithms used and their implementations, as well as some potential
pitfalls in the compilation, are described in more detail. Additionally, the
theoretical background of the employed DotK algorithm and the necessary
modifications of the concrete implementation in C-XSC are briefly explained.
Run-time tests and numerical examples are presented as well.
On the Interoperability between Interval Software
The increased appreciation of interval analysis as a powerful tool for controlling round-off errors and modelling
with uncertain data has led to a growing number of diverse interval software packages. Among other aspects,
the available interval software differs with respect to the environment in which it operates and the provided
functionality. Some specific software tools are built on top of other, more general interval software, but
there is no single environment supporting all (or most) of the available interval methods. On the other hand,
most recent interval applications require a combination of diverse methods. It is difficult for end-users
to combine and manage the diversity of interval software tools, packages, and research codes, even when the latter
are accessible. Two recent initiatives reflect the recognized need for an integration of the available methods and
software tools: [1], directed toward the development of a comprehensive full-featured library
of validated routines, and [3], intended to provide a general service framework for validated computing in
heterogeneous environments.
It is commonly understood that quality comprehensive libraries are not compiled by a single person or small
group of people over a short time [1]. Therefore, in this work we present an alternative approach based on
interval software interoperability.
While the simplest form of interoperability is the exchange of data files, we will focus on the ability to run
a particular routine executable in one environment from within another software environment, and vice versa,
via communication protocols. We discuss the motivation, advantages and some problems that may appear in
providing interoperability between the existing interval software.
Since general-purpose environments for scientific/technical computing like Matlab, Mathematica, Maple, etc.
offer several features that compiled languages lack, while most problem-solving
tools are developed in a compiled language for efficiency reasons, it is interesting to study
the possibilities for interoperability between these two kinds of interval-supporting environments.
More specifically, we base our presentation on the interoperability between Mathematica [5] and external
C-XSC programs [2] via MathLink communication protocol [4]. First, we discuss the portability and reliability
of interval arithmetic in Mathematica. Then, we present MathLink technology for building external
MathLink-compatible programs. Using the example of a C-XSC function for solving parametric linear systems,
called from within a Mathematica session, we demonstrate some advantages of interval software interoperability.
Namely: expanded functionality for both environments; the exchange of data without intermediate files and
without any conversion, while keeping the communication dynamic and interactive; and symbolic-manipulation interfaces
for the compiled-language software, which often make access to the external functionality from within Mathematica
more convenient even than from its own native environment. Once established, a MathLink connection to external
interval libraries or problem-solving software opens up an array of new possibilities for the latter.
References:
[1] G. Corliss, R. B. Kearfott, N. Nedialkov, S. Smith: Towards an Interval Subroutine Library,
Workshop on Reliable Engineering Computing, Savannah, Georgia, USA, Feb. 22-24, 2006.
[2] W. Hofschuster: C-XSC: Highlights and new developments. In: Numerical Validation in Current Hardware
Architectures. Number 08021 Dagstuhl Seminar, Internationales Begegnungs- und Forschungszentrum für
Informatik, Schloss Dagstuhl, Germany, 2008.
[3] W. Luther, W. Krämer: Accurate Grid Computing, 12th GAMM-IMACS Int. Symposium on Scientific Computing,
Computer Arithmetic and Validated Numerics (SCAN 2006), Duisburg, Sept. 26-29, 2006.
[4] Ch. Miyaji, P. Abbott (eds.): MathLink: Network Programming with Mathematica, Cambridge Univ. Press, Cambridge, 2001.
[5] Wolfram Research, Inc.: Mathematica, Version 5.2, Champaign, IL, 2005.
The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing
Recovery of the sparsity pattern (or support) of an unknown sparse vector
from a limited number of noisy linear measurements is an important problem in
compressed sensing. In the high-dimensional setting, it is known that recovery
with a vanishing fraction of errors is impossible if the measurement rate and
the per-sample signal-to-noise ratio (SNR) are finite constants, independent of
the vector length. In this paper, it is shown that recovery with an arbitrarily
small but constant fraction of errors is, however, possible, and that in some
cases computationally simple estimators are near-optimal. Bounds on the
measurement rate needed to attain a desired fraction of errors are given in
terms of the SNR and various key parameters of the unknown vector for several
different recovery algorithms. The tightness of the bounds, in a scaling sense,
as a function of the SNR and the fraction of errors, is established by
comparison with existing information-theoretic necessary bounds. Near
optimality is shown for a wide variety of practically motivated signal models.
Interval Subroutine Library Mission
We propose the collection, standardization, and distribution of a full-featured, production-quality library for reliable scientific computing, with routines using interval techniques, for use by the wide community of applications developers.
On a class of optimization-based robust estimators
We consider in this paper the problem of estimating a parameter matrix from
observations which are affected by two types of noise components: (i) a sparse
noise sequence which, whenever nonzero, can have arbitrarily large amplitude,
and (ii) a dense and bounded noise sequence of "moderate" amount. This is
termed a robust regression problem. To tackle it, a quite general
optimization-based framework is proposed and analyzed. When only the sparse
noise is present, a sufficient bound is derived on the number of nonzero
elements in the sparse noise sequence that can be accommodated by the estimator
while still returning the true parameter matrix. While almost all the
restricted isometry-based bounds from the literature are not verifiable, our
bound can be easily computed through solving a convex optimization problem.
Moreover, empirical evidence tends to suggest that it is generally tight. If in
addition to the sparse noise sequence, the training data are affected by a
bounded dense noise, we derive an upper bound on the estimation error.
Comment: To appear in IEEE Transactions on Automatic Control