46,067 research outputs found
Computability and analysis: the legacy of Alan Turing
We discuss the legacy of Alan Turing and his impact on computability and
analysis.
Comment: 49 pages
Prediction Properties of Aitken's Iterated Delta^2 Process, of Wynn's Epsilon Algorithm, and of Brezinski's Iterated Theta Algorithm
The prediction properties of Aitken's iterated Delta^2 process, Wynn's
epsilon algorithm, and Brezinski's iterated theta algorithm for (formal) power
series are analyzed. As a first step, the defining recursive schemes of these
transformations are suitably rearranged in order to permit the derivation of
accuracy-through-order relationships. On the basis of these relationships, the
rational approximants can be rewritten as a partial sum plus an appropriate
transformation term. A Taylor expansion of such a transformation term, which is
a rational function and which can be computed recursively, produces the
predictions for those coefficients of the (formal) power series which were not
used for the computation of the corresponding rational approximant.
Comment: 34 pages, LaTeX
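The non-iterated building block of the first of these transformations is easy to exhibit. The following is a minimal sketch of Aitken's Delta^2 process applied repeatedly to a list of partial sums, for illustration only; the function names are ours, and the accuracy-through-order prediction machinery analyzed in the paper is not reproduced here:

```python
import math

def aitken_step(s):
    """One Aitken Delta^2 sweep: s'_n = s_n - (Delta s_n)^2 / (Delta^2 s_n)."""
    out = []
    for n in range(len(s) - 2):
        den = s[n + 2] - 2 * s[n + 1] + s[n]
        if den == 0:
            out.append(s[n])  # transformation undefined here; keep the term
        else:
            out.append(s[n] - (s[n + 1] - s[n]) ** 2 / den)
    return out

def iterated_aitken(s):
    """Apply Aitken sweeps until fewer than three terms remain."""
    while len(s) >= 3:
        s = aitken_step(s)
    return s[-1]

# Demo: partial sums of the alternating harmonic series, which converges
# to ln 2. Eight partial sums carry an error of roughly 6e-2; the iterated
# transformation brings it well below 1e-3.
partial, total = [], 0.0
for k in range(1, 9):
    total += (-1) ** (k + 1) / k
    partial.append(total)
print(abs(iterated_aitken(partial) - math.log(2)))
```

Each sweep shortens the sequence by two terms, so three sweeps exhaust eight partial sums.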
Mathematical Properties of a New Levin-Type Sequence Transformation Introduced by \v{C}\'{\i}\v{z}ek, Zamastil, and Sk\'{a}la. I. Algebraic Theory
\v{C}\'{\i}\v{z}ek, Zamastil, and Sk\'{a}la [J. Math. Phys. \textbf{44}, 962
- 968 (2003)] introduced, in connection with the summation of the divergent
perturbation expansion of the hydrogen atom in an external magnetic field, a
new sequence transformation which uses as input data not only the elements of
a sequence of partial sums, but also explicit estimates for the truncation
errors. The explicit
incorporation of the information contained in the truncation error estimates
makes this and related transformations potentially much more powerful than for
instance Pad\'{e} approximants. Special cases of the new transformation are
sequence transformations introduced by Levin [Int. J. Comput. Math. B
\textbf{3}, 371 - 388 (1973)] and Weniger [Comput. Phys. Rep. \textbf{10}, 189
- 371 (1989), Sections 7 - 9; Numer. Algor. \textbf{3}, 477 - 486 (1992)] and
also a variant of Richardson extrapolation [Phil. Trans. Roy. Soc. London A
\textbf{226}, 299 - 349 (1927)]. The algebraic theory of these transformations
- explicit expressions, recurrence formulas, explicit expressions in the case
of special remainder estimates, and asymptotic order estimates satisfied by
rational approximants to power series - is formulated in terms of hitherto
unknown mathematical properties of the new transformation introduced by
\v{C}\'{\i}\v{z}ek, Zamastil, and Sk\'{a}la. This leads to a considerable
formal simplification and unification.
Comment: 41 + ii pages, LaTeX2e, 0 figures. Submitted to Journal of
Mathematical Physics
Optimization as a design strategy. Considerations based on building simulation-assisted experiments about problem decomposition
In this article, the most fundamental decomposition-based optimization method
- block coordinate search, which rests on the sequential decomposition of
problems into subproblems - and building performance simulation programs are
used to reason about a building design process at micro-urban scale, and
strategies are defined to make the search more efficient. Cyclic overlapping
block coordinate search is here considered in its double nature of
optimization method and surrogate model (and metaphor) of a sequential design
process. Heuristic indicators apt to support the design of search structures
suited to that method are developed from building-simulation-assisted
computational experiments aimed at choosing the form and position of a small
building in a plot. Those indicators link the
sharing of structure between subspaces ("commonality") to recursive
recombination, measured as freshness of the search wake and novelty of the
search moves. The aim of these indicators is to measure the relative
effectiveness of decomposition-based design moves and create efficient block
searches. Implications of a possible use of these indicators in genetic
algorithms are also highlighted.
Comment: 48 pages, 12 figures, 3 tables
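Stripped of the building-simulation setting, the mechanics of cyclic block coordinate search can be sketched as a toy discrete search. Everything here (the names, the candidate grid, the improvement-only acceptance rule) is illustrative, not the procedure used in the article:

```python
def cyclic_block_search(f, x, blocks, candidates, sweeps=20):
    """Cyclic (possibly overlapping) block coordinate search over a
    discrete candidate grid, accepting only improving moves."""
    x = list(x)
    best = f(x)
    for _ in range(sweeps):
        improved = False
        for block in blocks:          # blocks may share indices ("overlap")
            for i in block:
                for v in candidates[i]:
                    trial = list(x)
                    trial[i] = v
                    val = f(trial)
                    if val < best:
                        x, best = trial, val
                        improved = True
        if not improved:              # a full cycle with no gain: stop
            break
    return x, best
```

With overlapping blocks such as [[0, 1], [1, 2]], coordinate 1 is revisited in both subproblems, a simple instance of the sharing of structure between subspaces that the abstract calls "commonality".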
Forward Analysis and Model Checking for Trace Bounded WSTS
We investigate a subclass of well-structured transition systems (WSTS), the
bounded---in the sense of Ginsburg and Spanier (Trans. AMS 1964)---complete
deterministic ones, which we claim provide an adequate basis for the study of
forward analyses as developed by Finkel and Goubault-Larrecq (Logic. Meth.
Comput. Sci. 2012). Indeed, we prove that, unlike other conditions considered
previously for the termination of forward analysis, boundedness is decidable.
Boundedness turns out to be a valuable restriction for WSTS verification, as we
show that it further allows us to decide all $\omega$-regular properties on the
set of infinite traces of the system.
Specific "scientific" data structures, and their processing
Programming physicists use, like all programmers, arrays, lists, tuples,
records, etc., and this requires some change in their thought patterns while
converting their formulae into code, since the "data structures" operated
upon while elaborating a theory and its consequences are rather: power series
and Pad\'e approximants, differential forms and other instances of
differential algebras, functionals (for the variational calculus), trajectories
(solutions of differential equations), Young diagrams and Feynman graphs, etc.
Such data is often used in a [semi-]numerical setting, not necessarily the
"symbolic" one appropriate for computer algebra packages. Modules adapted to
such data may be "just libraries", but often they become specific, embedded
sub-languages, typically mapped into object-oriented frameworks, with
overloaded mathematical operations. Here we present a functional approach to
this philosophy. We show how the usage of Haskell datatypes and - fundamental
for our tutorial - the application of lazy evaluation make it possible to
operate upon such data (in particular the "infinite" sequences) in a natural
and comfortable manner.
Comment: In Proceedings DSL 2011, arXiv:1109.032
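The lazy-evaluation idea has rough analogues outside Haskell. As a sketch under that analogy (not the paper's Haskell code), Python generators can play the role of "infinite" coefficient streams, including a lazily computed Cauchy product:

```python
from itertools import count, islice

def series(coeff):
    """An 'infinite' power series as a lazy stream coeff(0), coeff(1), ..."""
    return (coeff(n) for n in count())

def add(a, b):
    """Coefficient-wise sum of two series."""
    return (x + y for x, y in zip(a, b))

def mul(a, b):
    """Cauchy product: c_n = sum_{k=0}^{n} a_k * b_{n-k}, demanding only
    as many coefficients of a and b as have been consumed so far."""
    a, b = iter(a), iter(b)
    xs, ys = [], []
    for n in count():
        xs.append(next(a))
        ys.append(next(b))
        yield sum(xs[k] * ys[n - k] for k in range(n + 1))

def take(n, s):
    """Materialize the first n coefficients."""
    return list(islice(s, n))
```

For example, squaring the all-ones stream (the expansion of 1/(1-x)) with take(5, mul(series(lambda n: 1), series(lambda n: 1))) yields [1, 2, 3, 4, 5], the coefficients of 1/(1-x)^2.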