745 research outputs found
Coupled structural, thermal, phase-change and electromagnetic analysis for superconductors, volume 1
This research program has dealt with the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements; (2) finite element modeling of the electromagnetic problem; (3) coupling of thermal and mechanical effects; and (4) computer implementation and solution of the superconductivity transition problem. The research was carried out over the period September 1988 through March 1993. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles; (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements; (3) the coupling of electromagnetic finite elements with thermal and superconducting effects; and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The grant has fully supported the thesis work of one doctoral student (James Schuler, who started in January 1989 and completed in January 1993), and partly supported another thesis (Carmelo Militello, who started graduate work in January 1988 and completed in August 1991). Twenty-three publications have acknowledged full or part support from this grant, with 16 having appeared in archival journals and 3 in edited books or proceedings.
A Fully Coupled Immersed Finite Element Method for Fluid Structure Interaction via the Deal.II Library
We present the implementation of a solution scheme for fluid-structure interaction problems
via the finite element software library deal.II. The solution scheme is an immersed finite element
method in which two independent discretizations are used for the fluid and immersed deformable body.
In this type of formulation the support of the equations of motion of the fluid is extended to cover
the union of the solid and fluid domains. The equations of motion over the extended solution domain
govern the flow of a fluid under the action of a body force field. This body force field informs the fluid
of the presence of the immersed solid. The velocity field of the immersed solid is the restriction over the
immersed domain of the velocity field in the extended equations of motion. The focus of this paper is to
show how the determination of the motion of the immersed domain is carried out in practice. We show
that our implementation is general, that is, it is not dependent on a specific choice of the finite element
spaces over the immersed solid and the extended fluid domains. We present some preliminary results
concerning the accuracy of the proposed method.
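Schematically, and with notation assumed here rather than taken from the paper, the extended formulation described above can be written as:

```latex
% Incompressible flow over the extended domain \Omega = \Omega_f \cup \Omega_s,
% driven by a body force b supported on the immersed solid (notation assumed):
\rho\left(\partial_t u + u\cdot\nabla u\right) = \nabla\cdot\sigma + b,
\qquad \nabla\cdot u = 0 \quad \text{in } \Omega,
\qquad \operatorname{supp} b \subseteq \Omega_s,
% and the solid moves with the restriction of the extended velocity field:
\qquad u_s = u\big|_{\Omega_s}.
```

The body force $b$ is what "informs the fluid of the presence of the immersed solid"; the paper's contribution is how the motion of $\Omega_s$ is determined in practice, independently of the finite element spaces chosen on the two domains.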
ERES Methodology and Approximate Algebraic Computations
The area of approximate algebraic computations is a fast growing area in modern computer algebra which has attracted many researchers in recent years. Amongst the various algebraic computations, the computation of the Greatest Common Divisor (GCD) and the Least Common Multiple (LCM) of a set of polynomials are challenging problems that arise from several applications in applied mathematics and engineering. Several methods have been proposed for the computation of the GCD of polynomials using tools and notions either from linear algebra or linear systems theory. Amongst these, a matrix-based method which relies on the properties of the GCD as an invariant of the original set of polynomials under elementary row transformations and shifting elements in the rows of a matrix, shows interesting properties in relation to the problem of the GCD of sets of many polynomials. These transformations are referred to as Extended-Row-Equivalence and Shifting (ERES) operations and their iterative application to a basis matrix, which is formed directly from the coefficients of the given polynomials, formulates the ERES method for the computation of the GCD of polynomials and establishes the basic principles of the ERES methodology. The main objective of the present thesis concerns the improvement of the ERES methodology and its use for the efficient computation of the GCD and LCM of sets of several univariate polynomials with parameter uncertainty, as well as the extension of its application to other related algebraic problems. New theoretical and numerical properties of the ERES method are defined in this thesis by introducing the matrix representation of the Shifting operation, which is used to change the position of the elements in the rows of a matrix. This important theoretical result opens the way for a new algebraic representation of the GCD of a set of polynomials, the remainder, and the quotient of Euclid's division for two polynomials based on ERES operations.
The principles of the ERES methodology provide the means to develop numerical algorithms for the GCD and LCM of polynomials that inherently have the potential to efficiently work with sets of several polynomials with inexactly known coefficients. The present new implementation of the ERES method, referred to as the "Hybrid ERES Algorithm", is based on the effective combination of symbolic-numeric arithmetic (hybrid arithmetic) and shows interesting computational properties concerning the approximate GCD and LCM problems. The evaluation of the quality, or "strength", of an approximate GCD is equivalent to an evaluation of a distance problem in a projective space and it is thus reduced to an optimisation problem. An efficient implementation of an algorithm computing the strength bounds is introduced here by exploiting some of the special aspects of the respective distance problem. Furthermore, a new ERES-based method has been developed for the approximate LCM which involves a least-squares minimisation process, applied to a matrix which is formed from the remainders of Euclid's division by ERES operations. The residual from the least-squares process characterises the quality of the obtained approximate LCM. Finally, the developed framework of the ERES methodology is also applied to the representation of continued fractions to improve the stability criterion for linear systems based on the Routh-Hurwitz test.
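The ERES method itself is matrix-based, but the objects it computes — GCDs, remainders and quotients of Euclid's division — can be illustrated with the classical Euclidean algorithm it is designed to improve upon. A minimal exact-arithmetic sketch (coefficient lists, highest degree first; the names and representation are mine, not the thesis's):

```haskell
-- Polynomials as exact rational coefficient lists, highest degree first:
-- [1, 0, -1] represents x^2 - 1.  (Illustrative code, not the ERES method.)
type Poly = [Rational]

-- Drop leading zero coefficients.
norm :: Poly -> Poly
norm = dropWhile (== 0)

-- Remainder of polynomial long division: one leading-term cancellation
-- per step, via a shifted, scaled subtraction of the divisor.
pRem :: Poly -> Poly -> Poly
pRem a b
  | length a' < length b' = a'
  | otherwise             = pRem step b'
  where
    a'   = norm a
    b'   = norm b
    q    = head a' / head b'
    step = zipWith (-) (tail a') (map (q *) (tail b') ++ repeat 0)

-- GCD by repeated remainders, returned in monic form.
pGCD :: Poly -> Poly -> Poly
pGCD a b = case norm b of
  []  -> monic (norm a)
  b'' -> pGCD b'' (pRem a b'')
  where
    monic []      = []
    monic p@(c:_) = map (/ c) p
```

For example, `pGCD [1,0,-1] [1,-1]` (i.e. gcd of x² − 1 and x − 1) yields `[1,-1]`, the monic polynomial x − 1. With inexact coefficients this naive scheme degrades quickly, which is precisely the regime the hybrid ERES algorithms target.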
DuMux 3 – an open-source simulator for solving flow and transport problems in porous media with a focus on model coupling
Authors:
Timo Koch and Dennis Gläser and Kilian Weishaupt and Sina Ackermann and Martin Beck and Beatrix Becker and Samuel Burbulla and Holger Class and Edward Coltman and Simon Emmert and Thomas Fetzer and Christoph Grüninger and Katharina Heck and Johannes Hommel and Theresa Kurz and Melanie Lipp and Farid Mohammadi and Samuel Scherrer and Martin Schneider and Gabriele Seitz and Leopold Stadler and Martin Utz and Felix Weinhardt and Bernd Flemisch
Parallel programming using functional languages
It has been argued for many years that functional programs are well suited to parallel evaluation. This thesis investigates this claim from a programming perspective; that is, it investigates parallel programming using functional languages. The approach taken has been to determine the minimum programming which is necessary in order to write efficient parallel programs. This has been attempted without the aid of clever compile-time analyses. It is argued that parallel evaluation should be explicitly expressed, by the programmer, in programs. To achieve this a lazy functional language is extended with parallel and sequential combinators.
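The flavour of such combinators can be sketched in a few lines of Haskell. The definitions below are denotational stand-ins (`par` is semantically the identity on its second argument, `pseq` forces its first); GHC's Control.Parallel exports combinators of the same types that actually spark parallel tasks. The thesis's own combinators differ in detail, so this is only a sketch:

```haskell
-- Sequential stand-ins with the combinators' denotational semantics;
-- in GHC, Control.Parallel exports `par` and `pseq` that really spark tasks.
par :: a -> b -> b
par _ b = b                 -- hint: "evaluate the first argument in parallel"

pseq :: a -> b -> b
pseq a b = a `seq` b        -- force the first argument before the second

-- Naive Fibonacci, used only to generate work.
nfib :: Int -> Int
nfib n | n < 2     = 1
       | otherwise = nfib (n - 1) + nfib (n - 2)

-- Both recursive calls evaluated in parallel; the size threshold is the
-- kind of programmer-level granularity control the thesis studies.
parFib :: Int -> Int
parFib n
  | n < 15    = nfib n
  | otherwise = x `par` (y `pseq` x + y)
  where x = parFib (n - 1)
        y = parFib (n - 2)
```

The point of the combinator style is that `parFib` denotes the same value as `nfib`; only the operational behaviour changes.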
The mathematical nature of functional languages means that programs can be formally derived by program transformation. To date, most work on program derivation has concerned sequential programs. In this thesis Squigol has been used to derive three parallel algorithms. Squigol is a functional calculus for program derivation, which is becoming increasingly popular. It is shown that some aspects of Squigol are suitable for parallel program derivation, while other aspects are specifically orientated towards sequential algorithm derivation.
In order to write efficient parallel programs, parallelism must be controlled: storage usage, the number of tasks and the minimum size of tasks must all be limited. In particular, over-eager evaluation or generating excessive numbers of tasks can consume too much storage. Also, tasks can be too small to be worth evaluating in parallel. Several programming techniques for parallelism control were tried. These were compared with a run-time system heuristic for parallelism control. It was discovered that the best control was effected by a combination of run-time system and programmer control of parallelism.
One of the problems with parallel programming using functional languages is that non-deterministic algorithms cannot be expressed. A bag (multiset) data type is proposed to allow a limited form of non-determinism to be expressed. Bags can be given a non-deterministic parallel implementation. However, providing the operations used to combine bag elements are associative and commutative, the result of bag operations will be deterministic. The onus is on the programmer to prove this, but usually this is not difficult. Also bags' insensitivity to ordering means that more transformations are directly applicable than if, say, lists were used instead.
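A bag with the required algebraic discipline can be sketched as follows (a minimal rendering of my own, not the thesis's implementation): because the representation forgets insertion order, two bags built from permuted lists compare equal, and a fold over a bag is well defined exactly when the combining operation is associative and commutative.

```haskell
import qualified Data.Map.Strict as Map

-- A bag (multiset): elements mapped to multiplicities, so bags built
-- from differently ordered lists are indistinguishable.
newtype Bag a = Bag (Map.Map a Int)
  deriving (Eq, Show)

bagFromList :: Ord a => [a] -> Bag a
bagFromList = Bag . Map.fromListWith (+) . map (\x -> (x, 1))

-- Combine all elements with `op`, which MUST be associative and
-- commutative; only then is the result independent of the order in
-- which a (non-deterministic) parallel implementation combines elements.
bagFold :: (a -> a -> a) -> a -> Bag a -> a
bagFold op z (Bag m) =
  foldr op z [ x | (x, k) <- Map.toList m, _ <- [1 .. k] ]
```

As the abstract notes, the type system cannot check associativity and commutativity; the proof obligation rests with the programmer.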
It is necessary to be able to reason about and measure the performance of parallel programs. For example, sometimes algorithms which seem intuitively to be good parallel ones are not. For some higher order functions it is possible to devise parameterised formulae describing their performance. This is done for divide and conquer functions, which enables constraints to be formulated which guarantee that they have a good performance. Pipelined parallelism is difficult to analyse. Therefore a formal semantics for calculating the performance of pipelined programs is devised. This is used to analyse the performance of a pipelined Quicksort. By treating the performance semantics as a set of transformation rules, the simulation of parallel programs may be achieved by transforming programs. Some parallel programs perform poorly due to programming errors. A pragmatic method of debugging such programming errors is illustrated by some examples.
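The parameterised formulae for divide and conquer are of the standard cost-recurrence kind (notation assumed here, not the thesis's own). For a function that splits a problem of size $n$ into $k$ independent subproblems of size $n/k$ with divide-and-combine cost $c(n)$, the sequential and idealised parallel costs satisfy:

```latex
% Sequential cost: all k subproblems are paid for.
T_1(n) = k\,T_1(n/k) + c(n),
% Idealised parallel cost: subproblems run simultaneously,
% only the divide-and-combine work remains on the critical path.
T_\infty(n) = T_\infty(n/k) + c(n).
```

Constraints for good performance fall out of such formulae: for instance, with $k = 2$ and linear combine cost $c(n) = \Theta(n)$, one gets $T_1(n) = \Theta(n \log n)$ but $T_\infty(n) = \Theta(n)$, so the achievable speedup is only $\Theta(\log n)$; a cheaper combine step is needed for the parallelism to pay off.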
Artificial Intelligence Techniques for Automatic Reformulation and Solution of Structured Mathematical Models
Complex, hierarchical, multi-scale industrial and natural systems generate increasingly large mathematical models.
Practitioners are usually able to formulate such models in their "natural" form; however, solving them often
requires finding an appropriate reformulation to reveal structures in the model which make it possible to
apply efficient, specialized approaches. The search for the "best" formulation of a given problem, the one which
allows the application of the solution algorithm that best exploits the available computational resources, is currently
a painstaking process which requires considerable work by highly skilled personnel. Experts in solution algorithms are
required to figure out which (formulation, algorithm) pair is best used, considering issues like the appropriate
selection of the several obscure algorithmic parameters that each solution method has. This process is only going to
get more complex, as current trends in computer technology dictate the necessity to develop complex parallel approaches
capable of harnessing the power of thousands of processing units, thereby adding another layer of complexity in the form
of the choice of the appropriate (parallel) architecture. All this renders the use of mathematical models exceedingly
costly and difficult for many potentially fruitful applications. The I-DARE environment, proposed in this Thesis, aims
at devising a software system for automating the search for the best combination of (re)formulation, solution
algorithm and its parameters (including the computational architecture), until now a firm domain of human intervention,
to help practitioners bridge the gap between mathematical models cast in their natural form and existing solver
systems. I-DARE deals with deep and challenging issues, both from a theoretical and from an implementation viewpoint:
1) the development of a language that can be effectively used to formulate large-scale structured mathematical
models and the reformulation rules that allow a formulation to be transformed into a different one; 2) a core subsystem
capable of automatically reformulating the models and searching the space of (formulation, algorithm,
configuration) combinations for "the best" formulation of a given problem; 3) the design of a general interface for numerical
solvers that is capable of accommodating and exploiting structure information.
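Point 2 above amounts to a guided search over triples; stripped of all the reformulation machinery and machine-learning guidance, its skeleton is just optimisation over an enumerable space. A sketch with illustrative types and names (not I-DARE's actual interfaces):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Illustrative types, not I-DARE's actual interfaces.
type Formulation = String
type Algorithm   = String
type Config      = [(String, Double)]   -- parameter name -> value

-- Pick the triple with the lowest predicted cost.  In I-DARE the cost
-- predictor would be learned (Machine Learning over past runs); here it
-- is simply a supplied function, and the search is exhaustive.
bestTriple :: ((Formulation, Algorithm, Config) -> Double)
           -> [(Formulation, Algorithm, Config)]
           -> (Formulation, Algorithm, Config)
bestTriple predictedCost = minimumBy (comparing predictedCost)
```

The real difficulty, as the abstract stresses, is that the space of formulations is not given up front: it is generated by the reformulation rules, so the search and the rewriting system must be designed together.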
To achieve these goals I-DARE will propose a sound and articulated integration of different programming paradigms and
techniques like classic Object-Oriented programming and Artificial Intelligence (Declarative Programming, Frame-Logic,
Higher-Order Logic, Machine Learning). By tackling these challenges, I-DARE may have profound, lasting and disruptive
effects on many facets of the development and deployment of mathematical models and the corresponding solution
algorithms.