232 research outputs found

    Exact Symbolic-Numeric Computation of Planar Algebraic Curves

    Full text link
    We present a novel certified and complete algorithm to compute arrangements of real planar algebraic curves. It provides a geometric-topological analysis of the decomposition of the plane induced by a finite number of algebraic curves, in terms of a cylindrical algebraic decomposition. From a high-level perspective, the overall method splits into two main subroutines: an algorithm denoted Bisolve, which isolates the real solutions of a zero-dimensional bivariate system, and an algorithm denoted GeoTop, which analyzes a single algebraic curve. Compared to existing approaches based on elimination techniques, we considerably improve the corresponding lifting steps in both subroutines. As a result, generic position of the input system is never assumed, and thus our algorithm never demands a change of coordinates. In addition, we significantly limit the types of exact operations involved; that is, we use only resultant and gcd computations as purely symbolic operations. These results are achieved by combining techniques from different fields such as (modular) symbolic computation, numerical analysis, and algebraic geometry. We have implemented our algorithms as prototypical contributions to the C++ project CGAL. They exploit graphics hardware to expedite the symbolic computations. We have also compared our implementation with the current reference implementations, that is, LGP and Maple's Isolate for polynomial system solving, and CGAL's bivariate algebraic kernel for analyses and arrangement computations of algebraic curves. For various series of challenging instances, our exhaustive experiments show that the new implementations outperform the existing ones.
    Comment: 46 pages, 4 figures, submitted to Special Issue of TCS on SNC 2011. arXiv admin note: substantial text overlap with arXiv:1010.1386 and arXiv:1103.469
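    To make the projection-and-lifting pattern behind Bisolve concrete, here is a minimal Python/SymPy sketch: both coordinates are projected by resultants, real roots are isolated, and candidate pairs are validated. This is an illustration of the classical scheme only, not the authors' certified CGAL/C++ implementation (which adds modular arithmetic, GPU acceleration, and rigorous validation); the high-precision numeric filter stands in for their exact check.

```python
# Sketch of resultant-based bivariate real solving (illustrative only).
from sympy import symbols, resultant, real_roots, Poly

x, y = symbols('x y')

def bisolve_sketch(f, g, tol=1e-30):
    """Candidate real solutions of f(x, y) = g(x, y) = 0."""
    rx = Poly(resultant(f, g, y), x)   # project solutions onto the x-axis
    ry = Poly(resultant(f, g, x), y)   # project solutions onto the y-axis
    sols = []
    for a in real_roots(rx):
        for b in real_roots(ry):
            # Lifting step: pair up projected roots; the numeric filter
            # stands in for Bisolve's certified validation.
            if (abs(f.subs({x: a, y: b}).evalf(40)) < tol and
                    abs(g.subs({x: a, y: b}).evalf(40)) < tol):
                sols.append((a, b))
    return sols

# Example: the unit circle meets the line y = x in two points.
print(bisolve_sketch(x**2 + y**2 - 1, x - y))
```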

    On the Implementation of the Generalized Finite Element Method (Über die Implementierung der verallgemeinerten Finite-Element-Methode)

    Get PDF
    The Generalized Finite Element Method (GFEM) combines desirable features of the standard Finite Element Method (FEM) and the meshless methods. The key difference between the GFEM and the traditional FEM is the construction of the ansatz space. Each node of the finite element mesh carries a number of ansatz functions, expressed in terms of the global coordinate system. These ansatz functions are multiplied by a partition of unity and serve as element ansatz functions in the patch formed by the elements incident at the node. Constructing the ansatz space in this way allows for arbitrary ansatz functions, and C0-continuity is enforced by construction. The ansatz is enriched using analytical functions or numerical approximations derived from side calculations containing a-priori knowledge of the solution close to singularities. The performance of the GFEM with higher-order polynomial ansatz functions is compared to traditional h-, p- and hp-extensions of the FEM. Most of the efficient solvers, e.g. multigrid or CG, cannot be applied to the semi-definite systems resulting from a GFEM discretization, so several solution strategies are evaluated for higher-order GFEM. The work concludes with a description of the implementation of the GFEM in a flexible object-oriented C++ framework.
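    A minimal one-dimensional sketch of this partition-of-unity construction may help: piecewise-linear hat functions (which sum to one over the mesh) blend per-node ansatz functions given in global coordinates, here monomials. The hat functions and monomial enrichment are illustrative choices, not the framework's actual ansatz library.

```python
import numpy as np

def hat(i, nodes, x):
    # Piecewise-linear nodal basis: 1 at nodes[i], 0 at every other node.
    # Over the mesh these functions form a partition of unity.
    vals = np.zeros(len(nodes))
    vals[i] = 1.0
    return np.interp(x, nodes, vals)

def gfem_eval(x, nodes, coeffs):
    """u(x) = sum_i N_i(x) * sum_j coeffs[i][j] * x**j.

    Each node i carries global-coordinate ansatz functions (monomials
    here); multiplying by the hat functions N_i localizes them to the
    patch of elements around node i and enforces C0-continuity."""
    x = np.asarray(x, dtype=float)
    u = np.zeros_like(x)
    for i in range(len(nodes)):
        local = sum(c * x**j for j, c in enumerate(coeffs[i]))
        u += hat(i, nodes, x) * local
    return u

# Example: 3-node mesh on [0, 2], quadratic ansatz at every node.
nodes = np.array([0.0, 1.0, 2.0])
coeffs = [[1.0, 0.5, 0.0], [0.0, 1.0, 0.0], [2.0, 0.0, -0.5]]
print(gfem_eval(np.linspace(0.0, 2.0, 5), nodes, coeffs))
```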

    Sparse Modelling and Multi-exponential Analysis

    Get PDF
    The research fields of harmonic analysis, approximation theory and computer algebra appear to be distinct domains studied by separate research communities, yet they are connected to each other in many ways. The connection between harmonic analysis and approximation theory is not accidental: several constructions, among them wavelets and Fourier series, provide major insights into central problems in approximation theory. And the intimate connection between approximation theory and computer algebra has existed even longer: polynomial interpolation is a long-studied and important problem in both symbolic and numeric computing, in the former to counter expression swell and in the latter to construct a simple data model. A common underlying problem statement in many applications is that of determining the number of components, and for each component the value of the frequency, damping factor, amplitude and phase, in a multi-exponential model. It occurs, for instance, in magnetic resonance and infrared spectroscopy, vibration analysis, seismic data analysis, electronic odour recognition, keystroke recognition, nuclear science, music signal processing, transient detection, motor fault diagnosis, electrophysiology, drug clearance monitoring and glucose tolerance testing, to name just a few. The general technique of multi-exponential modelling is closely related to what is commonly known as the Padé-Laplace method in approximation theory, and to the technique of sparse interpolation in the field of computer algebra. The problem statement is also solved using a stochastic perturbation method in harmonic analysis. The problem of multi-exponential modelling is an inverse problem and therefore may be severely ill-posed, depending on the relative location of the frequencies and phases. Besides the reliability of the estimated parameters, the sparsity of the multi-exponential representation has become important. A representation is called sparse if it is a combination of only a few elements instead of all available generating elements. In sparse interpolation, the aim is to determine all the parameters from only a small number of data samples, and with a complexity proportional to the number of terms in the representation. Despite the close connections between these fields, there is a clear lack of communication in the scientific literature. The aim of this seminar is to bring together researchers from the three fields mentioned, along with scientists from the varied application domains.
    Output Type: Meeting Report
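    In symbols, the model in question is f(t) = Σ_{k=1}^{n} c_k e^{μ_k t}, where Re(μ_k) encodes the damping, Im(μ_k) the angular frequency, and |c_k| and arg(c_k) the amplitude and phase. A minimal Prony-style sketch in Python recovers the parameters from uniform samples; Prony's method is the classical relative of the Padé-Laplace and sparse-interpolation techniques mentioned above, and the noise-free setting with known n is a simplifying assumption.

```python
import numpy as np

def prony(samples, n):
    """Fit samples[m] ~ sum_k c_k * z_k**m with z_k = exp(mu_k * dt).

    Classical Prony scheme: (1) solve a linear prediction system for the
    characteristic polynomial, (2) take its roots as the nodes z_k,
    (3) solve a Vandermonde least-squares problem for the c_k."""
    samples = np.asarray(samples, dtype=complex)
    N = len(samples)
    H = np.array([samples[m:m + n] for m in range(N - n)])
    rhs = -samples[n:N]
    p = np.linalg.lstsq(H, rhs, rcond=None)[0]        # p_0 .. p_{n-1}
    z = np.roots(np.concatenate(([1.0], p[::-1])))    # nodes z_k
    V = np.vander(z, N, increasing=True).T            # V[m, k] = z_k**m
    c = np.linalg.lstsq(V, samples, rcond=None)[0]    # amplitudes/phases
    return z, c

# Example: two damped complex exponentials, 32 uniform samples.
dt = 0.1
t = np.arange(32) * dt
f = 2.0 * np.exp((-0.3 + 2.0j) * t) + 0.7 * np.exp((-0.1 - 5.0j) * t)
z, c = prony(f, 2)
print(np.log(z) / dt)   # recovered mu_k: about -0.3+2j and -0.1-5j
print(c)                # recovered c_k: about 2 and 0.7
```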

    Feasible Computation in Symbolic and Numeric Integration

    Get PDF
    Two central concerns in scientific computing are the reliability and efficiency of algorithms. We introduce the term feasible computation to describe algorithms that are reliable and efficient given the contextual constraints imposed in practice. The main focus of this dissertation, then, is to bring greater clarity to the forms of error introduced in computation and modeling, and, in the limited context of symbolic and numeric integration, to contribute to integration algorithms that better account for error while providing results efficiently. Chapter 2 considers the problem of spurious discontinuities in the symbolic integration problem, proposing a new method to restore continuity based on a pair of unwinding numbers. Computable conditions for the unwinding numbers are specified, allowing the computation of a variety of continuous integrals. Chapter 3 introduces two structure-preserving algorithms for the symbolic-numeric integration of rational functions on exact input. A structured backward and forward error analysis for the algorithms shows that they are a posteriori backward and forward stable, with both forms of error exhibiting tolerance proportionality. Chapter 4 identifies the basic logical structure of feasible inference by presenting a logical model of stable approximate inference, illustrated by examples of modeling and numerical integration. In terms of this model, it is seen that a necessary condition for the feasibility of methods of abstraction in modeling and complexity reduction in computational mathematics is the preservation of inferential structure, in a sense that is made precise. Chapter 5 identifies a robust pattern in the mathematical sciences of transforming problems to make solutions feasible. It is shown that computational complexity reduction methods in computational science involve chains of such transformations. It is argued that the structured and approximate nature of such strategies indicates the need for a higher-order model of computation and a new definition of computational complexity.
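    The unwinding-number machinery itself is beyond an abstract, but the spurious-discontinuity phenomenon of Chapter 2 is easy to exhibit: the textbook antiderivative of 1/(2 + cos x) jumps at odd multiples of π, and a piecewise-constant correction term restores continuity. The integrand and the floor-based correction below are illustrative choices, not the dissertation's general construction.

```python
import numpy as np

def F_naive(x):
    # Antiderivative of 1/(2 + cos x) as a CAS typically returns it;
    # tan(x/2) blows up at odd multiples of pi, so F_naive jumps there.
    return (2 / np.sqrt(3)) * np.arctan(np.tan(x / 2) / np.sqrt(3))

def F_continuous(x):
    # Each jump has height -2*pi/sqrt(3); adding a matching staircase
    # (in the spirit of an unwinding-number correction) removes it.
    k = np.floor((x + np.pi) / (2 * np.pi))
    return F_naive(x) + (2 * np.pi / np.sqrt(3)) * k

x = np.array([3.1, 3.2])            # straddles the jump at x = pi
print(F_naive(x))                   # jumps by about -2*pi/sqrt(3)
print(F_continuous(x))              # smooth increase across x = pi
```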

    Modern Monte Carlo Methods and Their Application in Semiparametric Regression

    Get PDF
    Indiana University-Purdue University Indianapolis (IUPUI)
    The essence of Bayesian data analysis is to ascertain posterior distributions. Posteriors generally do not have closed-form expressions for direct computation in practical applications. Analysts therefore resort to Markov Chain Monte Carlo (MCMC) methods for the generation of sample observations that approximate the desired posterior distribution. Standard MCMC methods simulate sample values from the desired posterior distribution via random proposals; as a result, the mechanism used to generate the proposals inevitably determines the efficiency of the algorithm. One of the modern MCMC techniques designed to explore a high-dimensional space more efficiently is Hamiltonian Monte Carlo (HMC), based on the Hamiltonian differential equations. Inspired by classical mechanics, these equations incorporate a latent variable to generate MCMC proposals that are likely to be accepted. This dissertation discusses how such a powerful computational approach can be used for implementing statistical models. Along this line, I created a unified computational procedure for using HMC to fit various types of statistical models. The proposed procedure can be applied to a broad class of models, including linear models, generalized linear models, mixed-effects models, and various types of semiparametric regression models. To facilitate the fitting of a diverse set of models, I incorporated new parameterization and decomposition schemes to ensure the numerical performance of Bayesian model fitting without sacrificing the procedure's general applicability. As a concrete application, I demonstrate how to use the proposed procedure to fit a multivariate generalized additive model (GAM), a nonstandard statistical model with a complex covariance structure and numerous parameters. Byproducts of the research include two software packages that allow practical data analysts to use the proposed computational method to fit their own models. The research's main methodological contribution is the unified computational approach it presents for Bayesian model fitting, which can be used for standard and nonstandard statistical models. The availability of such a procedure has greatly enhanced statistical modelers' toolbox for implementing new and nonstandard statistical models.
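    As a sketch of the core mechanism: HMC draws a latent momentum variable, integrates Hamilton's equations with a leapfrog scheme to obtain a distant proposal, and accepts or rejects it with a Metropolis step. A minimal NumPy version with a unit mass matrix follows; the dissertation's parameterization and decomposition schemes for semiparametric models are not reproduced here.

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, step=0.1, n_leapfrog=20,
             rng=np.random.default_rng()):
    """One HMC transition targeting exp(log_prob), unit mass matrix."""
    p = rng.standard_normal(q.shape)          # latent momentum draw
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of Hamilton's equations.
    p_new = p_new + 0.5 * step * grad_log_prob(q_new)
    for _ in range(n_leapfrog - 1):
        q_new = q_new + step * p_new
        p_new = p_new + step * grad_log_prob(q_new)
    q_new = q_new + step * p_new
    p_new = p_new + 0.5 * step * grad_log_prob(q_new)
    # Metropolis correction with H(q, p) = -log_prob(q) + |p|^2 / 2.
    h_old = -log_prob(q) + 0.5 * p @ p
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(h_old - h_new) else q

# Example: sample a 2-D standard normal "posterior".
q = np.zeros(2)
draws = []
for _ in range(1000):
    q = hmc_step(q, lambda q: -0.5 * q @ q, lambda q: -q)
    draws.append(q)
print(np.mean(draws, axis=0), np.std(draws, axis=0))   # ~[0 0], ~[1 1]
```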

    Matrix Polynomials and their Lower Rank Approximations

    Get PDF
    This thesis is a wide-ranging work on computing a “lower-rank” approximation of a matrix polynomial using second-order non-linear optimization techniques. Two notions of rank are investigated. The first is the classical definition: the number of linearly independent rows or columns. The other notion considered is the McCoy rank, the lowest rank a matrix polynomial attains when evaluated at a complex number. Together, these two notions of rank allow one to compute a nearby matrix polynomial where the structure of both the left and right kernels is prescribed, along with the structure of both the infinite and finite eigenvalues. The computational theory of the calculus of matrix-polynomial-valued functions is developed and used in optimization algorithms based on second-order approximations. Special functions studied with a detailed error analysis are the determinant and adjoint of matrix polynomials. The unstructured and structured variants of matrix polynomials are studied in a very general setting in the context of an equality-constrained optimization problem. In the most general instances of these optimization problems, it is NP-hard to approximate global solutions. In most instances we are able to prove that solutions to our optimization problems exist (possibly at infinity), and we discuss techniques, in conjunction with an implementation, to compute local minimizers. Most of the analysis of these problems is local and done through the Karush-Kuhn-Tucker optimality conditions for constrained optimization problems. We show that most formulations of the problems studied satisfy regularity conditions and admit Lagrange multipliers. Furthermore, we show that under some formulations the second-order sufficient condition holds for instances of interest. When Lagrange multipliers do not exist, we discuss why, and, when it is reasonable to do so, how to regularize the problem. In several instances, closed-form expressions for the derivatives of matrix-polynomial-valued functions are derived to assist in the analysis of the optimality conditions around a solution. From this analysis it is shown that variants of Newton's method will have a locally quadratic rate of convergence with a suitable initial guess for many problems. The implementations are demonstrated on some examples from the literature, and several examples are cross-validated with different optimization formulations of the same mathematical problem. We conclude with a special application of the theory developed in this thesis: computing a nearby pair of differential polynomials with a non-trivial greatest common divisor, a non-commutative symbolic-numeric computation problem. We formulate this problem as finding a nearby structured matrix polynomial that is rank-deficient in the classical sense.
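    For concreteness: a matrix polynomial A(z) = Σ_k A_k z^k can be stored as its list of coefficient matrices, its rank at a point read off from singular values, and the McCoy rank is then the minimum of rank(A(z)) over complex z. The sketch below only evaluates these quantities; the thesis's second-order optimization machinery for finding nearby rank-deficient polynomials is not reproduced.

```python
import numpy as np

def eval_matrix_poly(coeffs, z):
    """Evaluate A(z) = sum_k coeffs[k] * z**k (coeffs: n x n arrays)."""
    A = np.zeros_like(coeffs[0], dtype=complex)
    for k, C in enumerate(coeffs):
        A = A + C * z**k
    return A

def numerical_rank(A, tol=1e-10):
    # Rank from singular values, relative to the largest one.
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0])) if s[0] > 0 else 0

# Example: A(z) = [[1, z], [z, z**2]] is the outer product of (1, z)
# with itself, so rank(A(z)) = 1 for every z; its McCoy rank
# (the minimum of rank(A(z)) over all complex z) is therefore 1.
coeffs = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # z^0 coefficient
          np.array([[0.0, 1.0], [1.0, 0.0]]),   # z^1 coefficient
          np.array([[0.0, 0.0], [0.0, 1.0]])]   # z^2 coefficient
for z in (0.0, 1.0, 2.0 + 1.0j):
    print(z, numerical_rank(eval_matrix_poly(coeffs, z)))
```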

    Performance Modeling and Prediction for Dense Linear Algebra

    Full text link
    This dissertation introduces measurement-based performance modeling and prediction techniques for dense linear algebra algorithms. As a core principle, these techniques avoid executing such algorithms entirely, and instead predict their performance through runtime estimates for the underlying compute kernels. For a variety of operations, these predictions make it possible to quickly select the fastest algorithm configuration from the available alternatives. We consider two scenarios that cover a wide range of computations. To predict the performance of blocked algorithms, we design algorithm-independent performance models for kernel operations that are generated automatically once per platform; for various matrix operations, instantaneous predictions based on such models both accurately identify the fastest algorithm and select a near-optimal block size. For performance predictions of BLAS-based tensor contractions, we propose cache-aware micro-benchmarks that take advantage of the highly regular structure inherent to contraction algorithms; at merely a fraction of a contraction's runtime, predictions based on such micro-benchmarks identify the fastest combination of tensor traversal and compute kernel.
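    The selection mechanism can be sketched abstractly: once per-kernel runtime models exist for a platform, a candidate configuration is scored by summing the predicted times of the kernel calls it would issue, with no execution of the algorithm itself. All names below are hypothetical illustrations, not the dissertation's actual interface.

```python
# Hypothetical sketch of prediction-based algorithm selection.

def predict_runtime(kernel_model, kernel_calls):
    """Sum modelled runtimes over the (kernel, operand-sizes) calls a
    blocked algorithm would issue; nothing is actually executed."""
    return sum(kernel_model(name, sizes) for name, sizes in kernel_calls)

def best_block_size(kernel_model, calls_for_block_size, candidates):
    """Pick the block size whose predicted total runtime is smallest."""
    return min(candidates,
               key=lambda nb: predict_runtime(kernel_model,
                                              calls_for_block_size(nb)))

# Toy kernel model: gemm-like time ~ 2*m*n*k / peak_flops (seconds).
def toy_model(name, sizes):
    m, n, k = sizes
    return 2.0 * m * n * k / 50e9

# Toy call trace of a blocked sweep over an N x N matrix.
def toy_calls(nb, N=4096):
    return [('gemm', (N - i - nb, nb, nb)) for i in range(0, N, nb)]

print(best_block_size(toy_model, toy_calls, [64, 128, 256]))
```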

    NASA/ASEE Summer Faculty Fellowship Program, 1990, Volume 1

    Get PDF
    The 1990 Johnson Space Center (JSC) NASA/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program was conducted by the University of Houston-University Park and JSC. A compilation of the final reports on the research projects is presented. The topics covered include: the Space Station; the Space Shuttle; exobiology; cell biology; culture techniques; control systems design; laser-induced fluorescence; spacecraft reliability analysis; reduced gravity; biotechnology; microgravity applications; regenerative life support systems; imaging techniques; the cardiovascular system; physiological effects; extravehicular mobility units; mathematical models; bioreactors; computerized simulation; microgravity simulation; and dynamic structural analysis.