115 research outputs found

    Institute for Computer Applications in Science and Engineering (ICASE)

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April 1, 1983 through September 30, 1983 is summarized.

    Accurate calculation of the solutions to the Thomas-Fermi equations

    We obtain highly accurate solutions to the Thomas-Fermi equations for atoms and for atoms in very strong magnetic fields. We apply the Padé-Hankel method, numerical integration, power series with Padé and Hermite-Padé approximants, and Chebyshev polynomials. Both the slope at the origin and, in the magnetic-field case, the location of the right boundary are given with unprecedented accuracy.
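    For a sense of the simplest of these tools, the sketch below treats the neutral-atom case y'' = y^{3/2}/√x, y(0) = 1, y(∞) = 0 by plain numerical integration with a bisection shooting on the initial slope. It is a minimal illustration only, not the Padé-Hankel or Hermite-Padé machinery of the paper; the SciPy routine, tolerances, and bracketing slopes are illustrative assumptions.

```python
# Minimal shooting/bisection sketch for the neutral-atom Thomas-Fermi problem
# y'' = y^{3/2}/sqrt(x), y(0) = 1, y(inf) = 0. Illustrative only: not the
# Pade-Hankel or Hermite-Pade approach of the paper.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, u):
    y, yp = u
    return [yp, max(y, 0.0) ** 1.5 / np.sqrt(x)]

def crosses_zero(slope, x_end=50.0):
    """Integrate from a small offset; report whether y crosses zero (slope too steep)."""
    x0 = 1e-8
    y0 = [1.0 + slope * x0, slope]               # leading-order series start near x = 0
    hit_zero = lambda x, u: u[0]                 # event: y = 0
    hit_zero.terminal, hit_zero.direction = True, -1.0
    turns_up = lambda x, u: u[1]                 # event: y' = 0 (solution starts to grow)
    turns_up.terminal, turns_up.direction = True, 1.0
    sol = solve_ivp(rhs, (x0, x_end), y0, events=[hit_zero, turns_up],
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0].size > 0

lo, hi = -1.7, -1.5                              # brackets the physical initial slope
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if crosses_zero(mid):
        lo = mid                                 # too steep: y hit zero at finite x
    else:
        hi = mid                                 # too shallow: y turned upward
print("slope at the origin ~", 0.5 * (lo + hi))  # literature value ~ -1.588071
```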

    Determining Critical Points of Handwritten Mathematical Symbols Represented as Parametric Curves

    We consider the problem of computing critical points of plane curves represented in a finite orthogonal polynomial basis. This is motivated by an approach to the recognition of handwritten mathematical symbols in which the initial data is in such an orthogonal basis and it is desired to avoid ill-conditioned basis conversions. Our main contribution is to assemble the relevant mathematical tools to perform all the necessary operations in the orthogonal polynomial basis. These include implicitization, differentiation, root finding, and resultant computation.
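    A minimal sketch of the "stay in the orthogonal basis" idea, using the Chebyshev basis as a stand-in and covering only the differentiation and root-finding steps (not the implicitization or resultant machinery of the paper). The curve coefficients are invented for illustration, and numpy.polynomial.chebyshev is an assumed convenience, not the authors' toolkit.

```python
# Critical points of a parametric plane curve (x(t), y(t)) whose coordinates are
# given by Chebyshev coefficients, computed without converting to the monomial basis.
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative coefficients on t in [-1, 1]
cx = np.array([0.1, 1.0, 0.0, -0.3])   # x(t) = 0.1*T0 + 1.0*T1 - 0.3*T3
cy = np.array([0.0, 0.2, 0.8, 0.1])    # y(t) = 0.2*T1 + 0.8*T2 + 0.1*T3

def real_roots_in_domain(coeffs, lo=-1.0, hi=1.0, tol=1e-10):
    """Real roots of a Chebyshev-series polynomial inside [lo, hi]."""
    r = C.chebroots(coeffs)
    r = r.real[np.abs(r.imag) < tol]
    return np.sort(r[(r >= lo) & (r <= hi)])

# Horizontal tangents where y'(t) = 0; vertical tangents where x'(t) = 0.
t_horiz = real_roots_in_domain(C.chebder(cy))
t_vert = real_roots_in_domain(C.chebder(cx))

for label, ts in [("horizontal", t_horiz), ("vertical", t_vert)]:
    for t in ts:
        print(f"{label} tangent at t={t:+.6f}, "
              f"(x, y)=({C.chebval(t, cx):+.4f}, {C.chebval(t, cy):+.4f})")
```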

    Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs

    This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before. Comment: 31 pages, more information at http://www.ece.cmu.edu/~smar
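    For readers unfamiliar with the baseline being generalized, the sketch below is the classical radix-2 Cooley-Tukey recursion (decimation in time), checked against NumPy's FFT. It is only the textbook special case the abstract refers to, not one of the DCT/DST algorithms derived in the paper.

```python
# Classical radix-2 Cooley-Tukey FFT, shown only to illustrate the kind of
# stepwise decomposition that the paper's algebraic principles generalize.
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])                 # transform of the even-indexed samples
    odd = fft_radix2(x[1::2])                  # transform of the odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(16)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```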

    Activities of the Institute for Computer Applications in Science and Engineering (ICASE)

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1984 through March 31, 1985 is summarized.

    The Devil's Invention: Asymptotic, Superasymptotic and Hyperasymptotic Series

    Singular perturbation methods, such as the method of multiple scales and the method of matched asymptotic expansions, give series in a small parameter ε which are asymptotic but (usually) divergent. In this survey, we use a plethora of examples to illustrate the cause of the divergence, and explain how this knowledge can be exploited to generate a 'hyperasymptotic' approximation. This adds a second asymptotic expansion, with different scaling assumptions about the size of various terms in the problem, to achieve a minimum error much smaller than the best possible with the original asymptotic series. (This rescale-and-add process can be repeated further.) Weakly nonlocal solitary waves are used as an illustration. Peer reviewed; full text at http://deepblue.lib.umich.edu/bitstream/2027.42/41670/1/10440_2004_Article_193995.pd
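    A minimal sketch of the optimal-truncation ("superasymptotic") idea on the textbook Stieltjes example S(ε) = ∫₀^∞ e^{-t}/(1 + εt) dt, whose asymptotic series Σ (-1)^n n! ε^n diverges for every ε > 0; the example and the SciPy quadrature are illustrative choices, not necessarily those used in the survey.

```python
# Optimal truncation of a divergent asymptotic series: partial sums first
# approach the exact value, reach a minimum error near n ~ 1/eps, then diverge.
import numpy as np
from scipy.integrate import quad
from math import factorial

eps = 0.1
exact, _ = quad(lambda t: np.exp(-t) / (1.0 + eps * t), 0.0, np.inf)

terms = [(-1) ** n * factorial(n) * eps ** n for n in range(40)]
partial = np.cumsum(terms)
errors = np.abs(partial - exact)

n_opt = int(np.argmin(errors))          # smallest error near n ~ 1/eps
print(f"optimal truncation order: {n_opt}")
print(f"minimum error: {errors[n_opt]:.3e}  (compare exp(-1/eps) = {np.exp(-1/eps):.3e})")
print(f"error at order 39: {errors[-1]:.3e}  (the series has started to diverge)")
```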

    Summary of research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1988 through March 31, 1989 is summarized.

    Large Scale Constrained Trajectory Optimization Using Indirect Methods

    State-of-the-art direct and indirect methods face significant challenges when solving large-scale constrained trajectory optimization problems. Two main challenges when using indirect methods to solve such problems are the difficulty of handling path inequality constraints and the exponential increase in computation time as the number of states and constraints in the problem increases; the latter challenge affects direct methods as well. A methodology called the Integrated Control Regularization Method (ICRM) is developed for incorporating path constraints into optimal control problems when using indirect methods. ICRM removes the need for multiple constrained and unconstrained arcs and converts constrained optimal control problems into two-point boundary value problems (TPBVPs). Furthermore, it addresses the issue of transcendental control law equations by re-formulating the problem so that it can be solved by existing numerical solvers for TPBVPs. The capabilities of ICRM are demonstrated by using it to solve representative constrained trajectory optimization problems as well as a five-vehicle problem with path constraints. Regularizing path constraints using ICRM represents a first step towards obtaining high-quality solutions for highly constrained trajectory optimization problems which would generally be considered practically impossible to solve using indirect or direct methods.
    The Quasilinear Chebyshev Picard Iteration (QCPI) method builds on prior work and uses Chebyshev polynomial series and Picard iteration combined with the Modified Quasi-linearization Algorithm. The method is developed specifically to utilize parallel computational resources for solving large TPBVPs. Its capabilities are validated by solving representative nonlinear optimal control problems, and its performance is benchmarked against single shooting and parallel shooting methods using a multi-vehicle optimal control problem. The results demonstrate that QCPI is capable of leveraging parallel computing architectures and can greatly benefit from implementation on highly parallel architectures such as GPUs.
    The capabilities of ICRM and QCPI are explored further using a five-vehicle constrained optimal control problem. The scenario models a cooperative, simultaneous engagement of two targets by five vehicles. The problem involves 3DOF dynamic models, control constraints for each vehicle, and a no-fly zone path constraint. Trade studies are conducted by varying different parameters in the problem to demonstrate smooth transitions between constrained and unconstrained arcs; such transitions would be highly impractical to study using existing indirect methods. The study serves as a demonstration of the capabilities of ICRM and QCPI for solving large-scale trajectory optimization problems.
    An open source, indirect trajectory optimization framework is developed with the goal of being a viable contender to state-of-the-art direct solvers such as GPOPS and DIDO. The framework, named beluga, leverages ICRM and QCPI along with traditional indirect optimal control theory. In its current form, as illustrated by the various examples in this dissertation, it has made significant advances in automating the use of indirect methods for trajectory optimization. Following the path of popular and widely used scientific software projects such as SciPy [1] and NumPy [2], beluga is released under the permissive MIT license [3]. Being an open source project allows the community to contribute freely to the framework, further expanding its capabilities and allowing faster integration of new advances into the state of the art.
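    As a minimal illustration of the indirect workflow that ICRM and QCPI scale up, the sketch below reduces a toy problem (minimum-energy double integrator) to a state/costate two-point boundary value problem and hands it to SciPy's generic BVP solver. It is not the ICRM or QCPI algorithms and does not use the beluga API; the problem and solver choice are illustrative assumptions.

```python
# Indirect method on a toy problem: minimize 0.5*int u^2 dt subject to
# x1' = x2, x2' = u, x(0) = (0, 0), x(1) = (1, 0). Pontryagin's conditions give
# u = -lam2 with costate dynamics lam1' = 0, lam2' = -lam1, i.e. a TPBVP.
import numpy as np
from scipy.integrate import solve_bvp

def odes(t, z):
    x1, x2, lam1, lam2 = z
    u = -lam2
    return np.vstack([x2, u, np.zeros_like(lam1), -lam1])

def bcs(z0, z1):
    # x1(0) = 0, x2(0) = 0, x1(1) = 1, x2(1) = 0
    return np.array([z0[0], z0[1], z1[0] - 1.0, z1[1]])

t = np.linspace(0.0, 1.0, 50)
z_guess = np.zeros((4, t.size))          # crude initial guess for states and costates
sol = solve_bvp(odes, bcs, t, z_guess)

print(sol.status, sol.message)
print("u(0) =", -sol.sol(0.0)[3])        # analytic optimal control is u(t) = 6 - 12 t
```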

    Numerical scalar curvature deformation and a gluing construction

    In this work a new numerical technique to prepare Cauchy data for the initial value problem (IVP) formulation of Einstein's field equations (EFE) is presented. Our method is directly inspired by the exterior asymptotic gluing (EAG) result of Corvino (2000). The argument assumes a moment in time symmetry and allows a composite initial data set to be assembled from (a finite subdomain of) a known asymptotically Euclidean initial data set which is glued, in a controlled manner, over a compact spatial region to an exterior Schwarzschildean representative. We demonstrate how (Corvino, 2000) may be directly adapted to a numerical scheme and, under the assumption of axisymmetry, construct composite, Hamiltonian-constraint-satisfying initial data featuring internal binary black holes (BBH) glued to exterior Schwarzschild initial data in isotropic form. The generality of the method is shown in a comparison of properties of EAG composite initial data sets featuring internal BBHs as modelled by Brill-Lindquist and Misner data.
    The underlying geometric-analysis character of gluing methods requires work within suitably weighted function spaces, which, together with a technical impediment preventing (Corvino, 2000) from being fully constructive, is the principal difficulty in devising a numerical technique. Thus the single previous attempt, by Giulini and Holzegel (2005) (recently implemented by Doulis and Rinne (2016)), sought to avoid this by embedding the result within the well known Lichnerowicz-York conformal framework, which required ad hoc assumptions on the solution form and a formal perturbative argument to show that EAG may proceed. In (Giulini and Holzegel, 2005) it was further claimed that judicious engineering of EAG can serve to reduce the presence of spurious gravitational radiation; unfortunately, in line with the general conclusion of (Doulis and Rinne, 2016), our numerical investigation does not appear to indicate that this is the case.
    Concretising the sought initial data to be specified with respect to a spatial manifold with underlying topology R×S², our method exploits a variety of pseudo-spectral (PS) techniques. A combination of the eth-formalism and spin-weighted spherical harmonics together with a novel complex-analytic based numerical approach is utilised. This is enabled by our Python 3 based numerical toolkit allowing for unified, just-in-time compiled, distributed calculations with seamless extension to arbitrary precision for problems involving generic, geometric partial differential equations (PDE) as specified by tensorial expressions. Additional features include a layer of abstraction that allows for automatic reduction of indicial (i.e., tensorial) expressions together with grid remapping based on chart specification; hence straightforward implementation of IVP formulations of the EFE such as ADM-York or ADM-York-NOR is possible. Code-base verification is performed by evolving the polarised Gowdy T³ space-time with the above formulations, utilising high order, explicit time-integrators in the method of lines approach as combined with PS techniques.
    As the initial data we prepare has a precise (Schwarzschild) exterior, this may be of interest to global evolution schemes that incorporate information from spatial infinity. Furthermore, our approach may shed light on how more general gluing techniques could potentially be adapted for numerical work. The code-base we have developed may also be of interest in application to other problems involving geometric PDEs.
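    As a small, self-contained illustration of the time-symmetric setting above: for conformally flat, moment-of-time-symmetry data the Hamiltonian constraint reduces to the flat Laplace equation for the conformal factor, and the Brill-Lindquist two-puncture factor ψ = 1 + Σᵢ mᵢ/(2|r − rᵢ|) satisfies it away from the punctures. The finite-difference check below is a textbook verification, not the pseudo-spectral, eth-formalism toolkit described in the thesis; the masses and puncture locations are made up.

```python
# Verify that the Brill-Lindquist conformal factor is harmonic away from the
# punctures, i.e. satisfies the time-symmetric vacuum Hamiltonian constraint.
import numpy as np

m1, m2 = 0.5, 0.5
c1 = np.array([0.0, 0.0, +1.0])          # illustrative puncture locations on the z-axis
c2 = np.array([0.0, 0.0, -1.0])

def psi(x, y, z):
    r1 = np.sqrt((x - c1[0])**2 + (y - c1[1])**2 + (z - c1[2])**2)
    r2 = np.sqrt((x - c2[0])**2 + (y - c2[1])**2 + (z - c2[2])**2)
    return 1.0 + m1 / (2.0 * r1) + m2 / (2.0 * r2)

def laplacian(f, x, y, z, h=1e-3):
    """Second-order finite-difference flat Laplacian at a single point."""
    return (f(x + h, y, z) + f(x - h, y, z)
          + f(x, y + h, z) + f(x, y - h, z)
          + f(x, y, z + h) + f(x, y, z - h) - 6.0 * f(x, y, z)) / h**2

print(laplacian(psi, 0.7, 0.4, 0.2))     # ~ 0, up to truncation and roundoff error
```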

    Activities of the Institute for Computer Applications in Science and Engineering

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April 1, 1985 through October 2, 1985 is summarized.