
    Tailoring a coherent control solution landscape by linear transforms of spectral phase basis

    Finding an optimal phase pattern in a multidimensional solution landscape becomes easier and faster if local optima are suppressed and contour lines are tailored towards closed convex patterns. Using wideband second-harmonic generation as a coherent-control test case, we show that a linear combination of spectral phase basis functions can yield such improvements and also separable phase terms, each of which can be found independently. The improved shapes are attributed to a suppressed nonlinear shear that changes the relative orientation of contour lines. The first-order approximation of the process shows a simple relation between input and output phase profiles, useful for pulse shaping at ultraviolet wavelengths.
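
    As a rough, hypothetical illustration of the idea (not the paper's actual basis or transform), the Python sketch below builds a spectral phase from a linearly transformed Chebyshev basis and scores it with a wideband-SHG objective computed as the autoconvolution of the shaped field:

```python
import numpy as np

# Hypothetical sketch: the Chebyshev basis choice, the transform M, and
# all parameter values are illustrative assumptions.
omega = np.linspace(-1.0, 1.0, 512)                 # normalized detuning
basis = np.array([np.polynomial.chebyshev.Chebyshev.basis(k)(omega)
                  for k in range(4)])               # B_k(omega), k = 0..3

c = np.array([0.0, 0.3, 1.5, -0.8])                 # coefficients in the old basis
M = np.eye(4)
M[3, 1] = 0.5                                       # example linear transform of the basis
phase = (M @ c) @ basis                             # phi(omega) = sum_k (Mc)_k B_k(omega)

# Wideband SHG: the second-harmonic field is the autoconvolution of the
# shaped fundamental field E(omega) = A(omega) exp(i phi(omega)).
A = np.exp(-omega**2 / 0.2)                         # Gaussian spectral amplitude
E = A * np.exp(1j * phase)
E_sh = np.convolve(E, E, mode="same")               # E_SH(2w) ~ int E(w') E(2w - w') dw'
shg_yield = np.sum(np.abs(E_sh) ** 2)               # scalar objective for the optimizer
```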

    Orthogonal Polynomial Approximation in Higher Dimensions: Applications in Astrodynamics

    We propose novel methods for orthogonal polynomial approximation in higher-dimensional spaces, which enable us to modify classical differential equation solvers to perform high-precision, long-term orbit propagation. These methods have immediate application to efficient propagation of catalogs of Resident Space Objects (RSOs) and improved accounting for the uncertainty in the ephemerides of these objects. More fundamentally, the methodology promises to be of broad utility in solving initial and two-point boundary value problems for a wide class of mathematical representations of problems arising in engineering, optimal control, the physical sciences and applied mathematics. We unify and extend classical results from function approximation theory and consider their utility in astrodynamics. Least-squares approximation, using the classical Chebyshev polynomials as basis functions, is reviewed for discrete samples of the function to be approximated. We extend the orthogonal approximation ideas to n dimensions in a novel way, through the use of array algebra and Kronecker operations. Approximation of test functions illustrates the resulting algorithms and provides insight into the errors of approximation, as well as the associated errors arising when the approximations are differentiated or integrated. Two sets of applications are considered that are challenges in astrodynamics. The first application addresses local approximation of high-degree and high-order geopotential models, replacing the global spherical harmonic series by a family of locally precise orthogonal polynomial approximations for efficient computation. A method is introduced that adapts the approximation degree radially, consistent with the fact that the highest-degree approximations (to ensure a maximum acceleration error < 10^-9 m/s^2 globally) are required near the Earth's surface, whereas lower-degree approximations suffice as the radius increases. We show that a four-order-of-magnitude speedup is feasible, with both speed and storage efficiency optimized using radial adaptation. The second class of problems addressed includes orbit propagation and the solution of associated boundary value problems. The successive Chebyshev-Picard path approximation method is shown to be well suited to solving these problems, with over an order-of-magnitude speedup relative to known methods. Furthermore, the approach is parallel-structured, so it is suited to parallel implementation and further speedups. Used in conjunction with orthogonal Finite Element Model (FEM) gravity approximations, the Chebyshev-Picard path approximation enables truly revolutionary speedups in orbit propagation without accuracy loss.
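
    A minimal sketch of the Kronecker-product construction in two dimensions, assuming a plain least-squares fit in a tensor-product Chebyshev basis; the degrees, sampling and test function are illustrative, not the paper's settings:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Minimal sketch of the n-dimensional extension via Kronecker products,
# shown here for n = 2 (all settings are illustrative assumptions).
def cheb_lsq_2d(f, deg, n_samp):
    """Fit c so that f(x, y) ~ sum_ij c[i, j] T_i(x) T_j(y) on [-1, 1]^2."""
    x = C.chebpts1(n_samp)                    # 1-D Chebyshev sample points
    V = C.chebvander(x, deg)                  # 1-D Vandermonde, (n_samp, deg+1)
    V2 = np.kron(V, V)                        # 2-D design matrix via Kronecker
    X, Y = np.meshgrid(x, x, indexing="ij")
    coef, *_ = np.linalg.lstsq(V2, f(X, Y).ravel(), rcond=None)
    return coef.reshape(deg + 1, deg + 1)

f = lambda x, y: np.exp(-x ** 2 - y ** 2) * np.cos(3.0 * x * y)
c = cheb_lsq_2d(f, deg=10, n_samp=20)
x0, y0 = 0.3, -0.5
print(abs(C.chebval2d(x0, y0, c) - f(x0, y0)))   # pointwise approximation error
```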

    Non-Cartesian distributed approximating functional

    Presented in this work is the Non-Cartesian Distributed Approximating Functional (NCDAF). It is a multi-dimensional generalization of the (one-dimensional) Hermite DAF that is non-separable and isotropic. Demonstrated here is an approximation method based on the NCDAF that can construct a continuous approximation of a function and its derivatives from a discrete sampling of points. Under an appropriate choice of conditions this approximation is free from artifacts originating from (1) the sampling scheme and (2) the orientation of the sampled data. The NCDAF can also be viewed as a compromise between the minimum-uncertainty state (i.e. the Gaussian) and the ideal filter. The NCDAF kernel is shown to (1) have a very small uncertainty product, (2) be infinitely smooth, (3) possess the same set of invariances as the minimum-uncertainty state, (4) propagate in convenient closed form under quantum-mechanical free propagation, and (5) admit arbitrarily close approach to the ideal filter.
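
    For orientation, a sketch of the one-dimensional Hermite DAF that the NCDAF generalizes, in its standard Gaussian-times-truncated-Hermite-sum form; the parameters sigma and M, the grid and the test function are illustrative assumptions:

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

# One-dimensional Hermite DAF kernel; sigma, M and the test setup below
# are illustrative assumptions, not values from the paper.
def hermite_daf(x, sigma=0.5, M=30):
    """Hermite DAF kernel delta_M(x): a Gaussian times a truncated Hermite sum."""
    z = x / (np.sqrt(2.0) * sigma)
    s = sum((-0.25) ** n / factorial(n) * eval_hermite(2 * n, z)
            for n in range(M // 2 + 1))
    return np.exp(-z ** 2) / (sigma * np.sqrt(2.0 * np.pi)) * s

xg = np.linspace(-np.pi, np.pi, 201)     # discrete sampling of the function
dx = xg[1] - xg[0]
f = np.sin(xg)

# Continuous approximation from the discrete samples:
# f(x) ~ dx * sum_j delta_M(x - x_j) f(x_j).
x_eval = 0.7
approx = dx * np.sum(hermite_daf(x_eval - xg) * f)
print(approx, np.sin(x_eval))            # should agree closely
```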

    Functional Regression

    Functional data analysis (FDA) involves the analysis of data whose ideal units of observation are functions defined on some continuous domain, and the observed data consist of a sample of functions taken from some population, sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the development of this field, which has accelerated in the past 10 years to become one of the fastest-growing areas of statistics, fueled by the growing number of applications yielding this type of data. One unique characteristic of FDA is the need to combine information both across and within functions, which Ramsay and Silverman called replication and regularization, respectively. This article focuses on functional regression, the area of FDA that has received the most attention in applications and methodological development. First comes an introduction to basis functions, the key building blocks for regularization in functional regression methods, followed by an overview of functional regression methods, split into three types: [1] functional predictor regression (scalar-on-function), [2] functional response regression (function-on-scalar) and [3] function-on-function regression. For each, the role of replication and regularization is discussed and the methodological development described in a roughly chronological manner, at times deviating from the historical timeline to group together similar methods. The primary focus is on modeling and methodology, highlighting the modeling structures that have been developed and the various regularization approaches employed. The article ends with a brief discussion of potential areas of future development in this field.
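
    As a concrete instance of type [1], a minimal sketch that fits a scalar-on-function model by expanding the coefficient function in a small Fourier basis, reducing the problem to ordinary least squares; the basis, grid and simulated data are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of scalar-on-function regression:
#   y_i = int X_i(t) beta(t) dt + eps_i,
# with beta(t) expanded in a small Fourier basis so the fit reduces to
# ordinary least squares (all settings are illustrative assumptions).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)                # common sampling grid
dt = t[1] - t[0]
n, K = 200, 5                                 # curves, basis functions

B = np.vstack([np.ones_like(t)] +
              [np.sin(2 * np.pi * k * t) for k in range(1, K)])

X = rng.normal(size=(n, t.size)).cumsum(axis=1) / 10.0   # rough random curves
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true * dt + 0.05 * rng.normal(size=n)

Z = X @ B.T * dt                              # Z[i, k] = int X_i(t) B_k(t) dt
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = b @ B                              # regularized estimate of beta(t)
```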

    SU(2)-particle sigma model: Momentum-space quantization of a particle on the sphere S^3

    We perform the momentum-space quantization of a spinless particle moving on the SU(2) group manifold, that is, the three-dimensional sphere S^3, by using a non-canonical method based entirely on symmetry grounds. To achieve this task, non-standard (contact) symmetries are required, as already shown in a previous article where the configuration-space quantization was given. The Hilbert space in the momentum-space representation turns out to be made of a subset of (oscillatory) solutions of the Helmholtz equation in four dimensions. The most relevant result is that both the scalar product and the generalized Fourier transform between configuration and momentum spaces deviate notably from the naively expected expressions: the former now exhibits a non-trivial kernel under a double integral, traced back to the non-trivial topology of the phase space, even though the momentum space as such is flat. In addition, momentum space itself appears directly as the carrier space of an irreducible representation of the symmetry group, and the Fourier transform as the unitary equivalence between two unitary irreducible representations.
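
    Schematically, the two structures highlighted above take the following form; the explicit kernel K is derived in the paper and is not reproduced here:

```latex
% Schematic forms only; the explicit kernel K(p, p') is derived in the
% paper and is not reproduced here.
\begin{align}
  \bigl(\Delta_{\mathbb{R}^4} + k^2\bigr)\,\Phi(\vec p) &= 0
    && \text{(oscillatory solutions span the momentum-space Hilbert space)} \\
  \langle \Phi \mid \Psi \rangle &=
    \iint \overline{\Phi(\vec p)}\; K(\vec p, \vec p\,')\,
    \Psi(\vec p\,')\, \mathrm{d}^4p\, \mathrm{d}^4p'
    && \text{(non-trivial kernel under a double integral)}
\end{align}
```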

    Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge

    In the context of complex applications from the engineering sciences, the solution of identification problems still poses a fundamental challenge. In terms of Uncertainty Quantification (UQ), the identification problem can be stated as a separation task for structural model uncertainty and parameter uncertainty. This thesis provides new insights and methods to tackle this challenge and demonstrates these developments on an industrial benchmark use case combining simulation and real-world measurement data. While significant progress has been made in the development of methods for model parameter inference, most of those methods still operate under the assumption of a perfect model. For a full, unbiased quantification of uncertainties in inverse problems, it is crucial to consider all uncertainty sources. The present work develops methods for the inference of deterministic and aleatoric model parameters from noisy measurement data, with explicit consideration of model discrepancy and additional quantification of the associated uncertainties using a Bayesian approach. A further important ingredient is surrogate modeling with Polynomial Chaos Expansion (PCE), enabling sampling from Bayesian posterior distributions with complex simulation models. Based on this, a novel identification strategy for the separation of different sources of uncertainty is presented. The discrepancy is approximated by orthogonal functions with iterative determination of the optimal model complexity, mitigating the problem's inherent identifiability issues. The model discrepancy quantification is complemented by studies that statistically approximate the numerical approximation error. Additionally, strategies for the approximation of aleatoric parameter distributions via hierarchical surrogate-based sampling are developed. The proposed method, based on Approximate Bayesian Computation (ABC) with summary statistics, estimates the posterior in a computationally efficient way, in particular for large data sets. Furthermore, the combination with divergence-based subset selection provides a novel methodology for UQ in stochastic inverse problems, inferring both model discrepancy and aleatoric parameter distributions. Detailed analysis in numerical experiments and successful application to the challenging industrial benchmark problem -- an electric motor test bench -- validates the proposed methods.
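
    A minimal sketch of the PCE surrogate step, assuming a scalar model of a single standard-normal parameter; the stand-in model, degree and training size are illustrative:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Minimal sketch of a PCE surrogate; the stand-in model, the polynomial
# degree and the training size are illustrative assumptions.
def expensive_model(theta):
    """Stand-in for an expensive simulator of one uncertain parameter."""
    return np.sin(theta) + 0.1 * theta ** 2

deg, n_train = 8, 200
rng = np.random.default_rng(1)
xi = rng.standard_normal(n_train)                 # samples of the standard-normal germ
V = He.hermevander(xi, deg)                       # design matrix of He_k(xi)
coef, *_ = np.linalg.lstsq(V, expensive_model(xi), rcond=None)

surrogate = lambda x: He.hermeval(x, coef)        # cheap stand-in inside MCMC/ABC
# Example: an un-normalized log-posterior using the surrogate instead of
# the simulator (y_obs and the noise scale s are made-up values).
y_obs, s = 0.9, 0.05
log_post = lambda th: -0.5 * ((y_obs - surrogate(th)) / s) ** 2 - 0.5 * th ** 2
```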

    TEMPO2, a new pulsar timing package. I: Overview

    Contemporary pulsar timing experiments have reached a sensitivity level where systematic errors introduced by existing analysis procedures are limiting the achievable science. We have developed tempo2, a new pulsar timing package that contains propagation and other relevant effects implemented at the 1 ns level of precision (a factor of ~100 more precise than previously obtainable). In contrast with earlier timing packages, tempo2 is compliant with the general relativistic framework of the IAU 1991 and 2000 resolutions and hence uses the International Celestial Reference System, Barycentric Coordinate Time and up-to-date precession, nutation and polar motion models. Tempo2 provides a generic and extensible set of tools to aid in the analysis and visualisation of pulsar timing data. We provide an overview of the timing model, its accuracy and its differences relative to earlier work. We also present a new scheme for predictive use of the timing model that removes existing processing artifacts by properly modelling the frequency dependence of pulse phase.
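
    This is not tempo2 code, but a minimal illustration of one frequency-dependent effect such a package must model: the standard cold-plasma dispersion delay (the DM and observing frequency below are illustrative values):

```python
# Not tempo2 code: a minimal illustration of the cold-plasma dispersion
# delay, one frequency-dependent effect a timing package must model.
K_DM = 4.148808  # dispersion constant, ms GHz^2 pc^-1 cm^3

def dispersion_delay_ms(dm, freq_ghz):
    """Extra pulse arrival delay (ms) for dispersion measure dm (pc cm^-3)."""
    return K_DM * dm / freq_ghz ** 2

# e.g. a DM of 50 pc cm^-3 delays a 1.4 GHz pulse by ~106 ms relative to
# infinite frequency (DM and frequency here are illustrative values).
print(dispersion_delay_ms(50.0, 1.4))
```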