
    Model Reduction Using Semidefinite Programming

    In this thesis, model reduction methods for linear time-invariant systems are investigated. The reduced models are computed using semidefinite programming. Two ways of imposing the stability constraint are considered; both approaches add a positivity constraint to the program. The input to the algorithms is a set of frequency response samples of the original model, which keeps the computational complexity relatively low for large-scale models. Additional properties can also be enforced on the reduced model, as long as they can be expressed as convex conditions. The semidefinite programs are solved using interior point methods, which are well developed, making the implementation simpler. A number of extensions to the proposed methods are studied, for example passive model reduction and frequency-weighted model reduction. An interesting extension is the reduction of parameterized linear time-invariant models, i.e. models whose state-space matrices depend on parameters. It is assumed that the parameters depend neither on the state variables nor on time. This extension is valuable in modeling, when a set of parameters has to be chosen to fit the required specifications. A good illustration of such a problem is the modeling of a spiral radio frequency inductor. The physical model depends nonlinearly on two parameters: wire width and wire separation. To choose both parameters optimally, a low-order model is usually created. The inductor modeling is considered as a case study in this thesis.
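    As a concrete illustration of the kind of positivity constraint involved, the following minimal sketch (not the thesis algorithm; the matrix A, the tolerance, and the use of cvxpy are assumptions for illustration) certifies stability of a candidate reduced state matrix via the continuous-time Lyapunov LMI, posed as a semidefinite feasibility program.

```python
import numpy as np
import cvxpy as cp

# Hypothetical candidate reduced-order state matrix (Hurwitz by construction).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
n = A.shape[0]
eps = 1e-6  # margin making the inequalities strict

# Stability of A is equivalent to feasibility of the Lyapunov LMI:
# find P = P' > 0 with A'P + PA < 0, a positivity-constrained SDP.
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("stable" if prob.status == cp.OPTIMAL else "no stability certificate")
```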

    Atomic Cluster Expansion without Self-Interaction

    The Atomic Cluster Expansion (ACE) (Drautz, Phys. Rev. B 99, 2019) has been widely applied in high energy physics, quantum mechanics and atomistic modeling to construct many-body interaction models respecting physical symmetries. Computational efficiency is achieved by allowing non-physical self-interaction terms in the model. We propose and analyze an efficient method to evaluate and parameterize an orthogonal, or non-self-interacting, cluster expansion model. We present numerical experiments demonstrating improved conditioning and more robust approximation properties than the original expansion in regression tasks, both in simplified toy problems and in applications to the machine learning of interatomic potentials.
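    To make the self-interaction terms concrete, here is a small numpy toy (our illustration, not the paper's method): the standard ACE pair correlations A_k A_l of the atomic base A_k = sum_i phi_k(r_i) contain spurious i = j contributions, which a purified feature set removes by subtracting the one-particle diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(0.5, 3.0, size=8)   # toy 1D neighbor distances
K = 4
# Hypothetical one-particle basis phi_k(r) = cos(k r), shape (K, n_neighbors).
Phi = np.array([np.cos(k * r) for k in range(1, K + 1)])

# Atomic base: A_k = sum_i phi_k(r_i).
A = Phi.sum(axis=1)

# Standard ACE 2-correlations A_k * A_l include the non-physical i == j
# self-interaction terms.
pair_with_self = np.outer(A, A)

# "Purified" 2-correlations: subtract the one-particle diagonal so that only
# genuine i != j pairs remain.
pair_purified = pair_with_self - Phi @ Phi.T

print(np.max(np.abs(pair_with_self - pair_purified)))  # size of self-interaction
```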

    An Evaluation of Multispectral Earth-Observing Multi-Aperture Telescope Designs for Target Detection and Characterization

    Earth-observing satellites have fundamental size and weight design limits since they must be launched into space. These limits constrain the spatial resolutions that such imaging systems can achieve with traditional telescope design strategies. Segmented and sparse-aperture imaging system designs may offer solutions to this problem. Segmented and sparse-aperture designs can be viewed as competing technologies; both approaches offer solutions for achieving finer resolution imaging from space. Segmented-aperture systems offer greater fill factor, and therefore greater signal-to-noise ratio (SNR), for a given encircled diameter than their sparse-aperture counterparts, though their larger segments often suffer from greater optical aberration than those of smaller, sparse designs. Regardless, the use of any multi-aperture imaging system comes at a price: the increased effective aperture size and improvement in spatial resolution are offset by a reduction in image quality due to signal loss (less photon-collecting area) and aberrations introduced by misalignments between individual sub-apertures as compared with monolithic collectors. Introducing multispectral considerations to a multi-aperture imaging system further starves the system of photons and reduces SNR in each spectral band. This work explores multispectral design considerations inherent in 9-element tri-arm sparse aperture, hexagonal-element segmented aperture, and monolithic aperture imaging systems. The primary thrust of this work is to develop an objective target detection-based metric that can be used to compare the achieved image utility of these competing multi-aperture telescope designs over a designated design parameter trade space. Characterizing complex multi-aperture system designs in this way may lead to improved assessment of programmatic risk and reward in the development of higher-resolution imaging capabilities. This method assumes that the stringent requirements for limiting the wavefront error (WFE) associated with multi-aperture imaging systems when producing imagery for visual assessment can be relaxed when employing target detection-based metrics for evaluating system utility. Simple target detection algorithms were used to determine Receiver Operating Characteristic (ROC) curves for the various simulated multi-aperture system designs, which could be used in an objective assessment of each system's ability to support target detection activities. Also, a set of regressed equations was developed that allows one to predict multi-aperture system target detection performance within the bounds of the designated trade space. Suitable metrics for comparing the shapes of two individual ROC curves, such as the total area under the curve (AUC) and the sample Pearson correlation coefficient, were found to be useful tools in validating the predicted results of the trade space regression models. Lastly, some simple rules of thumb relating to multi-aperture system design were identified from the inspection of various points of equivalency between competing system designs, as determined from the comparison metrics employed. The goal of this work, the development of a process for simulating multi-aperture imaging systems and comparing them in terms of target detection tasks, was successfully accomplished. The process presented here could be tailored to the needs of any specific multi-aperture development effort and used as a tool for system design engineers.
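    As a sketch of the comparison machinery described above, the toy Python below builds empirical ROC curves for two hypothetical detector-score distributions (Gaussian stand-ins for the simulated outputs of two competing aperture designs), then computes the total area under each curve and the sample Pearson correlation between the curves. The score model and parameter values are assumptions for illustration only.

```python
import numpy as np

def roc_curve(target_scores, clutter_scores, thresholds):
    """Empirical ROC: detection and false-alarm rates over a threshold sweep."""
    pd = np.array([(target_scores >= t).mean() for t in thresholds])
    pfa = np.array([(clutter_scores >= t).mean() for t in thresholds])
    return pfa, pd

def auc(pfa, pd):
    """Total area under the ROC curve via the trapezoid rule."""
    return abs(np.sum(np.diff(pfa) * (pd[1:] + pd[:-1]) / 2.0))

rng = np.random.default_rng(1)
thresholds = np.linspace(-4.0, 6.0, 200)

# Hypothetical scores: design A separates target from clutter better than B.
pfa_a, pd_a = roc_curve(rng.normal(2.0, 1.0, 5000),
                        rng.normal(0.0, 1.0, 5000), thresholds)
pfa_b, pd_b = roc_curve(rng.normal(1.5, 1.0, 5000),
                        rng.normal(0.0, 1.0, 5000), thresholds)

rho = np.corrcoef(pd_a, pd_b)[0, 1]  # sample Pearson correlation of the curves
print(f"AUC A = {auc(pfa_a, pd_a):.3f}, "
      f"AUC B = {auc(pfa_b, pd_b):.3f}, r = {rho:.3f}")
```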

    Reduced Order and Surrogate Models for Gravitational Waves

    We present an introduction to some of the state of the art in reduced order and surrogate modeling in gravitational wave (GW) science. Approaches that we cover include Principal Component Analysis, Proper Orthogonal Decomposition, the Reduced Basis approach, the Empirical Interpolation Method, Reduced Order Quadratures, and Compressed Likelihood evaluations. We divide the review into three parts: representation/compression of known data, predictive models, and data analysis. The targeted audience is that of practitioners in GW science, a field in which building predictive models and data analysis tools that are both accurate and fast to evaluate, especially when dealing with large amounts of data and intensive computations, is necessary yet challenging. As such, practical presentations and, at times, heuristic approaches are preferred over rigor when the latter is not available. This review aims to be self-contained within reasonable page limits, with few prerequisites (at the undergraduate level) in mathematics, scientific computing, and other disciplines. Emphasis is placed on optimality, as well as on the curse of dimensionality and approaches that might hold the promise of beating it. We also review most of the state of the art of GW surrogates. Some numerical algorithms, conditioning details, scalability, parallelization and other practical points are discussed. The approaches presented are to a large extent non-intrusive and data-driven and can therefore be applicable to other disciplines. We close with open challenges in high dimension surrogates, which are not unique to GW science.
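    As a minimal taste of the representation/compression part, the sketch below builds a proper-orthogonal-decomposition basis from a toy family of damped sinusoids (a hypothetical stand-in for a waveform catalog; the parameter range and tolerance are illustrative assumptions) and measures the projection error at an unseen parameter value.

```python
import numpy as np

# Toy "waveform catalog": damped sinusoids over a hypothetical parameter range.
t = np.linspace(0.0, 1.0, 512)
freqs = np.linspace(20.0, 40.0, 100)
snapshots = np.column_stack([np.sin(2*np.pi*f*t) * np.exp(-3*t) for f in freqs])

# POD/PCA: an orthonormal reduced basis from the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))  # keep modes above a relative tolerance
basis = U[:, :r]

# Projection error for a waveform at an unseen parameter value.
h = np.sin(2*np.pi*31.7*t) * np.exp(-3*t)
err = np.linalg.norm(h - basis @ (basis.T @ h)) / np.linalg.norm(h)
print(f"{r} basis vectors, relative projection error {err:.2e}")
```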

    Doctor of Philosophy

    Platelet aggregation, an important part of the development of blood clots, is a complex process involving both mechanical interaction between platelets and blood, and chemical transport on and off the surfaces of those platelets. Radial Basis Function (RBF) interpolation is a meshfree method for the interpolation of multidimensional scattered data, and therefore well-suited for the development of meshfree numerical methods. This dissertation explores the use of RBF interpolation for the simulation of both the chemistry and mechanics of platelet aggregation. We first develop a parametric RBF representation for closed platelet surfaces represented by scattered nodes in both two and three dimensions. We compare this new RBF model to Fourier models in terms of computational cost and errors in shape representation. We then augment the Immersed Boundary (IB) method, a method for fluid-structure interaction, with our RBF geometric model. We apply the resultant method to a simulation of platelet aggregation, and present comparisons against the traditional IB method. We next consider a two-dimensional problem where platelets are suspended in a stationary fluid, with chemical diffusion in the fluid and chemical reaction-diffusion on platelet surfaces. To tackle the latter, we propose a new method based on RBF-generated finite differences (RBF-FD) for solving partial differential equations (PDEs) on surfaces embedded in 2D domains. To robustly tackle the former, we remove a limitation of the Augmented Forcing method (AFM), a method for solving PDEs on domains containing curved objects, using RBF-based symmetric Hermite interpolation. Next, we extend our RBF-FD method to the numerical solution of PDEs on surfaces embedded in 3D domains, proposing a new method of stabilizing RBF-FD discretizations on surfaces. We perform convergence studies and present applications motivated by biology. We conclude with a summary of the thesis research and present an overview of future research directions, including spectrally-accurate projection methods, an extension of the Regularized Stokeslet method, RBF-FD for variable-coefficient diffusion, and boundary conditions for RBF-FD.
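    For readers unfamiliar with the building block, here is a minimal global RBF interpolation of scattered 2D data with a Gaussian kernel (the kernel choice, shape parameter, and test function are assumptions for illustration; the dissertation's methods are considerably more elaborate).

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(50, 2))               # scattered data nodes
f = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])  # sampled test function

eps = 2.0  # Gaussian shape parameter (an arbitrary illustrative choice)

def gauss_kernel(A, B):
    """Matrix of Gaussian RBF values between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-(eps ** 2) * d2)

# Interpolation weights solve the dense, symmetric RBF linear system.
w = np.linalg.solve(gauss_kernel(X, X), f)

# Evaluate the interpolant at a few query points and check the error.
Xq = rng.uniform(-1.0, 1.0, size=(5, 2))
approx = gauss_kernel(Xq, X) @ w
exact = np.sin(np.pi * Xq[:, 0]) * np.cos(np.pi * Xq[:, 1])
print(np.max(np.abs(approx - exact)))
```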

    Convex Identification of Stable Dynamical Systems

    This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g. unreliability of long-term, open-loop predictions), and nonconvexity of quality-of-fit criteria, such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP); however, the resulting SDPs have large dimension, limiting their practical utility. The first contribution of this thesis is a custom interior point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods including Nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation maximization (EM) algorithm to derive tight bounds on the likelihood function that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa. Finally, identification of positive systems is considered. Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
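    To illustrate why positive systems admit simpler analysis, the sketch below (an illustrative toy under assumed data, not the thesis parametrization) certifies stability of a Metzler matrix with a linear Lyapunov function V(x) = p'x, which requires only a linear program rather than a semidefinite one.

```python
import numpy as np
import cvxpy as cp

# Hypothetical Metzler (positive-system) state matrix.
A = np.array([[-2.0, 1.0],
              [0.5, -1.5]])

# For Metzler A, stability is equivalent to the existence of p > 0 with
# A'p < 0, i.e. V(x) = p'x decays on the positive orthant: a linear program.
p = cp.Variable(2)
prob = cp.Problem(cp.Minimize(0),
                  [p >= 1.0,              # p > 0, normalized away from zero
                   A.T @ p <= -1e-6])
prob.solve()
print("stable" if prob.status == cp.OPTIMAL else "no certificate")
```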

    Author index to volumes 301–400
