    Applying Schwarzschild's orbit superposition method to barred or non-barred disc galaxies

    We present an implementation of the Schwarzschild orbit superposition method which can be used for constructing self-consistent equilibrium models of barred or non-barred disc galaxies, or of elliptical galaxies with figure rotation. This is a further development of the publicly available code SMILE; its main improvements include a new efficient representation of an arbitrary gravitational potential using two-dimensional spline interpolation of Fourier coefficients in the meridional plane, as well as the ability to deal with rotation of the density profile and with multicomponent mass models. We compare several published methods for constructing composite axisymmetric disc-bulge-halo models and demonstrate that our code produces the models that are closest to equilibrium. We also apply it to create models of triaxial elliptical galaxies with cuspy density profiles and figure rotation, and find that such models can be found and are stable over many dynamical times in a wide range of pattern speeds and angular momenta, covering both the slow- and fast-rotator classes. We then attempt to create models of strongly barred disc galaxies, using an analytic three-component potential, and find that it is not possible to make a stable, dynamically self-consistent model for this density profile. Finally, we take snapshots of two N-body simulations of barred disc galaxies embedded in nearly spherical haloes, and construct equilibrium models using only information on the density profile of the snapshots. We demonstrate that such reconstructed models are in a near-stationary state, in contrast with the original N-body simulations, one of which displayed significant secular evolution. Comment: 15 pages, 9 figures; MNRAS, 450, 2842. The software is available at http://td.lpi.ru/~eugvas/smile
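    The potential representation described above can be illustrated with a short sketch: expand the potential in azimuthal harmonics and fit a two-dimensional spline to each Fourier coefficient on an (R, z) grid in the meridional plane. The toy potential, grid sizes, and number of harmonics below are illustrative assumptions, not the actual SMILE implementation.

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    def toy_potential(R, z, phi):
        # Flattened toy potential with a weak m=2 (bar-like) distortion
        a, b = 3.0, 0.3
        base = -1.0 / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)
        return base * (1.0 + 0.1 * np.cos(2 * phi) * np.exp(-R))

    m_max, nR, nz, nphi = 4, 40, 40, 64
    R = np.linspace(0.1, 10.0, nR)
    z = np.linspace(-5.0, 5.0, nz)
    phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)

    # One 2D spline per azimuthal Fourier coefficient Phi_m(R, z)
    coefs = {}
    for m in range(m_max + 1):
        grid = np.empty((nR, nz))
        for i, Ri in enumerate(R):
            for j, zj in enumerate(z):
                vals = toy_potential(Ri, zj, phi)
                grid[i, j] = np.mean(vals * np.cos(m * phi)) * (1 if m == 0 else 2)
        coefs[m] = RectBivariateSpline(R, z, grid)

    def potential(Rq, zq, phiq):
        # Reconstruct Phi(R, z, phi) = sum_m Phi_m(R, z) cos(m phi)
        return sum(coefs[m](Rq, zq, grid=False) * np.cos(m * phiq)
                   for m in range(m_max + 1))

    print(potential(2.0, 0.5, 0.3), toy_potential(2.0, 0.5, 0.3))  # should agree closely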

    An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE

    The objective is to develop a domain decomposition method via overlapping/embedding the component grids, which is to be used by upwind, multigrid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by the finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure, and conservation losses at the interfaces are minimal with this method. The multigrid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multigridding.
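    The flux transfer mentioned above can be sketched as follows: a receptor (fringe) point receives a value interpolated trilinearly from the eight corners of its donor cell on the other grid. The unit-cube local coordinates and 2x2x2 corner layout are assumptions made for illustration, not the MaGGiE data structures.

    import numpy as np

    def trilinear(corner_vals, xi, eta, zeta):
        # corner_vals[i, j, k] holds the donor-cell corner values;
        # (xi, eta, zeta) are the receptor point's local coordinates in [0, 1]^3.
        c = corner_vals
        c00 = c[0, 0, 0] * (1 - xi) + c[1, 0, 0] * xi
        c10 = c[0, 1, 0] * (1 - xi) + c[1, 1, 0] * xi
        c01 = c[0, 0, 1] * (1 - xi) + c[1, 0, 1] * xi
        c11 = c[0, 1, 1] * (1 - xi) + c[1, 1, 1] * xi
        c0 = c00 * (1 - eta) + c10 * eta
        c1 = c01 * (1 - eta) + c11 * eta
        return c0 * (1 - zeta) + c1 * zeta

    corners = np.arange(8, dtype=float).reshape(2, 2, 2)
    print(trilinear(corners, 0.5, 0.5, 0.5))  # 3.5, the cell-centre average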

    Data Driven Surrogate Based Optimization in the Problem Solving Environment WBCSim

    Large-scale, multidisciplinary engineering designs are difficult due to the complexity and dimensionality of these problems. Direct coupling between the analysis codes and the optimization routines can be prohibitively time consuming due to the complexity of the underlying simulation codes. One way of tackling this problem is by constructing computationally cheaper approximations of the expensive simulations that mimic the behavior of the simulation model as closely as possible. This paper presents a data-driven, surrogate-based optimization algorithm that uses a trust-region-based sequential approximate optimization (SAO) framework and a statistical sampling approach based on design of experiment (DOE) arrays. The algorithm is implemented using techniques from two packages, SURFPACK and SHEPPACK, which provide a collection of approximation algorithms to build the surrogates; three different DOE techniques are used to train the surrogates: full factorial (FF), Latin hypercube sampling (LHS), and central composite design (CCD). The results are compared with the optimization results obtained by directly coupling an optimizer with the simulation code. The biggest concern in using an SAO framework based on statistical sampling is the generation of the required database: as the number of design variables grows, the computational cost of generating the required database grows rapidly. A data-driven approach is proposed to tackle this situation, where the key idea is to run the expensive simulation if and only if a nearby data point does not exist in the cumulatively growing database. Over time the database matures and is enriched as more and more optimizations are performed. Results show that the proposed methodology dramatically reduces the total number of calls to the expensive simulation during the optimization process.
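    A minimal sketch of the data-driven gate described above: query the cumulative database first and call the expensive simulation only when no sufficiently close design point is already stored. The Euclidean distance test, tolerance, and in-memory lists are illustrative assumptions, not the WBCSim, SURFPACK, or SHEPPACK implementation.

    import numpy as np

    class SimulationDatabase:
        def __init__(self, expensive_sim, tol=1e-2):
            self.expensive_sim = expensive_sim   # callable: design vector -> response
            self.tol = tol                       # "nearby" radius in design space
            self.X, self.y = [], []

        def evaluate(self, x):
            x = np.asarray(x, dtype=float)
            if self.X:
                d = np.linalg.norm(np.array(self.X) - x, axis=1)
                i = int(np.argmin(d))
                if d[i] <= self.tol:             # reuse the stored result
                    return self.y[i]
            val = self.expensive_sim(x)          # otherwise pay for a real run
            self.X.append(x)
            self.y.append(val)
            return val

    db = SimulationDatabase(lambda x: float(np.sum(x**2)), tol=0.05)
    print(db.evaluate([0.30, 0.40]))   # triggers a simulation
    print(db.evaluate([0.31, 0.41]))   # reused from the database
    print(len(db.X))                   # one stored design point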

    Random Models in Nonlinear Optimization

    In recent years, there has been a tremendous increase in interest in applying techniques of deterministic optimization to stochastic settings, largely motivated by problems that come from machine learning domains. A natural question that arises in light of this interest is the extent to which iterative algorithms designed for deterministic (nonlinear, possibly non-convex) optimization must be modified in order to properly make use of inherently random information about a problem. This thesis is concerned with exactly this question, and adapts the model-based trust-region framework of derivative-free optimization (DFO) for use in situations where the objective function values, or the set of points selected by an algorithm for objective evaluation, are random. In the first part of this thesis, we consider an algorithmic framework called STORM (STochastic Optimization with Random Models), which, as an iterative method, is essentially identical to model-based trust-region methods for smooth DFO. However, by imposing fairly general probabilistic conditions related to the concept of full linearity on objective function models and objective function estimates, we prove that iterates of algorithms in the STORM framework exhibit almost sure convergence to first-order stationary points for a broad class of unconstrained stochastic functions. We then show that algorithms in the STORM framework enjoy the canonical rate of convergence for unconstrained non-convex optimization. Throughout the thesis, examples are provided demonstrating how the mentioned probabilistic conditions might be satisfied through particular choices of model-building and function value estimation. In the second part of the thesis, we consider a framework called manifold sampling, intended for unconstrained DFO problems where the objective is nonsmooth, but enough is known a priori about the structure of the nonsmoothness that one can classify a given queried point as belonging to a certain smooth manifold of the objective surface. We particularly examine the case of sums of absolute values of (non-convex) black-box functions. Although we assume in this work that the individual black-box functions can be deterministically evaluated, we consider a variant of manifold sampling wherein random queries are made in each iteration to enhance the algorithm's "awareness" of the diversity of manifolds in a neighborhood of the current iterate. We then combine the ideas of STORM and manifold sampling to yield a practical algorithm intended for non-convex $\ell_1$-regularized empirical risk minimization.
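    The flavour of a STORM-style method can be conveyed with a small sketch: build a regression model from randomly sampled points in the trust region, take a step against the model gradient, and accept or reject it using noisy function estimates. The linear model, sample counts, and update constants below are schematic assumptions, not the algorithm analysed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_f(x):
        # Stochastic zeroth-order oracle for f(x) = ||x||^2
        return float(np.sum(x**2)) + rng.normal(scale=1e-3)

    def storm_like(x, delta=1.0, iters=50):
        for _ in range(iters):
            # Random model: least-squares linear fit over points in the trust region
            Y = x + delta * rng.uniform(-1, 1, size=(2 * len(x) + 1, len(x)))
            F = np.array([noisy_f(y) for y in Y])
            A = np.hstack([np.ones((len(Y), 1)), Y - x])
            coef, *_ = np.linalg.lstsq(A, F, rcond=None)
            g = coef[1:]                                   # model gradient
            if np.linalg.norm(g) < 1e-8:
                break
            s = -delta * g / np.linalg.norm(g)             # Cauchy-like step
            # Accept or reject using averaged (noisy) estimates at x and x + s
            f0 = np.mean([noisy_f(x) for _ in range(5)])
            f1 = np.mean([noisy_f(x + s) for _ in range(5)])
            if f1 < f0 - 1e-4 * delta * np.linalg.norm(g):
                x, delta = x + s, min(2 * delta, 10.0)     # successful step
            else:
                delta *= 0.5                               # shrink the trust region
        return x

    print(storm_like(np.array([2.0, -1.5])))   # approaches the minimiser at the origin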

    MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization

    Composite convex optimization models arise in several applications, and are especially prevalent in inverse problems with a sparsity-inducing norm and in general convex optimization with simple constraints. The most widely used algorithms for convex composite models are accelerated first order methods; however, they can take a large number of iterations to compute an acceptable solution for large-scale problems. In this paper we propose to speed up first order methods by taking advantage of the structure present in many applications, and in image processing in particular. Our method is based on multi-level optimization methods and exploits the fact that many applications that give rise to large scale models can be modelled using varying degrees of fidelity. We use Nesterov's acceleration techniques together with the multi-level approach to achieve an $\mathcal{O}(1/\sqrt{\epsilon})$ convergence rate, where $\epsilon$ denotes the desired accuracy. The proposed method has a better convergence rate than any other existing multi-level method for convex problems, and in addition has the same rate as accelerated methods, which is known to be optimal for first-order methods. Moreover, as our numerical experiments show, on large-scale face recognition problems our algorithm is several times faster than the state of the art.
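    For reference, the accelerated first order baseline that such multi-level schemes build on can be sketched as a Nesterov-accelerated proximal gradient (FISTA-style) loop for an l1-regularised least-squares problem, a typical composite model. The multi-level coarse corrections that distinguish MAGMA are omitted; the problem data and step size below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    lam = 0.1
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = y = np.zeros(100)
    t = 1.0
    for _ in range(300):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + (t - 1) / t_new * (x_new - x)       # Nesterov extrapolation
        x, t = x_new, t_new

    print(np.round(x[:8], 3))   # first five entries near 1, the rest near 0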

    The Helicopter Antenna Radiation Prediction Code (HARP)

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to help the user describe the helicopter geometry, select the method of computation, construct the desired high- or low-frequency model, and display the results.

    Reduced Order Techniques for Sensitivity Analysis and Design Optimization of Aerospace Systems

    This work proposes a new method for using reduced order models in lieu of high fidelity analysis during the sensitivity analysis step of gradient-based design optimization. The method offers a reduction in the computational cost of finite-difference-based sensitivity analysis in that context. It relies on interpolating reduced order models that are based on proper orthogonal decomposition. The interpolation process is performed using radial basis functions and Grassmann manifold projection, and does not require additional high fidelity analyses to interpolate a reduced order model for new points in the design space. The interpolated models are used specifically for points in the finite difference stencil during sensitivity analysis. The proposed method is applied to an airfoil shape optimization (ASO) problem and a transport wing optimization (TWO) problem. The errors associated with the reduced order models themselves, as well as the gradients calculated from them, are evaluated. The effects of the method on the overall optimization path, computation times, and function counts are also examined. The ASO results indicate that the proposed scheme is a viable method for reducing the computational cost of these optimizations. They also indicate that the adaptive step is an effective method of improving interpolated gradient accuracy. The TWO results indicate that the interpolation accuracy can have a strong impact on the optimization search direction.
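    The core idea, evaluating finite-difference stencil points on an interpolated surrogate rather than the high fidelity model, can be sketched as below. A scipy RBF interpolant over precomputed samples stands in for the POD/Grassmann reduced order model interpolation; the quadratic test function, sample count, and step size are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def high_fidelity(x):
        # Stand-in for an expensive solver
        return float((x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2)

    rng = np.random.default_rng(2)
    X = rng.uniform(-2, 2, size=(200, 2))     # precomputed design-space samples
    F = np.array([high_fidelity(x) for x in X])
    surrogate = RBFInterpolator(X, F)

    def fd_gradient(x, h=1e-2):
        # Central differences evaluated on the surrogate, not the full model
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (surrogate([x + e])[0] - surrogate([x - e])[0]) / (2 * h)
        return g

    x0 = np.array([0.0, 0.0])
    print(fd_gradient(x0))                    # close to the exact gradient [-2, 2]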