    Multi-Objective Parametric Query Optimization

    Classical query optimization compares query plans according to one cost metric and associates each plan with a constant cost value. In this paper, we introduce the Multi-Objective Parametric Query Optimization (MPQ) problem, where query plans are compared according to multiple cost metrics and the cost of a given plan under a given metric is modeled as a function of multiple parameters. The cost metrics may, for instance, include execution time or monetary fees; a parameter may represent the selectivity of a query predicate that is unspecified at optimization time. MPQ generalizes parametric query optimization (which allows multiple parameters but only one cost metric) and multi-objective query optimization (which allows multiple cost metrics but no parameters). We formally analyze the novel MPQ problem and show why existing algorithms are inapplicable. We present a generic algorithm for MPQ and a specialized version for MPQ with piecewise-linear plan cost functions. We prove that both algorithms find all relevant query plans and experimentally evaluate the performance of our second algorithm in a Cloud computing scenario.
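
    The plan-comparison step at the heart of MPQ can be pictured with a small, hypothetical Python sketch (not the paper's algorithm): two candidate plans with parameter-dependent, multi-metric costs are compared by Pareto dominance over a sampled grid of parameter values. The plan names, cost functions, and sampling grid below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical plans: each maps a parameter vector (here a single predicate
# selectivity) to a vector of cost metrics (execution time, monetary fee).
def plan_a_cost(params):
    sel, = params
    return np.array([10.0 + 50.0 * sel,   # execution time grows with selectivity
                     0.02])               # flat monetary fee

def plan_b_cost(params):
    sel, = params
    return np.array([30.0 + 5.0 * sel,    # slower start, flatter growth
                     0.05])

def dominates(cost_f, cost_g, param_grid):
    """Plan f dominates plan g if it is nowhere worse in any metric for any
    sampled parameter value and strictly better somewhere."""
    no_worse, strictly_better = True, False
    for params in param_grid:
        cf, cg = cost_f(params), cost_g(params)
        if np.any(cf > cg):
            no_worse = False
        if np.any(cf < cg):
            strictly_better = True
    return no_worse and strictly_better

# Sample the unspecified selectivity parameter on a grid.
grid = [(s,) for s in np.linspace(0.0, 1.0, 21)]
print("A dominates B:", dominates(plan_a_cost, plan_b_cost, grid))
print("B dominates A:", dominates(plan_b_cost, plan_a_cost, grid))
```

    In this toy instance neither plan dominates the other, so both would remain relevant candidates for a multi-objective parametric optimizer.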

    Optimal web-scale tiering as a flow problem

    We present a fast online solver for large-scale parametric max-flow problems as they occur in portfolio optimization, inventory management, computer vision, and logistics. Our algorithm solves an integer linear program in an online fashion. It exploits total unimodularity of the constraint matrix and a Lagrangian relaxation to solve the problem as a convex online game. The algorithm generates approximate solutions of max-flow problems by performing stochastic gradient descent on a set of flows. We apply the algorithm to optimize the tier arrangement of over 84 million web pages on a layered set of caches so as to serve an incoming query stream optimally.
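
    To make the Lagrangian-relaxation idea concrete, here is a toy Python sketch (not the paper's solver): pages are assigned to capacity-limited cache tiers, the capacity constraints are priced by Lagrange multipliers, and the multipliers are updated by projected stochastic subgradient steps over mini-batches of pages. The instance size, benefit model, and step sizes are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: assign n_pages to one of n_tiers caches. Each page has a
# benefit per tier (e.g. expected hit rate times tier speed); each tier has a
# capacity. This stands in for the web-scale tiering problem.
n_pages, n_tiers = 10_000, 3
benefit = rng.random((n_pages, n_tiers)) * np.array([3.0, 2.0, 1.0])
capacity = np.array([1_000, 3_000, 6_000])

# Lagrangian relaxation: dual prices lam penalize tier overuse. Pages then
# decouple, and each picks argmax_t (benefit - lam[t]); lam is updated by
# projected stochastic subgradient steps on mini-batches of pages.
lam = np.zeros(n_tiers)
step, batch = 0.05, 512

for it in range(200):
    idx = rng.integers(0, n_pages, size=batch)           # stream a mini-batch
    choice = np.argmax(benefit[idx] - lam, axis=1)        # best tier at current prices
    usage = np.bincount(choice, minlength=n_tiers)
    # Subgradient of the dual: estimated load minus capacity (scaled to full size).
    grad = usage * (n_pages / batch) - capacity
    lam = np.maximum(0.0, lam + step / np.sqrt(it + 1) * grad)

# Approximate primal assignment at the learned prices.
assignment = np.argmax(benefit - lam, axis=1)
print("tier loads:", np.bincount(assignment, minlength=n_tiers), "capacities:", capacity)
```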

    B-spline techniques for volatility modeling

    This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. We use an extension of classical B-splines obtained by including basis functions with infinite support. We first revisit the application of shape-constrained B-splines to the estimation of conditional expectations, not merely from a scatter plot but also from the given marginal distributions. An application is the Monte Carlo calibration of stochastic local volatility models by Markov projection. We then present a new technique for the calibration of an implied volatility surface to sparse option data. We use a B-spline parameterization of the Radon-Nikodym derivative of the underlying's risk-neutral probability density with respect to a roughly calibrated base model. We show that this method provides smooth arbitrage-free implied volatility surfaces. Finally, we sketch a Galerkin method with B-spline finite elements for the solution of the partial differential equation satisfied by the Radon-Nikodym derivative.
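
    As a generic illustration of the kind of smooth B-spline parameterization used above (not the paper's calibration procedure), the sketch below fits a cubic B-spline to noisy implied-volatility quotes by ordinary least squares. The synthetic data, knot placement, and absence of shape constraints are simplifying assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# Synthetic "market" data: noisy implied volatilities over strike.
rng = np.random.default_rng(1)
strikes = np.linspace(80.0, 120.0, 40)
true_vol = 0.2 + 0.0001 * (strikes - 100.0) ** 2          # a simple smile shape
vols = true_vol + 0.005 * rng.standard_normal(strikes.size)

# Cubic B-spline basis on a clamped knot vector with a few interior knots.
k = 3
interior = np.linspace(85.0, 115.0, 5)
t = np.concatenate(([strikes[0]] * (k + 1), interior, [strikes[-1]] * (k + 1)))
n_basis = len(t) - k - 1

# Design matrix: column j is the j-th basis function evaluated at the strikes
# (obtained by giving the spline a unit coefficient vector).
B = np.column_stack([
    BSpline(t, np.eye(n_basis)[j], k)(strikes) for j in range(n_basis)
])

# Ordinary least squares for the spline coefficients, then build the fitted curve.
coef, *_ = np.linalg.lstsq(B, vols, rcond=None)
fitted = BSpline(t, coef, k)
print("max fit error:", np.max(np.abs(fitted(strikes) - vols)))
```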

    Efficient Resolution of Anisotropic Structures

    We highlight some recent developments concerning the sparse representation of possibly high-dimensional functions exhibiting strong anisotropic features and low regularity in isotropic Sobolev or Besov scales. Specifically, we focus on the solution of transport equations which exhibit propagation of singularities and where, additionally, high-dimensionality enters when the convection field, and hence the solutions, depend on parameters varying over some compact set. Important constituents of our approach are directionally adaptive discretization concepts motivated by compactly supported shearlet systems, and well-conditioned stable variational formulations that support trial spaces with anisotropic refinements with arbitrary directionalities. We prove that they provide tight error-residual relations, which are used to derive rigorously founded adaptive refinement schemes that converge in $L_2$. Moreover, in the context of parameter-dependent problems we discuss two approaches serving different purposes and working under different regularity assumptions. For frequent-query problems, making essential use of the novel well-conditioned variational formulations, a new Reduced Basis Method is outlined which exhibits a certain rate-optimal performance for indefinite, unsymmetric, or singularly perturbed problems. For the radiative transfer problem with scattering, a sparse tensor method is presented which mitigates or even overcomes the curse of dimensionality under suitable (so far still isotropic) regularity assumptions. Numerical examples for both methods illustrate the theoretical findings.
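
    The Reduced Basis idea mentioned above can be illustrated with a minimal greedy-selection sketch in Python. It uses exact projection errors on a toy snapshot family rather than the residual-based surrogates and variational formulations developed in the paper; the snapshot family, dimensions, and number of steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_train = 500, 200
params = np.linspace(0.0, 1.0, n_train)

# Toy "solution manifold": vectors depending smoothly on a parameter mu,
# standing in for discretized parameter-dependent PDE solutions.
x = np.linspace(0.0, 1.0, n_dof)
snapshots = np.array([np.exp(-((x - mu) ** 2) / 0.02) for mu in params])  # (n_train, n_dof)

def projection_errors(S, Q):
    """Norm of each snapshot minus its orthogonal projection onto span(Q)."""
    if Q.shape[1] == 0:
        return np.linalg.norm(S, axis=1)
    return np.linalg.norm(S - (S @ Q) @ Q.T, axis=1)

# Greedy loop: repeatedly add the worst-approximated snapshot to the basis.
Q = np.empty((n_dof, 0))
for _ in range(10):
    errs = projection_errors(snapshots, Q)
    i = int(np.argmax(errs))
    v = snapshots[i] - Q @ (Q.T @ snapshots[i])   # Gram-Schmidt against current basis
    Q = np.column_stack([Q, v / np.linalg.norm(v)])
    print(f"dim {Q.shape[1]:2d}: max projection error {errs[i]:.3e}")
```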

    Nonlinear Methods for Model Reduction

    The usual approach to model reduction for parametric partial differential equations (PDEs) is to construct a linear space $V_n$ which approximates well the solution manifold $\mathcal{M}$ consisting of all solutions $u(y)$ with $y$ the vector of parameters. This linear reduced model $V_n$ is then used for various tasks such as building an online forward solver for the PDE or estimating parameters from data observations. It is well understood in other problems of numerical computation that nonlinear methods such as adaptive approximation, $n$-term approximation, and certain tree-based methods may provide improved numerical efficiency. For model reduction, a nonlinear method would replace the linear space $V_n$ by a nonlinear space $\Sigma_n$. This idea has already been suggested in recent papers on model reduction where the parameter domain is decomposed into a finite number of cells and a linear space of low dimension is assigned to each cell. Up to this point, little is known in terms of performance guarantees for such a nonlinear strategy. Moreover, most numerical experiments for nonlinear model reduction use a parameter dimension of only one or two. In this work, a step is made towards a more cohesive theory for nonlinear model reduction. Framing these methods in the general setting of library approximation allows us to give a first comparison of their performance with that of standard linear approximation for any general compact set. We then turn to the study of these methods for solution manifolds of parametrized elliptic PDEs. We study a very specific example of library approximation where the parameter domain is split into a finite number $N$ of rectangular cells and where different reduced affine spaces of dimension $m$ are assigned to each cell. The performance of this nonlinear procedure is analyzed from the viewpoint of accuracy of approximation versus $m$ and $N$.
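
    A minimal sketch of the library idea discussed above, under simplifying assumptions (a one-dimensional parameter, synthetic snapshots, and plain SVD-based local affine spaces rather than the reduced spaces analyzed in the paper): split the parameter domain into $N$ cells, fit a dimension-$m$ affine space per cell, and compare the worst-case approximation error against a single global affine space of the same dimension.

```python
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_samples, N, m = 400, 400, 4, 3

# Synthetic snapshot family whose character changes across the parameter range,
# so a single low-dimensional space fits it poorly.
x = np.linspace(0.0, 1.0, n_dof)
params = np.linspace(0.0, 1.0, n_samples)
snapshots = np.array([np.sin(2 * np.pi * (1.0 + 3.0 * mu) * x) for mu in params])

def affine_space(S, m):
    """Mean plus the m leading right singular directions of the centered data."""
    mean = S.mean(axis=0)
    _, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
    return mean, Vt[:m]

def worst_error(S, mean, V):
    """Largest distance from a snapshot to the affine space mean + span(V)."""
    C = S - mean
    return np.max(np.linalg.norm(C - C @ V.T @ V, axis=1))

# Library of N local affine spaces vs. one global affine space of dimension m.
cells = np.array_split(np.arange(n_samples), N)
local_err = max(worst_error(snapshots[c], *affine_space(snapshots[c], m)) for c in cells)
global_err = worst_error(snapshots, *affine_space(snapshots, m))
print(f"global dim-{m} space: {global_err:.3e}   library of {N} dim-{m} spaces: {local_err:.3e}")
```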