
    B-spline techniques for volatility modeling

    This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. We use an extension of classical B-splines obtained by including basis functions with infinite support. We first revisit the application of shape-constrained B-splines to the estimation of conditional expectations, not merely from a scatter plot but also from the given marginal distributions. An application is the Monte Carlo calibration of stochastic local volatility models by Markov projection. Then we present a new technique for the calibration of an implied volatility surface to sparse option data. We use a B-spline parameterization of the Radon-Nikodym derivative of the underlying's risk-neutral probability density with respect to a roughly calibrated base model. We show that this method provides smooth arbitrage-free implied volatility surfaces. Finally, we sketch a Galerkin method with B-spline finite elements for the solution of the partial differential equation satisfied by the Radon-Nikodym derivative. (Comment: 25 pages.)
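    As background, the classical B-spline basis underlying such parameterizations can be evaluated with the Cox-de Boor recursion. The following is a minimal self-contained sketch of that standard recursion (it does not include the paper's extended basis functions with infinite support); the function name and knot vector are illustrative:

    ```python
    def bspline_basis(i, k, t, x):
        """Evaluate the i-th B-spline basis function of degree k at x
        over the knot vector t, via the Cox-de Boor recursion."""
        if k == 0:
            return 1.0 if t[i] <= x < t[i + 1] else 0.0
        left = 0.0
        if t[i + k] != t[i]:  # guard against zero-length knot spans
            left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
        right = 0.0
        if t[i + k + 1] != t[i + 1]:
            right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
        return left + right

    # Cubic basis on a clamped knot vector: n = len(t) - k - 1 = 7 basis functions.
    t = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
    k = 3
    n = len(t) - k - 1
    total = sum(bspline_basis(i, k, t, 1.5) for i in range(n))  # partition of unity
    ```

    The partition-of-unity property (the basis functions sum to one on the interior of the knot span) is what makes B-spline coefficients behave like local control values, which is convenient when imposing shape constraints.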

    Nonlinear Structural Functional Models

    A common objective in functional data analyses is the registration of data curves and estimation of the locations of their salient structures, such as spikes or local extrema. Existing methods separate curve modeling and structure estimation into disjoint steps, optimize different criteria for estimation, or recast the problem into the testing framework. Moreover, curve registration is often implemented in a pre-processing step. The aim of this dissertation is to ameliorate the shortcomings of existing methods through the development of unified nonlinear modeling procedures for the analysis of structural functional data. A general model-based framework is proposed to unify registration and estimation of curves and their structures. In particular, this work focuses on three specific research problems. First, a Sparse Semiparametric Nonlinear Model (SSNM) is proposed to jointly register curves, perform model selection, and estimate the features of sparsely-structured functional data. The SSNM is fitted to chromatographic data from a study of the composition of Chinese rhubarb. Next, the SSNM is extended to the nonlinear mixed effects setting to enable the comparison of sparse structures across group-averaged curves. The model is utilized to compare compositions of medicinal herbs collected from two groups of production sites. Finally, a Piecewise Monotonic B-spline Model (PMBM) is proposed to estimate the locations of local extrema in a curve. The PMBM is applied to MRI data from a study of gray matter growth in the brain.
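    The extremum-location task can be illustrated with a minimal finite-difference scan over a sampled curve. This is only a toy sketch with hypothetical names (`local_extrema`), not the PMBM itself, which instead encodes monotonicity constraints through B-spline coefficients:

    ```python
    import math

    def local_extrema(xs, ys):
        """Report interior points where the discrete derivative changes sign,
        as ('max' or 'min', x-location, value) tuples."""
        found = []
        for i in range(1, len(ys) - 1):
            if ys[i - 1] < ys[i] > ys[i + 1]:
                found.append(("max", xs[i], ys[i]))
            elif ys[i - 1] > ys[i] < ys[i + 1]:
                found.append(("min", xs[i], ys[i]))
        return found

    xs = [i * 0.01 for i in range(629)]   # grid covering [0, 2*pi]
    ys = [math.sin(x) for x in xs]        # noiseless toy curve
    extrema = local_extrema(xs, ys)       # one maximum, one minimum
    ```

    On noisy data, such a naive scan reports every wiggle as an extremum; a model such as the PMBM, which constrains the fitted curve to be monotone between estimated change points, avoids that failure mode.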

    A Lower Bound for Splines on Tetrahedral Vertex Stars

    A tetrahedral complex all of whose tetrahedra meet at a common vertex is called a vertex star. Vertex stars are a natural generalization of planar triangulations, and understanding splines on vertex stars is a crucial step toward analyzing trivariate splines. It is particularly difficult to compute the dimension of splines on vertex stars in which the vertex is completely surrounded by tetrahedra; we call these closed vertex stars. A formula due to Alfeld, Neamtu, and Schumaker gives the dimension of splines whose derivatives up to order r are continuous on closed vertex stars of degree at least 3r + 2. We show that this formula is a lower bound on the dimension of splines of degree at least (3r + 2)/2. Our proof uses apolarity and the so-called Waldschmidt constant of the set of points dual to the interior faces of the vertex star. We furthermore observe that arguments of Alfeld, Schumaker, and Whiteley imply that the only splines of degree at most (3r + 1)/2 on a generic closed vertex star are global polynomials.

    Spike-and-Slab Priors for Function Selection in Structured Additive Regression Models

    Structured additive regression provides a general framework for complex Gaussian and non-Gaussian regression models, with predictors comprising arbitrary combinations of nonlinear functions and surfaces, spatial effects, varying coefficients, random effects and further regression terms. The large flexibility of structured additive regression makes function selection a challenging and important task, aiming at (1) selecting the relevant covariates, (2) choosing an appropriate and parsimonious representation of the impact of covariates on the predictor and (3) determining the required interactions. We propose a spike-and-slab prior structure for function selection that allows us to include or exclude single coefficients as well as blocks of coefficients representing specific model terms. A novel multiplicative parameter expansion is required to obtain good mixing and convergence properties in a Markov chain Monte Carlo simulation approach and is shown to induce desirable shrinkage properties. In simulation studies and with (real) benchmark classification data, we investigate sensitivity to hyperparameter settings and compare performance to competitors. The flexibility and applicability of our approach are demonstrated in an additive piecewise exponential model with time-varying effects for right-censored survival times of intensive care patients with sepsis. Geoadditive and additive mixed logit model applications are discussed in an extensive appendix.
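    The spike-and-slab idea can be sketched as a two-component mixture prior on a coefficient: a Bernoulli indicator selects either a "spike" concentrated near zero (effect excluded) or a diffuse "slab" (effect included). The following is a minimal prior-sampling sketch with assumed hyperparameter names (`w`, `sd_spike`, `sd_slab`); it uses a continuous spike and does not implement the paper's multiplicative parameter expansion or the blockwise selection of model terms:

    ```python
    import random

    def draw_spike_slab(rng, w=0.5, sd_spike=1e-3, sd_slab=1.0):
        """One prior draw: gamma ~ Bernoulli(w) selects the slab (wide normal)
        or the spike (narrow normal) for the coefficient beta."""
        gamma = 1 if rng.random() < w else 0
        beta = rng.gauss(0.0, sd_slab if gamma else sd_spike)
        return gamma, beta

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    rng = random.Random(42)
    draws = [draw_spike_slab(rng) for _ in range(10000)]
    incl_rate = sum(g for g, _ in draws) / len(draws)      # close to w
    spike_betas = [b for g, b in draws if g == 0]          # tightly shrunk
    slab_betas = [b for g, b in draws if g == 1]           # essentially unshrunk
    ```

    In a full MCMC scheme the indicators are updated jointly with the coefficients, so the posterior inclusion probability of each term serves as the function-selection criterion.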

    Fast DD-classification of functional data

    A fast nonparametric procedure for classifying functional data is introduced. It consists of a two-step transformation of the original data plus a classifier operating on a low-dimensional hypercube. The functional data are first mapped into a finite-dimensional location-slope space and then transformed by a multivariate depth function into the DD-plot, which is a subset of the unit hypercube. This transformation yields a new notion of depth for functional data. Three alternative depth functions are employed for this, as well as two rules for the final classification on [0,1]^q. The resulting classifier has to be cross-validated over a small range of parameters only, which is restricted by a Vapnik-Chervonenkis bound. The entire methodology does not involve smoothing techniques, is completely nonparametric and allows us to achieve Bayes optimality under standard distributional settings. It is robust, efficiently computable, and has been implemented in an R environment. Applicability of the new approach is demonstrated by simulations as well as a benchmark study.
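    The depth transform behind a DD-plot can be illustrated with a simple univariate Mahalanobis-type depth, D(x) = 1 / (1 + (x - mu)^2 / sigma^2): each observation is mapped to its pair of depths with respect to the two training classes, giving a point in [0,1]^2, and the maximum-depth rule assigns it to the deeper class. This is only a toy sketch with hypothetical names (`depth_1d`, `dd_classify`); the paper's procedure operates on the location-slope representation of curves and uses other depth notions as well:

    ```python
    def depth_1d(x, sample):
        """Mahalanobis-type depth of x w.r.t. a univariate sample:
        1 / (1 + squared standardized distance to the sample mean)."""
        mu = sum(sample) / len(sample)
        var = sum((s - mu) ** 2 for s in sample) / len(sample)
        return 1.0 / (1.0 + (x - mu) ** 2 / var)

    def dd_classify(x, class_a, class_b):
        """Maximum-depth rule on the DD-plot point (da, db)."""
        da, db = depth_1d(x, class_a), depth_1d(x, class_b)
        return ("A" if da >= db else "B"), (da, db)

    a = [0.0, 1.0, 2.0]   # toy class-A sample
    b = [5.0, 6.0, 7.0]   # toy class-B sample
    label, (da, db) = dd_classify(1.0, a, b)   # point at class A's center
    ```

    Because depths are invariant under affine transformations of the data, the classifier in the DD-plot inherits robustness properties that a raw-coordinate classifier would lack.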