
    Incorporating statistical model error into the calculation of acceptability prices of contingent claims

    The determination of acceptability prices of contingent claims requires the choice of a stochastic model for the underlying asset price dynamics. Given this model, optimal bid and ask prices can be found by stochastic optimization. However, the model for the underlying asset price process is typically based on data and found by a statistical estimation procedure. We define a confidence set of possible estimated models by a nonparametric neighborhood of a baseline model. This neighborhood serves as the ambiguity set for a multi-stage stochastic optimization problem under model uncertainty. We obtain distributionally robust solutions of the acceptability pricing problem and derive the dual problem formulation. Moreover, we prove a general large deviations result for the nested distance, which allows us to relate the bid and ask prices under model ambiguity to the quality of the observed data.
    Comment: 27 pages, 2 figures
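    To make the setup concrete, the following is a schematic formulation (an illustration, not the paper's exact statement): the ambiguity set is a ball, in the nested distance $d$, of radius $\epsilon$ around the estimated baseline model $\hat P$, and the robust ask price is the smallest initial capital for which the hedging shortfall stays acceptable under every model in the ball. The acceptability functional $\mathcal{A}_P$, the trading strategy $\xi$, and the payoff $C$ are generic placeholders.
    \[
    \mathcal{P}_\epsilon = \{\, P : d(P,\hat P) \le \epsilon \,\}, \qquad
    \pi^{\mathrm{ask}}(\epsilon) \;=\; \min_{x_0,\,\xi}\Big\{\, x_0 \;:\; \mathcal{A}_P\Big(x_0 + \sum_{t=1}^{T}\xi_{t-1}\cdot(S_t - S_{t-1}) - C\Big) \ge 0 \ \text{ for all } P \in \mathcal{P}_\epsilon \,\Big\},
    \]
    with the bid price defined symmetrically as the largest amount a buyer can pay while the hedged position $-x_0 + \sum_t \xi_{t-1}\cdot(S_t - S_{t-1}) + C$ remains acceptable under every $P \in \mathcal{P}_\epsilon$. A large deviations result for the nested distance then ties the radius $\epsilon$, and hence the width of the bid-ask spread, to the amount and quality of the observed data.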

    Calibration of Distributionally Robust Empirical Optimization Models

    We study the out-of-sample properties of robust empirical optimization problems with smooth $\phi$-divergence penalties and smooth concave objective functions, and develop a theory for data-driven calibration of the non-negative "robustness parameter" $\delta$ that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a "little bit of robustness" (i.e., $\delta$ small and positive) is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods such as the bootstrap. Our examples show that robust solutions resulting from "open loop" calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out-of-sample, while those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap), with no regard for the variance, are often insufficiently robust.
    Comment: 51 pages
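    As an illustration of the calibration idea (a minimal sketch, not the authors' code), the snippet below specializes the penalty to the Kullback-Leibler divergence, for which the penalized worst-case expected reward has the closed form $-\tfrac{1}{\delta}\log \mathbb{E}_{P_n}[e^{-\delta f}]$, uses a toy newsvendor reward in place of the paper's general smooth concave objective, and traces out an approximate robust mean-variance frontier over a grid of $\delta$ values via the bootstrap. All problem data, bounds, and function names are hypothetical.
```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def reward(x, xi):
    """Toy newsvendor-style reward f(x, xi); a hypothetical stand-in
    for the smooth concave objective in the paper."""
    price, cost = 1.0, 0.6
    return price * np.minimum(x, xi) - cost * x

def robust_objective(x, xi, delta):
    """KL-penalized worst-case expected reward.

    For the KL divergence, min_Q E_Q[f] + (1/delta) KL(Q || P_n)
    equals -(1/delta) * log E_{P_n}[exp(-delta * f)];
    delta = 0 recovers the plain empirical mean.
    """
    f = reward(x, xi)
    if delta == 0.0:
        return f.mean()
    a = -delta * f
    m = a.max()                      # log-sum-exp trick for stability
    return -(m + np.log(np.exp(a - m).mean())) / delta

def robust_solution(xi, delta):
    """Maximize the worst-case expected reward over the decision x."""
    res = minimize_scalar(lambda x: -robust_objective(x, xi, delta),
                          bounds=(0.0, 10.0), method="bounded")
    return res.x

def bootstrap_frontier(xi, deltas, B=200):
    """Approximate robust mean-variance frontier: for each delta, refit on
    bootstrap resamples and record the mean and variance of the per-scenario
    reward evaluated on the original sample (a proxy for out-of-sample)."""
    frontier = []
    for delta in deltas:
        means, variances = [], []
        for _ in range(B):
            train = rng.choice(xi, size=xi.size, replace=True)
            x_hat = robust_solution(train, delta)
            r = reward(x_hat, xi)
            means.append(r.mean())
            variances.append(r.var())
        frontier.append((delta, np.mean(means), np.mean(variances)))
    return frontier

if __name__ == "__main__":
    xi = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # observed "data"
    for delta, m, v in bootstrap_frontier(xi, [0.0, 0.1, 0.5, 1.0]):
        print(f"delta={delta:4.2f}  mean reward={m:.3f}  reward variance={v:.4f}")
```
    In the spirit of the abstract, $\delta$ would then be chosen at the point on this frontier where further variance reduction starts to cost noticeably in mean reward, rather than being fixed a priori by a confidence level chosen independently of the data and objective.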