306,949 research outputs found
Convenient Multiple Directions of Stratification
This paper investigates the use of multiple directions of stratification as a
variance reduction technique for Monte Carlo simulations of path-dependent
options driven by Gaussian vectors. The precision of the method depends on the
choice of the directions of stratification and the allocation rule within each
stratum. Several choices have been proposed but, even though they provide variance
reduction, their implementation is computationally intensive and not applicable
to realistic payoffs, in particular not to Asian options with a barrier.
Moreover, all these previously published methods employ orthogonal directions
for multiple stratification. In this work we investigate the use of algorithms
producing convenient directions, generally non-orthogonal, combining a lower
computational cost with a comparable variance reduction. In addition, we study
the accuracy of optimal allocation in terms of variance reduction compared to
Latin Hypercube Sampling. We consider the directions obtained by the Linear
Transformation and the Principal Component Analysis. We introduce a new
procedure based on the Linear Approximation of the explained variance of the
payoff using the law of total variance. In addition, we present a novel
algorithm that correctly generates normal vectors stratified along
non-orthogonal directions. Finally, we illustrate the efficiency of these
algorithms in the computation of the price of different path-dependent options
with and without barriers in the Black-Scholes and in the Cox-Ingersoll-Ross
markets.
Comment: 21 pages, 11 tables
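The core primitive behind such schemes can be sketched as follows: generating standard normal vectors whose projection onto a single unit direction is stratified into equiprobable bins, the building block that multiple-direction methods extend. This is a minimal illustration, not the paper's algorithm; the direction, stratum count, and payoff below are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def stratified_normals(u, n_strata, n_per_stratum):
    """Draw N(0, I_d) vectors whose projection onto the unit direction u
    is stratified into equiprobable bins (proportional allocation)."""
    u = np.asarray(u, float)
    u = u / np.linalg.norm(u)
    d = u.size
    batches = []
    for i in range(n_strata):
        # Uniforms confined to the i-th equiprobable stratum of N(0, 1)
        v = (i + rng.random(n_per_stratum)) / n_strata
        z_proj = norm.ppf(v)                    # stratified projection <u, Z>
        xi = rng.standard_normal((n_per_stratum, d))
        xi_orth = xi - np.outer(xi @ u, u)      # component orthogonal to u
        batches.append(xi_orth + np.outer(z_proj, u))
    return np.vstack(batches)

# Toy check on a payoff aligned with the direction: the stratified estimate
# of E[max(N(0,1), 0)] = 1/sqrt(2*pi) is far more accurate than plain MC.
u = np.ones(4) / 2.0                    # a unit vector in R^4
Z = stratified_normals(u, n_strata=50, n_per_stratum=20)
payoff = np.maximum(Z @ u, 0.0)
print(payoff.mean())                    # close to 0.3989
```

Because the orthogonal component is removed before the stratified projection is added back, the projection `Z @ u` is exactly the stratified scalar, which is what makes the variance reduction effective when the payoff depends mainly on that direction.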
Effective Evaluation using Logged Bandit Feedback from Multiple Loggers
Accurately evaluating new policies (e.g. ad-placement models, ranking
functions, recommendation functions) is one of the key prerequisites for
improving interactive systems. While the conventional approach to evaluation
relies on online A/B tests, recent work has shown that counterfactual
estimators can provide an inexpensive and fast alternative, since they can be
applied offline using log data that was collected from a different policy
fielded in the past. In this paper, we address the question of how to estimate
the performance of a new target policy when we have log data from multiple
historic policies. This question is of great relevance in practice, since
policies get updated frequently in most online systems. We show that naively
combining data from multiple logging policies can be highly suboptimal. In
particular, we find that the standard Inverse Propensity Score (IPS) estimator
suffers especially when logging and target policies diverge -- to a point where
throwing away data improves the variance of the estimator. We therefore propose
two alternative estimators which we characterize theoretically and compare
experimentally. We find that the new estimators can provide substantially
improved estimation accuracy.
Comment: KDD 2017
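A toy sketch of the setting: two logging policies, naive pooled IPS weighting each sample by its own logger's propensity, versus weighting by the pooled logging mixture, which is one variance-reducing alternative in the spirit of the paper's proposals. The actions, policies, and rewards below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical three-action setting: two logging policies, one target policy.
reward = np.array([1.0, 0.5, 0.0])        # deterministic reward per action
pi_target = np.array([0.7, 0.2, 0.1])
loggers = [np.array([0.5, 0.3, 0.2]), np.array([0.1, 0.3, 0.6])]
n_per_logger = 50_000

actions, own_prop = [], []
for p in loggers:
    a = rng.choice(3, size=n_per_logger, p=p)
    actions.append(a)
    own_prop.append(p[a])                 # propensity under the logger that acted
actions = np.concatenate(actions)
own_prop = np.concatenate(own_prop)
r = reward[actions]

# Naive pooled IPS: each sample reweighted by its own logger's propensity.
ips_naive = np.mean(r * pi_target[actions] / own_prop)

# Mixture weighting: propensity of the pooled logging mixture (equal sizes here).
pi_mix = np.mean(loggers, axis=0)
ips_balanced = np.mean(r * pi_target[actions] / pi_mix[actions])

true_value = float(reward @ pi_target)    # 0.8
print(ips_naive, ips_balanced, true_value)
```

Both estimators are unbiased in this toy example, but the mixture weights are much less extreme on samples from the logger that diverges from the target (propensity 0.1 on the target's favourite action), which is where the naive estimator's variance blows up.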
Optimal Exploitation of the Sentinel-2 Spectral Capabilities for Crop Leaf Area Index Mapping
The continuously increasing demand for accurate, quantitative, high-quality
information on land surface properties will be met by a new generation of
environmental Earth observation (EO) missions. One current example, with a
high potential to contribute to those demands, is the multi-spectral ESA
Sentinel-2 (S2) system. The present study focuses on the evaluation of the
spectral information content needed for crop leaf area index (LAI) mapping in
view of the future sensors. Data from a field campaign were used to determine
the optimal spectral sampling from the available S2 bands by applying
inversion of a radiative transfer model (PROSAIL) with look-up table (LUT) and
artificial neural network (ANN) approaches. The overall LAI estimation
performance of the proposed LUT approach (LUTN50) was comparable to that of a
tested and approved ANN method. Employing seven- and eight-band combinations,
the LUTN50 approach obtained an LAI RMSE of 0.53 and a normalized LAI RMSE of
0.12, which was comparable to the results of the ANN. However, the LUTN50
method showed higher robustness and insensitivity to different band settings.
The most frequently selected wavebands were located in the near-infrared and
red-edge spectral regions. In conclusion, our results emphasize the potential
benefits of the Sentinel-2 mission for agricultural applications.
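The LUT-with-averaging idea can be sketched with a toy forward model standing in for PROSAIL (which is not reproduced here): simulate spectra for many candidate LAI values, score each table entry against the observation, and average the LAI of the best N entries, echoing the N = 50 averaging of the LUTN50 procedure. All numeric settings below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def toy_canopy_model(lai, n_bands=8):
    """Toy stand-in for a radiative-transfer forward model: band reflectance
    decays smoothly with LAI (NOT the PROSAIL physics)."""
    k = np.linspace(0.3, 1.0, n_bands)        # per-band extinction factors
    return 0.5 * np.exp(-np.outer(np.atleast_1d(lai), k))

# Build the look-up table: sampled LAI values and their simulated spectra.
lut_lai = rng.uniform(0.0, 7.0, size=100_000)
lut_spectra = toy_canopy_model(lut_lai)

def invert_lai(observed, n_best=50):
    """LUT inversion: RMSE cost against every table entry, then average the
    LAI of the n_best best-matching entries (the 'N50'-style averaging)."""
    cost = np.sqrt(np.mean((lut_spectra - observed) ** 2, axis=1))
    best = np.argsort(cost)[:n_best]
    return float(lut_lai[best].mean())

true_lai = 3.0
observed = toy_canopy_model(true_lai)[0]
print(invert_lai(observed))                   # close to 3.0
```

Averaging the best-matching entries, rather than taking the single minimizer, is what gives this family of inversions its robustness to noise and to ill-posedness of the forward model.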
Metamodel-based importance sampling for structural reliability analysis
Structural reliability methods aim at computing the probability of failure of
systems with respect to some prescribed performance functions. In modern
engineering such functions usually resort to running an expensive-to-evaluate
computational model (e.g. a finite element model). In this respect, simulation
methods, which may require a prohibitively large number of model runs, cannot
be used directly. Surrogate
models such as quadratic response surfaces, polynomial chaos expansions or
kriging (which are built from a limited number of runs of the original model)
are then introduced as a substitute of the original model to cope with the
computational cost. In practice it is almost impossible to quantify the error
made by this substitution though. In this paper we propose to use a kriging
surrogate of the performance function as a means to build a quasi-optimal
importance sampling density. The probability of failure is eventually obtained
as the product of an augmented probability computed by substituting the
meta-model for the original performance function and a correction term which
ensures that there is no bias in the estimation even if the meta-model is not
fully accurate. The approach is applied to analytical and finite element
reliability problems and proves efficient up to 100 random variables.
Comment: 20 pages, 7 figures, 2 tables. Preprint submitted to Probabilistic
Engineering Mechanics
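The estimator's structure (an augmented failure probability computed from the surrogate, times a bias-removing correction term estimated from a few true-model calls) can be illustrated in one dimension. Here the "surrogate" is just a deliberately shifted limit state rather than a kriging model, and the importance sampling proposal is a convenient choice, so this is only a structural sketch of the approach.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

# True (expensive) limit state and a deliberately imperfect surrogate.
g_true = lambda x: 3.0 - x        # failure when g <= 0, i.e. x >= 3
g_surr = lambda x: 3.1 - x        # "cheap" surrogate, slightly biased

# Importance sampling proposal centered on the failure region (a choice).
mu_q = 3.0
n = 200_000
x = rng.normal(mu_q, 1.0, size=n)
w = norm.pdf(x) / norm.pdf(x, loc=mu_q)       # density ratio f/q

# Augmented probability: failure probability of the surrogate alone,
# estimated with many cheap surrogate evaluations.
p_aug = np.mean((g_surr(x) <= 0) * w)

# Correction term from a smaller batch of true-model calls.
m = 20_000
xc, wc = x[:m], w[:m]
alpha = np.mean((g_true(xc) <= 0) * wc) / np.mean((g_surr(xc) <= 0) * wc)

p_fail = p_aug * alpha
print(p_fail, norm.cdf(-3.0))     # both near 1.35e-3
```

Even though the surrogate alone underestimates the failure probability (its limit state is shifted), the correction factor restores an asymptotically unbiased estimate while concentrating the expensive true-model calls on a small sample.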
Importance Tempering
Simulated tempering (ST) is an established Markov chain Monte Carlo (MCMC)
method for sampling from a multimodal density π. Typically, ST involves
introducing an auxiliary variable k taking values in a finite subset of (0,1]
and indexing a set of tempered distributions, say π_k ∝ π^k. In this case,
small values of k encourage better mixing, but samples from π are only
obtained when the joint chain for (x, k) reaches k = 1. However, the entire
chain can be used to estimate expectations under π of functions of interest,
provided that importance sampling (IS) weights are calculated. Unfortunately
this method, which we call
importance tempering (IT), can disappoint. This is partly because the most
immediately obvious implementation is naïve and can lead to high-variance
estimators. We derive a new optimal method for combining multiple IS estimators
and prove that the resulting estimator has a highly desirable property related
to the notion of effective sample size. We briefly report on the success of the
optimal combination in two modelling scenarios requiring reversible-jump MCMC,
where the naïve approach fails.
Comment: 16 pages, 2 tables, significantly shortened from version 4 in
response to referee comments, to appear in Statistics and Computing
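The central combination step can be sketched as follows: several self-normalized IS estimators, one per temperature, are combined with weights proportional to their effective sample sizes. The Gaussian target and tempering ladder below are toy choices (with ideal samples standing in for the portions of an ST chain at each temperature), not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(5)

def ess(w):
    """Effective sample size of a vector of importance weights."""
    w = np.asarray(w, float)
    return w.sum() ** 2 / (w ** 2).sum()

# Toy target pi = N(0,1); the tempered versions pi_k ∝ pi^b are N(0, 1/b).
betas = [0.3, 0.6, 1.0]
n = 20_000
samples = [rng.normal(0.0, 1.0 / np.sqrt(b), size=n) for b in betas]
# Unnormalized IS weights back to pi: pi(x)/pi_b(x) ∝ exp(-(1-b) x^2 / 2)
weights = [np.exp(-(1.0 - b) * x ** 2 / 2.0) for b, x in zip(betas, samples)]

# Per-temperature self-normalized IS estimates of E_pi[x^2] (true value 1).
estimates = [np.sum(w * x ** 2) / np.sum(w) for x, w in zip(samples, weights)]

# Combine with weights proportional to effective sample size.
lam = np.array([ess(w) for w in weights])
lam /= lam.sum()
combined = float(lam @ np.array(estimates))
print(combined)                   # close to 1.0
```

The ESS-proportional weighting automatically downweights heavily tempered chains, whose importance weights are more variable, while still letting every temperature contribute; the naive alternative of pooling all weighted samples equally can be far noisier.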