Number of paths versus number of basis functions in American option pricing
An American option grants the holder the right to select the time at which to
exercise the option, so pricing an American option entails solving an optimal
stopping problem. Difficulties in applying standard numerical methods to
complex pricing problems have motivated the development of techniques that
combine Monte Carlo simulation with dynamic programming. One class of methods
approximates the option value at each time using a linear combination of basis
functions, and combines Monte Carlo with backward induction to estimate optimal
coefficients in each approximation. We analyze the convergence of such a method
as both the number of basis functions and the number of simulated paths
increase. We get explicit results when the basis functions are polynomials and
the underlying process is either Brownian motion or geometric Brownian motion.
We show that the number of paths required for worst-case convergence grows
exponentially in the degree of the approximating polynomials in the case of
Brownian motion and faster in the case of geometric Brownian motion.Comment: Published at http://dx.doi.org/10.1214/105051604000000846 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org
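As a concrete illustration of the method the abstract analyzes, here is a minimal least-squares Monte Carlo sketch of the regression-based approach: continuation values are approximated by a polynomial in the asset price, fitted by regression over simulated paths, and the induction runs backward. The contract (a Bermudan put on geometric Brownian motion), the polynomial degree, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Least-squares Monte Carlo for a Bermudan put on GBM (illustrative sketch).
# Basis: polynomials in the (scaled) asset price, fitted by regression.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths, degree = 50, 100_000, 3
dt = T / n_steps
disc = np.exp(-r * dt)

z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                          axis=1))                  # simulated GBM paths

def payoff(s):
    return np.maximum(K - s, 0.0)

value = payoff(S[:, -1])                            # value at maturity
for t in range(n_steps - 2, -1, -1):
    value *= disc                                   # discount one step back
    itm = payoff(S[:, t]) > 0                       # regress on ITM paths only
    if itm.sum() > degree:
        x = S[itm, t] / K                           # scale for conditioning
        coef = np.polyfit(x, value[itm], degree)
        continuation = np.polyval(coef, x)
        exercise = payoff(S[itm, t])
        value[itm] = np.where(exercise > continuation, exercise, value[itm])

price = disc * value.mean()                         # discount the first step
print(f"Bermudan put estimate: {price:.3f}")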
A Conversation with Chris Heyde
Born in Sydney, Australia, on April 20, 1939, Chris Heyde shifted his
interest from sport to mathematics thanks to inspiration from a schoolteacher.
After earning an M.Sc. degree from the University of Sydney and a Ph.D. from
the Australian National University (ANU), he began his academic career in the
United States at Michigan State University, and then in the United Kingdom at
the University of Sheffield and the University of Manchester. In 1968, Chris
moved back to Australia to teach at ANU until 1975, when he joined CSIRO, where
he was Acting Chief of the Division of Mathematics and Statistics. From 1983 to
1986, he was a Professor and Chairman of the Department of Statistics at the
University of Melbourne. Chris then returned to ANU to become the Head of the
Statistics Department, and later the Foundation Dean of the School of
Mathematical Sciences (now the Mathematical Sciences Institute). Since 1993, he
has also spent one semester each year teaching at the Department of Statistics,
Columbia University, and has been the director of the Center for Applied
Probability at Columbia University since its creation in 1993. Chris has been
honored worldwide for his contributions in probability, statistics and the
history of statistics. He is a Fellow of the International Statistical
Institute and the Institute of Mathematical Statistics, and he is one of three
people to be a member of both the Australian Academy of Science and the
Australian Academy of Social Sciences. In 2003, he received the Order of
Australia from the Australian government. He has been awarded the Pitman Medal
and the Hannan Medal. Chris was conferred a D.Sc. honoris causa by the
University of Sydney in 1998. Chris has been very active in serving the statistical
community, including as the Vice President of the International Statistical
Institute, President of the Bernoulli Society and Vice President of the
Australian Mathematical Society. He has served on numerous editorial boards,
most notably as Editor of Stochastic Processes and Their Applications from 1983
to 1989, and as Editor-in-Chief of Journal of Applied Probability and Advances
in Applied Probability since 1990.

Comment: Published at http://dx.doi.org/10.1214/088342306000000088 in
Statistical Science (http://www.imstat.org/sts/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
Documenting experiences of an Open Educational Practice (PEA) in a higher education course
This research was carried out within the framework of the virtual seminar for trainers on the Open Educational Movement run by the Comunidad Latinoamericana Abierta Regional de Investigación Social y Educativa (CLARISE). The objective was to adopt Open Educational Resources (OER) in order to identify the benefits to students of incorporating such resources into the educational practices of a higher-education distance-learning course. The results show similarities in the benefits reported by the students, an increase in their interest in the topics of study, and a perceived rise in motivation among them.
Valuing the Treasury's Capital Assistance Program
The Capital Assistance Program (CAP) was created by the U.S. government in February 2009 to provide backup capital to large financial institutions unable to raise sufficient capital from private investors. Under the terms of the CAP, a participating bank receives contingent capital by issuing preferred shares to the Treasury combined with embedded options for both parties: the bank gets the option to redeem the shares or convert them to common equity, with conversion mandatory after seven years; the Treasury earns dividends on the preferred shares and gets warrants on the bank's common equity. We develop a contingent claims framework in which to estimate market values of these CAP securities. The interaction between the competing options held by the buyer and issuer of these securities creates a game between the two parties, and our approach captures this strategic element of the joint valuation problem and clarifies the incentives it creates. We apply our method to the eighteen publicly held bank holding companies that participated in the Supervisory Capital Assessment Program (the stress test) launched together with the CAP. On average, we estimate that, compared to a market transaction, the CAP securities carry a net value of approximately 30 percent of the capital invested for a bank participating to the maximum extent allowed under the terms of the program. We also find that the net value varies widely across banks. We compare our estimates with abnormal stock price returns for the stress test banks at the time the terms of the CAP were announced; we find correlations between 0.78 and 0.85, depending on the precise choice of period and set of banks included. These results suggest that our valuation aligns with shareholders' perception of the value of the program, prompting questions about industry reactions and the overall impact of the program.
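The strategic element described above is structurally similar to a game (Israeli) option, in which the holder maximizes and the issuer minimizes value at each decision date. The binomial-lattice sketch below illustrates only that min/max structure; it is NOT the paper's CAP model, and the payoff rule, penalty, and parameters are invented placeholders.

```python
import numpy as np

# Binomial sketch of a generic "game option": at each date the holder may
# exercise (receiving the payoff) while the issuer may cancel (paying the
# payoff plus a penalty), so the holder maximizes and the issuer minimizes.
# Not the paper's CAP model; all inputs are illustrative placeholders.
S0, K, r, sigma, T, penalty = 100.0, 100.0, 0.03, 0.25, 1.0, 5.0
n = 100
dt = T / n
u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
p = (np.exp(r * dt) - d) / (u - d)                  # risk-neutral up-prob.
disc = np.exp(-r * dt)

S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
V = np.maximum(S - K, 0.0)                          # terminal payoff

for t in range(n - 1, -1, -1):
    S = S0 * u ** np.arange(t, -1, -1) * d ** np.arange(0, t + 1)
    cont = disc * (p * V[:-1] + (1 - p) * V[1:])    # continuation value
    exercise = np.maximum(S - K, 0.0)               # holder's payoff now
    cancel = exercise + penalty                     # issuer's cancel cost
    V = np.minimum(np.maximum(cont, exercise), cancel)

print(f"game option value: {V[0]:.3f}")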
Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations
We consider the problem of efficient simulation estimation of the density
function at the tails, and the probability of large deviations for a sum of
independent, identically distributed, light-tailed and non-lattice random
vectors. The latter problem, besides being of independent interest, also forms a
building block for more complex rare event problems that arise, for instance,
in queueing and financial credit risk modelling. It has been extensively
studied in the literature, where state-independent exponential-twisting-based
importance sampling has been shown to be asymptotically efficient, and a more
nuanced state-dependent exponential twisting has been shown to have a stronger
bounded relative error property. We exploit the saddle-point-based
representations that exist for these rare-event quantities, which rely on inverting
the characteristic functions of the underlying random vectors. These
representations reduce the rare event estimation problem to evaluating certain
integrals, which may via importance sampling be represented as expectations.
Further, it is easy to identify and approximate the zero-variance importance
sampling distribution to estimate these integrals. We identify such importance
sampling measures and show that they possess the asymptotically vanishing
relative error property that is stronger than the bounded relative error
property. To illustrate the broader applicability of the proposed methodology,
we extend it to similarly efficiently estimate the practically important
expected overshoot of sums of iid random variables.
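For intuition, here is a minimal sketch of the classical state-independent exponential twisting the abstract takes as its baseline (not the paper's saddle-point estimators), in the special case of iid standard normal summands, where the tilted distribution is available in closed form.

```python
import numpy as np

# State-independent exponential twisting for P(S_n >= n*a), the classical
# baseline the abstract refers to. For iid N(0,1) summands the cumulant
# generating function is psi(theta) = theta^2/2, the tilt solving
# psi'(theta) = a is theta = a, and the tilted law is simply N(a, 1).
rng = np.random.default_rng(1)
n, a, n_sim = 50, 0.5, 100_000
theta = a
psi = theta**2 / 2

x = rng.normal(loc=a, scale=1.0, size=(n_sim, n))   # sample under the tilt
s = x.sum(axis=1)
lr = np.exp(-theta * s + n * psi)                   # likelihood ratio dP/dQ
vals = (s >= n * a) * lr
print(f"P(S_n >= n*a) ~= {vals.mean():.3e} "
      f"(s.e. {vals.std() / np.sqrt(n_sim):.1e})")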
Efficient estimation of one-dimensional diffusion first passage time densities via Monte Carlo simulation
We propose a method for estimating first passage time densities of
one-dimensional diffusions via Monte Carlo simulation. Our approach involves a
representation of the first passage time density as the expectation of a
functional of the three-dimensional Brownian bridge. As the latter process can
be simulated exactly, our method leads to almost unbiased estimators.
Furthermore, since the density is estimated directly, a convergence of order
$n^{-1/2}$, where $n$ is the sample size, is achieved, which is in sharp
contrast to the slower non-parametric rates achieved by kernel smoothing of
cumulative distribution functions.

Comment: 14 pages, 2 figures
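For contrast, the sketch below implements the slower baseline the abstract mentions: simulate first passage times with a discretized random walk and kernel-smooth them. The test case, Brownian motion with drift hitting a fixed level, has a known closed-form (inverse Gaussian) density to check against; the paper's Brownian-bridge estimator is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

# Baseline estimator: simulate first passage times of X_t = mu*t + W_t over
# the level b with a fine Euler walk, then kernel-smooth the hitting times.
# (The discrete walk can miss intra-step crossings, and the kernel rate is
# slower than n^{-1/2}; the Brownian-bridge representation avoids both.)
rng = np.random.default_rng(2)
mu, b, T, dt, n_paths = 1.0, 1.0, 5.0, 0.01, 10_000
n_steps = int(T / dt)

steps = mu * dt + np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
hit = paths >= b
hit_any = hit.any(axis=1)
taus = (hit[hit_any].argmax(axis=1) + 1) * dt       # first crossing times

t0, h = 1.0, 0.05                                   # point and bandwidth
kernel = np.exp(-0.5 * ((taus - t0) / h) ** 2) / (h * np.sqrt(2 * np.pi))
kde = kernel.sum() / n_paths                        # non-hitting paths count as 0
exact = b / np.sqrt(2 * np.pi * t0**3) * np.exp(-((b - mu * t0) ** 2) / (2 * t0))
print(f"kernel estimate {kde:.4f} vs exact inverse-Gaussian density {exact:.4f}")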
Capital allocation for credit portfolios with kernel estimators
Determining contributions by sub-portfolios or single exposures to
portfolio-wide economic capital for credit risk is an important risk
measurement task. Often economic capital is measured as Value-at-Risk (VaR) of
the portfolio loss distribution. For many of the credit portfolio risk models
used in practice, the VaR contributions then have to be estimated from Monte
Carlo samples. In the context of a partly continuous loss distribution (i.e.
continuous except for a positive point mass on zero), we investigate how to
combine kernel estimation methods with importance sampling to achieve more
efficient (i.e. less volatile) estimation of VaR contributions.

Comment: 22 pages, 12 tables, 1 figure, some amendments
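A minimal sketch of the kernel step described above, with the importance-sampling layer omitted: VaR contributions E[L_i | L = VaR] are estimated by Nadaraya-Watson weighting of Monte Carlo samples around the empirical VaR. The one-factor loss model and all parameters below are illustrative placeholders, not the paper's setup.

```python
import numpy as np

# Nadaraya-Watson estimation of VaR contributions from Monte Carlo samples.
# Note the loss is "partly continuous" in the abstract's sense: a point
# mass at zero (no defaults) plus a continuous part from random severities.
rng = np.random.default_rng(3)
n_sim, n_sub, alpha = 200_000, 3, 0.99
exposures = np.array([1.0, 2.0, 3.0])

z = rng.standard_normal(n_sim)                       # systematic factor
eps = rng.standard_normal((n_sim, n_sub))            # idiosyncratic noise
defaults = 0.5 * z[:, None] + np.sqrt(0.75) * eps > 2.0
severity = rng.uniform(0.3, 1.0, (n_sim, n_sub))     # random loss severity
L_i = defaults * exposures * severity                # sub-portfolio losses
L = L_i.sum(axis=1)                                  # portfolio loss

var = np.quantile(L, alpha)                          # empirical VaR
h = L.std() * n_sim ** (-0.2)                        # rule-of-thumb bandwidth
w = np.exp(-0.5 * ((L - var) / h) ** 2)              # Gaussian kernel weights
contrib = (w[:, None] * L_i).sum(axis=0) / w.sum()   # E[L_i | L ~= VaR]
print("VaR:", var, "contributions:", contrib)        # contributions sum ~= VaR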
Minimax Number of Strata for Online Stratified Sampling given Noisy Samples
We consider the problem of online stratified sampling for Monte Carlo integration of a function given a finite budget n of noisy evaluations of the function. More precisely, we focus on the problem of choosing the number of strata K as a function of the budget n. We provide asymptotic and finite-time results on how an oracle that has access to the function would choose the partition optimally. In addition, we prove a lower bound on the learning rate for the problem of stratified Monte Carlo. As a result, we are able to state, by improving the bound on its performance, that the algorithm MC-UCB, defined in [MC-UCB], is minimax optimal both in terms of the number of samples n and the number of strata K, up to a logarithmic factor. This enables us to deduce a minimax optimal bound on the difference between the performance of the estimate output by MC-UCB and the performance of the estimate output by the best oracle static strategy, on the class of Hölder continuous functions, up to a logarithmic factor.
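To make the allocation problem concrete, here is a sketch of stratified Monte Carlo with noisy evaluations in which a small pilot run stands in for the oracle: the budget is allocated proportionally to each stratum's weight times its estimated standard deviation, which is the allocation MC-UCB learns online. This is not the MC-UCB algorithm itself; the integrand, noise level, and budget are illustrative.

```python
import numpy as np

# Stratified Monte Carlo on [0,1] with K equal-width strata and budget n of
# noisy evaluations; a pilot run estimates the within-stratum std devs.
rng = np.random.default_rng(4)
f = lambda x: np.sin(2 * np.pi * x) ** 2             # true integral is 0.5
noise_sd, K, n, pilot = 0.1, 10, 10_000, 50

edges = np.linspace(0.0, 1.0, K + 1)
w = np.diff(edges)                                   # stratum weights

stds = np.empty(K)
for k in range(K):                                   # pilot estimates of sigma_k
    x = rng.uniform(edges[k], edges[k + 1], pilot)
    stds[k] = (f(x) + noise_sd * rng.standard_normal(pilot)).std()

# Allocate ~n samples proportionally to w_k * sigma_k (at least 2 each).
alloc = np.maximum((n * w * stds / (w * stds).sum()).astype(int), 2)

est = 0.0
for k in range(K):                                   # stratum-wise sample means
    x = rng.uniform(edges[k], edges[k + 1], alloc[k])
    y = f(x) + noise_sd * rng.standard_normal(alloc[k])
    est += w[k] * y.mean()
print(f"stratified estimate: {est:.5f} (exact 0.5)")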
Systemic Risk and Default Clustering for Large Financial Systems
As is known in the financial risk and macroeconomics literature,
risk-sharing in large portfolios may increase the probability of creation of
default clusters and of systemic risk. We review recent developments on
mathematical and computational tools for the quantification of such phenomena.
Limiting analysis, such as laws of large numbers and central limit theorems,
allows us to approximate the distribution in large systems and to study
quantities such as the loss distribution in large portfolios. Large deviations
analysis allows us to study the tail of the loss distribution and to identify
pathways to default clustering. Sensitivity analysis allows us to understand
the most likely ways in which different effects, such as contagion and
systematic risk, combine to lead to large default rates. Such results could
give useful insights into how to optimally safeguard against such events.

Comment: in Large Deviations and Asymptotic Methods in Finance (Editors: P.
Friz, J. Gatheral, A. Gulisashvili, A. Jacquier, J. Teichmann), Springer
Proceedings in Mathematics and Statistics, Vol. 110, 2015
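The law-of-large-numbers approximation mentioned above can be seen in a few lines: in a one-factor default model, conditional on the systematic factor the default fraction of a large portfolio concentrates around the conditional default probability, so the loss distribution is effectively driven by the factor alone. The model and parameters below are illustrative placeholders, not taken from the review.

```python
import numpy as np
from scipy.stats import norm

# LLN illustration: conditional on the systematic factor Z, the default
# fraction of a large portfolio concentrates around
# p(Z) = Phi((c - sqrt(rho)*Z) / sqrt(1 - rho)).
rng = np.random.default_rng(5)
n_names, n_sim, rho, pd = 1_000, 5_000, 0.3, 0.02
c = norm.ppf(pd)                                     # default threshold

z = rng.standard_normal(n_sim)
eps = rng.standard_normal((n_sim, n_names))
x = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
loss_frac = (x < c).mean(axis=1)                     # finite-system fraction

p_z = norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1 - rho))  # LLN limit
print("P(loss fraction > 10%):", (loss_frac > 0.1).mean(),
      "LLN approximation:", (p_z > 0.1).mean())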
Active learning in technology-enriched environments
Abstract, Table of contents, List of tables, List of figures, Introduction, 1) Problem statement, 2) Theoretical framework, 3) Methodological framework, 4) Analysis and discussion of results, 5) Conclusions, References, Appendix
