
    A Conversation with Chris Heyde

    Born in Sydney, Australia, on April 20, 1939, Chris Heyde shifted his interest from sport to mathematics thanks to the inspiration of a schoolteacher. After earning an M.Sc. degree from the University of Sydney and a Ph.D. from the Australian National University (ANU), he began his academic career in the United States at Michigan State University, and then in the United Kingdom at the University of Sheffield and the University of Manchester. In 1968, Chris moved back to Australia to teach at ANU until 1975, when he joined CSIRO, where he was Acting Chief of the Division of Mathematics and Statistics. From 1983 to 1986, he was Professor and Chairman of the Department of Statistics at the University of Melbourne. Chris then returned to ANU to become Head of the Statistics Department, and later the Foundation Dean of the School of Mathematical Sciences (now the Mathematical Sciences Institute). Since 1993, he has also spent one semester each year teaching in the Department of Statistics at Columbia University, and he has been the director of Columbia's Center for Applied Probability since its creation in 1993. Chris has been honored worldwide for his contributions to probability, statistics and the history of statistics. He is a Fellow of the International Statistical Institute and the Institute of Mathematical Statistics, and he is one of three people to be a member of both the Australian Academy of Science and the Australian Academy of Social Sciences. In 2003, he received the Order of Australia from the Australian government. He has been awarded the Pitman Medal and the Hannan Medal. Chris was conferred a D.Sc. honoris causa by the University of Sydney in 1998. Chris has been very active in serving the statistical community, including as Vice President of the International Statistical Institute, President of the Bernoulli Society and Vice President of the Australian Mathematical Society. He has served on numerous editorial boards, most notably as Editor of Stochastic Processes and Their Applications from 1983 to 1989, and as Editor-in-Chief of the Journal of Applied Probability and Advances in Applied Probability since 1990.
    Comment: Published at http://dx.doi.org/10.1214/088342306000000088 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Number of paths versus number of basis functions in American option pricing

    An American option grants the holder the right to select the time at which to exercise the option, so pricing an American option entails solving an optimal stopping problem. Difficulties in applying standard numerical methods to complex pricing problems have motivated the development of techniques that combine Monte Carlo simulation with dynamic programming. One class of methods approximates the option value at each time using a linear combination of basis functions, and combines Monte Carlo with backward induction to estimate optimal coefficients in each approximation. We analyze the convergence of such a method as both the number of basis functions and the number of simulated paths increase. We obtain explicit results when the basis functions are polynomials and the underlying process is either Brownian motion or geometric Brownian motion. We show that the number of paths required for worst-case convergence grows exponentially in the degree of the approximating polynomials in the case of Brownian motion, and faster in the case of geometric Brownian motion.
    Comment: Published at http://dx.doi.org/10.1214/105051604000000846 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
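    A minimal least-squares Monte Carlo sketch of the kind of method analyzed above: continuation values are regressed on a polynomial basis, and the exercise decision is made by backward induction. The American put payoff, the GBM dynamics, and every parameter below are illustrative assumptions, not taken from the paper.

```python
# Least-squares Monte Carlo for an American put under geometric Brownian motion.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # assumed market parameters
n_steps, n_paths, degree = 50, 100_000, 3           # assumed grid, paths, basis degree
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths: array of shape (n_paths, n_steps + 1).
z = rng.standard_normal((n_paths, n_steps))
log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(log_inc, axis=1)]))

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: V carries each path's discounted future cash flow.
V = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    V *= disc
    itm = payoff(S[:, t]) > 0                       # regress on in-the-money paths only
    if itm.sum() > degree + 1:
        coeffs = np.polyfit(S[itm, t], V[itm], degree)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = payoff(S[itm, t])
        V[itm] = np.where(exercise > continuation, exercise, V[itm])

print(f"LSMC American put price ~ {disc * V.mean():.3f}")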

    Smoking Adjoints: fast evaluation of Greeks in Monte Carlo calculations

    This paper presents an adjoint method to accelerate the calculation of Greeks by Monte Carlo simulation. The method calculates price sensitivities along each path, but in contrast to a forward pathwise calculation, it works backward recursively using adjoint variables. Along each path, the forward and adjoint implementations produce the same values, but the adjoint method rearranges the calculations to generate potential computational savings. The adjoint method outperforms a forward implementation in calculating the sensitivities of a small number of outputs to a large number of inputs. This applies, for example, in estimating the sensitivities of an interest rate derivatives book to multiple points along an initial forward curve, or the sensitivities of an equity derivatives book to multiple points on a volatility surface. We illustrate the application of the method in the setting of the LIBOR market model. Numerical results confirm that the computational advantage of the adjoint method grows in proportion to the number of initial forward rates.
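    A minimal pathwise-adjoint sketch in a deliberately simple one-asset setting, not the paper's LIBOR market model (the dynamics, payoff, and parameters are assumptions): a single backward sweep recovers the payoff's sensitivity to all n per-step volatilities, where a forward pathwise calculation would need one sweep per input.

```python
# Adjoint pathwise sensitivities along one path of
# S_{k+1} = S_k * exp((r - 0.5*sig_k^2)*dt + sig_k*sqrt(dt)*Z_k).
import numpy as np

rng = np.random.default_rng(1)
n, dt, r, K = 250, 1.0 / 250, 0.03, 100.0           # assumed discretization and strike
sig = np.full(n, 0.2)                               # assumed per-step volatilities (the inputs)
Z = rng.standard_normal(n)

# Forward sweep: store the whole path; it is reused in the adjoint sweep.
S = np.empty(n + 1)
S[0] = 100.0
for k in range(n):
    S[k + 1] = S[k] * np.exp((r - 0.5 * sig[k] ** 2) * dt + sig[k] * np.sqrt(dt) * Z[k])

payoff = max(S[n] - K, 0.0)

# Adjoint sweep: one backward pass yields d(payoff)/d(sig_k) for every k.
vega = np.zeros(n)
adj = 1.0 if S[n] > K else 0.0                      # adjoint of S_n: d(payoff)/d(S_n)
for k in range(n - 1, -1, -1):
    # d(S_{k+1})/d(sig_k), holding S_k fixed:
    vega[k] = adj * S[k + 1] * (-sig[k] * dt + np.sqrt(dt) * Z[k])
    adj *= S[k + 1] / S[k]                          # chain back: d(S_{k+1})/d(S_k)

print(payoff, vega[:3])
```

    The cost contrast is the abstract's point: the backward pass above prices all n vega buckets for roughly the cost of one extra path evaluation, whereas forward pathwise differentiation would repeat the recursion once per volatility input.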

    Efficient estimation of one-dimensional diffusion first passage time densities via Monte Carlo simulation

    We propose a method for estimating first passage time densities of one-dimensional diffusions via Monte Carlo simulation. Our approach involves a representation of the first passage time density as an expectation of a functional of the three-dimensional Brownian bridge. As the latter process can be simulated exactly, our method leads to almost unbiased estimators. Furthermore, since the density is estimated directly, a convergence of order $1/\sqrt{N}$, where $N$ is the sample size, is achieved, in sharp contrast to the slower non-parametric rates achieved by kernel smoothing of cumulative distribution functions.
    Comment: 14 pages, 2 figures
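    For context only, and not the paper's Brownian-bridge estimator: the toy below simulates first passage times of standard Brownian motion to a level b, where both exact sampling and the closed-form density are available, and evaluates a fixed-bandwidth Gaussian kernel estimate at a single point. Kernel smoothing of this kind converges at the slower non-parametric rate the abstract contrasts against; the level, bandwidth, and sample size are assumptions.

```python
# Kernel density estimate of a Brownian first passage time density vs the closed form.
import numpy as np

rng = np.random.default_rng(2)
b, n, t, h = 1.0, 50_000, 0.8, 0.02                 # level, samples, evaluation point, bandwidth

# Exact sampling: tau_b has the same law as b^2 / Z^2 for Z ~ N(0,1),
# so no path discretization is needed for Brownian motion.
tau = (b / rng.standard_normal(n)) ** 2

# Fixed-bandwidth Gaussian kernel estimate of the density at t.
kde = np.mean(np.exp(-0.5 * ((tau - t) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# Closed-form first passage time density: b / sqrt(2*pi*t^3) * exp(-b^2 / (2t)).
exact = b / np.sqrt(2 * np.pi * t**3) * np.exp(-b**2 / (2 * t))
print(f"kernel estimate {kde:.4f} vs exact density {exact:.4f}")
```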

    Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations

    We consider the problem of efficient simulation estimation of the density function at the tails, and of the probability of large deviations, for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare-event problems that arise, for instance, in queueing and financial credit risk modelling. It has been extensively studied in the literature, where state-independent exponential-twisting-based importance sampling has been shown to be asymptotically efficient, and a more nuanced state-dependent exponential twisting has been shown to have the stronger bounded relative error property. We exploit the saddle-point representations that exist for these rare-event quantities, which rely on inverting the characteristic functions of the underlying random vectors. These representations reduce the rare-event estimation problem to evaluating certain integrals, which may, via importance sampling, be represented as expectations. Further, it is easy to identify and approximate the zero-variance importance sampling distribution for estimating these integrals. We identify such importance sampling measures and show that they possess the asymptotically vanishing relative error property, which is stronger than the bounded relative error property. To illustrate the broader applicability of the proposed methodology, we extend it to similarly efficiently estimate the practically important expected overshoot of sums of iid random variables.
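    A minimal sketch of the state-independent exponential twisting referenced above, in an assumed toy setting with iid N(0,1) summands: there the cumulant generating function is Lambda(theta) = theta^2/2, the twist solving Lambda'(theta) = a is theta = a, and the twisted increment law is N(theta, 1).

```python
# Importance sampling estimate of P(S_n >= n*a) for S_n a sum of n iid N(0,1) variables.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
n, a, n_samples = 25, 0.5, 100_000                  # assumed toy sizes

theta = a                                           # twist solving Lambda'(theta) = a
X = rng.standard_normal((n_samples, n)) + theta     # sample under the twisted measure N(theta, 1)
S = X.sum(axis=1)

# Likelihood ratio dP/dQ = exp(-theta*S + n*Lambda(theta)) attached to each sample.
lr = np.exp(-theta * S + n * theta**2 / 2)
est = np.mean((S >= n * a) * lr)

exact = 0.5 * erfc(a * sqrt(n) / sqrt(2))           # P(N(0, n) >= n*a), for comparison
print(f"IS estimate {est:.3e} vs exact {exact:.3e}")
```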

    Documenting experiences of an Open Educational Practice (PEA) in a higher education course

    This research was carried out within the framework of the virtual seminar for trainers on the Open Educational Movement run by the Comunidad Latinoamericana Abierta Regional de Investigación Social y Educativa (CLARISE). The objective was to adopt Open Educational Resources (REA) in order to identify the benefits to students of adopting them in their educational practices in a higher-education, distance-learning course. The results show similarities in the benefits the students reported, an increase in student interest in the topics of study, and a motivation perceived among them.

    Linear Classifiers Under Infinite Imbalance

    We study the behavior of linear discriminant functions for binary classification in the infinite-imbalance limit, where the sample size of one class grows without bound while the sample size of the other remains fixed. The coefficients of the classifier minimize an expected loss specified through a weight function. We show that for a broad class of weight functions, the intercept diverges but the rest of the coefficient vector has a finite limit under infinite imbalance, extending prior work on logistic regression. The limit depends on the left tail of the weight function, for which we distinguish three cases: bounded, asymptotically polynomial, and asymptotically exponential. The limiting coefficient vectors reflect robustness or conservatism properties in the sense that they optimize against certain worst-case alternatives. In the bounded and polynomial cases, the limit is equivalent to an implicit choice of upsampling distribution for the minority class. We apply these ideas in a credit risk setting, with particular emphasis on performance in the high-sensitivity and high-specificity regions.
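    A small numerical sketch of the infinite-imbalance effect described above (the Gaussian class distributions and all sample sizes are assumptions): as the majority class grows while the minority sample is held fixed, the fitted logistic-regression slope stabilizes while the intercept keeps drifting.

```python
# Logistic regression under growing class imbalance: slope converges, intercept diverges.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
x_min = rng.normal(1.0, 1.0, size=(200, 1))         # fixed minority-class sample

for n_maj in [1_000, 10_000, 100_000]:
    x_maj = rng.normal(-1.0, 1.0, size=(n_maj, 1))  # growing majority-class sample
    X = np.vstack([x_min, x_maj])
    y = np.concatenate([np.ones(len(x_min)), np.zeros(n_maj)])
    clf = LogisticRegression(C=1e6).fit(X, y)       # large C: essentially unpenalized
    print(n_maj, clf.coef_[0, 0], clf.intercept_[0])
```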

    Contagion in financial networks

    The recent financial crisis has prompted much new research on the interconnectedness of the modern financial system and the extent to which it contributes to systemic fragility. Network connections diversify firms' risk exposures, but they also create channels through which shocks can spread by contagion. We review the extensive literature on this issue, with a focus on how network structure interacts with other key variables such as leverage, size, common exposures, and short-term funding. We discuss various metrics that have been proposed for evaluating the susceptibility of the system to contagion, and suggest directions for future research.

    Capital allocation for credit portfolios with kernel estimators

    Determining the contributions of sub-portfolios or single exposures to portfolio-wide economic capital for credit risk is an important risk measurement task. Often economic capital is measured as the Value-at-Risk (VaR) of the portfolio loss distribution. For many of the credit portfolio risk models used in practice, the VaR contributions then have to be estimated from Monte Carlo samples. In the context of a partly continuous loss distribution (i.e. continuous except for a positive point mass at zero), we investigate how to combine kernel estimation methods with importance sampling to achieve more efficient (i.e. less volatile) estimation of VaR contributions.
    Comment: 22 pages, 12 tables, 1 figure, some amendments
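    A hedged sketch of the kernel side of the approach, using plain Monte Carlo without the paper's importance sampling; the one-factor Gaussian portfolio, the exposures, and the bandwidth are assumed toy choices. The VaR contribution of obligor i is taken as E[L_i | L = VaR], estimated with a Nadaraya-Watson kernel smoother over the simulated losses, so the contributions should sum to approximately the portfolio VaR.

```python
# Kernel-smoothed VaR contributions for a toy one-factor Gaussian credit portfolio.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n_obl, n_sims, alpha = 10, 200_000, 0.99
pd_, rho = 0.02, 0.3                                # assumed default probability, factor loading
ead = np.linspace(0.5, 5.0, n_obl)                  # assumed heterogeneous exposures

z = rng.standard_normal((n_sims, 1))                # common systematic factor
eps = rng.standard_normal((n_sims, n_obl))          # idiosyncratic factors
default = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < norm.ppf(pd_)
lgd = rng.uniform(0.2, 0.8, (n_sims, n_obl))        # random loss given default
L_i = ead * lgd * default                           # per-obligor losses (point mass at zero)
L = L_i.sum(axis=1)

var = np.quantile(L, alpha)
h = 0.1 * L.std()                                   # assumed kernel bandwidth
w = np.exp(-0.5 * ((L - var) / h) ** 2)             # Gaussian kernel weights around the VaR
contrib = (w[:, None] * L_i).sum(axis=0) / w.sum()  # ~ E[L_i | L = VaR] per obligor
print(f"VaR {var:.3f}, sum of contributions {contrib.sum():.3f}")
```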

    Active learning in technology-enriched environments

    Abstract, table of contents, list of tables, list of figures, introduction, 1) problem statement, 2) theoretical framework, 3) methodological framework, 4) analysis and discussion of results, 5) conclusions, references, appendix