
    Stochastic Volatility Filtering with Intractable Likelihoods

    This paper is concerned with particle filtering for α-stable stochastic volatility models. The α-stable distribution provides a flexible framework for modeling asymmetry and heavy tails, which is useful when modeling financial returns. An issue with this distributional assumption is the lack of a closed form for the probability density function. To estimate the volatility of financial returns in this setting, we develop a novel auxiliary particle filter. The algorithm we develop can be easily applied to any hidden Markov model for which the likelihood function is intractable or computationally expensive. The approximate target distribution of our auxiliary filter is based on the idea of approximate Bayesian computation (ABC). ABC methods allow for inference on posterior quantities in situations when the likelihood of the underlying model is not available in closed form, but simulating samples from it is possible. The ABC auxiliary particle filter (ABC-APF) that we propose provides not only a good alternative to state estimation in stochastic volatility models, but it also improves on the existing ABC literature. It allows for more flexibility in state estimation while improving accuracy through better proposal distributions in cases when the optimal importance density of the filter is unavailable in closed form. We assess the performance of the ABC-APF on a simulated dataset from the α-stable stochastic volatility model and compare it to other currently existing ABC filters.
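    The core ABC idea can be sketched in a few lines. The toy filter below is a bootstrap-style ABC particle filter (not the paper's auxiliary proposal) for a hypothetical log-volatility AR(1) model with α-stable observation noise: since the α-stable likelihood has no closed form, each particle is weighted by a Gaussian kernel on the distance between a *simulated* observation and the real one. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

# Toy alpha-stable SV model: h_t = mu + phi*(h_{t-1} - mu) + sig*eta_t,
# y_t = exp(h_t / 2) * e_t with alpha-stable noise e_t (toy parameters).
alpha, beta = 1.7, 0.0
mu, phi, sig = -1.0, 0.95, 0.2
T, N = 40, 300

# simulate a path of log-volatilities and returns
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = np.exp(h / 2) * levy_stable.rvs(alpha, beta, size=T, random_state=rng)

def abc_particle_filter(y, eps=0.5):
    """ABC filter: replace the intractable likelihood by a Gaussian
    kernel on |simulated observation - actual observation|."""
    part = np.full(N, mu)
    est = np.empty(len(y))
    for t, yt in enumerate(y):
        part = mu + phi * (part - mu) + sig * rng.normal(size=N)
        y_sim = np.exp(part / 2) * levy_stable.rvs(alpha, beta, size=N,
                                                   random_state=rng)
        w = np.exp(-0.5 * ((y_sim - yt) / eps) ** 2)
        w = w / w.sum() if w.sum() > 0 else np.full(N, 1.0 / N)
        est[t] = w @ part                      # filtered log-volatility
        part = rng.choice(part, size=N, p=w)   # multinomial resampling
    return est

h_hat = abc_particle_filter(y)
```

    The auxiliary variant of the paper would additionally use the current observation to pre-select promising particles before propagation; the kernel bandwidth `eps` plays the role of the ABC tolerance.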

    Fast antijamming timing acquisition using multilayer synchronization sequence

    Pseudonoise (PN) sequences are widely used as preamble sequences to establish timing synchronization in military wireless communication systems. At the receiver, searching and detection techniques, such as the full parallel search (FPS) and the serial search (SS), are usually adopted to acquire the correct timing position. However, the synchronization sequence has to be very long to combat jamming that reduces the signal-to-noise ratio (SNR) to an extremely low level. In this adverse scenario, the FPS scheme becomes too complex to implement, whereas the SS method suffers from the drawback of a long mean acquisition time (MAT). In this paper, a fast timing acquisition method is proposed, using a multilayer synchronization sequence based on cyclic codes. Specifically, the transmitted preamble is the Kronecker product of Bose–Chaudhuri–Hocquenghem (BCH) codewords and PN sequences. At the receiver, the cyclic nature of BCH codes is exploited to test only a part of the entire sequence, resulting in shorter acquisition time. The algorithm is evaluated using the metrics of MAT and detection probability (DP). Theoretical expressions of MAT and DP are derived from the constant false-alarm rate (CFAR) criterion. Theoretical analysis and simulation results show that our proposed scheme dramatically reduces the acquisition time while achieving similar DP performance and maintaining a reasonably low real-time hardware implementation complexity, in comparison with the SS scheme.
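    The preamble construction and the basic correlation-based timing test can be illustrated directly. The sketch below uses a stand-in 7-bit codeword and a random 31-chip sequence rather than real BCH/PN generators, and a plain full correlation rather than the paper's partial-test scheme; it only shows the Kronecker structure and the peak-picking detection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative multilayer preamble: Kronecker product of a (stand-in)
# outer codeword and an inner chip sequence, both mapped to BPSK +/-1.
bch = np.array([1, 0, 0, 1, 1, 0, 1])        # stand-in 7-bit codeword
pn = rng.integers(0, 2, size=31)             # stand-in 31-chip PN sequence
preamble = np.kron(1 - 2 * bch, 1 - 2 * pn)  # bipolar preamble, length 217

# Receiver side: a delayed, noisy copy of the preamble arrives
delay = 42
rx = np.concatenate([rng.normal(0.0, 1.0, delay),
                     preamble + rng.normal(0.0, 0.5, preamble.size)])

# Timing acquisition by correlating against the known preamble;
# the correlation peak sits at the true timing offset.
corr = np.correlate(rx, preamble, mode="valid")
peak = int(np.argmax(corr))
print(peak)  # the true delay of 42 with this seeded noise
```

    The paper's contribution is precisely to avoid this full correlation: the cyclic-shift structure of the outer BCH codeword lets the receiver test only part of the sequence per hypothesis.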

    Heavy-tailed distributions in VaR calculations

    The essence of the Value-at-Risk (VaR) and Expected Shortfall (ES) computations is estimation of low quantiles in the portfolio return distributions. Hence, the performance of market risk measurement methods depends on the quality of distributional assumptions on the underlying risk factors. This chapter is intended as a guide to heavy-tailed models for VaR-type calculations. We first describe stable laws and their lighter-tailed generalizations, the so-called truncated and tempered stable distributions. Next we study the class of generalized hyperbolic laws, which – like tempered stable distributions – can be classified somewhere between infinite variance stable laws and the Gaussian distribution. Then we discuss copulas, which enable us to construct a multivariate distribution function from the marginal (possibly different) distribution functions of n individual asset returns in a way that takes their dependence structure into account. This dependence structure may be no longer measured by correlation, but by other adequate functions like rank correlation, comonotonicity or tail dependence. Finally, we provide numerical examples.

    Keywords: Heavy-tailed distribution; Stable distribution; Tempered stable distribution; Generalized hyperbolic distribution; Parameter estimation; Value-at-Risk (VaR); Expected Shortfall (ES); Copula; Filtered historical simulation (FHS)
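    The quantile-estimation point is easy to demonstrate numerically. The sketch below uses a Student-t law as a simple heavy-tailed stand-in for the chapter's stable and hyperbolic families: on the same simulated return series, the Gaussian 99% VaR comes out smaller than the heavy-tailed one, and a historical ES estimate is added for comparison. All figures are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated daily returns with heavy tails (Student-t stand-in)
returns = stats.t.rvs(df=4, scale=0.01, size=2000, random_state=rng)

level = 0.99
# Gaussian fit understates the low quantile of a heavy-tailed sample...
var_norm = -stats.norm.ppf(1 - level, *stats.norm.fit(returns))
# ...while a fitted heavy-tailed law tracks the tail more faithfully.
df, loc, scale = stats.t.fit(returns)
var_t = -stats.t.ppf(1 - level, df, loc, scale)

# Expected Shortfall: mean loss beyond VaR (historical estimate)
losses = -returns
es_hist = losses[losses >= np.quantile(losses, level)].mean()
print(var_norm, var_t, es_hist)
```

    The same pattern, only stronger, appears with infinite-variance stable fits, where the Gaussian quantile can be badly optimistic at high confidence levels.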

    Libstable: Fast, Parallel and High-Precision Computation of α-Stable Distributions in C/C++ and MATLAB

    α-stable distributions are a wide family of probability distributions used in many fields where probabilistic approaches are taken. However, the lack of closed analytical expressions is a major drawback for their application. Currently, several tools have been developed to numerically evaluate their density and distribution functions or estimate their parameters, but available solutions either do not reach sufficient precision in their evaluations or are too slow for several practical purposes. Moreover, they do not take full advantage of the parallel processing capabilities of current multi-core machines. Other solutions work only on a subset of the α-stable parameter space. In this paper we present a C/C++ library and a MATLAB front-end that allow fully parallelized, fast and high-precision evaluation of the density, distribution and quantile functions (PDF, CDF and CDF⁻¹, respectively), random variable generation and parameter estimation of α-stable distributions in their whole parameter space. The library provided can be easily integrated into third-party developments.
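    For readers without access to the C/C++ library, the same core operations – numerical PDF, CDF, quantile and sampling for an α-stable law – are available (more slowly, and not over the full parameter space with equal accuracy) through SciPy's `levy_stable`, which can serve as a rough functional analogue:

```python
import numpy as np
from scipy.stats import levy_stable

# Evaluate PDF, CDF, quantile (CDF^-1) and draw samples for an
# alpha-stable law; alpha and beta here are arbitrary test values.
alpha, beta = 1.5, 0.5
x = np.linspace(-5, 5, 11)

pdf = levy_stable.pdf(x, alpha, beta)        # numerical density
cdf = levy_stable.cdf(x, alpha, beta)        # numerical distribution fn
q = levy_stable.ppf(0.95, alpha, beta)       # 95% quantile
sample = levy_stable.rvs(alpha, beta, size=1000, random_state=0)
print(q, float(pdf.max()))
```

    Parameter estimation is available through `levy_stable.fit`; the precision/speed trade-offs of such numerical routines are exactly what Libstable is designed to improve on.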

    A nonlinear population Monte Carlo scheme for the Bayesian estimation of parameters of alpha-stable distributions

    The class of alpha-stable distributions enjoys multiple practical applications in signal processing, finance, biology and other areas because it can describe interesting and complex data patterns, such as asymmetry or heavy tails, in contrast with the simpler and widely used Gaussian distribution. The density associated with a general alpha-stable distribution cannot be obtained in closed form, which hinders the process of estimating its parameters. A nonlinear population Monte Carlo (NPMC) scheme is applied in order to approximate the posterior probability distribution of the parameters of an alpha-stable random variable given a set of random realizations of the latter. The approximate posterior distribution is computed by way of an iterative algorithm and it consists of a collection of samples in the parameter space with associated nonlinearly-transformed importance weights. A numerical comparison of the main existing methods to estimate the alpha-stable parameters is provided, including the traditional frequentist techniques as well as a Markov chain Monte Carlo (MCMC) and a likelihood-free Bayesian approach. It is shown by means of computer simulations that the NPMC method outperforms the existing techniques in terms of parameter estimation error and failure rate for the whole range of values of alpha, including the smaller values for which most existing methods fail to work properly. Furthermore, it is shown that accurate parameter estimates can often be computed based on a low number of observations. Additionally, numerical results based on a set of real fish displacement data are provided.

    E.K. acknowledges the support of Ministerio de Educación of Spain (Programa de Formación de Profesorado Universitario, Ref. AP2008-00469). J.M. acknowledges the partial support of Ministerio de Economía y Competitividad of Spain (program Consolider-Ingenio 2010 CSD2008-00010 COMONSENS and project COMPREHENSION TEC2012-38883-C02-01) and the Office of Naval Research Global (award no. N62909-15-1-2011). At the time of the original submission of this paper, J.M. was with the Department of Signal Theory and Communications, Universidad Carlos III de Madrid (Spain). M.A. acknowledges the financial support of the Natural Sciences and Engineering Research Council of Canada (Discovery Grant 138680), the Coordenação de Apoio ao Pessoal do Ensino Superior (grant No. 1351/11-7) and the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (grant No. E-26/110.864/2012), and thanks J. Nolan for providing a free copy of the STABLE software. A.M.S. acknowledges the financial support from Conselho Nacional de Desenvolvimento Científico e Tecnológico (grant No. 308016/2014-9) and Coordenação de Apoio ao Pessoal do Ensino Superior, DGU Program (grant No. 257/12).
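    The defining ingredient of NPMC – iterated importance sampling with a nonlinear transformation (clipping) of the largest weights to fight degeneracy – can be sketched on a one-parameter toy problem. Below, the location of Cauchy data (the alpha = 1 stable case, whose density *is* tractable, so this is purely illustrative of the weight mechanics, not of the paper's likelihood-free setting) is estimated with an adaptive Gaussian proposal; the sample sizes and clipping level are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Observed data: Cauchy with true location 2.0
data = stats.cauchy.rvs(loc=2.0, scale=1.0, size=200, random_state=rng)

M, n_iter, clip = 500, 10, 50   # samples/iter, iterations, clip level M_T
mean, std = 0.0, 5.0            # initial Gaussian proposal
for _ in range(n_iter):
    theta = rng.normal(mean, std, size=M)
    # log importance weights: log-likelihood minus log-proposal
    logw = np.array([stats.cauchy.logpdf(data, loc=t).sum() for t in theta])
    logw -= stats.norm.logpdf(theta, mean, std)
    # nonlinear transformation: clip weights above the M_T-th largest
    logw = np.minimum(logw, np.sort(logw)[-clip])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # adapt the proposal to the weighted sample
    mean = w @ theta
    std = np.sqrt(w @ (theta - mean) ** 2) + 1e-3
print(mean)  # close to the true location 2.0
```

    Clipping trades a small bias for a large variance reduction, which is what lets the scheme survive the extremely peaked likelihoods that make plain importance sampling degenerate.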

    Ray Tracing Simulations of Weak Lensing by Large-Scale Structure

    We investigate weak lensing by large-scale structure using ray tracing through N-body simulations. Photon trajectories are followed through high resolution simulations of structure formation to make simulated maps of shear and convergence on the sky. Tests with varying numerical parameters are used to calibrate the accuracy of computed lensing statistics on angular scales from about 1 arcminute to a few degrees. Various aspects of the weak lensing approximation are also tested. For fields a few degrees on a side the shear power spectrum is almost entirely in the nonlinear regime and agrees well with nonlinear analytical predictions. Sampling fluctuations in power spectrum estimates are investigated by comparing several ray tracing realizations of a given model. For survey areas smaller than a degree on a side the main source of scatter is nonlinear coupling to modes larger than the survey. We develop a method which uses this effect to estimate the mass density parameter Omega from the scatter in power spectrum estimates for subregions of a larger survey. We show that the power spectrum can be measured accurately from realistically noisy data on scales corresponding to 1-10 Mpc/h. Non-Gaussian features in the one point distribution function of the weak lensing convergence (reconstructed from the shear) are also sensitive to Omega. We suggest several techniques for estimating Omega in the presence of noise and compare their statistical power, robustness and simplicity. With realistic noise Omega can be determined to within 0.1-0.2 from a deep survey of several square degrees.

    Comment: 59 pages, 22 figures included. Matches version accepted for Ap
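    The sample-variance measurement described above can be mimicked on a toy flat-sky setup: draw Gaussian "convergence" maps with a chosen power spectrum and measure how a band-power estimate scatters across realizations. The grid size, spectrum slope and wavenumber band below are arbitrary, and real convergence fields are non-Gaussian on small scales, so this only illustrates the estimator, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(4)

n, n_maps = 64, 20
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
k[0, 0] = 1.0                      # avoid division by zero at the DC mode
band = (k > 0.1) & (k < 0.2)       # wavenumber band to average over

estimates = []
for _ in range(n_maps):
    # Gaussian map with P(k) ~ k^-2: filter white noise in Fourier space
    white = np.fft.fft2(rng.normal(size=(n, n)))
    kappa = np.fft.ifft2(white / k).real
    power = np.abs(np.fft.fft2(kappa)) ** 2 / kappa.size
    estimates.append(power[band].mean())
estimates = np.array(estimates)

# mean band power and its fractional scatter across realizations
print(estimates.mean(), estimates.std() / estimates.mean())
```

    In the paper the interesting part is that, for small fields, this scatter exceeds the Gaussian expectation because of coupling to super-survey modes, which is what makes it a probe of Omega.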

    Stochastic time-changed Lévy processes with their implementation

    Includes bibliographical references. We focus on the implementation details for Lévy processes and their extension to stochastic volatility models for pricing European vanilla options and exotic options. We calibrated five models to European options on the S&P500 and used the calibrated models to price a cliquet option using Monte Carlo simulation. We provide the algorithms required to value the options when using Lévy processes. We found that these models were able to closely reproduce the market option prices for many strikes and maturities. We also found that the models we studied produced different prices for the cliquet option even though all the models produced the same prices for vanilla options. This highlighted a feature of model uncertainty when valuing a cliquet option. Further research is required to develop tools to understand and manage this model uncertainty. We make a recommendation on how to proceed with this research by studying the cliquet option's sensitivity to the model parameters.
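    The Monte Carlo valuation of a cliquet is mechanically simple, which is why the model choice dominates the price. The sketch below prices a locally capped/floored, globally floored cliquet under plain Black-Scholes dynamics as a stand-in for the calibrated Lévy and stochastic-volatility models of the text; all contract and model parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy market/contract parameters (illustrative only)
r, sigma = 0.02, 0.2                      # risk-free rate, volatility
T, n_per, n_paths = 1.0, 12, 100_000      # 12 monthly resets
cap, floor_local, floor_global = 0.05, -0.05, 0.0

# Simulate per-period gross returns under Black-Scholes dynamics
dt = T / n_per
z = rng.standard_normal((n_paths, n_per))
gross = np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

# Cliquet payoff: sum of locally capped/floored period returns,
# subject to a global floor, discounted back to today
period_ret = np.clip(gross - 1.0, floor_local, cap)
payoff = np.maximum(period_ret.sum(axis=1), floor_global)
price = np.exp(-r * T) * payoff.mean()
print(round(price, 4))
```

    Swapping the `gross` simulation for a calibrated Lévy or stochastic-volatility path generator changes the price materially even when all models reprice vanillas identically, which is exactly the model-uncertainty point of the thesis.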