
    Neural Likelihoods via Cumulative Distribution Functions

    We leverage neural networks as universal approximators of monotonic functions to build a parameterization of conditional cumulative distribution functions (CDFs). By applying automatic differentiation with respect to response variables and then to the parameters of this CDF representation, we are able to build black-box CDF and density estimators. A suite of families is introduced as alternative constructions for the multivariate case. At one extreme, the simplest construction is a competitive density estimator against state-of-the-art deep learning methods, although it does not provide an easily computable representation of multivariate CDFs. At the other extreme, we have a flexible construction from which multivariate CDF evaluations and marginalizations can be obtained by a simple forward pass in a deep neural net, but where the computation of the likelihood scales exponentially with dimensionality. Alternatives in between the extremes are discussed. We evaluate the different representations empirically on a variety of tasks involving tail area probabilities, tail dependence and (partial) density estimation. Comment: 10 pages.
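
    The construction above lends itself to a compact illustration. The sketch below (my own, not the authors' code; the single hidden layer, hidden size, and one-dimensional unconditional setting are all assumptions) builds a network that is monotone in the response by exponentiating its weights, and recovers the density from the CDF output by automatic differentiation:

```python
# A minimal sketch of the core idea, not the authors' implementation:
# parameterize a CDF F(y) with a network that is monotone increasing in y
# (positive weights, sigmoidal activations), then obtain the density
# f(y) = dF/dy by automatic differentiation with respect to the response.
import torch

class MonotoneCDF(torch.nn.Module):
    def __init__(self, hidden=32):  # hidden size is an arbitrary choice
        super().__init__()
        # unconstrained parameters, exponentiated in forward() so that all
        # weights are positive and y -> F(y) is monotone increasing
        self.w1 = torch.nn.Parameter(torch.randn(hidden, 1))
        self.b1 = torch.nn.Parameter(torch.zeros(hidden))
        self.w2 = torch.nn.Parameter(torch.randn(1, hidden))
        self.b2 = torch.nn.Parameter(torch.zeros(1))

    def forward(self, y):
        h = torch.sigmoid(y @ self.w1.exp().T + self.b1)
        return torch.sigmoid(h @ self.w2.exp().T + self.b2)  # values in (0, 1)

model = MonotoneCDF()
y = torch.randn(128, 1, requires_grad=True)   # toy response sample
F = model(y)
# density = derivative of the CDF with respect to the response variable
f = torch.autograd.grad(F.sum(), y, create_graph=True)[0]
loss = -torch.log(f + 1e-12).mean()           # negative log-likelihood
loss.backward()                               # gradients w.r.t. CDF parameters
```

    Training then proceeds by gradient descent on this likelihood loss; the paper's conditional and multivariate constructions elaborate on the same differentiation mechanism.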

    Assessing the strength of directed influences among neural signals: An approach to noisy data

    Acknowledgements: This work was supported by the German Science Foundation (Ti315/4-2), the German Federal Ministry of Education and Research (BMBF grant 01GQ0420), and the Excellence Initiative of the German Federal and State Governments. B.S. is indebted to the Kosterlitz Centre for the financial support of this research project.

    Limit theorems for nearly unstable Hawkes processes

    Because of their tractability and their natural interpretation in terms of market quantities, Hawkes processes are nowadays widely used in high-frequency finance. However, in practice, statistical estimation results seem to show that very often, only nearly unstable Hawkes processes are able to fit the data properly. By nearly unstable, we mean that the L^1 norm of their kernel is close to unity. We study in this work such processes, for which the stability condition is almost violated. Our main result states that after suitable rescaling, they asymptotically behave like integrated Cox-Ingersoll-Ross models. Thus, modeling financial order flows as nearly unstable Hawkes processes may be a good way to reproduce both their high- and low-frequency stylized facts. We then extend this result to the Hawkes-based price model introduced by Bacry et al. [Quant. Finance 13 (2013) 65-77]. We show that under a similar criticality condition, this process converges to a Heston model. Again, we recover well-known stylized facts of prices, both at the microstructure level and at the macroscopic scale. Comment: Published at http://dx.doi.org/10.1214/14-AAP1005 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
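
    For intuition, here is a small simulation sketch (assumptions: an exponential kernel phi(t) = a*exp(-b*t), so the kernel's L^1 norm is a/b; the parameter values are arbitrary). It uses Ogata's thinning algorithm, which is standard but not specific to the paper:

```python
import numpy as np

def simulate_hawkes(mu, a, b, T, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} a * exp(-b * (t - t_i))."""
    rng = np.random.default_rng(seed)
    events = []

    def intensity(t):
        return mu + a * np.sum(np.exp(-b * (t - np.array(events)))) if events else mu

    t = 0.0
    while t < T:
        lam_bar = intensity(t)                 # valid bound: the kernel only decays
        t += rng.exponential(1.0 / lam_bar)    # candidate from the bounding process
        if t >= T:
            break
        if rng.uniform() < intensity(t) / lam_bar:   # thinning: accept w.p. ratio
            events.append(t)
    return np.array(events)

# Kernel L^1 norm a/b = 0.95, i.e. close to unity: a nearly unstable process
# whose bursty paths illustrate the criticality regime studied in the paper.
events = simulate_hawkes(mu=0.5, a=0.95, b=1.0, T=500.0)
print(len(events), "events with kernel norm", 0.95 / 1.0)
```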

    Computation of Gaussian orthant probabilities in high dimension

    We study the computation of Gaussian orthant probabilities, i.e. the probability that a Gaussian random vector falls inside an orthant. The Geweke-Hajivassiliou-Keane (GHK) algorithm [Genz, 1992; Geweke, 1991; Hajivassiliou et al., 1996; Keane, 1993] is currently used for integrals of dimension greater than 10. In this paper we show that for Markovian covariances GHK can be interpreted as the estimator of the normalizing constant of a state-space model using sequential importance sampling (SIS). We show that for an AR(1) model the variance of the GHK estimator, properly normalized, diverges exponentially fast with the dimension. As an improvement we propose using a particle filter (PF). We then generalize this idea to arbitrary covariance matrices using sequential Monte Carlo (SMC) with properly tailored MCMC moves. We show empirically that this can lead to drastic improvements over currently used algorithms. We also extend the framework to orthants of mixtures of Gaussians (Student, Cauchy, etc.), and to the simulation of truncated Gaussians.
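
    As background, a minimal version of the GHK estimator itself (not the paper's SMC improvement) can be written as sequential importance sampling along the Cholesky factor; the test covariance and sample size below are arbitrary choices:

```python
import numpy as np
from scipy.stats import norm

def ghk_orthant(Sigma, n_samples=10_000, seed=0):
    """GHK / SIS estimate of P(X >= 0) for X ~ N(0, Sigma)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    d = Sigma.shape[0]
    log_w = np.zeros(n_samples)        # running log importance weights
    eta = np.zeros((n_samples, d))     # standard-normal innovations, X = L @ eta
    for k in range(d):
        # X_k >= 0  <=>  eta_k >= -(sum_{j<k} L[k, j] * eta_j) / L[k, k]
        lower = -(eta[:, :k] @ L[k, :k]) / L[k, k]
        p = norm.sf(lower)             # probability mass above the bound
        log_w += np.log(p)
        # sample eta_k from N(0, 1) truncated to [lower, inf) by inverse CDF
        u = rng.uniform(size=n_samples)
        eta[:, k] = norm.ppf(norm.cdf(lower) + u * p)
    return np.exp(log_w).mean()

# Equicorrelated covariance with rho = 1/2: the exact orthant probability
# in dimension d is 1 / (d + 1), i.e. 0.25 here, which the estimate should match.
Sigma = 0.5 * np.eye(3) + 0.5
print(ghk_orthant(Sigma))
```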

    Bias correction and confidence intervals following sequential tests

    An important statistical inference problem in sequential analysis is the construction of confidence intervals following sequential tests, to which Michael Woodroofe has made fundamental contributions. This paper reviews Woodroofe's method and other approaches in the literature. In particular, it shows how a bias-corrected pivot originally introduced by Woodroofe can be used as an improved root for sequential bootstrap confidence intervals. Comment: Published at http://dx.doi.org/10.1214/074921706000000590 in the IMS Lecture Notes--Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
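
    The bias that motivates these corrections is easy to exhibit by simulation. The sketch below (illustrative only; the stopping boundary and parameter values are my own choices, and it implements none of Woodroofe's corrections) shows that the naive sample mean taken at a data-dependent stopping time is biased:

```python
import numpy as np

def stopped_mean(theta, n_max=200, c=2.0, reps=20_000, seed=0):
    """Repeated-significance-test style rule: stop the first time
    |S_n| >= c * sqrt(n) (or at n_max), then report the sample mean."""
    rng = np.random.default_rng(seed)
    out = np.empty(reps)
    for r in range(reps):
        s = 0.0
        for n in range(1, n_max + 1):
            s += rng.normal(theta, 1.0)
            if abs(s) >= c * np.sqrt(n):   # the sequential test stops here
                break
        out[r] = s / n                     # naive estimate at the stopping time
    return out.mean()

# With theta = 0.2 the stopped sample mean systematically overshoots the
# truth; bias-corrected pivots of the kind reviewed here target this effect.
print("theta = 0.2, E[stopped sample mean] =", stopped_mean(0.2))
```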