
    On the Monotonicity of the Generalized Marcum and Nuttall Q-Functions

    Monotonicity criteria are established for the generalized Marcum Q-function Q_M, the standard Nuttall Q-function Q_{M,N}, and the normalized Nuttall Q-function \mathcal{Q}_{M,N}, with respect to their real order indices M, N. In addition, closed-form expressions are derived for the computation of the standard and normalized Nuttall Q-functions for the case when M, N are odd multiples of 0.5 and M ≥ N. By exploiting these results, novel upper and lower bounds for Q_{M,N} and \mathcal{Q}_{M,N} are proposed. Furthermore, specific tight upper and lower bounds for Q_M, previously reported in the literature, are extended to real values of M. The offered theoretical results can be efficiently applied in the study of digital communications over fading channels, in the information-theoretic analysis of multiple-input multiple-output systems, and in the description of stochastic processes in probability theory, among others.
    Comment: Published in IEEE Transactions on Information Theory, August 2009. Only slight formatting modifications.
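
    As a quick numerical illustration (not part of the paper), the generalized Marcum Q-function can be written as the survival function of a noncentral chi-square variable with 2M degrees of freedom and noncentrality a^2, evaluated at b^2, which also makes an informal check of its behaviour in the real order M easy. The sketch below assumes SciPy and the standard notation Q_M(a, b); parameter values are arbitrary.

    from scipy.stats import ncx2

    def marcum_q(M, a, b):
        # Q_M(a, b) via the noncentral chi-square survival function:
        # 2M degrees of freedom, noncentrality a^2, evaluated at b^2.
        return ncx2.sf(b**2, df=2.0 * M, nc=a**2)

    a, b = 1.5, 2.0
    for M in (0.5, 1.0, 1.5, 2.0, 3.0):   # real, non-integer orders are handled as well
        print(f"Q_{M}({a}, {b}) = {marcum_q(M, a, b):.6f}")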

    Uses of the Hypergeometric Distribution for Determining Survival or Complete Representation of Subpopulations in Sequential Sampling

    This thesis explores the hypergeometric probability distribution by examining many different aspects of the distribution. These include, but are not limited to: history and origin, derivation and elementary applications, properties, relationships to other probability models, kindred hypergeometric distributions, and elements of statistical inference associated with the hypergeometric distribution. Once the above are established, an investigation into, and furthering of, work done by Walton (1986) and Charalambides (2005) is carried out. Here, we apply the hypergeometric distribution to sequential sampling in order to determine a surviving subcategory, as well as to study the problem of complete representation of the subcategories within the population.
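
    As a rough illustration of the kind of quantity involved (not the thesis' sequential-sampling results), the sketch below uses SciPy's hypergeometric distribution for a single subcategory of assumed size K inside a population of size N_pop when n items are drawn without replacement: the probability that at least one member of the subcategory appears in the draw, and the probability that the whole subcategory is represented. The names N_pop, K and n are illustrative.

    from scipy.stats import hypergeom

    def subcategory_probs(N_pop, K, n):
        # scipy's hypergeom uses (M, n, N) = (population size, successes, draws).
        rv = hypergeom(M=N_pop, n=K, N=n)
        p_at_least_one = 1.0 - rv.pmf(0)   # some member of the subcategory is drawn
        p_all = rv.pmf(K)                  # every member of the subcategory is drawn
        return p_at_least_one, p_all

    print(subcategory_probs(N_pop=50, K=5, n=20))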

    A bivariate generalization of gamma distribution

    In this article, a bivariate generalisation of the gamma distribution is proposed by using an unsymmetrical bivariate characteristic function; an extension to the noncentral case also receives attention. The probability density functions of the product and ratio of the correlated components of this distribution are also derived. The benefits of introducing this generalised bivariate gamma distribution and the distributions of the product and the ratio of its components are demonstrated by graphical representations of their density functions. An application of this generalised bivariate gamma distribution to rainfall data for two specific districts in the North West province is also given to illustrate the greater versatility of the new distribution.
    National Research Foundation, South Africa (GRANT: Unlocking the future: FA2007043000003).
    http://www.tandfonline.com/loi/lsta20
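
    The article's construction via an unsymmetrical bivariate characteristic function is not reproduced here. As a loosely related sketch only, the classical trivariate-reduction device below simulates a correlated bivariate gamma pair and inspects the ratio of its components; all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def bivariate_gamma(a0, a1, a2, size):
        # X = G0 + G1 and Y = G0 + G2 with independent Gamma(a_i, 1) parts,
        # so X ~ Gamma(a0 + a1), Y ~ Gamma(a0 + a2), correlated through G0.
        g0 = rng.gamma(a0, size=size)
        g1 = rng.gamma(a1, size=size)
        g2 = rng.gamma(a2, size=size)
        return g0 + g1, g0 + g2

    x, y = bivariate_gamma(a0=2.0, a1=1.0, a2=3.0, size=100_000)
    print("corr(X, Y):", np.corrcoef(x, y)[0, 1])
    print("mean of the ratio X/Y:", np.mean(x / y))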

    The compounding method for finding bivariate noncentral distributions

    The univariate and bivariate central chi-square and F distributions have received a decent amount of attention in the literature over the past few decades; their noncentral counterparts have received much less. This study enriches the existing literature by proposing bivariate noncentral chi-square and F distributions via the compounding method with Poisson probabilities. This method has been used to a limited extent in distribution theory to obtain univariate noncentral distributions; this study extends some results in the literature to the corresponding bivariate setting. The process followed to obtain these bivariate noncentral distributions is systematically described and motivated. The distributions of some composites (univariate functions of the dependent components of the bivariate distributions) are derived and studied, in particular the product, ratio, and proportion. The benefit of introducing these bivariate noncentral distributions and their respective composites is demonstrated by graphical representations of their probability density functions. Furthermore, an example of possible application is given and discussed to illustrate the versatility of the proposed models.
    Dissertation (MSc), University of Pretoria, 2014. Statistics.
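
    As a minimal univariate illustration of the compounding device that the study extends (not its bivariate constructions), the sketch below uses the well-known representation of a noncentral chi-square as a Poisson(lambda/2) mixture of central chi-squares with k + 2J degrees of freedom, and checks it against SciPy's ncx2; parameter values are arbitrary.

    import numpy as np
    from scipy.stats import ncx2

    rng = np.random.default_rng(1)
    k, lam, n = 4, 3.0, 200_000

    j = rng.poisson(lam / 2.0, size=n)      # Poisson mixing variable J
    x = rng.chisquare(df=k + 2 * j)         # central chi-square given J
    print("simulated mean:", x.mean(), "  exact:", k + lam)
    print("simulated P(X > 10):", (x > 10).mean(), "  exact:", ncx2.sf(10, df=k, nc=lam))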

    Maximum Likelihood Estimation of Latent Affine Processes

    This article develops a direct filtration-based maximum likelihood methodology for estimating the parameters and realizations of latent affine processes. The equivalent of Bayes' rule is derived for recursively updating the joint characteristic function of latent variables and the data conditional upon past data. Likelihood functions can consequently be evaluated directly by Fourier inversion. An application to daily stock returns over 1953-96 reveals substantial divergences from EMM-based estimates: in particular, more substantial and time-varying jump risk.
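
    As a generic illustration of likelihood evaluation by Fourier inversion (not the article's actual filter for latent affine processes), the sketch below recovers a density from its characteristic function by direct numerical inversion of f(x) = (1/2π) ∫ exp(-iux) φ(u) du, using the standard normal characteristic function as a sanity check; the grid sizes are arbitrary.

    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.stats import norm

    def density_from_cf(cf, x, u_max=40.0, n_u=4001):
        # Trapezoidal approximation of (1/2π) ∫ exp(-i u x) φ(u) du on a finite grid.
        u = np.linspace(-u_max, u_max, n_u)
        integrand = np.exp(-1j * np.outer(x, u)) * cf(u)
        return np.real(trapezoid(integrand, u, axis=1)) / (2.0 * np.pi)

    cf_normal = lambda u: np.exp(-0.5 * u**2)      # characteristic function of N(0, 1)
    xs = np.array([-1.0, 0.0, 1.5])
    print(density_from_cf(cf_normal, xs))          # should be close to norm.pdf(xs)
    print(norm.pdf(xs))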

    An adapted discrete Lindley model emanating from negative binomial mixtures for autoregressive counts

    Analysing autoregressive counts over time remains a relevant and evolving matter of interest, where oftentimes the assumption of normality is made for the error terms. When the data are discrete, the Poisson model may be assumed for the structure of the error terms. In order to address the equidispersion restriction of the Poisson distribution, various alternatives have been investigated in such an integer-valued environment. This paper, inspired by the integer autoregressive process of order 1, incorporates negative binomial shape mixtures via a compound Poisson-Lindley model for the error terms. The systematic construction of this model is offered and motivated, and it is analysed comparatively against common alternative candidates in a number of simulation and data analyses. This work provides insight into noncentral-type behaviour both in the continuous Lindley model and in the discrete case, for meaningful application and consideration in integer autoregressive environments.
    The National Research Foundation (NRF) of South Africa; the RDP296/2022 grant from the University of Pretoria, South Africa; the Department of Library Services at the University of Pretoria; the University Capacity Development; and the Centre of Excellence in Mathematical and Statistical Sciences at the University of the Witwatersrand, Johannesburg, South Africa.
    https://www.mdpi.com/journal/mathematics
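
    As a simplified illustration (not the paper's adapted discrete Lindley model), the sketch below simulates a basic INAR(1) count series with binomial thinning and Poisson-Lindley innovations, sampling the Lindley mixing variable as a two-component gamma mixture; all parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(2)

    def lindley(theta, size):
        # Lindley(theta) as a mixture: Gamma(1, 1/theta) w.p. theta/(theta+1), else Gamma(2, 1/theta).
        shape = np.where(rng.random(size) < theta / (theta + 1.0), 1.0, 2.0)
        return rng.gamma(shape, scale=1.0 / theta)

    def inar1_poisson_lindley(alpha, theta, T, x0=0):
        # X_t = alpha o X_{t-1} + eps_t, with binomial thinning and eps_t ~ Poisson(Lindley(theta)).
        x = np.empty(T, dtype=int)
        x[0] = x0
        eps = rng.poisson(lindley(theta, T))
        for t in range(1, T):
            x[t] = rng.binomial(x[t - 1], alpha) + eps[t]
        return x

    series = inar1_poisson_lindley(alpha=0.4, theta=1.2, T=500)
    print("sample mean:", series.mean(), "  sample variance:", series.var())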

    Glosarium Matematika (Mathematics Glossary)


    Recursive marginal quantization: extensions and applications in finance

    Quantization techniques have been used in many challenging finance applications, including pricing claims with path dependence and early exercise features, stochastic optimal control, filtering problems and the efficient calibration of large derivative books. Recursive marginal quantization of an Euler scheme has recently been proposed as an efficient numerical method for evaluating functionals of solutions of stochastic differential equations. This algorithm is generalized and it is shown that it is possible to perform recursive marginal quantization for two higher-order schemes: the Milstein scheme and a simplified weak-order 2.0 scheme. Furthermore, the recursive marginal quantization algorithm is extended by showing how absorption and reflection at the zero boundary may be incorporated. Numerical evidence is provided of the improved weak-order convergence and computational efficiency for the geometric Brownian motion and constant elasticity of variance models by pricing European, Bermudan and barrier options. The current theoretical error bound is extended to apply to the proposed higher-order methods. When applied to two-factor models, recursive marginal quantization becomes computationally inefficient as the optimization problem usually requires stochastic methods, for example, the randomized Lloyd's algorithm or Competitive Learning Vector Quantization. To address this, a new algorithm is proposed that allows recursive marginal quantization to be applied to two-factor stochastic volatility models while retaining the efficiency of the original Newton-Raphson gradient-descent technique. The proposed method is illustrated for European options on the Heston and Stein-Stein models and for various exotic options on the popular SABR model. Finally, the recursive marginal quantization algorithm, and its improvements, are applied outside the traditional risk-neutral pricing framework by pricing long-dated contracts using the benchmark approach. The growth-optimal portfolio, the central object of the benchmark approach, is modelled using the time-dependent constant elasticity of variance model. Analytic European option prices are derived that generalize the current formulae in the literature. The time-dependent constant elasticity of variance model is then combined with a 3/2 stochastic short rate model to price zero-coupon bonds and zero-coupon bond options, thereby showing the departure from risk-neutral pricing.
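
    As a heavily simplified illustration of the idea (not the thesis' Newton-Raphson algorithm), the sketch below recursively quantizes the Euler marginals of a geometric Brownian motion using sample-based (randomized) Lloyd iterations; the grid size, sample size and model parameters are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)

    def lloyd(samples, grid, iters=30):
        # Plain Lloyd iterations: move each grid point to the mean of its Voronoi cell.
        for _ in range(iters):
            cells = np.argmin(np.abs(samples[:, None] - grid[None, :]), axis=1)
            for j in range(grid.size):
                members = samples[cells == j]
                if members.size:
                    grid[j] = members.mean()
        cells = np.argmin(np.abs(samples[:, None] - grid[None, :]), axis=1)
        weights = np.bincount(cells, minlength=grid.size) / samples.size
        order = np.argsort(grid)
        return grid[order], weights[order]

    def rmq_gbm(x0, mu, sigma, T, steps, n_grid=20, n_samples=20_000):
        # Recursively quantize the Euler marginals X_{k+1} = X_k (1 + mu dt + sigma dW).
        dt = T / steps
        grid, weights = np.full(1, x0), np.ones(1)   # degenerate quantizer at time 0
        for _ in range(steps):
            # Seen through the current quantizer, the next marginal is a normal mixture.
            x = grid[rng.choice(grid.size, size=n_samples, p=weights)]
            nxt = x + mu * x * dt + sigma * x * np.sqrt(dt) * rng.standard_normal(n_samples)
            grid, weights = lloyd(nxt, np.quantile(nxt, np.linspace(0.02, 0.98, n_grid)))
        return grid, weights

    grid, weights = rmq_gbm(x0=100.0, mu=0.05, sigma=0.2, T=1.0, steps=12)
    print("quantized E[X_T]:", np.dot(grid, weights))   # roughly 100 * exp(0.05)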