    Penalised Maximum Likelihood Estimation for Fractional Gaussian Processes

    We apply and extend Firth's (1993) modified score estimator to deal with a class of stationary Gaussian long-memory processes. Our estimator removes the first-order bias of the maximum likelihood estimator. A small simulation study reveals that the reduction in the bias is considerable, while it does not inflate the corresponding mean squared error.
    Keywords: ARFIMA, Firth's formula, fractional differencing, approximate modification
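
As a concrete illustration of the mechanism, the sketch below applies Firth's penalty l*(theta) = l(theta) + (1/2) log det I(theta) to a toy i.i.d. exponential model rather than the paper's fractional Gaussian setting; the model, function names, and simulation settings are illustrative assumptions, not the paper's method.

```python
import random

# Firth (1993): maximise the penalised log-likelihood
#   l*(theta) = l(theta) + 0.5 * log det I(theta),
# which removes the O(1/n) bias of the MLE.  Toy illustration (NOT the
# paper's fractional Gaussian setting): i.i.d. Exponential(rate) data,
# where everything is available in closed form:
#   l(r)  = n log r - r * sum(x)        =>  MLE    r_hat  = n / sum(x)
#   I(r)  = n / r^2, so the penalty is  -log r + const, and
#   l*(r) = (n - 1) log r - r * sum(x)  =>  Firth  r_star = (n - 1) / sum(x)

def mle_rate(x):
    return len(x) / sum(x)

def firth_rate(x):
    return (len(x) - 1) / sum(x)

def simulate_bias(rate=2.0, n=10, reps=20000, seed=1):
    """Monte Carlo bias of both estimators at the true rate."""
    rng = random.Random(seed)
    b_mle = b_firth = 0.0
    for _ in range(reps):
        x = [rng.expovariate(rate) for _ in range(n)]
        b_mle += mle_rate(x) - rate
        b_firth += firth_rate(x) - rate
    return b_mle / reps, b_firth / reps

bias_mle, bias_firth = simulate_bias()
# The MLE over-estimates the rate (theoretical bias rate/(n-1) = 0.22 here);
# the penalised estimator's bias is an order of magnitude smaller.
print(bias_mle, bias_firth)
```

In this toy model the penalised score has a closed-form root, mirroring the abstract's finding: the first-order bias is removed without inflating the mean squared error.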

    Error Bounds and Asymptotic Expansions for Toeplitz Product Functionals of Unbounded Spectra

    This paper establishes error orders for integral limit approximations to traces of powers (to the pth order) of products of Toeplitz matrices. Such products arise frequently in the analysis of stationary time series and in the development of asymptotic expansions. The elements of the matrices are Fourier transforms of functions which we allow to be bounded, unbounded, or even to vanish on [-pi, pi], thereby including important cases such as the spectral functions of fractional processes. Error rates are also given in the case in which the matrix product involves inverse matrices. The rates are sharp up to an arbitrarily small epsilon > 0. The results improve on the o(1) rates obtained in earlier work for analogous products. For the p = 1 case, an explicit second-order asymptotic expansion is found for a quadratic functional of the autocovariance sequences of stationary long-memory time series. The order of magnitude of the second term in this expansion is shown to depend on the long-memory parameters. It is demonstrated that the pole in the first-order approximation is removed by the second-order term, which provides a substantially improved approximation to the original functional.
    Keywords: Asymptotic expansion, higher cumulants, long memory, singularity, spectral density, Toeplitz matrix
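
The baseline first-order approximation being refined can be checked numerically. The sketch below assumes the convention T_n(f)[j,k] = fhat(j-k) with fhat(m) = (1/2pi) * Int f(lam) e^{-i m lam} d lam, under which (1/n) tr(T_n(f) T_n(g)) converges to (1/2pi) * Int f g d lam with an O(1/n) error; the particular functions f and g are illustrative choices, not from the paper.

```python
# Toeplitz matrices built from Fourier coefficients: T_n(f)[j,k] = fhat(j-k).
# Baseline limit:  (1/n) tr(T_n(f) T_n(g))  ->  (1/2pi) Int_{-pi}^{pi} f g,
# with an O(1/n) error when the coefficients decay fast enough.

def toeplitz(coef, n):
    """coef: dict mapping lag m to the Fourier coefficient fhat(m)."""
    return [[coef.get(j - k, 0.0) for k in range(n)] for j in range(n)]

def trace_product(A, B):
    """tr(A @ B) without forming the product matrix."""
    n = len(A)
    return sum(A[j][k] * B[k][j] for j in range(n) for k in range(n))

# f(lam) = 1 + cos lam  ->  fhat(0) = 1, fhat(+-1) = 1/2
# g(lam) = 2 + cos lam  ->  ghat(0) = 2, ghat(+-1) = 1/2
f = {0: 1.0, 1: 0.5, -1: 0.5}
g = {0: 2.0, 1: 0.5, -1: 0.5}

# Exact limit: (1/2pi) Int (1 + cos)(2 + cos) d lam = 2 + 1/2 = 2.5
limit = 2.5
for n in (8, 64, 512):
    approx = trace_product(toeplitz(f, n), toeplitz(g, n)) / n
    print(n, approx, limit - approx)   # error shrinks like 1/(2n)
```

For these tridiagonal examples the normalised trace equals 2.5 - 0.5/n exactly, making the O(1/n) error order visible directly.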

    A Complete Asymptotic Series for the Autocovariance Function of a Long Memory Process

    An infinite-order asymptotic expansion is given for the autocovariance function of a general stationary long-memory process with memory parameter d in (-1/2, 1/2). The class of spectral densities considered includes as a special case the stationary and invertible ARFIMA(p,d,q) model. The leading term of the expansion is of the order O(1/k^{1-2d}), where k is the autocovariance order, consistent with the well-known power-law decay for such processes, and is shown to be accurate to an error of O(1/k^{3-2d}). The derivation uses Erdélyi's (1956) expansion for Fourier-type integrals when there are critical points at the boundaries of the range of integration - here the frequencies {0, 2pi}. Numerical evaluations show that the expansion is accurate even for small k in cases where the autocovariance sequence decays monotonically, and in other cases for moderate to large k. The approximations are easy to compute across a variety of parameter values and models.
    Keywords: Autocovariance, Asymptotic expansion, Critical point, Fourier integral, Long memory
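
For the ARFIMA(0,d,0) special case the exact autocovariance is available in closed form, so the leading O(1/k^{1-2d}) term can be checked directly. The sketch below is an illustration restricted to 0 < d < 1/2 (for negative d the signs of the gamma factors must be tracked separately) and uses unit innovation variance.

```python
import math

# Exact ARFIMA(0,d,0) autocovariance (unit innovation variance):
#   gamma(k) = Gamma(1-2d) Gamma(k+d) / ( Gamma(d) Gamma(1-d) Gamma(k+1-d) )
# and its leading asymptotic term C(d) * k^(2d-1), i.e. the O(1/k^{1-2d})
# power law quoted in the abstract.  Valid as written for 0 < d < 1/2.

def acv_exact(k, d):
    return math.exp(math.lgamma(1 - 2 * d) + math.lgamma(k + d)
                    - math.lgamma(d) - math.lgamma(1 - d)
                    - math.lgamma(k + 1 - d))

def acv_leading(k, d):
    c = math.exp(math.lgamma(1 - 2 * d) - math.lgamma(d) - math.lgamma(1 - d))
    return c * k ** (2 * d - 1)

d = 0.4
for k in (1, 10, 100, 1000):
    exact, lead = acv_exact(k, d), acv_leading(k, d)
    print(k, exact, lead, 1 - lead / exact)   # relative error -> 0
```

Since the O(1/k^{2-2d}) term in the ratio Gamma(k+d)/Gamma(k+1-d) vanishes here, the relative error of the leading term decays like 1/k^2, consistent with the O(1/k^{3-2d}) absolute error in the abstract.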

    Higher-order Improvements of the Parametric Bootstrap for Long-memory Gaussian Processes

    This paper determines coverage probability errors of both delta method and parametric bootstrap confidence intervals (CIs) for the covariance parameters of stationary long-memory Gaussian time series. CIs for the long-memory parameter d_0 are included. The results establish that the bootstrap provides higher-order improvements over the delta method. Analogous results are given for tests. The CIs and tests are based on one or other of two approximate maximum likelihood estimators. The first estimator solves the first-order conditions with respect to the covariance parameters of a "plug-in" log-likelihood function that has the unknown mean replaced by the sample mean. The second estimator does likewise for a plug-in Whittle log-likelihood. The magnitudes of the coverage probability errors for one-sided bootstrap CIs for covariance parameters for long-memory time series are shown to be essentially the same as they are with iid data. This occurs even though the mean of the time series cannot be estimated at the usual n^{1/2} rate.
    Keywords: Asymptotics, confidence intervals, delta method, Edgeworth expansion, Gaussian process, long memory, maximum likelihood estimator, parametric bootstrap, t statistic, Whittle likelihood
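
A minimal sketch of the plug-in Whittle estimator described above, for the simplest long-memory model ARFIMA(0,d,0), whose spectral density is proportional to |2 sin(lam/2)|^{-2d}. The grid search, sample size, and white-noise test case (true d = 0) are illustrative assumptions; the paper's bootstrap machinery is not reproduced here.

```python
import cmath
import math
import random

def periodogram(x):
    """Periodogram at Fourier frequencies 2*pi*j/n, j = 1..n//2,
    with the unknown mean replaced by the sample mean (the 'plug-in')."""
    n = len(x)
    mean = sum(x) / n
    I = []
    for j in range(1, n // 2 + 1):
        lam = 2 * math.pi * j / n
        s = sum((x[t] - mean) * cmath.exp(-1j * lam * t) for t in range(n))
        I.append(abs(s) ** 2 / (2 * math.pi * n))
    return I

def whittle_objective(d, I, n):
    """Concentrated Whittle objective with g_j(d) = |2 sin(lam_j/2)|^(-2d)
    and the innovation variance profiled out."""
    m = len(I)
    log_g = scaled = 0.0
    for j, Ij in enumerate(I, start=1):
        g = (2.0 * math.sin(math.pi * j / n)) ** (-2.0 * d)
        log_g += math.log(g)
        scaled += Ij / g
    return math.log(scaled / m) + log_g / m

def whittle_d(x):
    """Crude grid search over the stationary-invertible range of d."""
    I = periodogram(x)
    grid = [i / 100.0 for i in range(-49, 50)]
    return min(grid, key=lambda d: whittle_objective(d, I, len(x)))

rng = random.Random(7)
x = [rng.gauss(0.0, 1.0) for _ in range(512)]   # white noise: true d = 0
d_hat = whittle_d(x)
print(d_hat)   # close to 0 (asymptotic sd is about 0.035 at n = 512)
```

In practice the grid search would be replaced by a proper optimiser, and the bootstrap CIs of the paper would resample from the fitted Gaussian model around d_hat.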

    Expansions for Approximate Maximum Likelihood Estimators of the Fractional Difference Parameter

    This paper derives second-order expansions for the distributions of the Whittle and profile plug-in maximum likelihood estimators of the fractional difference parameter in the ARFIMA(0,d,0) model with unknown mean and variance. Both estimators are shown to be second-order pivotal. This extends earlier findings of Lieberman and Phillips (2001), who derived expansions for the Gaussian maximum likelihood estimator under the assumption that the mean and variance are known. One implication of the results is that the parametric bootstrap upper one-sided confidence interval provides an o(n^{-1}ln n) improvement over the delta method. For statistics that are not second-order pivotal, the improvement is generally only of the order o(n^{-1/2}ln n).
    Keywords: Bootstrap; Edgeworth expansion; Fractional differencing; Pivotal statistic

    Empirical Similarity

    An agent is asked to assess a real-valued variable Y_{p} based on certain characteristics X_{p} = (X_{p}^{1},...,X_{p}^{m}), and on a database consisting of (X_{i}^{1},...,X_{i}^{m},Y_{i}) for i = 1,...,n. A possible approach to combining past observations of X and Y with the current values of X to generate an assessment of Y is similarity-weighted averaging. It suggests that the predicted value of Y, Y_{p}^{s}, be the weighted average of all previously observed values Y_{i}, where the weight of Y_{i}, for every i = 1,...,n, is the similarity between the vector (X_{p}^{1},...,X_{p}^{m}) associated with Y_{p} and the previously observed vector (X_{i}^{1},...,X_{i}^{m}). We axiomatize this rule. We assume that, given every database, a predictor has a ranking over possible values, and we show that certain reasonable conditions on these rankings imply that they are determined by the proximity to a similarity-weighted average for a certain similarity function. The axiomatization does not suggest a particular similarity function, or even a particular functional form of this function. We therefore proceed to suggest that the similarity function be estimated from past observations. We develop tools of statistical inference for parametric estimation of the similarity function, for the case of a continuous as well as a discrete variable. Finally, we discuss the relationship of the proposed method to other methods of estimation and prediction.
    Keywords: Similarity, estimation
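
The similarity-weighted average itself is easy to state in code. The exponential-of-weighted-squared-distance similarity below is one common parametric choice used purely for illustration; as the abstract notes, the axiomatization does not pin down the functional form, and the toy database is invented.

```python
import math

# Similarity-weighted averaging: the assessment of Y_p is
#   Y_p^s = sum_i s(x_p, x_i) * Y_i / sum_i s(x_p, x_i).
# Illustrative parametric similarity (one common choice, NOT implied
# by the axioms):  s(x, x') = exp( - sum_k w_k * (x_k - x'_k)^2 ).

def similarity(xp, xi, w):
    return math.exp(-sum(wk * (a - b) ** 2 for wk, a, b in zip(w, xp, xi)))

def similarity_weighted_average(xp, data, w):
    """data: list of (x_i, y_i) pairs; returns the assessment of Y_p."""
    weights = [similarity(xp, xi, w) for xi, _ in data]
    total = sum(weights)
    return sum(s * y for s, (_, y) in zip(weights, data)) / total

# Toy database: Y roughly tracks the first characteristic.
db = [((0.0, 1.0), 10.0), ((1.0, 1.0), 20.0), ((2.0, 1.0), 30.0)]
y_hat = similarity_weighted_average((1.0, 1.0), db, w=(1.0, 1.0))
print(y_hat)   # exactly 20.0: the two symmetric neighbours cancel
```

Estimating the weight vector w from past observations, as the abstract proposes, would amount to maximising a fit criterion over w while holding the averaging rule fixed.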

    Rule-Based and Case-Based Reasoning in Housing Prices

    People reason about real-estate prices both in terms of general rules and in terms of analogies to similar cases. We propose to empirically test which mode of reasoning fits the data better. To this end, we develop the statistical techniques required for the estimation of the case-based model. It is hypothesized that case-based reasoning will have relatively more explanatory power in databases of rental apartments, whereas rule-based reasoning will have a relative advantage in sales data. We motivate this hypothesis on theoretical grounds, and find empirical support for it by comparing the two statistical techniques (rule-based and case-based) on two databases (rentals and sales).
    Keywords: Housing, similarity, regression, case-based reasoning, rule-based reasoning

    Refined Inference on Long Memory in Realized Volatility

    There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence, with a memory parameter (d) around 0.4 (Andersen et al. (2001), Martens et al. (2004)). The present paper provides some analytical explanations for this evidence and shows how recent results in Lieberman and Phillips (2004a, 2004b) can be used to refine statistical inference about d with little computational effort. In contrast to the standard asymptotic normal theory now used in the literature, which has an O(n^{-1/2}) error rate in rejection probabilities, the asymptotic approximation used here has an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, is simple to calculate and highly user-friendly. The method is applied to test whether the reported long-memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory.
    Keywords: ARFIMA; Edgeworth expansion; Fourier integral expansion; Fractional differencing; Improved inference; Long memory; Pivotal statistic; Realized volatility; Singularity