    Estimation of a Covariance Matrix with Zeros

    We consider estimation of the covariance matrix of a multivariate random vector under the constraint that certain covariances are zero. We first present an algorithm, which we call Iterative Conditional Fitting, for computing the maximum likelihood estimator of the constrained covariance matrix, under the assumption of multivariate normality. In contrast to previous approaches, this algorithm has guaranteed convergence properties. Dropping the assumption of multivariate normality, we show how to estimate the covariance matrix in an empirical likelihood approach. These approaches are then compared via simulation and on an example of gene expression.
    Comment: 25 pages.
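
    A minimal sketch of the constrained maximization problem, not the paper's Iterative Conditional Fitting algorithm: under a Gaussian model, the log-likelihood is maximized over the free entries of the covariance matrix while the entries flagged in `zero_mask` are held at zero. The function name, the `zero_mask` argument, and the use of a generic optimizer are illustrative assumptions.

```python
# Illustrative sketch: Gaussian ML estimation of a covariance matrix with
# prescribed zero entries, via generic optimization over the free entries.
# (The paper's Iterative Conditional Fitting algorithm is not reproduced here.)
import numpy as np
from scipy.optimize import minimize

def constrained_cov_mle(S, zero_mask):
    """S: sample covariance; zero_mask[i, j] = True forces Sigma[i, j] = 0."""
    p = S.shape[0]
    free = np.triu(~zero_mask)               # free upper-triangular entries
    idx = np.where(free)

    def unpack(theta):
        Sigma = np.zeros((p, p))
        Sigma[idx] = theta
        return Sigma + np.triu(Sigma, 1).T   # symmetrize

    def negloglik(theta):                    # proportional to -log L(Sigma)
        Sigma = unpack(theta)
        w = np.linalg.eigvalsh(Sigma)
        if w.min() <= 1e-10:                 # crude positive-definiteness guard
            return 1e10
        return np.sum(np.log(w)) + np.trace(np.linalg.solve(Sigma, S))

    theta0 = S[idx]                          # start from the sample covariance
    res = minimize(negloglik, theta0, method="Nelder-Mead")
    return unpack(res.x)
```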

    Parametric inference and forecasting in continuously invertible volatility models

    We introduce the notion of continuously invertible volatility models, which relies on a Lyapunov condition and a regularity condition. We show that it is almost equivalent to the efficiency of volatility forecasting in the parametric inference approach based on the Stochastic Recurrence Equation (SRE) given in Straumann (2005). Under very weak assumptions, we prove the strong consistency and the asymptotic normality of an estimator based on the SRE. From this parametric estimation, we deduce a natural forecast of the volatility that is strongly consistent. We successfully apply this approach to recover known results on univariate and multivariate GARCH-type models, where our estimator coincides with the QMLE. In the EGARCH(1,1) model, we apply this approach to find a strongly consistent forecast and to prove that our estimator is asymptotically normal when the limiting covariance matrix exists. Finally, we give some encouraging empirical results of our approach on simulations and real data.
    Keywords: invertibility, volatility models, parametric estimation, strong consistency, asymptotic normality, asymmetric GARCH, exponential GARCH, stochastic recurrence equation, stationarity.
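
    As an illustration of the SRE-based approach in the special case where, per the abstract, the estimator coincides with the QMLE, here is a minimal GARCH(1,1) sketch: the volatility is filtered through the recurrence and the Gaussian quasi-likelihood is minimized. The function name, starting values, and initialization are assumptions.

```python
# Illustrative GARCH(1,1) QML sketch: sigma2_t = omega + alpha*eps_{t-1}^2
# + beta*sigma2_{t-1} is the stochastic recurrence; the Gaussian
# quasi-likelihood is minimized over (omega, alpha, beta).
import numpy as np
from scipy.optimize import minimize

def garch11_qmle(eps):
    """eps: array of observed returns; returns estimates of (omega, alpha, beta)."""
    def neg_quasi_loglik(theta):
        omega, alpha, beta = theta
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return 1e10                      # keep the recursion stable
        sigma2 = np.empty(len(eps))
        sigma2[0] = eps.var()                # arbitrary start; invertibility
        for t in range(1, len(eps)):         # makes its effect die out
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        return np.sum(np.log(sigma2) + eps ** 2 / sigma2)

    res = minimize(neg_quasi_loglik, x0=[0.1 * eps.var(), 0.05, 0.90],
                   method="Nelder-Mead")
    return res.x                             # (omega_hat, alpha_hat, beta_hat)
```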

    Empirical likelihood confidence intervals for complex sampling designs

    We define an empirical likelihood approach which gives consistent design-based confidence intervals that can be calculated without the need for variance estimates, design effects, resampling, joint inclusion probabilities, or linearization, even when the point estimator is not linear. It can be used to construct confidence intervals for a large class of sampling designs and estimators which are solutions of estimating equations. It can be used for means, regression coefficients, quantiles, totals, or counts, even when the population size is unknown. It can be used with large sampling fractions and naturally includes calibration constraints. It can be viewed as an extension of the empirical likelihood approach to complex survey data. This approach is computationally simpler than the pseudo-empirical likelihood and bootstrap approaches. Our simulation study shows that the proposed confidence interval may give better coverage than confidence intervals based on linearization, the bootstrap, and the pseudo-empirical likelihood. It also shows that, under complex sampling designs, standard confidence intervals based on normality may have poor coverage, because point estimators may not follow a normal sampling distribution and their variance estimators may be biased.
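
    For intuition only, the sketch below implements the textbook empirical likelihood confidence interval for a mean under simple random sampling; the paper's design-based extension to complex designs is not reproduced. Function names and numerical tolerances are illustrative.

```python
# Illustrative empirical likelihood confidence interval for a mean under
# simple random sampling: -2 log R(mu) is compared to a chi-square quantile.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_statistic(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:          # mu outside the convex hull
        return np.inf
    # Solve sum z_i / (1 + lam*z_i) = 0 for the Lagrange multiplier lam,
    # which must keep every weight 1 / (n * (1 + lam*z_i)) positive.
    lo, hi = (-1 + 1e-10) / z.max(), (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))

def el_confidence_interval(x, level=0.95):
    cut = chi2.ppf(level, df=1)               # Wilks-type calibration
    lo = brentq(lambda m: el_statistic(x, m) - cut, x.min() + 1e-6, x.mean())
    hi = brentq(lambda m: el_statistic(x, m) - cut, x.mean(), x.max() - 1e-6)
    return lo, hi
```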

    On some strategies using auxiliary information for estimating finite population mean

    This paper presents an empirical investigation of the performance of five strategies for estimating the finite population mean using parameters such as the mean or variance (or both) of an auxiliary variable. The criteria used for the choice among these strategies are bias, efficiency, and approach to normality (asymmetry).
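
    The abstract does not spell out the five strategies, but two classical strategies of this kind, the ratio and regression estimators, are easy to state; the sketch below shows their standard textbook forms (an assumption, not necessarily the paper's exact estimators).

```python
# Illustrative textbook estimators of a finite population mean of y that use
# an auxiliary variable x whose population mean X_bar is known.
import numpy as np

def ratio_estimator(y, x, X_bar):
    # y_bar_R = (y_bar / x_bar) * X_bar
    return y.mean() / x.mean() * X_bar

def regression_estimator(y, x, X_bar):
    # y_bar_lr = y_bar + b * (X_bar - x_bar), b the sample regression slope
    b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
    return y.mean() + b * (X_bar - x.mean())
```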

    Continuous invertibility and stable QML estimation of the EGARCH(1,1) model

    We introduce the notion of continuous invertibility on a compact set for volatility models driven by a Stochastic Recurrence Equation (SRE). We prove the strong consistency of the Quasi Maximum Likelihood Estimator (QMLE) when the optimization procedure is carried out on a continuously invertible domain. This approach gives, for the first time, the strong consistency of the QMLE used by Nelson (1991) for the EGARCH(1,1) model, under explicit but non-observable conditions. In practice, we propose to stabilize the QMLE by constraining the optimization procedure to an empirical continuously invertible domain. The new method, called Stable QMLE (SQMLE), is strongly consistent when the observations follow an invertible EGARCH(1,1) model. We also give the asymptotic normality of the SQMLE under additional minimal assumptions.
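
    The invertibility question concerns the stability of the volatility filter when it is run as a recursion on the observations. Below is a minimal sketch of that filter for a common EGARCH(1,1) parameterization; the exact functional form and names are assumptions.

```python
# Illustrative EGARCH(1,1) log-volatility filter inverted on the returns:
# the initialization error h0 must die out along the recursion, which is
# precisely what the continuous-invertibility condition controls.
import numpy as np

def egarch11_filter(eps, omega, beta, alpha, gamma, h0=0.0):
    """Recover log-variances h_t from returns eps via
    h_t = omega + beta*h_{t-1} + alpha*|z_{t-1}| + gamma*z_{t-1},
    with standardized innovations z_t = eps_t * exp(-h_t / 2)."""
    h = np.empty(len(eps))
    h[0] = h0
    for t in range(1, len(eps)):
        z = eps[t - 1] * np.exp(-h[t - 1] / 2)
        h[t] = omega + beta * h[t - 1] + alpha * abs(z) + gamma * z
    return h
```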

    Combining parametric and nonparametric approaches for more efficient time series prediction

    We introduce a two-step procedure for more efficient nonparametric prediction of a strictly stationary process admitting an ARMA representation. The procedure is based on the estimation of the ARMA representation, followed by a nonparametric regression where the ARMA residuals are used as explanatory variables. Compared to standard nonparametric regression methods, the number of explanatory variables can be reduced because our approach exploits the linear dependence of the process. We establish consistency and asymptotic normality results for our estimator. A Monte Carlo study and an empirical application on stock market indices suggest that significant gains can be achieved with our approach.
    Keywords: ARMA representation; noisy data; nonparametric regression; optimal prediction.
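
    A minimal sketch of the two-step idea, assuming an ARMA(1,1) first step and a Nadaraya-Watson second step with a single lagged residual as the explanatory variable; the model order, bandwidth, and names are all illustrative choices.

```python
# Illustrative two-step predictor: fit an ARMA model, then kernel-regress the
# series on the lagged ARMA residuals, which summarize its linear dependence.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def two_step_forecast(x, order=(1, 0, 1), bandwidth=0.5):
    # Step 1: estimate the ARMA representation and extract residuals.
    e = ARIMA(x, order=order).fit().resid

    # Step 2: Nadaraya-Watson regression of x_t on e_{t-1}.
    u, v = e[:-1], x[1:]
    def m_hat(e_new):
        w = np.exp(-0.5 * ((u - e_new) / bandwidth) ** 2)   # Gaussian kernel
        return np.sum(w * v) / np.sum(w)

    return m_hat(e[-1])            # one-step-ahead forecast of the series
```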

    A Non-Gaussian Approach to Risk Measures

    Reliable calculations of financial risk require that the fat-tailed nature of price changes is included in risk measures. To this end, a non-Gaussian approach to financial risk management is presented, modeling the power-law tails of the returns distribution in terms of a Student-t distribution. Non-Gaussian closed-form solutions for Value-at-Risk and Expected Shortfall are obtained, and standard formulae known in the literature under the normality assumption are recovered as a special case. The implications of the approach for risk management are demonstrated through an empirical analysis of financial time series from the Italian stock market and in comparison with the results of the most widely used procedures of quantitative finance. Particular attention is paid to quantifying the size of the errors affecting the market risk measures obtained according to different methodologies, by employing a bootstrap technique.
    Comment: LaTeX, 15 pages, 3 figures and 5 tables; 68% confidence levels for tail exponents corrected, conclusions unchanged.
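
    The closed-form measures for a Student-t model are standard and can be written down directly; the location-scale parameterization below and the function name are assumptions, not necessarily the paper's exact conventions.

```python
# Illustrative closed-form VaR and Expected Shortfall for returns modeled as
# mu + scale * T_nu, with T_nu a standard Student-t variable (nu > 1).
from scipy.stats import t as student_t

def var_es_student_t(alpha, nu, mu=0.0, scale=1.0):
    """alpha: lower-tail level (e.g. 0.01); returns (VaR, ES) as positive losses."""
    q = student_t.ppf(alpha, nu)                      # standardized quantile
    var = -(mu + scale * q)
    # E[T | T <= q] = -pdf(q) * (nu + q^2) / ((nu - 1) * alpha) for Student-t
    es = -mu + scale * student_t.pdf(q, nu) * (nu + q ** 2) / ((nu - 1) * alpha)
    return var, es

# As nu grows, these approach the Gaussian formulae recovered in the paper.
print(var_es_student_t(alpha=0.01, nu=4))
```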