
    Line Profiles from Discrete Kinematic Data

    We develop a method to extract the shape information of line profiles from discrete kinematic data. The Gauss-Hermite expansion, which is widely used to describe the line-of-sight velocity distributions extracted from absorption spectra of elliptical galaxies, is not readily applicable to samples of discrete stellar velocity measurements, accompanied by individual measurement errors and probabilities of membership. We introduce two parameter families of probability distributions describing symmetric and asymmetric distortions of the line profiles from Gaussianity. These are used as the basis of a maximum likelihood estimator to quantify the shape of the line profiles. Tests show that the method outperforms a Gauss-Hermite expansion for discrete data, with a lower limit of approximately 2 for the relative gain at sample sizes N ≈ 800. To ensure that our methods give reliable descriptions of the shape, we develop an efficient test to assess the statistical quality of the obtained fit. As an application, we turn our attention to the discrete velocity datasets of the dwarf spheroidals of the Milky Way. In Sculptor, Carina and Sextans the symmetric deviations are consistent with velocity distributions more peaked than Gaussian. In Fornax, instead, the symmetric deviations of the line profile evolve from a more peaked to a more flat-topped distribution on moving outwards. These results suggest a radially biased orbital structure for the outer parts of Sculptor, Carina and Sextans, whereas tangential anisotropy is favoured in Fornax. This is consistent with a picture in which Fornax may have had a different evolutionary history from Sculptor, Carina and Sextans. (MNRAS, accepted for publication.)
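    The paper's own symmetric and asymmetric distribution families are not reproduced here. As a hedged sketch of the discrete-data idea, the snippet below fits a plain Gaussian line profile to individual stellar velocities by maximum likelihood, convolving each star's measurement error into the model variance; the parameter values, error sizes and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, v, dv):
    """Negative log-likelihood of a Gaussian line profile, with each
    star's measurement error dv added in quadrature to the dispersion."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    var = sigma**2 + dv**2                      # intrinsic + measurement
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (v - mu)**2 / var)

rng = np.random.default_rng(0)
n = 800                                         # sample size from the abstract
true_mu, true_sigma = 10.0, 8.0                 # km/s, illustrative
dv = np.full(n, 2.0)                            # per-star measurement errors
v = rng.normal(true_mu, np.sqrt(true_sigma**2 + dv**2))
fit = minimize(neg_log_like, x0=[0.0, 5.0], args=(v, dv), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
```

    Because the measurement errors enter each star's likelihood term individually, stars with larger errors are automatically down-weighted, which is the key advantage over binning velocities into a spectrum-like profile first.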

    Robust estimation and inference for heavy tailed GARCH

    We develop two new estimators for a general class of stationary GARCH models with possibly heavy-tailed, asymmetrically distributed errors, covering processes with symmetric and asymmetric feedback such as GARCH, Asymmetric GARCH, VGARCH and Quadratic GARCH. The first estimator arises from negligibly trimming the QML criterion equations according to error extremes. The second embeds negligibly transformed errors into the QML score equations to obtain a Method of Moments estimator. In this case, we exploit a sub-class of redescending transforms that includes tail-trimming and functions popular in the robust estimation literature, and we re-center the transformed errors to minimize small-sample bias. The negligible transforms allow both identification of the true parameter and asymptotic normality. We present a consistent estimator of the covariance matrix that permits classic inference without knowledge of the rate of convergence. A simulation study shows that both of our estimators outperform existing ones, including QML, Log-LAD, and two types of non-Gaussian QML (Laplace and Power-Law), in sharpness and approximate normality. Finally, we apply the tail-trimmed QML estimator to financial data. (Published at http://dx.doi.org/10.3150/14-BEJ616 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).)

    Information-Theoretic Distribution Test with Application to Normality

    We derive general distribution tests based on the Maximum Entropy density method. The proposed tests are derived from maximizing the differential entropy subject to moment constraints. By exploiting the equivalence between the Maximum Entropy and Maximum Likelihood estimates of the general exponential family, we can use the conventional Likelihood Ratio, Wald and Lagrange Multiplier testing principles in the maximum entropy framework. In particular, we use the Lagrange Multiplier method to derive tests for normality and their asymptotic properties. Monte Carlo evidence suggests that the proposed tests have desirable small-sample properties and often outperform commonly used tests such as the Jarque-Bera test and the Kolmogorov-Smirnov-Lilliefors test for normality. We show that the proposed tests can be extended to tests based on regression residuals and non-iid data in a straightforward manner. We apply the proposed tests to the residuals from a stochastic production frontier model and reject the normality hypothesis.
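    The paper's own LM tests come from moment-constrained maximum-entropy densities and are not reproduced here. As a simple moment-based point of comparison, the snippet below implements the Jarque-Bera statistic mentioned in the abstract, computed from sample skewness and kurtosis (this is the generic textbook version, not the paper's test).

```python
import numpy as np
from scipy.stats import chi2

def jarque_bera(x):
    """Jarque-Bera LM-type normality test from sample skewness/kurtosis."""
    n = len(x)
    z = x - x.mean()
    m2 = np.mean(z**2)
    skew = np.mean(z**3) / m2**1.5
    kurt = np.mean(z**4) / m2**2
    jb = n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
    pval = chi2.sf(jb, df=2)        # asymptotically chi-squared with 2 df
    return jb, pval

rng = np.random.default_rng(2)
jb_norm, p_norm = jarque_bera(rng.normal(size=5000))   # should not reject
jb_lap, p_lap = jarque_bera(rng.laplace(size=5000))    # excess kurtosis: reject
```

    The Laplace sample has population kurtosis 6, so its statistic is large and normality is rejected decisively, while the Gaussian sample's statistic stays small.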

    Modeling and predicting market risk with Laplace-Gaussian mixture distributions

    While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining them and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and, in fact, also nests the Gaussian and Laplace. JEL Classification: C16, C50. March 2005.
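    The abstract does not spell out the two combination methods; one natural reading is a finite mixture with a Normal and a Laplace component. The sketch below fits such a mixture by maximum likelihood on synthetic data; the common-mean restriction, the synthetic sample and the starting values are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def mix_pdf(x, w, mu, s_n, s_l):
    """Two-component density: w * Normal + (1 - w) * Laplace, common mean."""
    norm = np.exp(-0.5 * ((x - mu) / s_n)**2) / (s_n * np.sqrt(2 * np.pi))
    lap = np.exp(-np.abs(x - mu) / s_l) / (2 * s_l)
    return w * norm + (1 - w) * lap

def neg_ll(params, x):
    w, mu, s_n, s_l = params
    if not (0 < w < 1) or s_n <= 0 or s_l <= 0:
        return np.inf
    return -np.sum(np.log(mix_pdf(x, w, mu, s_n, s_l)))

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 3000), rng.laplace(0, 1, 1000)])
fit = minimize(neg_ll, x0=[0.5, 0.0, 1.0, 1.0], args=(x,),
               method="Nelder-Mead", options={"maxiter": 2000})
w_hat, mu_hat, s_n_hat, s_l_hat = fit.x
```

    Note that the two components are hard to tell apart near the center; what the Laplace component buys is the heavier, kinked tails, which is exactly what matters for risk prediction.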

    Volatility forecasting

    Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.

    Essays in the econometrics of dynamic duration models with application to tick by tick financial data.

    This thesis organizes three contributions on the econometrics of durations in the context of high-frequency financial data. We provide existence conditions and analytical expressions for the moments of Log-ACD models, focusing on the dispersion index and the autocorrelation function, and compare them with those of ACD and SCD models. We apply the efficient importance sampling (EIS) method for computing the high-dimensional integral required to evaluate the likelihood function of the stochastic conditional duration (SCD) model. We compare EIS-based ML estimation with QML estimation based on the Kalman filter, and find that EIS-ML estimation is statistically more precise, at the cost of an acceptable loss in computation speed. We illustrate this with simulated and real data, and show that the EIS-ML method is easy to apply to extensions of the SCD model. Finally, we carry out a nonparametric analysis of financial durations, using an existing algorithm to describe nonparametrically the dynamics of the process in terms of its lagged realizations and of a latent variable, its conditional mean. The devices needed to apply the algorithm effectively to our dataset are presented. We show that, on simulated data, the nonparametric procedure yields better estimates than those delivered by an incorrectly specified parametric method, while on a real dataset the nonparametric analysis can convey information on the nature of the data-generating process that may not be captured by the parametric specification.
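    The thesis's moment results for Log-ACD models are not reproduced here. For orientation, the snippet below simulates one common Log-ACD(1,1) specification with unit-exponential innovations and checks two qualitative features of such duration processes: positivity by construction and positive autocorrelation (duration clustering). The parameter values are assumptions.

```python
import numpy as np

def simulate_log_acd(omega, alpha, beta, n, rng):
    """Simulate a Log-ACD(1,1): x_i = psi_i * eps_i with
    ln psi_i = omega + alpha * ln x_{i-1} + beta * ln psi_{i-1},
    eps_i iid unit exponential (one common specification).
    Modeling ln psi_i keeps durations positive with no sign constraints."""
    x = np.empty(n)
    log_psi = omega / (1 - alpha - beta)       # start at the fixed point
    for i in range(n):
        x[i] = np.exp(log_psi) * rng.exponential()
        log_psi = omega + alpha * np.log(x[i]) + beta * log_psi
    return x

rng = np.random.default_rng(4)
x = simulate_log_acd(omega=0.05, alpha=0.1, beta=0.8, n=5000, rng=rng)
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]        # first-order autocorrelation
```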

    Heavy-Tailed Features and Empirical Analysis of the Limit Order Book Volume Profiles in Futures Markets

    This paper poses a few fundamental questions regarding the attributes of the volume profile of a Limit Order Book's stochastic structure, taking into consideration intraday and interday statistical features, the impact of different exchange features and the impact of market participants in different asset sectors. The paper aims to address the following questions: 1. Is there statistical evidence that heavy-tailed sub-exponential volume profiles occur at different levels of the Limit Order Book on the bid and ask, and if so, does this happen on intraday or interday time scales? 2. In futures exchanges, are heavy-tail features exchange-dependent (CBOT, CME, EUREX, SGX and COMEX) or asset-class-dependent (government bonds, equities and precious metals), and do they occur in ultra-high-frequency (<1 sec) or mid-range high-frequency (1 sec to 10 min) data? 3. Does the presence of stochastic heavy-tailed volume profile features evolve in a manner that would inform or be indicative of market participant behaviors, such as high-frequency algorithmic trading, quote stuffing and price discovery, intra-daily? 4. Is there statistical evidence of a need to consider dynamic behavior of the parameters of models for Limit Order Book volume profiles on an intraday time scale? Progress on each question is obtained via statistically rigorous results that verify the empirical findings for an unprecedentedly large set of futures market LOB data. The data comprise several exchanges, several futures asset classes and all trading days of 2010, using market depth (Type II) order book data to 5 levels on the bid and ask.
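    The LOB dataset itself is not available here. As a sketch of how question 1 might be probed, the snippet below compares maximum-likelihood fits of a light-tailed (exponential) and a sub-exponential (lognormal) model on synthetic stand-in volumes; the lognormal data-generating choice and its parameters are assumptions, not the paper's findings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic stand-in for the volume profile at one LOB level.
volumes = rng.lognormal(mean=3.0, sigma=1.2, size=5000)

# Maximum-likelihood fits, with the location parameter pinned at zero
# so both models are supported on positive volumes.
ll_expon = stats.expon.logpdf(volumes, *stats.expon.fit(volumes, floc=0)).sum()
ll_lognorm = stats.lognorm.logpdf(volumes, *stats.lognorm.fit(volumes, floc=0)).sum()

# Both models have effectively one free parameter here, so the raw
# log-likelihood gap already indicates which tail behavior fits better.
heavier_tail_preferred = ll_lognorm > ll_expon
```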

    On the Probability Distribution of Economic Growth

    Normality is often mechanically, and without sufficient reason, assumed in econometric models. In this paper three important and significantly heteroscedastic GDP series are studied. Heteroscedasticity is removed and the distributions of the filtered series are then compared to the Normal, Normal-Mixture and Normal-Asymmetric Laplace (NAL) distributions. NAL represents a reduced and empirical form of the Aghion and Howitt (1992) model for economic growth, based on Schumpeter's idea of creative destruction. Statistical properties of the NAL distributions are provided and it is shown that NAL competes well with the alternatives. Keywords: the Aghion-Howitt model, asymmetric innovations, mixed normal-asymmetric Laplace distribution, kernel density estimation, Method of Moments estimation.
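    The NAL density itself (a Normal convolved with an asymmetric Laplace) is not reproduced here. As a sketch of its asymmetric-Laplace ingredient, the snippet below implements one standard parametrization and numerically checks its normalization and skewness; the parameter values are illustrative assumptions.

```python
import numpy as np

def asym_laplace_pdf(x, m, lam, kappa):
    """Asymmetric Laplace density (Kotz et al. parametrization):
    exponential decay at rate lam*kappa above the mode m and at rate
    lam/kappa below it, so kappa != 1 produces skewness."""
    x = np.asarray(x, dtype=float)
    norm = lam / (kappa + 1.0 / kappa)
    above = np.exp(-lam * kappa * (x - m))
    below = np.exp((lam / kappa) * (x - m))
    return norm * np.where(x >= m, above, below)

# Sanity checks on a fine grid: the density integrates to one, and for
# kappa = 0.5 exactly (1/kappa) / (kappa + 1/kappa) = 0.8 of the mass
# sits above the mode (right skew).
grid = np.linspace(-20.0, 20.0, 200001)
dx = grid[1] - grid[0]
pdf = asym_laplace_pdf(grid, m=0.0, lam=1.0, kappa=0.5)
area = pdf.sum() * dx
mass_above = pdf[grid >= 0.0].sum() * dx
```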