88 research outputs found

    A Direct Test of Rational Bubbles

    The recent introduction of new derivatives with future dividend payments as underlyings allows us to construct a direct test of rational bubbles. We suggest a simple, new method to calculate the fundamental value of stock indices. Using this approach, bubbles become observable. We calculate the time series of the bubble component of the Euro-Stoxx 50 index and investigate its properties. Using a formal hypothesis test, we find that the behavior of the bubble is compatible with rationality. Keywords: speculative rational bubbles, martingale tests, fundamental value, dividend expectation
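    A minimal sketch of the idea behind such a test: with dividend futures quoted for a few maturities, the fundamental value of the index can be proxied by the sum of discounted futures prices, and the bubble component is the gap between the observed index level and that proxy. All numbers and the truncated dividend horizon below are illustrative assumptions, not the paper's actual data or construction.

```python
import numpy as np

# Hypothetical inputs (illustrative only): prices of index dividend futures
# for maturities 1..5 years and the observed index level.
dividend_futures = np.array([120.0, 118.0, 115.0, 113.0, 110.0])  # index points
zero_rates = np.array([0.020, 0.022, 0.024, 0.025, 0.026])        # per maturity
index_level = 3600.0
maturities = np.arange(1, len(dividend_futures) + 1)

# Fundamental-value proxy: discounted dividend-futures prices summed over the
# quoted maturities (a full construction would also handle the horizon beyond
# the last quoted contract).
fundamental = np.sum(dividend_futures * np.exp(-zero_rates * maturities))

# Bubble component = observed index level minus fundamental-value proxy.
bubble = index_level - fundamental
print(f"fundamental proxy ~ {fundamental:.1f}, bubble component ~ {bubble:.1f}")
```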

    Estimating continuous-time income models

    While earnings processes are typically unobservable income flows which evolve in continuous time, observable income data are usually discrete, having been aggregated over time. We consider continuous-time earnings processes, specifically (non-linearly) transformed Ornstein-Uhlenbeck processes, and the associated integrated, i.e. time-aggregated, process. Both processes are characterised, and we show that time aggregation alters important statistical properties. The parameters of the earnings process are estimable by GMM, and the finite sample properties of the estimator are investigated. Our methods are applied to annual earnings data for the US. It is demonstrated that the model replicates important features of the earnings distribution well. Keywords: integrated non-linearly transformed Ornstein-Uhlenbeck process, temporal aggregation
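    The following sketch illustrates the kind of object studied here: an Ornstein-Uhlenbeck process simulated at a fine time step, passed through a non-linear transformation (an exponential is assumed purely for illustration), and then aggregated to annual observations. All parameter values are hypothetical; the sketch only shows how time aggregation changes the statistical properties of the observed series, not the paper's GMM estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: mean-reversion speed, long-run mean, volatility,
# a fine simulation step, and 40 years of data aggregated to annual figures.
kappa, mu, sigma = 0.5, 0.0, 0.3
steps_per_year, years = 252, 40
dt = 1.0 / steps_per_year
n = steps_per_year * years

# Exact discretisation of the Ornstein-Uhlenbeck process.
x = np.empty(n)
x[0] = mu
a = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * kappa))
for t in range(1, n):
    x[t] = mu + a * (x[t - 1] - mu) + sd * rng.standard_normal()

# A non-linear transformation (an exponential, assumed here) gives the latent
# earnings flow; annual observed income is the time-aggregated flow.
flow = np.exp(x)
annual_income = flow.reshape(years, steps_per_year).mean(axis=1)

# Time aggregation changes statistical properties, e.g. first-order autocorrelation.
acf1 = lambda y: np.corrcoef(y[:-1], y[1:])[0, 1]
print("ACF(1) of the flow:", round(acf1(flow), 3),
      " ACF(1) of annual income:", round(acf1(annual_income), 3))
```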

    Weak convergence to the t-distribution

    We present a new limit theorem for random means: if the sample size is not deterministic but has a negative binomial or geometric distribution, the limit distribution of the normalised random mean is a t-distribution with degrees of freedom depending on the shape parameter of the negative binomial distribution. Thus the limit distribution exhibits heavy tails, whereas limit laws for random sums do not achieve this unless the summands have infinite variance. The limit law may help explain several empirical regularities. We consider two such examples: first, a simple model is used to explain why city size growth rates are approximately t-distributed. Second, a random averaging argument can account for the heavy tails of high-frequency returns. Our empirical investigations demonstrate that these predictions are borne out by the data. Keywords: convergence, t-distribution, limit theorem
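    A small Monte Carlo sketch of the stated limit result: if the sample size is drawn from a negative binomial distribution and the mean is normalised by the square root of the expected sample size, the resulting statistic shows heavier tails than a normal and is well described by a fitted t-distribution. The parameters and the exact normalisation below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Shape parameter of the negative binomial sample size, its expected value,
# and the number of Monte Carlo replications (all illustrative choices).
r, expected_n, reps = 2.0, 500, 20_000
p = r / (r + expected_n)                             # chosen so that E[N] = expected_n
sizes = rng.negative_binomial(r, p, size=reps) + 1   # +1 avoids empty samples

# Normalised random mean of standard-normal summands for each random sample size.
stat = np.array([np.sqrt(expected_n) * rng.standard_normal(n).mean() for n in sizes])

# The statistic is heavy-tailed: its excess kurtosis is clearly positive and a
# Student t fit (rather than a normal one) describes it well.
df_hat, loc, scale = stats.t.fit(stat)
print("excess kurtosis:", round(float(stats.kurtosis(stat)), 2),
      " fitted t degrees of freedom:", round(df_hat, 2))
```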

    The Dynamics of Brand Equity: A Hedonic Regression Approach to the Laser Printer Market

    The authors develop a dynamic approach to measuring the evolution of the comparative brand premium, an important component of brand equity. A comparative brand premium is defined as the pairwise price difference between two products that are identical in every respect but brand. The model is based on hedonic regressions and grounded in economic theory. In contrast to existing approaches, the authors explicitly take into account and model the dynamics of the brand premia. By exploiting the premia's intertemporal dependence structure, the Bayesian estimation method produces more accurate estimators of the time paths of the brand premia than other methods. In addition, the authors present a novel yet straightforward way to construct confidence bands that cover the entire time series of brand premia with high probability. The data required for estimation are readily available, cheap, and observable on the market under investigation. The authors apply the dynamic hedonic regression to a large and detailed data set on laser printers gathered on a monthly basis over a four-year period. It transpires that, in general, the estimated brand premia change only gradually from period to period. Nevertheless, the method can diagnose sudden downturns of a comparative brand premium. The authors' dynamic hedonic regression approach facilitates the practical evaluation of brand management. Keywords: brand equity, price premium, hedonic regression, Bayesian estimation, dynamic linear model
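    As a point of reference, a static hedonic regression with a brand dummy already delivers a (log) comparative brand premium for otherwise identical products; the paper's contribution is to model the dynamics of these premia within a Bayesian dynamic linear model. The sketch below uses simulated data and ordinary least squares purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated laser-printer data (all coefficients and characteristics are made up).
n = 200
pages_per_min = rng.uniform(10, 40, n)
duplex = rng.integers(0, 2, n).astype(float)
brand_b = rng.integers(0, 2, n).astype(float)   # 1 = brand B, 0 = brand A

log_price = (4.5 + 0.02 * pages_per_min + 0.15 * duplex
             - 0.10 * brand_b + 0.05 * rng.standard_normal(n))

# Hedonic regression: log price on characteristics plus a brand dummy.
X = np.column_stack([np.ones(n), pages_per_min, duplex, brand_b])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# The brand-dummy coefficient is the (log) comparative brand premium of B over A
# for printers that are identical in every other respect.
print("estimated log brand premium (B vs A):", round(beta[3], 3))
```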

    Age-specific entrepreneurship and PAYG: public pensions in Germany


    Nonparametric inference for second order stochastic dominance

    This paper deals with nonparametric inference for second order stochastic dominance of two random variables. If their distribution functions are unknown, they have to be inferred from observed realizations; thus, any results on stochastic dominance are influenced by sampling errors. We establish two methods to take the sampling error into account. The first is based on the asymptotic normality of point estimators, while the second, relying on resampling techniques, can also cope with small sample sizes. Both methods are used to develop statistical tests for second order stochastic dominance. We argue, however, that tests based on resampling techniques are more useful in practical applications. Their power in small samples is estimated by Monte Carlo simulations for a couple of alternative distributions. We further show that these tests can also be used for testing for first order stochastic dominance, often having higher power than tests specifically designed for first order stochastic dominance, such as the Kolmogorov-Smirnov test or the Wilcoxon-Mann-Whitney test. The results of this paper are relevant in various fields such as finance, life testing and decision under risk. Keywords: second order stochastic dominance, nonparametric inference, permutation tests, Monte Carlo methods
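    A brief sketch of the nonparametric ingredient: second order stochastic dominance can be checked by comparing integrated empirical distribution functions, using the identity that the integral of the CDF up to t equals the sample mean of max(t − x, 0). The data and the supremum-type statistic below are illustrative assumptions; the paper's resampling tests would recompute such a statistic on relabelled pooled samples to obtain a p-value.

```python
import numpy as np

rng = np.random.default_rng(3)

def integrated_ecdf(sample, grid):
    """Integral of the empirical CDF up to each grid point, using the identity
    that the integral of F up to t equals the sample mean of max(t - x, 0)."""
    return np.mean(np.maximum(grid[:, None] - sample[None, :], 0.0), axis=1)

# Illustrative samples: y is a mean-preserving spread of x, so x should
# dominate y at second order.
x = rng.normal(0.0, 1.0, 300)
y = rng.normal(0.0, 2.0, 300)

grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), 200)
dx, dy = integrated_ecdf(x, grid), integrated_ecdf(y, grid)

# One-sided supremum-type statistic: values <= 0 are consistent with x
# dominating y at second order; a permutation test would recompute the
# statistic on relabelled pooled samples to attach a p-value.
stat = np.max(dx - dy)
print("sup difference of integrated ECDFs:", round(float(stat), 4))
```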

    Estimating the degree of interventionist policies in the run-up to EMU

    Based on a theoretical monetary exchange-rate model in continuous time, this paper establishes a sequential estimation framework which is capable of indicating central bank intervention in the run-up to a currency union. Using daily pre-EMU exchange-rate data for the countries of the current euro zone, we find mixed evidence of active pre-EMU intervention policies (so-called institutional frontloading strategies). Our estimation framework is highly relevant to economic and political agents operating in the financial markets of the upcoming EMU accession countries.

    Bayesian semiparametric multivariate stochastic volatility with application

    In this article, we establish a Cholesky-type multivariate stochastic volatility estimation framework, in which we let the innovation vector follow a Dirichlet process mixture (DPM), thus enabling us to model highly flexible return distributions. The Cholesky decomposition allows parallel univariate process modeling and creates potential for estimating high-dimensional specifications. We use Markov chain Monte Carlo methods for posterior simulation and predictive density computation. We apply our framework to a five-dimensional stock-return data set and analyze international stock-market co-movements among the largest stock markets. The empirical results show that our DPM modeling of the innovation vector yields substantial gains in out-of-sample density forecast accuracy when compared with the prevalent benchmark models.
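    A stripped-down sketch of the Cholesky idea, with fixed loadings and simulated log-volatility paths standing in for the latent stochastic volatility processes; the DPM innovations and the MCMC sampler are omitted, and all values are assumptions. Writing the covariance as L D_t L' means the return vector is built from independent scaled components, which is what allows the univariate processes to be handled in parallel.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative dimensions, fixed unit-lower-triangular loadings, and simulated
# log-volatility paths standing in for the latent stochastic volatility processes.
k, T = 3, 6
L = np.array([[1.0, 0.0, 0.0],
              [0.4, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
log_vol = rng.normal(-1.0, 0.2, size=(T, k))

# With Sigma_t = L D_t L', returns are y_t = L D_t^{1/2} eta_t: the first series
# behaves like a univariate SV process and each further series adds a regression
# on the preceding components, so the system splits into parallel univariate pieces.
returns = np.empty((T, k))
for t in range(T):
    eps = np.exp(0.5 * log_vol[t]) * rng.standard_normal(k)  # D_t^{1/2} * innovations
    returns[t] = L @ eps
print(returns.round(3))
```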
    • …