
    Business Cycle Asymmetries: Characterisation and Testing Based on Markov-Switching Autoregression

    We propose testing for business cycle asymmetries in Markov-switching autoregressive (MS-AR) models. We derive the parametric restrictions on MS-AR models that rule out types of asymmetry such as deepness, steepness, and sharpness, and set out a testing procedure based on Wald statistics that have standard asymptotic distributions. For a two-regime model, such as that popularized by Hamilton (1989), we show that deepness implies sharpness (and vice versa), while the process is always non-steep. We illustrate with two- and three-state MS models of US GNP growth, and with models of US output and employment. Our findings are compared with those obtained from standard non-parametric tests. Keywords: business cycles; tests.
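    As a rough illustration of the modelling setup (not the authors' testing procedure), the sketch below fits a two-regime MS-AR model of the Hamilton (1989) type with statsmodels; the growth series and lag order are placeholders, and the asymmetry restrictions would be tested on the fitted parameters.

```python
# Minimal sketch: fit a two-regime Markov-switching AR(4) model of the
# Hamilton (1989) type. `gnp_growth` is a placeholder pandas Series of
# quarterly growth rates; the deepness/steepness/sharpness Wald tests
# described in the paper would be built on top of the fitted parameters.
import statsmodels.api as sm

def fit_two_regime_ms_ar(gnp_growth):
    mod = sm.tsa.MarkovAutoregression(
        gnp_growth, k_regimes=2, order=4, switching_ar=False
    )
    res = mod.fit()
    print(res.summary())  # regime means, AR terms, transition probabilities
    return res
```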

    Robust estimators of AR models: a comparison

    Many regression-estimation techniques have been extended to cover the case of dependent observations. The majority of such techniques are developed from the classical least squares, M, and GM approaches, and their properties have been investigated on both theoretical and empirical grounds. However, the behavior of some alternative methods, despite their satisfactory performance in the regression case, has not received equal attention in the context of time series. A simulation study of four robust estimators for autoregressive models containing innovation or additive outliers is presented. The robustness and efficiency properties of the methods are exhibited, some finite-sample results are discussed in combination with theoretical properties, and the relative merits of the estimators are viewed in connection with the outlier-generating scheme.
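    To give a flavour of this kind of comparison (with a Huber M-type fit standing in for the four estimators actually studied), the sketch below contaminates an AR(1) series with additive outliers and contrasts a least-squares fit of the lag regression with a robust one.

```python
# Sketch: compare an ordinary least-squares AR(1) fit with a Huber M-type
# fit on a series contaminated by additive outliers. Illustrative only;
# the study compares four specific robust estimators.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, phi = 500, 0.6
x = np.zeros(n)
for t in range(1, n):                           # clean AR(1) process
    x[t] = phi * x[t - 1] + rng.normal()
y = x.copy()
idx = rng.choice(n, size=10, replace=False)
y[idx] += rng.normal(scale=10, size=10)         # additive outliers

Y, X = y[1:], sm.add_constant(y[:-1])           # regress y_t on y_{t-1}
ols = sm.OLS(Y, X).fit()
rob = sm.RLM(Y, X, M=sm.robust.norms.HuberT()).fit()
print("OLS AR(1) coefficient:    ", ols.params[1])
print("Huber M AR(1) coefficient:", rob.params[1])
```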

    Forecasting spot electricity prices: A comparison of parametric and semiparametric time series models

    This empirical paper compares the accuracy of 12 time series methods for short-term (day-ahead) spot price forecasting in auction-type electricity markets. The methods considered include standard autoregression (AR) models and their extensions (spike-preprocessed, threshold, and semiparametric autoregressions, i.e. AR models with nonparametric innovations), as well as mean-reverting jump diffusions. The methods are compared using a time series of hourly spot prices and system-wide loads for California, and a series of hourly spot prices and air temperatures for the Nordic market. We find evidence that (i) models with system load as the exogenous variable generally perform better than pure price models, while this is not necessarily the case when air temperature is considered as the exogenous variable, and that (ii) semiparametric models generally lead to better point and interval forecasts than their competitors; more importantly, they have the potential to perform well under diverse market conditions. Keywords: electricity market, price forecast, autoregressive model, nonparametric maximum likelihood, interval forecast, conditional coverage.
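    A stripped-down version of the baseline idea is a day-ahead AR-X regression in which the price for hour t is explained by prices 24 and 48 hours earlier plus the forecast system load. The sketch below uses placeholder hourly arrays `prices` and `load`; the paper's 12 models (spike-preprocessed, threshold, semiparametric, jump-diffusion) are considerably richer than this baseline.

```python
# Sketch: minimal day-ahead AR-X point forecaster. `prices` and `load`
# are placeholder 1-D numpy arrays of hourly observations.
import numpy as np

def fit_arx_day_ahead(prices, load):
    y = prices[48:]                      # target price at hour t
    X = np.column_stack([
        np.ones_like(y),                 # intercept
        prices[24:-24],                  # price at t-24
        prices[:-48],                    # price at t-48
        load[48:],                       # exogenous system load at t
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def forecast_next_hour(beta, prices, load_forecast):
    x = np.array([1.0, prices[-24], prices[-48], load_forecast])
    return x @ beta
```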

    A Local Instrumental Variable Estimation Method for Generalized Additive Volatility Models

    We investigate a new separable nonparametric model for time series, which includes many ARCH models and AR models already discussed in the literature. We also propose a new estimation procedure based on a localization of the econometric method of instrumental variables. Our method has considerable computational advantages over the competing marginal integration or projection method. Keywords: ARCH, kernel estimation, nonparametric, volatility.
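    Loosely, "localizing" the instrumental-variable estimator means solving the usual (just-identified) IV normal equations with kernel weights centred at the evaluation point. The toy function below illustrates that idea only; it is not the paper's estimator for generalized additive volatility models, and all argument names are placeholders.

```python
# Sketch: kernel-weighted (local) instrumental-variable estimate at a point
# x0. Z holds instruments, X regressors (same number of columns), y the
# response, x_obs the variable the localization runs over, h a bandwidth.
import numpy as np

def local_iv(y, X, Z, x_obs, x0, h):
    w = np.exp(-0.5 * ((x_obs - x0) / h) ** 2)   # Gaussian kernel weights
    W = np.diag(w)
    # Weighted IV estimator: (Z' W X)^{-1} Z' W y
    return np.linalg.solve(Z.T @ W @ X, Z.T @ W @ y)
```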

    Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization

    As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of l1-norm optimization techniques, and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images, or across different patches in a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches. The AR models best fitted to a given patch are adaptively selected to regularize the image's local structures. Second, the image non-local self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception. Comment: 35 pages. This paper is under review in IEEE TI
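    The core mechanics can be hinted at with generic building blocks: pick, per patch, the dictionary (set of bases) with the smallest reconstruction error, and sparse-code the patch under an l1 penalty. The sketch below uses a plain ISTA solver and placeholder inputs (`patch`, `dictionaries`); it is a generic illustration, not the paper's adaptively regularized algorithm.

```python
# Sketch: adaptive dictionary selection plus l1 sparse coding via ISTA.
# `dictionaries` is a placeholder list of (n_pixels, n_atoms) arrays with
# unit-norm columns; `patch` is a vectorized image patch.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(patch, D, lam=0.1, n_iter=100):
    alpha = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):                        # ISTA iterations
        grad = D.T @ (D @ alpha - patch)
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha

def best_dictionary(patch, dictionaries):
    # Adaptive selection: keep the dictionary with the smallest l2
    # reconstruction error for this particular patch.
    errors = [np.linalg.norm(patch - D @ sparse_code(patch, D))
              for D in dictionaries]
    return dictionaries[int(np.argmin(errors))]
```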