
    Bootstrapping Cointegrating Regressions

    In this paper, we consider bootstrapping cointegrating regressions. It is shown that the bootstrap, if properly implemented, generally yields consistent estimators and test statistics for cointegrating regressions. We do not assume any specific data generating process, and employ the sieve bootstrap based on approximated finite-order vector autoregressions for the regression errors and the first differences of the regressors. In particular, we establish the bootstrap consistency of the OLS method. The bootstrap can thus be used to correct for the finite-sample bias of the OLS estimator and to approximate the asymptotic critical values of OLS-based test statistics in general cointegrating regressions. The bootstrap OLS procedure, however, is not efficient. For efficient estimation and hypothesis testing, we consider the procedure proposed by Saikkonen (1991) and Stock and Watson (1993), which relies on the regression augmented with leads and lags of the differenced regressors. The bootstrap versions of their procedures are shown to be consistent and can be used to conduct asymptotically valid inference. A Monte Carlo study is conducted to investigate the finite-sample performance of the proposed bootstrap methods.
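    The sieve-bootstrap idea the abstract describes can be illustrated on a toy cointegrated pair. The sketch below is not the paper's procedure, just a minimal univariate analogue with assumed, illustrative parameters (sample size, AR order 1, a true slope of 1): fit OLS, approximate the residual process with a finite-order autoregression, resample its centred innovations, regenerate bootstrap samples, and use the replicates to bias-correct the OLS slope.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical simulated cointegrated pair: x_t is a random walk,
    # y_t = beta * x_t + u_t with AR(1) errors (all values illustrative).
    n, beta, rho = 200, 1.0, 0.5
    x = np.cumsum(rng.normal(size=n))
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + rng.normal()
    y = beta * x + u

    def ols_slope(y, x):
        # No-intercept OLS slope, enough for this illustration.
        return np.sum(x * y) / np.sum(x * x)

    beta_hat = ols_slope(y, x)
    resid = y - beta_hat * x

    # Sieve step: approximate the error process with a finite-order AR
    # (order 1 here, for brevity) and collect its centred innovations.
    rho_hat = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
    innov = resid[1:] - rho_hat * resid[:-1]
    innov -= innov.mean()

    B = 499
    boot_betas = np.empty(B)
    for b in range(B):
        e = rng.choice(innov, size=n, replace=True)  # resample innovations
        u_star = np.zeros(n)
        for t in range(1, n):
            u_star[t] = rho_hat * u_star[t - 1] + e[t]
        y_star = beta_hat * x + u_star  # regenerate y under the fitted model
        boot_betas[b] = ols_slope(y_star, x)

    # Bootstrap bias correction: 2*beta_hat minus the replicate mean.
    beta_bc = 2 * beta_hat - boot_betas.mean()
    ```

    The replicates `boot_betas` can equally be used to approximate critical values for OLS-based test statistics, as in the paper; the full procedure uses a vector autoregression whose order grows with the sample size rather than a fixed AR(1).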

    Deep Metric Learning via Facility Location

    Learning the representation and the similarity metric in an end-to-end fashion with deep networks has demonstrated outstanding results for clustering and retrieval. However, these recent approaches still suffer from performance degradation stemming from the local metric training procedure, which is unaware of the global structure of the embedding space. We propose a global metric learning scheme for optimizing the deep metric embedding with a learnable clustering function and the clustering metric (NMI) in a novel structured prediction framework. Our experiments on the CUB200-2011, Cars196, and Stanford Online Products datasets show state-of-the-art performance on both the clustering and retrieval tasks, measured by the NMI and Recall@K evaluation metrics.
    Comment: Submission accepted at CVPR 201
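    Since the abstract's clustering quality is reported in NMI, a minimal self-contained implementation of that metric may help; this is a generic normalized-mutual-information computation (arithmetic-mean normalization is assumed), not the paper's structured-prediction training objective.

    ```python
    import numpy as np

    def nmi(labels_true, labels_pred):
        """Normalized mutual information between two label assignments.

        A minimal sketch of the clustering evaluation metric; the
        arithmetic-mean normalization of the two entropies is assumed.
        """
        labels_true = np.asarray(labels_true)
        labels_pred = np.asarray(labels_pred)
        n = len(labels_true)
        classes = np.unique(labels_true)
        clusters = np.unique(labels_pred)
        # Contingency table of joint counts between classes and clusters.
        cont = np.array([[np.sum((labels_true == c) & (labels_pred == k))
                          for k in clusters] for c in classes], dtype=float)
        pxy = cont / n
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        return mi / ((hx + hy) / 2) if hx + hy > 0 else 1.0

    # NMI is invariant to label permutation: a clustering that matches the
    # ground truth up to relabeling scores 1, an independent one scores 0.
    perfect = nmi([0, 0, 1, 1], [1, 1, 0, 0])
    independent = nmi([0, 0, 1, 1], [0, 1, 0, 1])
    ```

    Permutation invariance is what makes NMI usable here: the clustering function's arbitrary cluster indices never need to be aligned with class labels.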