Reproducibility in forecasting research
The importance of replication has been recognised across many scientific disciplines. Reproducibility is a necessary condition for replicability, because an inability to reproduce results implies that the methods have not been specified sufficiently, thus precluding replication. This paper describes how two independent teams of researchers attempted to reproduce the empirical findings of an important paper, "Shrinkage estimators of time series seasonal factors and their effect on forecasting accuracy" (Miller & Williams, 2003). The two teams proceeded systematically, reporting results both before and after receiving clarifications from the authors of the original study. The teams were able to approximately reproduce each other's results, but not those of Miller and Williams. These discrepancies led to differences in the conclusions as to the conditions under which seasonal damping outperforms classical decomposition. The paper specifies the forecasting methods employed using a flowchart. It is argued that this approach to method documentation is complementary to the provision of computer code, as it is accessible to a broader audience of forecasting practitioners and researchers. The significance of this research lies not only in its lessons for seasonal forecasting but also, more generally, in its approach to the reproduction of forecasting research.
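The shrinkage idea at the centre of the Miller & Williams (2003) study can be illustrated with a minimal sketch: multiplicative seasonal factors are damped toward 1 by a weight between 0 and 1. The fixed weight and the toy quarterly factors below are illustrative assumptions only; the original paper derives and evaluates specific data-driven shrinkage weights, which are not reproduced here.

```python
# Sketch of seasonal damping (shrinkage of multiplicative seasonal factors).
# Assumption: a fixed damping weight w; Miller & Williams (2003) study
# data-driven choices of this weight, which this toy example does not attempt.

def damp_seasonal_factors(factors, w):
    """Shrink multiplicative seasonal factors toward 1 by weight w in [0, 1].

    w = 1 leaves the factors unchanged; w = 0 removes seasonality entirely.
    """
    return [1.0 + w * (f - 1.0) for f in factors]

# Hypothetical classical factors for quarterly data (mean 1 by construction).
classical = [1.20, 0.85, 0.95, 1.00]
damped = damp_seasonal_factors(classical, w=0.5)
print(damped)  # each factor pulled halfway toward 1
```

Damping trades a small bias toward "no seasonality" for reduced variance in the estimated factors, which is why its benefit depends on the conditions the two reproduction teams disagreed about.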
Wavelet Estimators in Nonparametric Regression: A Comparative Simulation Study
Wavelet analysis has been found to be a powerful tool for the nonparametric estimation of spatially-variable objects. We discuss in detail wavelet methods in nonparametric regression, where the data are modelled as observations of a signal contaminated with additive Gaussian noise, and provide an extensive review of the vast literature of wavelet shrinkage and wavelet thresholding estimators developed to denoise such data. These estimators arise from a wide range of classical and empirical Bayes methods treating either individual or blocks of wavelet coefficients. We compare various estimators in an extensive simulation study on a variety of sample sizes, test functions, signal-to-noise ratios and wavelet filters. Because there is no single criterion that can adequately summarise the behaviour of an estimator, we use various criteria to measure performance in finite sample situations. Insight into the performance of these estimators is obtained from graphical outputs and numerical tables. In order to provide some hints of how these estimators should be used to analyse real data sets, a detailed practical step-by-step illustration of a wavelet denoising analysis of electricity consumption data is provided. Matlab codes are provided so that all figures and tables in this paper can be reproduced.
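The transform-threshold-invert pipeline common to the wavelet shrinkage estimators surveyed above can be sketched in a few lines. This is a deliberately minimal illustration, not the paper's Matlab code: it uses only the Haar filter, a single soft-thresholding rule, and a universal-style threshold, whereas the study compares many filters and thresholding schemes.

```python
# Minimal sketch of wavelet shrinkage: orthonormal Haar transform,
# soft thresholding of detail coefficients, inverse transform.
# The Haar filter and the fixed threshold are simplifying assumptions;
# the paper's simulation study covers far richer choices.
import math
import random

def haar_forward(x, levels):
    """Multi-level orthonormal Haar transform; len(x) must be a power of 2."""
    details, approx = [], list(x)
    for _ in range(levels):
        half = len(approx) // 2
        s = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2) for i in range(half)]
        d = [(approx[2*i] - approx[2*i+1]) / math.sqrt(2) for i in range(half)]
        details.append(d)  # stored fine-to-coarse
        approx = s
    return approx, details

def haar_inverse(approx, details):
    for d in reversed(details):  # rebuild coarse-to-fine
        nxt = []
        for s, w in zip(approx, d):
            nxt += [(s + w) / math.sqrt(2), (s - w) / math.sqrt(2)]
        approx = nxt
    return approx

def soft(x, t):
    """Soft-thresholding rule: shrink toward zero by t, kill below t."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def denoise(y, threshold, levels=3):
    approx, details = haar_forward(y, levels)
    details = [[soft(w, threshold) for w in d] for d in details]
    return haar_inverse(approx, details)

# Toy usage: a noisy step function, threshold of universal form sigma*sqrt(2 log n).
random.seed(0)
n = 64
truth = [1.0 if i < n // 2 else -1.0 for i in range(n)]
noisy = [v + random.gauss(0, 0.3) for v in truth]
denoised = denoise(noisy, threshold=0.3 * math.sqrt(2 * math.log(n)), levels=4)
```

With the threshold set to zero the pipeline reduces to a perfect-reconstruction transform, which is a convenient sanity check when experimenting with other thresholding rules.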
Fast Covariance Estimation for High-dimensional Functional Data
For smoothing covariance functions, we propose two fast algorithms that scale
linearly with the number of observations per function. Most available methods
and software cannot smooth covariance matrices beyond moderate dimensions; the
recently introduced sandwich smoother is an exception, but it is not adapted to
smoothing covariance matrices of very large dimension. Such large covariance
matrices are becoming increasingly common, e.g., in 2- and 3-dimensional
medical imaging and high-density wearable sensor data. We introduce two new
algorithms that can handle very large covariance matrices: 1) FACE, a fast
implementation of the sandwich smoother, and 2) SVDS, a two-step procedure that
first applies singular value decomposition to the data matrix and then smooths
the eigenvectors. Compared to existing techniques, these new algorithms are at
least an order of magnitude faster in high dimensions and drastically reduce
memory requirements. The new algorithms provide near-instantaneous (a few
seconds) smoothing for large matrices and very fast (under 10 minutes)
smoothing for the largest dimensions considered. Although SVDS is simpler than
FACE, we provide ready-to-use, scalable R software for FACE. When incorporated
into the R package refund, FACE improves the speed of penalized functional
regression by an order of magnitude, even for data of normal size. We recommend
that FACE be used in practice for the analysis of noisy and high-dimensional
functional data.
Comment: 35 pages, 4 figures
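The SVDS idea described above, i.e. decompose first, then smooth the eigenvectors, can be sketched as follows. This is not the authors' implementation (their software is the FACE routine in the refund R package); the moving-average smoother, component count, and toy data below are all placeholder assumptions chosen only to show the two-step structure.

```python
# Illustrative sketch of the SVDS two-step procedure: SVD of the centered
# data matrix, then smoothing of the right singular vectors, then a low-rank
# reconstruction of a smooth covariance matrix.
# Assumptions: a crude moving-average smoother stands in for a principled
# penalized smoother, and the rank is fixed at n_components.
import numpy as np

def svds_covariance(Y, n_components=3, window=5):
    """Y: n x p data matrix (n curves observed on a grid of p points)."""
    Yc = Y - Y.mean(axis=0)                           # center the curves
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)  # step 1: SVD
    kernel = np.ones(window) / window                  # toy smoother
    V_smooth = np.stack([np.convolve(v, kernel, mode="same")
                         for v in Vt[:n_components]])  # step 2: smooth vectors
    var = s[:n_components] ** 2 / (Y.shape[0] - 1)     # component variances
    # Rank-n_components smooth covariance: sum_j var_j * v_j v_j^T
    return (V_smooth.T * var) @ V_smooth

# Toy functional data: random amplitudes on smooth basis curves plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
Y = (rng.standard_normal((50, 1)) * np.sin(2 * np.pi * t)
     + rng.standard_normal((50, 1)) * np.cos(2 * np.pi * t)
     + 0.2 * rng.standard_normal((50, 200)))
C = svds_covariance(Y)
print(C.shape)  # (200, 200)
```

The computational point of the abstract is visible here: the SVD touches only the n x p data matrix, so the full p x p covariance is never smoothed directly, only its leading eigenvectors are.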
Comparison of |Q|=1 and |Q|=2 gauge-field configurations on the lattice four-torus
It is known that exactly self-dual gauge-field configurations with
topological charge |Q|=1 cannot exist on the untwisted continuum 4-torus. We
explore the manifestation of this remarkable fact on the lattice 4-torus for
SU(3) using advanced techniques for controlling lattice discretization errors,
extending earlier work of De Forcrand et al. for SU(2). We identify three
distinct signals for the instability of |Q|=1 configurations, and show that
these manifest themselves early in the cooling process, long before the
would-be instanton has shrunk to a size comparable to the lattice
discretization threshold. These signals do not appear in our |Q|=2
configurations, indicating that they reflect the truly global nature of the
instability rather than local discretization effects.
Monte-Carlo generated SU(3) gauge field configurations are cooled to the
self-dual limit using an O(a^4)-improved gauge action chosen to have small but
positive O(a^6) errors. This choice prevents lattice discretization errors from
destroying instantons provided their size exceeds the dislocation threshold of
the cooling algorithm. Lattice discretization errors are evaluated by comparing
the O(a^4)-improved gauge-field action with an O(a^4)-improved action
constructed from the square of an O(a^4)-improved lattice field-strength
tensor, thus having different O(a^6) discretization errors. The number of
action-density peaks, the instanton size and the topological charge of the
configurations are monitored. We observe a fluctuation in the total topological
charge of |Q|=1 configurations, and demonstrate that the onset of this unusual
behavior corresponds with the disappearance of multiple peaks in the action
density. At the same time, discretization errors are minimal.
Comment: 12 pages, 9 figures, submitted to Phys. Rev.
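The ingredients of this study, cooling toward a (local) action minimum while monitoring an integer-valued topological charge, can be illustrated on a much simpler theory. The sketch below is emphatically a toy: it uses compact U(1) in two dimensions with plain relaxation cooling of the unimproved plaquette action, not the paper's SU(3) gauge group in four dimensions with O(a^4)-improved actions. On the 2-torus, the charge Q = (1/2pi) * sum of plaquette phases wrapped to (-pi, pi] is exactly an integer, which is the analogue of the topological charge monitored above.

```python
# Toy analogue of cooling and topological-charge monitoring: 2D compact U(1)
# lattice gauge theory on a torus. Links carry angles theta_mu(x, y); one
# cooling sweep sets each link to the angle minimising its local plaquette
# action (coordinate descent, so the total action is non-increasing).
# All parameters (lattice size, sweep count) are arbitrary illustrations.
import cmath
import math
import random

L = 8  # lattice extent in both directions

def raw_plaq(theta, x, y):
    """Unwrapped phase of the plaquette based at site (x, y)."""
    return (theta[0][x][y] + theta[1][(x + 1) % L][y]
            - theta[0][x][(y + 1) % L] - theta[1][x][y])

def action(theta):
    """Wilson plaquette action: sum over plaquettes of (1 - cos phi)."""
    return sum(1.0 - math.cos(raw_plaq(theta, x, y))
               for x in range(L) for y in range(L))

def topological_charge(theta):
    """Q = (1/2pi) * sum of plaquette phases wrapped to [-pi, pi]."""
    return sum(math.remainder(raw_plaq(theta, x, y), 2 * math.pi)
               for x in range(L) for y in range(L)) / (2 * math.pi)

def cool_sweep(theta):
    """One relaxation sweep: each link minimises its local action exactly."""
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = theta[mu][x][y]
                # The two plaquettes containing this link, with the sign
                # the link's angle carries inside each plaquette phase.
                if mu == 0:
                    plaqs = [(x, y, 1.0), (x, (y - 1) % L, -1.0)]
                else:
                    plaqs = [(x, y, -1.0), ((x - 1) % L, y, 1.0)]
                # Local action ~ -sum_j cos(theta + c_j); minimised at
                # theta = -arg(sum_j exp(i c_j)), with c_j = s*phi_P - theta.
                z = sum(cmath.exp(1j * (s * raw_plaq(theta, px, py) - old))
                        for px, py, s in plaqs)
                theta[mu][x][y] = -cmath.phase(z)

# Hot (random) start, then cool and watch the action fall while Q stays integer.
random.seed(1)
theta = [[[random.uniform(-math.pi, math.pi) for _ in range(L)]
          for _ in range(L)] for _ in range(2)]
print("action before cooling:", round(action(theta), 3))
for _ in range(50):
    cool_sweep(theta)
print("action after cooling:", round(action(theta), 6))
print("topological charge Q:", round(topological_charge(theta), 6))
```

Even in this toy setting the key structural fact survives: Q is an exact integer for essentially every configuration because the unwrapped plaquette phases telescope to zero on the torus, so only the 2*pi wrapping terms contribute. The instability questions studied in the paper, of course, require the full four-dimensional non-abelian setup.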