
    Nonparametric modeling and forecasting electricity demand: an empirical study

    This paper uses half-hourly electricity demand data in South Australia as an empirical study of nonparametric modeling and forecasting methods for prediction from half an hour ahead to one year ahead. A notable feature of the univariate time series of electricity demand is the presence of both intraweek and intraday seasonalities. An intraday seasonal cycle is apparent from the similarity of the demand from one day to the next, and an intraweek seasonal cycle is evident from comparing the demand on the corresponding day of adjacent weeks. There is a strong appeal in using forecasting methods that are able to capture both seasonalities. In this paper, the forecasting methods slice a seasonal univariate time series into a time series of curves. The forecasting methods reduce the dimensionality by applying functional principal component analysis to the observed data, and then utilize a univariate time series forecasting method and functional principal component regression techniques. When data points in the most recent curve are sequentially observed, updating methods can improve the point and interval forecast accuracy. We also revisit a nonparametric approach to constructing prediction intervals of updated forecasts, and evaluate the interval forecast accuracy.

    Keywords: functional principal component analysis; functional time series; multivariate time series; ordinary least squares; penalized least squares; ridge regression; seasonal time series
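    The workflow described above (slice the series into daily curves, reduce dimension with functional principal component analysis, forecast the component scores with a univariate method, then reconstruct the next curve) can be sketched as follows. This is a minimal illustration on synthetic data: it uses ordinary PCA on discretized daily curves and a simple AR(1) forecast of each score series, not the authors' full functional time series machinery or their updating methods.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for half-hourly demand: 365 days x 48 half-hours per day,
# with an intraday cycle, an intraweek cycle, and noise.
rng = np.random.default_rng(0)
days, half_hours = 365, 48
intraday = 10 * np.sin(np.linspace(0, 2 * np.pi, half_hours))
intraweek = 5 * np.sin(2 * np.pi * np.arange(days) / 7)[:, None]
demand = 100 + intraday + intraweek + rng.normal(0, 1, (days, half_hours))

# Step 1: treat each day as one curve and reduce dimension with (functional) PCA.
pca = PCA(n_components=3)
scores = pca.fit_transform(demand)          # one score series per principal component

# Step 2: forecast each score series with a simple univariate rule (AR(1) via least squares).
def ar1_forecast(x):
    xc = x - x.mean()
    phi = np.dot(xc[:-1], xc[1:]) / np.dot(xc[:-1], xc[:-1])
    return x.mean() + phi * (x[-1] - x.mean())

next_scores = np.array([ar1_forecast(scores[:, k]) for k in range(scores.shape[1])])

# Step 3: map the forecast scores back to a full demand curve for the next day.
next_day_curve = pca.inverse_transform(next_scores.reshape(1, -1)).ravel()
print(next_day_curve[:5])
```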

    NP-optimal kernels for nonparametric sequential detection rules

    An attractive nonparametric method for detecting change-points sequentially is to apply control charts based on kernel smoothers. Recently, the strong convergence of the normed delay associated with such a sequential stopping rule has been studied under sequences of out-of-control models. Kernel smoothers employ a kernel function to downweight past data. Since kernel functions with values in the unit interval are sufficient for that task, we study the problem of optimizing the asymptotic normed delay over a class of kernels satisfying that restriction and certain additional moment constraints. We apply the key theorem to several important examples where explicit solutions exist, illustrating that the results are applicable.

    Keywords: control charts; financial data; nonparametric regression; quality control; statistical genetics
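    A minimal sketch of this kind of kernel-smoothed sequential detection rule is given below. The particular monitoring statistic, the triangular kernel with values in the unit interval, and the alarm threshold are illustrative assumptions, not the NP-optimal kernels derived in the paper.

```python
import numpy as np

def kernel_detector(x, h=20, threshold=3.0, kernel=lambda u: np.clip(1 - u, 0, 1)):
    """Sequential change-point monitoring with a kernel-smoothed control chart.

    At each time n the detector forms a kernel-weighted average of the most
    recent observations, downweighting the past with a kernel taking values in
    [0, 1], and signals when the standardized statistic crosses the threshold.
    Kernel and threshold here are illustrative choices only.
    """
    for n in range(1, len(x) + 1):
        lags = np.arange(n)                    # 0 = most recent observation
        w = kernel(lags / h)                   # kernel weights in the unit interval
        stat = np.sum(w * x[n - 1::-1]) / np.sqrt(np.sum(w ** 2))
        if stat > threshold:
            return n                           # stopping time: first alarm
    return None                                # no alarm within the sample

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.0, 1, 100)])  # mean shift at t = 100
print(kernel_detector(x))
```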

    Optimal Bandwidth Choice for Interval Estimation in GMM Regression

    In time series regression with nonparametrically autocorrelated errors, it is now standard empirical practice to construct confidence intervals for regression coefficients on the basis of nonparametrically studentized t-statistics. The standard error used in the studentization is typically estimated by a kernel method that involves some smoothing process over the sample autocovariances. The underlying parameter (M) that controls this tuning process is a bandwidth or truncation lag, and it plays a key role in the finite sample properties of tests and the actual coverage properties of the associated confidence intervals. The present paper develops a bandwidth choice rule for M that optimizes the coverage accuracy of interval estimators in the context of linear GMM regression. The optimal bandwidth balances the asymptotic variance with the asymptotic bias of the robust standard error estimator. This approach contrasts with the conventional bandwidth choice rule for nonparametric estimation, where the focus is the nonparametric quantity itself and the choice rule balances asymptotic variance with squared asymptotic bias. It turns out that the optimal bandwidth for interval estimation has a different expansion rate and is typically substantially larger than the optimal bandwidth for point estimation of the standard errors. The new approach to bandwidth choice calls for refined asymptotic measurement of the coverage probabilities, which are provided by means of an Edgeworth expansion of the finite sample distribution of the nonparametrically studentized t-statistic. This asymptotic expansion extends earlier work and is of independent interest. A simple plug-in procedure for implementing this optimal bandwidth is suggested, and simulations confirm that the new plug-in procedure works well in finite samples. Issues of interval length and false coverage probability are also considered, leading to a secondary approach to bandwidth selection with similar properties.

    Keywords: asymptotic expansion; bias; confidence interval; coverage probability; Edgeworth expansion; lag kernel; long run variance; optimal bandwidth; spectrum
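    To make the role of the bandwidth M concrete, the sketch below computes a kernel (Bartlett/Newey-West type) long-run variance estimate of an error series, i.e. the smoothing over sample autocovariances on which the robust standard error rests. It only illustrates how M enters the studentization; it does not implement the paper's coverage-optimal plug-in rule.

```python
import numpy as np

def bartlett_lrv(u, M):
    """Kernel (Bartlett) estimate of the long-run variance of a series u.

    Sample autocovariances up to lag M are smoothed with Bartlett weights
    1 - j/(M+1); M is the bandwidth / truncation lag whose choice the paper
    studies.  This is the textbook Newey-West form, not the paper's rule.
    """
    u = np.asarray(u) - np.mean(u)
    n = len(u)
    lrv = np.dot(u, u) / n                          # lag-0 autocovariance
    for j in range(1, M + 1):
        gamma_j = np.dot(u[j:], u[:-j]) / n         # lag-j autocovariance
        lrv += 2 * (1 - j / (M + 1)) * gamma_j      # Bartlett weight
    return lrv

# AR(1) errors with rho = 0.5; the estimate, and hence the studentized
# t-statistic built from it, depends on the bandwidth M.
rng = np.random.default_rng(2)
e = rng.normal(size=500)
u = np.empty(500)
u[0] = e[0]
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + e[t]
print(bartlett_lrv(u, M=5), bartlett_lrv(u, M=20))
```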

    Percentile Bootstrap Interval on Univariate Local Polynomial Regression Prediction

    This study presents a technique for constructing percentile bootstrap prediction intervals for univariate local polynomial regression. The bootstrap regression uses resampling based on the paired and residual bootstrap methods, and the main objective is to compare the two resampling schemes in terms of nominal coverage probability. Resampling follows a nonparametric bootstrap with replacement, in which each sample point has an equal chance of being selected; nonparametric bootstrapping draws its variability from the original sample data, in contrast to parametric bootstrapping, where the variability comes from generating samples from a specified distribution. The simulation results show that the coverage probabilities of the paired and residual bootstrap intervals are close to the nominal level, with no significant difference between the two percentile intervals. Making the bootstrap sample size sufficiently large yields smoother confidence bands in the scatterplot. With a manually chosen smoothing parameter, the second-order polynomial regression is smoother than the first-order polynomial regression; the scatterplot shows that the second-degree fit captures the curvature in the data better than the first-degree fit, and its bands are narrower. Applying the optimal smoothing parameter, however, leads to different conclusions than using the chosen one: the fits differ in the scatterplot and the bootstrap estimates of the coverage probability also differ. With the chosen smoothing parameter, the coverage probability of the paired bootstrap method is 0.93 for first-degree local polynomial regression and 0.96 for second-degree, while the residual bootstrap method gives 0.95 for first-degree and 0.96 for second-degree. With the optimal smoothing parameter, the paired bootstrap method gives 0.945 for first-degree and 0.93 for second-degree, and the residual bootstrap method gives 0.95 for first-degree and 0.93 for second-degree. In general, both bootstrap methods work well for estimating prediction confidence intervals.
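    The residual bootstrap percentile interval for a local polynomial prediction can be sketched as follows. The Gaussian kernel, the fixed smoothing parameter h, and the synthetic sine-curve data are illustrative assumptions; the study's own simulation settings and its paired bootstrap variant are not reproduced here.

```python
import numpy as np

def local_poly_fit(x, y, x0, h, degree=1):
    """Local polynomial estimate of E[y | x = x0] via kernel-weighted least squares."""
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))      # sqrt of Gaussian kernel weights
    X = np.vander(x - x0, degree + 1, increasing=True)     # [1, (x-x0), (x-x0)^2, ...]
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                          # intercept = fitted value at x0

def residual_bootstrap_interval(x, y, x0, h, degree=1, B=500, alpha=0.05, seed=0):
    """Percentile interval for the local polynomial prediction at x0 via residual bootstrap."""
    rng = np.random.default_rng(seed)
    fitted = np.array([local_poly_fit(x, y, xi, h, degree) for xi in x])
    resid = y - fitted
    resid -= resid.mean()                                   # centre the residuals
    boot = np.empty(B)
    for b in range(B):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        boot[b] = local_poly_fit(x, y_star, x0, h, degree)
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])    # percentile bootstrap interval

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)
print(residual_bootstrap_interval(x, y, x0=0.5, h=0.1, degree=2))
```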