19 research outputs found

    Results on M3-Competition data at the first six forecasting horizons.


    Box-and-whisker plot and kernel density estimates for the relative absolute errors used by MRAE and GMRAE (log-scale, forecasts with zero or undefined error excluded).

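    For reference, a minimal sketch of how these relative absolute errors can be computed, assuming the one-step naïve forecast (previous observation) as the benchmark; the function names and the exclusion handling are ours, not taken from the paper.

        import numpy as np

        def relative_absolute_errors(y, y_hat):
            # Relative absolute error |e_t| / |e*_t|, using the previous observation
            # (one-step naive forecast) as the benchmark.  Points with a zero forecast
            # or benchmark error are excluded, as in the figure above.
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            e = np.abs(y[1:] - y_hat[1:])        # forecast errors
            e_star = np.abs(y[1:] - y[:-1])      # naive benchmark errors
            mask = (e != 0) & (e_star != 0)
            return e[mask] / e_star[mask]

        def mrae(y, y_hat):
            return float(np.mean(relative_absolute_errors(y, y_hat)))

        def gmrae(y, y_hat):
            r = relative_absolute_errors(y, y_hat)
            return float(np.exp(np.mean(np.log(r))))   # geometric mean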

    Box-and-whisker plot and kernel density estimates for the absolute errors used by MAE.


    Box-and-whisker plot and kernel density estimates for the bounded relative absolute errors used by UMBRAE (using the naïve errors as the benchmark).

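    A minimal sketch of the bounded relative absolute error and the resulting UMBRAE score, again assuming the one-step naïve forecast as the benchmark; treat it as an illustration of the definition, not the authors' reference implementation.

        import numpy as np

        def umbrae(y, y_hat):
            # Bounded relative absolute error |e_t| / (|e_t| + |e*_t|) lies in [0, 1].
            # The mean bounded error (MBRAE) is then unscaled via MBRAE / (1 - MBRAE).
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            e = np.abs(y[1:] - y_hat[1:])        # forecast errors
            e_star = np.abs(y[1:] - y[:-1])      # naive benchmark errors
            mask = (e + e_star) != 0             # skip points where both errors vanish
            brae = e[mask] / (e[mask] + e_star[mask])
            mbrae = brae.mean()
            return float(mbrae / (1.0 - mbrae))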

    Evaluation on the symmetry of accuracy measures to over-estimates and under-estimates.

    A: Synthetic time series data, where Y_t is the target series and two forecast series are compared: one makes a 10% over-estimate of every observation of Y_t, while the other makes a 10% under-estimate. B: Results of the symmetry evaluation, which show that UMBRAE and all other accuracy measures except sMAPE are symmetric.
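
    The asymmetry of sMAPE can be reproduced with a toy series (the values below are hypothetical, not the paper's synthetic data), using one common definition of MAPE and sMAPE:

        import numpy as np

        y = np.array([10.0, 12.0, 14.0, 16.0, 18.0])   # hypothetical target series
        over, under = 1.1 * y, 0.9 * y                 # +10% and -10% forecasts

        def mape(y, f):
            return np.mean(np.abs(y - f) / np.abs(y)) * 100

        def smape(y, f):
            return np.mean(2 * np.abs(y - f) / (np.abs(y) + np.abs(f))) * 100

        print(mape(y, over), mape(y, under))    # 10.0 vs 10.0  -> symmetric
        print(smape(y, over), smape(y, under))  # ~9.5 vs ~10.5 -> penalises under-estimates more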

    Evaluation on the scale dependency of accuracy measures.

    A: Synthetic time series data, where Y_t is the target series and two forecast series are compared; the two forecasts have the same mean absolute error, but their errors are on different percentage scales relative to the corresponding values of Y_t. B: Results of the scale dependency evaluation: MAE, RMSE, MASE and even GMRAE show no difference between the two forecasts, MRAE and MAPE produce substantially different errors for the two cases, while sMAPE and UMBRAE can reasonably distinguish the two forecasts.
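
    The effect can be illustrated with a toy example (hypothetical values, not the paper's series): two forecasts with identical mean absolute error whose errors fall on observations of very different magnitude.

        import numpy as np

        y  = np.array([10.0, 100.0])    # hypothetical target series
        f1 = np.array([11.0, 100.0])    # errs by 1 on the small observation
        f2 = np.array([10.0, 101.0])    # errs by 1 on the large observation

        mae  = lambda y, f: np.mean(np.abs(y - f))
        mape = lambda y, f: np.mean(np.abs(y - f) / np.abs(y)) * 100

        print(mae(y, f1), mae(y, f2))    # 0.5 and 0.5  -> MAE cannot tell them apart
        print(mape(y, f1), mape(y, f2))  # 5.0 and 0.5  -> the percentage scales differ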

    Spearman’s rank correlation coefficient of the rankings in Table 1.

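    The coefficient is computed on the method rankings produced by each pair of accuracy measures; a sketch using scipy, with hypothetical placeholder rankings:

        from scipy.stats import spearmanr

        # Hypothetical example: the same five methods ranked by two accuracy measures
        rank_by_measure_a = [1, 2, 3, 4, 5]
        rank_by_measure_b = [2, 1, 3, 5, 4]

        rho, p_value = spearmanr(rank_by_measure_a, rank_by_measure_b)
        print(rho)  # approaches 1 when the two measures rank the methods alike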

    Box-and-whisker plot and kernel density estimates for the absolute scaled errors used by MASE.

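    A minimal sketch of the absolute scaled errors behind MASE, scaling each absolute error by the series' own one-step naïve MAE; in practice the scaling factor is usually taken from the in-sample (training) period, and the function name is ours.

        import numpy as np

        def mase(y, y_hat):
            # Absolute scaled errors: |e_t| divided by the mean absolute error
            # of the one-step naive forecast on the same series.
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            scale = np.mean(np.abs(np.diff(y)))     # naive one-step MAE
            q = np.abs(y - y_hat) / scale           # absolute scaled errors
            return float(q.mean())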

    Evaluation on the resistance of accuracy measures to a single forecasting outlier.

    A: Synthetic time series data, where Y_t is the target series and two forecast series are compared; the only difference between the two forecasts is their value for the observation Y_8. B: Results of the single forecasting outlier evaluation, which show that UMBRAE is less sensitive than the other measures to a single forecasting outlier.

    Box-and-whisker plot and kernel density estimates for the absolute scaled errors used by AvgRelMAE (log-scale).

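    A simple, unweighted sketch of AvgRelMAE over a collection of series, with the one-step naïve forecast as the benchmark: each series' forecast MAE is divided by the benchmark MAE and the ratios are combined with a geometric mean (hence the log scale in the figure). The function name is ours, and any weighting of series by their number of forecasts is omitted.

        import numpy as np

        def avg_rel_mae(series_list, forecast_list):
            # For each series, divide the forecast MAE by the MAE of the one-step
            # naive benchmark, then take the geometric mean of the ratios.
            log_ratios = []
            for y, y_hat in zip(series_list, forecast_list):
                y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
                mae = np.mean(np.abs(y[1:] - y_hat[1:]))
                mae_naive = np.mean(np.abs(np.diff(y)))
                log_ratios.append(np.log(mae / mae_naive))
            return float(np.exp(np.mean(log_ratios)))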