New in-sample prediction errors in time series with applications
This article introduces two new types of prediction errors in time series: the filtered prediction errors and the deletion prediction errors. These two prediction errors are obtained in the same sample used for estimation, but in such a way that they share some common properties with out-of-sample prediction errors. It is proved that the filtered prediction errors are uncorrelated, up to terms of magnitude order O(T^-2), with the in-sample innovations, a property they share with the out-of-sample prediction errors. On the other hand, deletion prediction errors assume that the values to be predicted are unobserved, a property that they also share with out-of-sample prediction errors. It is shown that these prediction errors can be computed with parameters estimated by assuming innovative or additive outliers, respectively, at the points to be predicted. The prediction errors are then obtained by running the procedure for all the points in the sample. Two applications of these new prediction errors are presented. The first is the estimation and comparison of the prediction mean squared errors of competing predictors. The second is the determination of the order of an ARMA model. In both applications the proposed filtered prediction errors have some advantages over alternative existing methods.
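The deletion idea described above can be illustrated in a few lines. The sketch below is not the article's full ARMA procedure; it simply computes leave-one-out ("deletion") one-step prediction errors for a simulated AR(1), re-estimating the coefficient by OLS with the target point removed. The function name, the AR(1) model, and the simulation parameters are all choices made for this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: x_t = 0.6 * x_{t-1} + a_t, with unit-variance noise
T, phi = 200, 0.6
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()

def deletion_prediction_errors(x):
    """Leave-one-out ('deletion') one-step prediction errors for an AR(1).

    For each point t, re-estimate phi by OLS on the pairs
    (x[s-1], x[s]) with s != t, then predict x[t] from x[t-1].
    The predicted value is thus treated as unobserved in estimation.
    """
    y, z = x[1:], x[:-1]              # response and lagged regressor
    errs = np.empty(len(y))
    for t in range(len(y)):
        mask = np.arange(len(y)) != t  # delete the target point
        phi_hat = z[mask] @ y[mask] / (z[mask] @ z[mask])
        errs[t] = y[t] - phi_hat * z[t]
    return errs

e_del = deletion_prediction_errors(x)
# Sample prediction mean squared error from the deletion errors
pmse = np.mean(e_del**2)
```

With unit-variance innovations, `pmse` should be close to 1 for a well-specified model, which is how such errors can be used to compare competing predictors.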
A note on prediction and interpolation errors in time series
In this note we analyze the relationship between one-step-ahead prediction errors and interpolation errors in time series. We obtain an expression of the prediction errors in terms of the interpolation errors, and we then show that minimizing the sum of squares of the one-step-ahead standardized prediction errors is equivalent to minimizing the sum of squares of the standardized interpolation errors.
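The link between the two error types can be checked numerically in the simplest case. The sketch below assumes an AR(1) with known coefficient phi and simulated data (the note treats the general setting); it verifies the exact algebraic identity relating interpolation errors to consecutive one-step prediction errors, i_t = (e_t - phi * e_{t+1}) / (1 + phi^2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series: x_t = phi * x_{t-1} + a_t
T, phi = 300, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()

# One-step-ahead prediction errors: e_t = x_t - phi * x_{t-1}
e = x[1:] - phi * x[:-1]

# Interpolation errors: x_t minus its best linear estimate given
# both neighbours, E[x_t | rest] = phi / (1 + phi^2) * (x_{t-1} + x_{t+1})
i = x[1:-1] - phi / (1 + phi**2) * (x[:-2] + x[2:])

# The same quantity rebuilt from consecutive prediction errors:
#   i_t = (e_t - phi * e_{t+1}) / (1 + phi^2)
i_from_e = (e[:-1] - phi * e[1:]) / (1 + phi**2)
```

Because the identity is exact, `i` and `i_from_e` agree to machine precision; the note's equivalence of the two least-squares criteria builds on expressions of this kind.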
Algorithmic Complexity Bounds on Future Prediction Errors
We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor from the true distribution μ by the algorithmic complexity of μ. Here we assume we are at a time t > 1 and have already observed x = x_1 ... x_t. We bound the future prediction performance on x_{t+1} x_{t+2} ... by a new variant of the algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
Dopamine restores reward prediction errors in old age.
Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. Using functional magnetic resonance imaging in humans, we found that healthy older adults had an abnormal signature of expected value, resulting in an incomplete reward prediction error (RPE) signal in the nucleus accumbens, a brain region that receives rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum, measured by diffusion tensor imaging, was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to the level of young adults. This drug effect was linked to restoration of a canonical neural RPE. Our results identify a neurochemical signature underlying abnormal reward processing in older adults and indicate that this can be modulated by L-DOPA.
Physician Income Prediction Errors: Sources and Implications for Behavior
Although income expectations play a central role in many economic decisions, little is known about the sources of income prediction errors and how agents respond to income shocks. This paper uses a unique panel data set to examine the accuracy of physicians' income expectations, the sources of income prediction errors, and the effect of income prediction errors on physician behavior. The data set contains direct survey measures of income expectations for medical students who graduated between 1970 and 1998, their corresponding income realizations, and a rich summary of the shocks hitting their medical practices. We find that income prediction errors were positive on average over the sample period, but varied significantly over time and cross-sectionally. We trace these results to persistent specialty-specific shocks, such as the growth of health maintenance organizations (HMOs) and other changes across health care markets. Physicians who experienced negative income shocks were more likely to respond by increasing their hours worked, allocating fewer of their work hours to teaching/research and more to patient care, and were more likely to switch specialties.