
    Bayesian long-run prediction in time series models

    This paper considers Bayesian long-run prediction in time series models. We allow time series to exhibit stationary or non-stationary behavior and show how differences between prior structures that have little effect on posterior inference can have a large effect in a prediction exercise. In particular, the Jeffreys prior given in Phillips (1991) is seen to prevent the existence of one-period-ahead predictive moments. A Bayesian counterpart is provided to Sampson (1991), who takes parameter uncertainty into account in a classical framework. An empirical example illustrates our results.
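
    As a rough illustration of the prediction exercise described above (not the paper's own code), the sketch below simulates the h-step-ahead posterior predictive distribution for a hypothetical AR(1) model under a flat prior. The model, prior, and all names are assumptions made for illustration; the point is that long-horizon predictive dispersion is governed by posterior draws of the autoregressive coefficient near the unit root, which is why prior structures with little effect on the posterior can still dominate long-run predictions.

```python
# Hedged sketch (illustrative only): Monte Carlo h-step-ahead posterior
# predictive for an AR(1) model y_t = rho * y_{t-1} + e_t, e_t ~ N(0, sigma^2),
# under the flat prior p(rho, sigma^2) proportional to 1/sigma^2.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data, purely for illustration
T, true_rho, true_sigma = 200, 0.95, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_rho * y[t - 1] + true_sigma * rng.standard_normal()

x, z = y[:-1], y[1:]              # lagged regressor and response
rho_hat = x @ z / (x @ x)         # OLS point estimate of rho
resid = z - rho_hat * x
s2 = resid @ resid                # residual sum of squares
nu = len(z) - 1                   # posterior degrees of freedom

def predictive_draws(h, n_draws=10_000):
    """Draw from the h-step-ahead posterior predictive of y_{T+h}."""
    # sigma^2 | y ~ scaled inverse-chi-square; rho | sigma^2, y ~ Normal
    sigma2 = s2 / rng.chisquare(nu, n_draws)
    rho = rho_hat + np.sqrt(sigma2 / (x @ x)) * rng.standard_normal(n_draws)
    y_future = np.full(n_draws, y[-1])
    for _ in range(h):
        y_future = rho * y_future + np.sqrt(sigma2) * rng.standard_normal(n_draws)
    return y_future

for h in (1, 20, 100):
    draws = predictive_draws(h)
    print(f"h={h:3d}  predictive mean {draws.mean():7.2f}  std {draws.std():7.2f}")
```

    As the horizon grows, posterior draws of rho at or above one inflate the predictive spread dramatically, which is the mechanism behind the sensitivity of long-run predictive moments to the choice of prior.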

    Prediction distribution for linear regression model with multivariate Student-t errors under the Bayesian approach

    Prediction distribution is the basis for predictive inference in many real-world situations. It is the distribution of the unobserved future response(s) conditional on a set of realized responses from an informative experiment. Various statistical approaches can be used to obtain prediction distributions for different models. This study derives the prediction distribution(s) for the multiple linear regression model using the Bayesian method when the error components of both the performed and future models have a multivariate Student-t distribution. The study finds that the prediction distribution of the future response(s) is multivariate Student-t, with degrees of freedom that depend on the size of the realized sample and the dimension of the regression parameter vector but not on the degrees of freedom of the error distribution.
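
    For orientation, the familiar noninformative-prior predictive result for normal errors already has the structure reported above; the notation below (n observations, p regressors, future design X_f) is ours, not the paper's, which treats the case of multivariate Student-t errors.

```latex
% Standard predictive result under a noninformative prior and normal errors
% (our notation): the degrees of freedom n - p involve only the realized
% sample size and the number of regression parameters.
\[
  y = X\beta + \varepsilon, \qquad
  \hat\beta = (X^{\top}X)^{-1}X^{\top}y, \qquad
  s^{2} = \frac{(y - X\hat\beta)^{\top}(y - X\hat\beta)}{n - p},
\]
\[
  y_{f} \mid y \;\sim\;
  t_{\,n-p}\!\left( X_{f}\hat\beta,\;
  s^{2}\left( I + X_{f}(X^{\top}X)^{-1}X_{f}^{\top} \right) \right).
\]
```

    The abstract's finding is that this multivariate Student-t form, with degrees of freedom free of the error distribution's own degrees of freedom, carries over when the performed and future error terms are jointly multivariate Student-t.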

    Predictive PAC Learning and Process Decompositions

    We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example). A mixture of learnable processes need not be learnable itself, and certainly its generalization error need not decay at the same rate. In this paper, we argue that it is natural in predictive PAC to condition not on the past observations but on the mixture component of the sample path. This definition not only matches what a realistic learner might demand, but also allows us to sidestep several otherwise grave problems in learning from dependent data. In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. We also provide a characterization of mixtures of absolutely regular (β-mixing) processes, of independent probability-theoretic interest. Comment: 9 pages, accepted in NIPS 201
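
    The conditioning step described above can be sketched schematically. The notation (mixing measure π, hypothesis class H, empirical risk, component risk) is ours, and the display assumes every component obeys the same bound, which simplifies the paper's statement; it only records why a bound holding for almost every component is inherited by the mixture once risk is measured relative to the realized component M of the sample path.

```latex
% Schematic only (our notation): a generalization bound holding for
% pi-almost every component mu is inherited by the mixture law
% rho = \int mu \, pi(d mu) after conditioning on the component M.
\[
  \Pr_{\mu}\!\left( \sup_{h \in \mathcal{H}}
    \bigl| \widehat{R}_{n}(h) - R_{\mu}(h) \bigr| > \varepsilon \right)
  \;\le\; \delta(\varepsilon, n)
  \quad \text{for } \pi\text{-almost every } \mu
\]
\[
  \Longrightarrow \quad
  \Pr_{\rho}\!\left( \sup_{h \in \mathcal{H}}
    \bigl| \widehat{R}_{n}(h) - R_{M}(h) \bigr| > \varepsilon \right)
  \;=\; \int \Pr_{\mu}\!\left( \sup_{h \in \mathcal{H}}
    \bigl| \widehat{R}_{n}(h) - R_{\mu}(h) \bigr| > \varepsilon \right) \pi(\mathrm{d}\mu)
  \;\le\; \delta(\varepsilon, n).
\]
```

    Measuring risk relative to the realized component, rather than the mixture, is what makes the mixture's bound no worse than that of each component, as the abstract states.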