
    How useful are historical data for forecasting the long-run equity return distribution?

    We provide an approach to forecasting the long-run (unconditional) distribution of equity returns, making optimal use of historical data in the presence of structural breaks. Our focus is on learning about breaks in real time and assessing their impact on out-of-sample density forecasts. Forecasts use a probability-weighted average of submodels, each of which is estimated over a different history of data. The paper illustrates the importance of uncertainty about structural breaks and the value of modeling higher-order moments of excess returns when forecasting the return distribution and its moments. The shape of the long-run distribution and the dynamics of the higher-order moments are quite different from those generated by forecasts that cannot capture structural breaks. The empirical results strongly reject ignoring structural change in favor of our forecasts, which weight historical data to accommodate uncertainty about structural breaks. We also strongly reject the common practice of using a fixed-length moving window. These differences in long-run forecasts have implications for many financial decisions, particularly for risk management and long-run investment decisions.
    Keywords: density forecasts, structural change, model risk, parameter uncertainty, Bayesian learning, market returns
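    The core forecasting device described above, a probability-weighted average of submodels estimated over different data histories, can be sketched as follows. This is a minimal illustration, not the authors' method: the Gaussian submodels, candidate break dates, and submodel weights are all assumptions made for the example.

```python
import numpy as np

def submodel_forecast(returns, start):
    """Fit a simple Gaussian submodel on the history beginning at `start`."""
    window = returns[start:]
    return window.mean(), window.std(ddof=1)

def weighted_density(x, returns, break_points, weights):
    """Mixture density at x: sum over submodels of w_k * N(x; mu_k, sigma_k)."""
    density = 0.0
    for start, w in zip(break_points, weights):
        mu, sigma = submodel_forecast(returns, start)
        density += w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return density

# Illustrative data and submodel probabilities (all assumed, not estimated).
rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.15, size=200)   # toy excess returns
break_points = [0, 50, 150]                  # candidate break dates
weights = [0.2, 0.5, 0.3]                    # posterior submodel probabilities

print(weighted_density(0.05, returns, break_points, weights))
```

    In the paper the weights are learned in real time (Bayesian learning over break dates); here they are fixed constants to keep the sketch short.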

    Consistency of plug-in confidence sets for classification in semi-supervised learning

    Confident prediction is highly relevant in machine learning; in applications such as medical diagnosis, for example, a wrong prediction can be fatal. For classification, procedures already exist that allow one to abstain from classifying data when the confidence in the prediction is weak. This approach is known as classification with a reject option. In the present paper, we provide new methodology for this approach. By predicting a new instance via a confidence set, we ensure exact control of the probability of classification. Moreover, we show that this methodology is easily implementable and enjoys attractive theoretical and numerical properties.
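    One simple plug-in version of the confidence-set idea can be sketched as follows: given estimated class probabilities, return the smallest set of labels whose total probability clears a coverage target. This is an illustrative assumption about the construction, not the paper's estimator; the toy posteriors and the `alpha` level are made up for the example.

```python
import numpy as np

def confidence_set(posterior, alpha=0.1):
    """Smallest set of labels whose total estimated probability is >= 1 - alpha.

    A set of size one is an ordinary confident prediction; a larger set
    plays the role of a (partial) reject: the classifier declines to
    commit to a single label.
    """
    order = np.argsort(posterior)[::-1]   # labels from most to least probable
    total, chosen = 0.0, []
    for k in order:
        chosen.append(int(k))
        total += posterior[k]
        if total >= 1.0 - alpha:
            break
    return chosen

print(confidence_set(np.array([0.9, 0.07, 0.03])))          # -> [0]
print(confidence_set(np.array([0.5, 0.3, 0.2]), alpha=0.25))  # -> [0, 1]
```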

    Classifiers With a Reject Option for Early Time-Series Classification

    Early classification of time-series data in a dynamic environment is a challenging problem of great importance in signal processing. This paper proposes a classifier architecture with a reject option capable of online decision making without the need to wait for the entire time-series signal to be present. The main idea is to classify an odor/gas signal with acceptable accuracy as early as possible. Instead of using the posterior probability of a classifier, the proposed method uses the "agreement" of an ensemble to decide whether to accept or reject the candidate label. The introduced algorithm is applied to the biochemistry problem of odor classification to build a novel electronic nose called Forefront-Nose. Experimental results on a wind-tunnel test-bed facility confirm the robustness of Forefront-Nose compared to standard classifiers from both earliness and recognition perspectives.
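    The accept/reject step driven by ensemble agreement can be sketched as below. This is a hedged illustration of the general idea, not the Forefront-Nose algorithm: the agreement threshold and the vote inputs are assumptions, and in the real system each vote would come from a member classifying the signal prefix seen so far.

```python
import numpy as np

def ensemble_decision(votes, agreement=0.8):
    """Accept the majority label only if enough ensemble members agree.

    Returns the label when the agreement fraction meets the threshold,
    and None (reject) otherwise, i.e. wait for more of the signal.
    """
    labels, counts = np.unique(votes, return_counts=True)
    top = int(np.argmax(counts))
    if counts[top] / len(votes) >= agreement:
        return labels[top]   # confident early decision
    return None              # reject: defer until more data arrives

print(ensemble_decision([1, 1, 1, 1, 0]))   # 4/5 agreement -> accepts label 1
print(ensemble_decision([1, 0, 1, 0, 2]))   # no consensus   -> rejects (None)
```

    In an online setting this check would run after each new sample, so the decision is made as early as the ensemble reaches consensus.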