Hedge fund return predictability: to combine forecasts or combine information?
While the majority of the predictability literature has been devoted to traditional asset classes, the literature on the predictability of hedge fund returns is sparse. We focus on assessing the out-of-sample predictability of hedge fund strategies by employing an extensive list of predictors. Aiming to reduce the uncertainty risk associated with a single-predictor model, we first engage in combining the individual forecasts. We consider various combining methods, ranging from simple averaging schemes to more sophisticated ones such as discounting forecast errors, cluster combining and principal components combining. Our second approach combines the information in the predictors and applies kitchen sink, bootstrap aggregating (bagging), lasso, ridge and elastic net specifications. Our statistical and economic evaluation findings point to the superiority of simple combination methods. We also provide evidence on the use of hedge fund return forecasts for hedge fund risk measurement and portfolio allocation. Dynamically constructing portfolios based on the combination forecasts of hedge fund returns leads to considerably improved portfolio performance.
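The two simplest combination schemes mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data, the number of models, the discount factor and the in-sample weighting are all invented assumptions (discounted-MSFE weights would normally be computed recursively out-of-sample).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T months of hedge fund returns and forecasts from
# K single-predictor models (all numbers invented for illustration).
T, K = 120, 8
returns = rng.normal(0.005, 0.02, T)
forecasts = returns[:, None] + rng.normal(0, 0.03, (T, K))  # noisy individual forecasts

# 1) Simple equal-weight combination ("mean" scheme).
combo_mean = forecasts.mean(axis=1)

# 2) Discounted-MSFE weights: models with smaller discounted past squared
#    errors get larger weights (delta < 1 down-weights older errors).
delta = 0.95
disc = delta ** np.arange(T - 1, -1, -1)               # discount on each period's error
msfe = (disc[:, None] * (forecasts - returns[:, None]) ** 2).sum(axis=0)
w = (1.0 / msfe) / (1.0 / msfe).sum()                  # weights sum to one
combo_dmsfe = forecasts @ w

def mse(f):
    """Mean squared error of a forecast series against realized returns."""
    return float(((f - returns) ** 2).mean())

print(mse(forecasts[:, 0]), mse(combo_mean), mse(combo_dmsfe))
```

Averaging K noisy forecasts shrinks the idiosyncratic error of any single model, which is one mechanical reason the simple schemes are hard to beat.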
Use and Communication of Probabilistic Forecasts
Probabilistic forecasts are becoming more and more available. How should they be used and communicated? What are the obstacles to their use in practice? I review experience with five problems where probabilistic forecasting played an important role. This leads me to identify five types of potential users: Low Stakes Users, who don't need probabilistic forecasts; General Assessors, who need an overall idea of the uncertainty in the forecast; Change Assessors, who need to know if a change is out of line with expectations; Risk Avoiders, who wish to limit the risk of an adverse outcome; and Decision Theorists, who quantify their loss function and perform the decision-theoretic calculations. This suggests that it is important to interact with users and to consider their goals. The cognitive research tells us that calibration is important for trust in probability forecasts, and that it is important to match the verbal expression with the task. The cognitive load should be minimized, reducing the probabilistic forecast to a single percentile if appropriate. Probabilities of adverse events and percentiles of the predictive distribution of quantities of interest often seem to be the best way to summarize probabilistic forecasts. Formal decision theory has an important role, but in a limited range of applications.
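The two summaries the abstract recommends, an adverse-event probability for Risk Avoiders and a single percentile for General Assessors, are straightforward to compute from a predictive distribution. A minimal sketch, assuming the predictive distribution is represented by Monte Carlo samples (the distribution and threshold are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictive distribution for a quantity of interest,
# represented by 10,000 Monte Carlo samples (all numbers invented).
samples = rng.normal(2.0, 0.5, 10_000)

# "Risk Avoider" summary: probability of an adverse event.
threshold = 3.0
p_adverse = float((samples > threshold).mean())

# "General Assessor" summary: a single percentile of the predictive
# distribution, minimizing cognitive load.
p90 = float(np.quantile(samples, 0.90))

print(f"P(X > {threshold}) = {p_adverse:.3f}, 90th percentile = {p90:.2f}")
```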
An evaluation of the performance of UK real estate forecasters
Given the significance of forecasting in real estate investment decisions, this paper investigates forecast uncertainty and disagreement in real estate market forecasts. It compares the performance of real estate forecasters with non-real estate forecasters. Using the Investment Property Forum (IPF) quarterly survey amongst UK independent real estate forecasters and a similar survey of macro-economic and capital market forecasters, these forecasts are compared with actual performance to assess a number of forecasting issues in the UK over 1999-2004, including forecast error, bias and consensus. The results suggest that both groups' forecasts are biased, less volatile than actual market returns, and inefficient in that forecast errors tend to persist. The strongest finding is that forecasters display the characteristics associated with a consensus, indicating herding.
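Bias and inefficiency of the kind this abstract reports are commonly tested with a Mincer-Zarnowitz regression of realized values on forecasts, where unbiasedness and efficiency jointly imply an intercept of 0 and a slope of 1. A minimal sketch with invented data that mimics the paper's findings (biased, overly smooth forecasts); this is an illustrative technique, not necessarily the authors' exact test:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: forecasts are systematically damped and biased
# relative to actual returns (all numbers invented for illustration).
actual = rng.normal(8.0, 6.0, 100)                         # e.g. annual returns, %
forecast = 0.5 * actual + 2.0 + rng.normal(0, 1.0, 100)    # smoothed, biased

# Mincer-Zarnowitz regression: actual_t = a + b * forecast_t + e_t.
# Unbiasedness and efficiency imply a = 0 and b = 1.
X = np.column_stack([np.ones_like(forecast), forecast])
a, b = np.linalg.lstsq(X, actual, rcond=None)[0]

print(f"intercept a = {a:.2f}, slope b = {b:.2f}")
print("forecast s.d.:", forecast.std(), "< actual s.d.:", actual.std())
```

Here the fitted slope is well above 1 and the forecast series is markedly less volatile than the actuals, the same signature of smoothed, herding forecasts the paper describes.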
The ECB survey of professional forecasters (SPF) – A review after eight years’ experience
Eight years have passed since the European Central Bank (ECB) launched its Survey of Professional Forecasters (SPF). The SPF asks a panel of approximately 75 forecasters located in the European Union (EU) for their short- to longer-term expectations for macroeconomic variables such as euro area inflation, growth and unemployment. This paper provides an initial assessment of the information content of this survey. First, we consider shorter-term (i.e., one- and two-year ahead rolling horizon) forecasts. The analysis suggests that, over the sample period, in common with other private and institutional forecasters, the SPF systematically under-forecast inflation, but that there is less evidence of such systematic errors for GDP and unemployment forecasts. However, these findings, which generally hold regardless of whether one considers the aggregate SPF panel or individual responses, should be interpreted with caution given the relatively short sample period available for the analysis. Second, we consider SPF respondents' assessment of forecast uncertainty using information from their probability distributions. The results suggest that, particularly at the individual level, SPF respondents do not seem to fully capture the overall level of macroeconomic uncertainty. Moreover, even at the aggregate level, a more sophisticated evaluation of the SPF density forecasts using the probability integral transform largely confirms this assessment. Lastly, we consider longer-term macroeconomic expectations from the SPF; as so few actual realisations are yet available against which to assess these expectations, we provide a mainly qualitative assessment. With regard to inflation, the study suggests that the ECB has been successful at anchoring long-term expectations at rates consistent with its primary objective to ensure price stability over the medium term. Long-term GDP expectations, which should provide an indication of the private sector's assessment of potential growth, have declined over the sample period, and the balance of risks reported by respondents has generally been skewed to the downside.
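The probability integral transform (PIT) evaluation mentioned in the abstract works as follows: each realized outcome is run through the forecaster's stated predictive CDF, and if the density forecasts are well calibrated the resulting values are i.i.d. Uniform(0, 1). A minimal sketch with invented numbers, assuming Gaussian density forecasts that understate uncertainty, which is the failure the SPF evaluation suggests:

```python
import math

import numpy as np

rng = np.random.default_rng(2)

def norm_cdf(x, mu, sigma):
    """Gaussian CDF via the error function (standard library only)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical forecaster: states sigma = 0.5 while outcomes actually
# have sigma = 1.0, i.e. uncertainty is under-estimated (numbers invented).
true_sigma, stated_sigma = 1.0, 0.5
outcomes = rng.normal(0.0, true_sigma, 5_000)

# Probability integral transform: z_t = F_t(y_t); well-calibrated
# density forecasts give z_t ~ i.i.d. Uniform(0, 1).
pit = np.array([norm_cdf(y, 0.0, stated_sigma) for y in outcomes])

# Overconfident forecasts pile PIT mass near 0 and 1, because too many
# outcomes fall in the tails of the stated density.
tail_share = float(((pit < 0.1) | (pit > 0.9)).mean())
print(f"share of PIT values in the outer deciles: {tail_share:.2f} (0.20 if calibrated)")
```

A U-shaped PIT histogram like this one is the standard diagnostic for density forecasts that do not fully capture the underlying uncertainty.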
Reduced perplexity: Uncertainty measures without entropy
Conference paper presented at Recent Advances in Info-Metrics, Washington, DC, 2014. Under review for a book chapter in "Recent innovations in info-metrics: a cross-disciplinary perspective on information and information processing" by Oxford University Press. A simple, intuitive approach to the assessment of probabilistic inferences is introduced. The Shannon information metrics are translated to the probability domain. The translation shows that the negative logarithmic score and the geometric mean are equivalent measures of the accuracy of a probabilistic inference. Thus there is both a quantitative reduction in perplexity as good inference algorithms reduce the uncertainty, and a qualitative reduction due to the increased clarity between the original set of inferences and their average, the geometric mean. Further insight is provided by showing that the Rényi and Tsallis entropy functions, translated to the probability domain, are both the weighted generalized mean of the distribution. The generalized mean of probabilistic inferences forms a Risk Profile of the performance. The arithmetic mean is used to measure decisiveness, while the -2/3 mean is used to measure robustness.
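The equivalence the abstract describes can be checked numerically: perplexity is the exponential of the average negative log score, which is exactly the reciprocal of the geometric mean of the probabilities assigned to the realized outcomes. A minimal sketch with invented probabilities; the generalized-mean helper follows the standard power-mean definition, not necessarily the paper's exact notation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical probabilities an inference algorithm assigned to the
# outcomes that actually occurred (values invented for illustration).
p = rng.uniform(0.2, 0.9, 1_000)

# Average negative log score (Shannon / information domain).
neg_log_score = float(-np.log(p).mean())

# Geometric mean of the same probabilities (probability domain).
geo_mean = float(np.exp(np.log(p).mean()))

# Equivalence: perplexity = exp(neg log score) = 1 / geometric mean.
perplexity = float(np.exp(neg_log_score))
print(perplexity, 1.0 / geo_mean)   # identical up to rounding

def generalized_mean(p, r, w=None):
    """Weighted generalized (power) mean of order r; r -> 0 gives the geometric mean."""
    w = np.full(len(p), 1.0 / len(p)) if w is None else w
    if r == 0:
        return float(np.exp((w * np.log(p)).sum()))
    return float(((w * p ** r).sum()) ** (1.0 / r))

decisiveness = generalized_mean(p, 1.0)        # arithmetic mean
robustness = generalized_mean(p, -2.0 / 3.0)   # the paper's -2/3 mean
```

By the power-mean inequality the robustness measure never exceeds the decisiveness measure, which is what makes sweeping the order r a useful Risk Profile.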
Analysing UK real estate market forecast disagreement
Given the significance of forecasting in real estate investment decisions, this paper investigates forecast uncertainty and disagreement in real estate market forecasts. Using the Investment Property Forum (IPF) quarterly survey amongst UK independent real estate forecasters, these real estate forecasts are compared with actual real estate performance to assess a number of real estate forecasting issues in the UK over 1999-2004, including real estate forecast error, bias and consensus. The results suggest that real estate forecasts are biased, less volatile than actual market returns, and inefficient in that forecast errors tend to persist. The strongest finding is that real estate forecasters display the characteristics associated with a consensus, indicating herding.