Optimal Prediction Pools
A prediction model is any statement of a probability distribution for an outcome not yet observed. This study considers the properties of weighted linear combinations of n prediction models, or linear pools, evaluated using the conventional log predictive scoring rule. The log score is a concave function of the weights and, in general, an optimal linear combination will include several models with positive weights despite the fact that exactly one model has limiting posterior probability one. The paper derives several interesting formal results: for example, a prediction model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with prediction models from the ARCH, stochastic volatility and Markov mixture families. In this example, models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools, and these pools substantially outperform their best components. JEL Classification: C11, C53. Keywords: forecasting, GARCH, log scoring, Markov mixture, model combination, S&P 500 returns, stochastic volatility.
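The pool-weight optimization the abstract describes is easy to sketch. Below is a minimal Python illustration, not the paper's code: the three "models" are stand-in Gaussian predictive densities with assumed scales, and the weights are chosen by maximizing the average log predictive score, which is concave in the weights themselves.

```python
# Minimal sketch: optimal linear-pool weights under the log score.
# The densities here are simulated stand-ins; in practice each column
# holds the predictive density p_i(y_t) a fitted model assigned to the
# realized outcome y_t.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

T, n = 500, 3
y = rng.standard_t(df=5, size=T)          # realized "returns"
scales = [0.8, 1.0, 1.3]                  # hypothetical model spreads
dens = np.column_stack([                  # T x n matrix of p_i(y_t)
    np.exp(-0.5 * (y / s) ** 2) / (s * np.sqrt(2 * np.pi)) for s in scales
])

def neg_log_score(theta):
    # Softmax maps unconstrained theta onto the weight simplex; the
    # score is concave in w (though not in theta), so this small
    # problem is easy to solve numerically.
    w = np.exp(theta) / np.exp(theta).sum()
    return -np.mean(np.log(dens @ w))

res = minimize(neg_log_score, np.zeros(n))
w_opt = np.exp(res.x) / np.exp(res.x).sum()
print("optimal pool weights:", w_opt.round(3))
```

With simulated densities like these, the optimum typically places positive weight on more than one model, mirroring the paper's point that pools can outperform their best individual components.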
Measuring output gap uncertainty
We propose a methodology for producing density forecasts for the output gap in real time using a large number of vector autoregressions in inflation and output gap measures. Density combination utilizes a linear mixture-of-experts framework to produce potentially non-Gaussian ensemble densities for the unobserved output gap. In our application, we show that data revisions substantially alter our probabilistic assessments of the output gap using a variety of output gap measures derived from univariate detrending filters. The resulting ensemble produces well-calibrated forecast densities for US inflation in real time, in contrast to those from simple univariate autoregressions which ignore the contribution of the output gap. Combining evidence from both linear trends and more flexible univariate detrending filters induces strong multi-modality in the predictive densities for the unobserved output gap. The peaks associated with these two detrending methodologies indicate output gaps of opposite sign for some observations, reflecting the pervasive nature of model uncertainty in our US data.
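As a toy illustration of the multi-modality discussed above (assumed numbers, not the paper's estimates), an equally weighted linear opinion pool of two Gaussian gap densities, one centered below zero as a linear trend might imply and one above zero as a flexible filter might imply, is bimodal with modes of opposite sign:

```python
# Toy illustration: a linear opinion pool of two detrending-based
# output-gap densities can be bimodal, with modes of opposite sign.
import numpy as np

grid = np.linspace(-6, 6, 1201)

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Hypothetical component densities: linear trend vs. flexible filter.
pool = 0.5 * gauss(grid, -2.0, 0.8) + 0.5 * gauss(grid, 1.5, 0.7)

# Local maxima of the pooled density reveal the two modes.
peaks = grid[1:-1][(pool[1:-1] > pool[:-2]) & (pool[1:-1] > pool[2:])]
print("modes of the pooled gap density:", peaks.round(2))
```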
Multivariate Bayesian Predictive Synthesis in Macroeconomic Forecasting
We develop the methodology and a detailed case study in the use of a class of Bayesian predictive synthesis (BPS) models for multivariate time series forecasting. This extends the recently introduced foundational framework of BPS to the multivariate setting, with detailed application in the topical and challenging context of multi-step macroeconomic forecasting in a monetary policy setting. BPS evaluates, sequentially and adaptively over time, varying forecast biases and facets of miscalibration of individual forecast densities and, critically, time-varying inter-dependencies among them over multiple series. We develop new BPS methodology for a specific subclass of the dynamic multivariate latent factor models implied by BPS theory. Structured dynamic latent factor BPS is here motivated by the application context: sequential forecasting of multiple US macroeconomic time series with forecasts generated from several traditional econometric time series models. The case study highlights the potential of BPS to improve forecasts of multiple series at multiple forecast horizons, and its use in learning dynamic relationships among forecasting models or agents.
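The core synthesis step of BPS can be sketched in a few lines once the paper's dynamics and multivariate structure are stripped away. In the foundational framework, the decision maker's predictive density takes the form p(y) = ∫ α(y|x) Π_j h_j(x_j) dx, where the h_j are the agents' forecast densities and α is the synthesis function. The sketch below approximates this by Monte Carlo with a Gaussian α; the agent densities and synthesis coefficients are hypothetical, assumed rather than learned.

```python
# Minimal static sketch of one BPS synthesis step (the paper's models
# are dynamic and multivariate; this strips that out). Draw latent
# agent states x_j from their forecast densities h_j, then draw y from
# an assumed Gaussian synthesis function alpha(y | x).
import numpy as np

rng = np.random.default_rng(1)
M = 10_000

# Hypothetical agent forecast densities h_j for one target:
agent_means = np.array([2.0, 2.6, 1.4])
agent_sds = np.array([0.5, 0.4, 0.9])
x = rng.normal(agent_means, agent_sds, size=(M, 3))    # x_j ~ h_j

beta0, beta = 0.1, np.array([0.5, 0.3, 0.15])          # assumed coefficients
sigma = 0.3                                            # residual miscalibration
y = rng.normal(beta0 + x @ beta, sigma)                # y ~ alpha(y | x)

print(f"synthesized predictive mean {y.mean():.2f}, sd {y.std():.2f}")
```

In the paper's setting the coefficients and volatilities evolve over time, which is how BPS learns forecast biases, miscalibration, and inter-dependencies sequentially.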
What the "Equal Weight View" is
Dawid, DeGroot and Mortera showed, a quarter century ago, that any agent who regards a fellow agent as a peer (in particular, defers to the fellow agent's prior credences in the same way that she defers to her own) and updates by split-the-difference is prone, on pain of triviality, to diachronic incoherence. On the other hand, one may show that there are special scenarios in which Bayesian updating approximates difference splitting, so it is an important question whether split-the-difference remains a viable (approximate) response to generic peer update. We critique arguments by two teams of philosophers (Fitelson & Jehle and Nissan-Rozen & Spectre) against this updating scheme, then suggest an alternative "Equal Weight" response to cases of peer disagreement.
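A small numeric check makes the incoherence concrete (my example, not the authors'): split-the-difference does not commute with Bayesian conditioning, so an agent who averages credences and then learns evidence ends up somewhere different from one who learns and then averages.

```python
# Split-the-difference vs. Bayesian conditioning: order matters.
# Two agents assign credences over three mutually exclusive hypotheses,
# then both learn that h3 is false.
import numpy as np

p1 = np.array([0.7, 0.2, 0.1])   # agent 1's credences over h1, h2, h3
p2 = np.array([0.2, 0.3, 0.5])   # agent 2's credences

def condition_on_not_h3(p):
    q = p.copy()
    q[2] = 0.0
    return q / q.sum()

avg_then_update = condition_on_not_h3((p1 + p2) / 2)
update_then_avg = (condition_on_not_h3(p1) + condition_on_not_h3(p2)) / 2
print(avg_then_update.round(3))   # [0.643 0.357 0.   ]
print(update_then_avg.round(3))   # [0.589 0.411 0.   ]
```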
Resolving conflicts between statistical methods by probability combination: Application to empirical Bayes analyses of genomic data
In the typical analysis of a data set, a single method is selected for statistical reporting even when equally applicable methods yield very different results. Examples of equally applicable methods can correspond to those of different ancillary statistics in frequentist inference and of different prior distributions in Bayesian inference. More broadly, choices are made between parametric and nonparametric methods and between frequentist and Bayesian methods.

Rather than choosing a single method, it can be safer, in a game-theoretic sense, to combine those that are equally appropriate in light of the available information. Since methods of combining subjectively assessed probability distributions are not objective enough for that purpose, this paper introduces a method of distribution combination that does not require any assignment of distribution weights. It does so by formalizing a hedging strategy in terms of a game between three players: nature, a statistician combining distributions, and a statistician refusing to combine distributions. The optimal move of the first statistician reduces to the solution of a simpler problem of selecting an estimating distribution that minimizes the Kullback-Leibler loss maximized over the plausible distributions to be combined. The resulting combined distribution is a linear combination of the most extreme of the distributions to be combined that are scientifically plausible. The optimal weights are close enough to each other that no extreme distribution dominates the others.

The new methodology is illustrated by combining conflicting empirical Bayes methodologies in the context of gene expression data analysis.
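The minimax step described above has a simple generic analogue for discrete distributions (a sketch of the hedging idea, not the paper's exact game): pick the combined distribution Q minimizing the worst-case Kullback-Leibler loss over the plausible distributions P_i.

```python
# Generic minimax-KL combination sketch for discrete distributions:
# minimize over Q the worst case of KL(P_i || Q) over plausible P_i.
import numpy as np
from scipy.optimize import minimize

P = np.array([[0.70, 0.20, 0.10],    # hypothetical plausible distributions
              [0.30, 0.40, 0.30],
              [0.10, 0.20, 0.70]])

def worst_kl(theta):
    q = np.exp(theta) / np.exp(theta).sum()     # softmax -> simplex
    return max(np.sum(p * np.log(p / q)) for p in P)

# The max of KL losses is non-smooth, so a derivative-free method is safer.
res = minimize(worst_kl, np.zeros(P.shape[1]), method="Nelder-Mead")
q = np.exp(res.x) / np.exp(res.x).sum()
print("minimax combination:", q.round(3))
```

Here the optimum works out to an even blend of the two extreme rows, Q ≈ [0.4, 0.2, 0.4], with the middle candidate receiving no weight, echoing the abstract's description of the combination as a linear combination of the most extreme plausible distributions.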
Information-based fitness and the emergence of criticality in living systems
Empirical evidence suggesting that living systems might operate in the vicinity of critical points, at the borderline between order and disorder, has proliferated in recent years, with examples ranging from spontaneous brain activity to flock dynamics. However, a well-founded theory for understanding how and why interacting living systems could dynamically tune themselves to be poised in the vicinity of a critical point is lacking. Here we employ tools from statistical mechanics and information theory to show that complex adaptive or evolutionary systems can be much more efficient in coping with diverse heterogeneous environmental conditions when operating at criticality. Analytical as well as computational evolutionary and adaptive models vividly illustrate that a community of such systems dynamically self-tunes close to a critical state as the complexity of the environment increases, while they remain non-critical for simple and predictable environments. A more robust convergence to criticality emerges in co-evolutionary and co-adaptive set-ups in which individuals aim to represent other agents in the community with fidelity, thereby creating a collective critical ensemble and providing the best possible trade-off between accuracy and flexibility. Our approach provides a parsimonious and general mechanism for the emergence of critical-like behavior in living systems needing to cope with complex environments or trying to efficiently coordinate themselves as an ensemble.
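A cartoon of the information-based-fitness idea (my toy construction, far simpler than the evolutionary models in the abstract): when a single internal distribution from a Boltzmann-like family must represent a heterogeneous set of environments, the parameter that minimizes the average Kullback-Leibler loss sits between the ordered and disordered extremes.

```python
# Toy fitness landscape: an internal model q_lam(s) ~ exp(-lam * s) must
# represent several environments drawn from the same family. The best
# single lam under average KL(P_i || q_lam) is intermediate, neither
# fully ordered (large lam) nor fully disordered (lam near 0).
import numpy as np

s = np.arange(10)

def boltz(lam):
    w = np.exp(-lam * s)
    return w / w.sum()

envs = [boltz(l) for l in (0.05, 0.4, 1.5, 3.0)]   # heterogeneous environments
lams = np.linspace(0.01, 3.5, 300)
avg_kl = [np.mean([np.sum(p * np.log(p / boltz(l))) for p in envs])
          for l in lams]
print(f"optimal internal lam = {lams[int(np.argmin(avg_kl))]:.2f}")
```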
Nonequilibrium quantum impurities: from entropy production to information theory
Nonequilibrium steady-state currents, unlike their equilibrium counterparts, continuously dissipate energy into their physical surroundings, leading to entropy production and time-reversal symmetry breaking. This letter discusses these issues in the context of quantum impurity models driven out of equilibrium by attaching the impurity to leads at different chemical potentials and temperatures. We start by pointing out that entropy production is often hidden in traditional treatments of quantum-impurity models. We then use simple thermodynamic arguments to define the rate of entropy production. Using the scattering framework recently developed by the authors, we show that the rate of entropy production has a simple information-theoretic interpretation in terms of the Shannon entropy and Kullback-Leibler divergence of the nonequilibrium distribution function. This allows us to show that the entropy production is strictly positive for any nonequilibrium steady state. We conclude by applying these ideas to the resonant level model and the Kondo model.
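As a concrete companion to the letter's thermodynamic definition (standard Landauer-Büttiker bookkeeping with assumed parameters, not the authors' scattering formalism), the steady-state entropy production rate for a resonant level between two biased leads can be estimated as the heat delivered to each reservoir divided by its temperature:

```python
# Steady-state entropy production for a resonant level between two
# leads, via textbook Landauer-Buettiker currents. Units: hbar = k_B =
# e = 1, so h = 2*pi. All parameters are assumed for illustration.
import numpy as np

eps0, Gamma = 0.0, 0.5                     # level position and width
muL, muR, TL, TR = 0.4, -0.4, 1.2, 0.8     # lead potentials and temperatures

E = np.linspace(-40.0, 40.0, 200_001)
dE = E[1] - E[0]
trans = Gamma**2 / ((E - eps0)**2 + Gamma**2)          # Lorentzian transmission
fL = 1.0 / (1.0 + np.exp((E - muL) / TL))
fR = 1.0 / (1.0 + np.exp((E - muR) / TR))

h = 2 * np.pi
I_N = dE * np.sum(trans * (fL - fR)) / h               # particle current L -> R
I_E = dE * np.sum(E * trans * (fL - fR)) / h           # energy current L -> R

# Heat flowing into each reservoir in steady state:
Q_R = I_E - muR * I_N
Q_L = -(I_E - muL * I_N)
S_dot = Q_L / TL + Q_R / TR                            # entropy production rate
print(f"entropy production rate = {S_dot:.4f} (non-negative)")
```

The second law guarantees S_dot >= 0 for any such two-terminal setup, consistent with the letter's strict-positivity result for nonequilibrium steady states.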