Optimal predictive model selection
Often the goal of model selection is to choose a model for future prediction,
and it is natural to measure the accuracy of a future prediction by squared
error loss. Under the Bayesian approach, it is commonly perceived that the
optimal predictive model is the model with highest posterior probability, but
this is not necessarily the case. In this paper we show that, for selection
among normal linear models, the optimal predictive model is often the median
probability model, which is defined as the model consisting of those variables
which have overall posterior probability greater than or equal to 1/2 of being
in a model. The median probability model often differs from the highest
probability model.
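To make the distinction concrete, here is a minimal sketch in Python (not from the paper; the posterior probabilities and variable names are hypothetical) that reads off both the highest probability model and the median probability model from a posterior over variable subsets.

```python
# Hypothetical posterior distribution over subsets of three candidate
# predictors (models are frozensets of variable names; probabilities sum to 1).
posterior = {
    frozenset(): 0.02,
    frozenset({"x1"}): 0.28,
    frozenset({"x2"}): 0.27,
    frozenset({"x1", "x2"}): 0.25,
    frozenset({"x1", "x3"}): 0.10,
    frozenset({"x2", "x3"}): 0.08,
}

# Highest posterior probability model.
hpm = max(posterior, key=posterior.get)

# Posterior inclusion probability of each variable: total posterior
# probability of all models that contain it.
variables = set().union(*posterior)
inclusion = {v: sum(p for m, p in posterior.items() if v in m) for v in variables}

# Median probability model: variables whose inclusion probability is >= 1/2.
mpm = {v for v, p in inclusion.items() if p >= 0.5}

print("highest probability model:", set(hpm))  # {'x1'}
print("inclusion probabilities:", inclusion)   # x1 ~ 0.63, x2 ~ 0.60, x3 ~ 0.18
print("median probability model:", mpm)        # {'x1', 'x2'} -- differs from the HPM
```

In this toy posterior the highest probability model contains only x1, while the median probability model contains x1 and x2, illustrating how the two selections can disagree.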
Model fit and model selection
This paper uses an example to show that a model that fits the available data perfectly may provide worse answers to policy questions than an alternative, imperfectly fitting model. The author argues that, in the context of Bayesian estimation, this result can be interpreted as being due to the use of an inappropriate prior over the parameters of shock processes. He urges the use of priors that are obtained from explicit auxiliary information, not from the desire to obtain identification.
Keywords: Econometric models; Macroeconomics
Bootstrap for neural model selection
Bootstrap techniques (also called resampling computation techniques) have
introduced new advances in modeling and model evaluation. Using resampling
methods to construct a series of new samples based on the original data set
allows one to estimate the stability of the parameters. Properties such as
convergence and asymptotic normality can be checked for any particular
observed data set. In most cases, the statistics computed on the generated
data sets give a good idea of the confidence regions of the estimates. In
this paper, we discuss the contribution of such methods to model selection
in the case of feedforward neural networks. The method is described and
compared with the leave-one-out resampling method. The effectiveness of the
bootstrap method, versus the leave-one-out method, is checked through a
number of examples.
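As an illustration of the resampling idea (not the authors' implementation), the sketch below uses numpy and substitutes a simple polynomial least-squares fit for the neural-network training step; it compares a bootstrap out-of-bag error estimate with the leave-one-out estimate for candidate models of increasing complexity. The data, degrees, and error criterion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy cubic; the "models" are polynomial fits of varying degree,
# standing in for networks of varying size (a simplification for brevity).
n = 60
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x - 1.5 * x**3 + rng.normal(0.0, 0.2, n)

def fit_predict(x_tr, y_tr, x_te, degree):
    """Least-squares polynomial fit, evaluated on x_te."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.polyval(coeffs, x_te)

def bootstrap_error(x, y, degree, n_boot=200):
    """Mean squared error on out-of-bag points, averaged over bootstrap resamples."""
    errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))        # resample with replacement
        oob = np.setdiff1d(np.arange(len(x)), idx)   # points left out of the resample
        if oob.size == 0:
            continue
        pred = fit_predict(x[idx], y[idx], x[oob], degree)
        errors.append(np.mean((y[oob] - pred) ** 2))
    return np.mean(errors)

def loo_error(x, y, degree):
    """Leave-one-out squared error."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        pred = fit_predict(x[mask], y[mask], x[i:i + 1], degree)
        errs.append((y[i] - pred[0]) ** 2)
    return np.mean(errs)

# Compare the two error estimates across candidate model complexities.
for degree in (1, 3, 6):
    print(degree, round(bootstrap_error(x, y, degree), 4),
          round(loo_error(x, y, degree), 4))
```

Both estimates penalize the underfitting and overfitting candidates; the bootstrap version additionally yields a distribution of resampled errors that can be used to gauge the stability of the selection.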
Model selection for amplitude analysis
Model complexity in amplitude analyses is often a priori under-constrained
since the underlying theory permits a large number of possible amplitudes to
contribute to most physical processes. The use of an overly complex model
results in reduced predictive power and worse resolution on unknown parameters
of interest. Therefore, it is common to reduce the complexity by removing from
consideration some subset of the allowed amplitudes. This paper studies a
method for limiting model complexity from the data sample itself through
regularization during regression in the context of a multivariate (Dalitz-plot)
analysis. The regularization technique applied greatly improves the
performance. An outline of how to obtain the significance of a resonance in a
multivariate amplitude analysis is also provided.
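As a hedged sketch of the general technique (not the analysis in the paper), the example below applies an L1 penalty in a simplified real-valued linear stand-in for an amplitude fit: the columns of A play the role of candidate amplitudes, and the penalty drives the coefficients of non-contributing candidates to zero. The data, penalty strength, and solver are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified stand-in: the observable is modeled as a linear combination of
# candidate "amplitude" basis functions (columns of A); in a real Dalitz-plot
# analysis these would be complex amplitudes entering a likelihood fit.
# Only 3 of the 12 candidates truly contribute here.
n_events, n_amps = 500, 12
A = rng.normal(size=(n_events, n_amps))
true_c = np.zeros(n_amps)
true_c[[0, 3, 7]] = [2.0, -1.0, 0.5]
y = A @ true_c + rng.normal(0.0, 0.3, n_events)

def lasso_ista(A, y, lam, n_iter=2000):
    """L1-regularized least squares via proximal gradient descent (ISTA)."""
    c = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)
        z = c - step * grad
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return c

c_hat = lasso_ista(A, y, lam=20.0)
kept = np.flatnonzero(np.abs(c_hat) > 1e-3)
print("retained amplitude indices:", kept)      # ideally close to [0, 3, 7]
print("fitted coefficients:", np.round(c_hat[kept], 3))
```

The penalty removes the spurious candidates from the fit automatically, mirroring the paper's goal of limiting model complexity from the data sample itself rather than by hand-picking amplitudes.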
- …
