Econometrics: A bird's eye view
As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly over the past few decades. Major advances have taken place in the analysis of cross-sectional data by means of semi-parametric and non-parametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged, and attempts have been made to take it into account either by integrating out its effects or by modeling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment evaluation have been given a more satisfactory foundation. New time series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Non-linear econometric techniques are used increasingly in the analysis of cross-section and time series observations. Applications of Bayesian techniques to econometric problems have been given new impetus, largely thanks to advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided investigators with a unifying framework in which the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus paving the way for establishing the foundations of "real time econometrics". This paper attempts to provide an overview of some of these developments.
Smoothing and mean-covariance estimation of functional data with a Bayesian hierarchical model
Functional data, with basic observational units being functions (e.g.,
curves, surfaces) varying over a continuum, are frequently encountered in
various applications. While many statistical tools have been developed for
functional data analysis, the issue of smoothing all functional observations
simultaneously is less studied. Existing methods often focus on smoothing each
individual function separately, at the risk of removing important systematic
patterns common across functions. We propose a nonparametric Bayesian approach
to smooth all functional observations simultaneously and nonparametrically. In
the proposed approach, we assume that the functional observations are
independent Gaussian processes subject to a common level of measurement errors,
enabling the borrowing of strength across all observations. Unlike most
Gaussian process regression models that rely on pre-specified structures for
the covariance kernel, we adopt a hierarchical framework by assuming a Gaussian
process prior for the mean function and an Inverse-Wishart process prior for
the covariance function. These prior assumptions induce an automatic
mean-covariance estimation in the posterior inference in addition to the
simultaneous smoothing of all observations. Such a hierarchical framework is
flexible enough to incorporate functional data with different characteristics,
including data measured on either common or uncommon grids, and data with
either stationary or nonstationary covariance structures. Simulations and real
data analysis demonstrate that, in comparison with alternative methods, the
proposed Bayesian approach achieves better smoothing accuracy and comparable
mean-covariance estimation results. Furthermore, it can successfully retain the
systematic patterns in the functional observations that are usually neglected
by the existing functional data analyses based on individual-curve smoothing.
Comment: Submitted to Bayesian Analysis
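The core idea of smoothing all curves simultaneously with a shared Gaussian process prior and a common noise level can be sketched as follows. This is a minimal illustration, not the paper's hierarchical model: the squared-exponential kernel, length scale, and noise variance below are fixed assumptions rather than the Gaussian-process/Inverse-Wishart priors the abstract describes.

```python
import numpy as np

def gp_smooth(t, Y, length_scale=0.2, signal_var=1.0, noise_var=0.1):
    """Posterior-mean smoothing of all curves in Y (one curve per row,
    observed on a common grid t) under a shared squared-exponential GP
    prior and a common measurement-noise level."""
    D = t[:, None] - t[None, :]
    K = signal_var * np.exp(-0.5 * (D / length_scale) ** 2)
    # GP regression posterior mean K (K + noise_var I)^{-1} y,
    # applied to every curve at once
    A = np.linalg.solve(K + noise_var * np.eye(len(t)), Y.T)
    return (K @ A).T

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
true = np.sin(2 * np.pi * t)                    # common underlying signal
Y = true + 0.3 * rng.standard_normal((5, 50))   # 5 noisy functional observations
smoothed = gp_smooth(t, Y)
```

Because all five curves share one kernel and one noise level, the estimate borrows strength across observations, which is the mechanism the abstract's hierarchical model generalises by learning the mean and covariance functions from the data.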
Semiparametric Bayesian inference in multiple equation models
This paper outlines an approach to Bayesian semiparametric regression in multiple equation models which can be used to carry out inference in seemingly unrelated regressions or simultaneous equations models with nonparametric components. The approach treats the points on each nonparametric regression line as unknown parameters and uses a prior on the degree of smoothness of each line to ensure valid posterior inference despite the fact that the number of parameters is greater than the number of observations. We develop an empirical Bayesian approach that allows us to estimate the prior smoothing hyperparameters from the data. An advantage of our semiparametric model is that it is written as a seemingly unrelated regressions model with independent normal-Wishart prior. Since this model is a common one, textbook results for posterior inference, model comparison, prediction and posterior computation are immediately available. We use this model in an application involving a two-equation structural model drawn from the labour and returns to schooling literatures.
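The "textbook" posterior computation the abstract relies on can be illustrated with a toy Gibbs sampler for a two-equation SUR model. This sketch uses a flat prior on the coefficients and a Jeffreys-style prior on the error covariance (a simplification of the normal-Wishart setup, and not the paper's semiparametric specification); all data-generating values are invented for the example.

```python
import numpy as np
from scipy.stats import invwishart

def sur_gibbs(X1, y1, X2, y2, n_iter=300, rng=None):
    """Gibbs sampler for a two-equation SUR model: alternate between the
    GLS conditional for the stacked coefficients beta | Sigma and the
    inverse-Wishart conditional for the error covariance Sigma | beta."""
    rng = rng or np.random.default_rng(0)
    n, k1, k2 = len(y1), X1.shape[1], X2.shape[1]
    X = np.zeros((2 * n, k1 + k2))
    X[:n, :k1], X[n:, k1:] = X1, X2
    y = np.concatenate([y1, y2])
    Sigma = np.eye(2)
    betas = []
    for _ in range(n_iter):
        # beta | Sigma: GLS posterior with error covariance Sigma kron I_n
        Oinv = np.kron(np.linalg.inv(Sigma), np.eye(n))
        cov = np.linalg.inv(X.T @ Oinv @ X)
        mean = cov @ (X.T @ Oinv @ y)
        beta = rng.multivariate_normal(mean, cov)
        # Sigma | beta: inverse-Wishart built from the stacked residuals
        E = (y - X @ beta).reshape(2, n)
        Sigma = invwishart.rvs(df=n, scale=E @ E.T, random_state=rng)
        betas.append(beta)
    return np.array(betas)

# simulate two equations with cross-equation error correlation
rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
L = np.array([[1.0, 0.0], [0.6, 0.8]])     # correlated errors
e = L @ rng.standard_normal((2, n))
y1 = 1.0 + 2.0 * x1 + e[0]
y2 = -1.0 + 0.5 * x2 + e[1]
X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])
betas = sur_gibbs(X1, y1, X2, y2)
post_mean = betas[100:].mean(axis=0)       # discard burn-in
```

The paper's semiparametric model adds the nonparametric regression points to the coefficient block, so the same two-step sampler carries over once the smoothness prior is included.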
Hyperparameter Estimation in Bayesian MAP Estimation: Parameterizations and Consistency
The Bayesian formulation of inverse problems is attractive for three primary
reasons: it provides a clear modelling framework; means for uncertainty
quantification; and it allows for principled learning of hyperparameters. The
posterior distribution may be explored by sampling methods, but for many
problems it is computationally infeasible to do so. In this situation maximum a
posteriori (MAP) estimators are often sought. Whilst these are relatively cheap
to compute, and have an attractive variational formulation, a key drawback is
their lack of invariance under change of parameterization. This is a
particularly significant issue when hierarchical priors are employed to learn
hyperparameters. In this paper we study the effect of the choice of
parameterization on MAP estimators when a conditionally Gaussian hierarchical
prior distribution is employed. Specifically we consider the centred
parameterization, the natural parameterization in which the unknown state is
solved for directly, and the noncentred parameterization, which works with a
whitened Gaussian as the unknown state variable, and arises when considering
dimension-robust MCMC algorithms; MAP estimation is well-defined in the
nonparametric setting only for the noncentred parameterization. However, we
show that MAP estimates based on the noncentred parameterization are not
consistent as estimators of hyperparameters; conversely, we show that limits of
finite-dimensional centred MAP estimators are consistent as the dimension tends
to infinity. We also consider empirical Bayesian hyperparameter estimation,
show consistency of these estimates, and demonstrate that they are more robust
with respect to noise than centred MAP estimates. An underpinning concept
throughout is that hyperparameters may only be recovered up to measure
equivalence, a well-known phenomenon in the context of the Ornstein-Uhlenbeck
process.
Comment: 36 pages, 8 figures
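The centred/noncentred distinction the abstract studies can be made concrete with a few lines of code. In this minimal sketch (identity prior covariance, a single amplitude hyperparameter tau, both chosen purely for illustration), the two parameterizations define the same prior on the state; the paper's point is that MAP estimation behaves very differently depending on which set of variables is optimised.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
tau = 2.5           # hyperparameter: prior amplitude of the unknown state
C = np.eye(n)       # prior covariance operator (identity for illustration)

# centred parameterization: the state u is drawn directly from N(0, tau^2 C)
u_centred = tau * rng.standard_normal(n)

# noncentred parameterization: a whitened variable xi ~ N(0, C) is the
# unknown, and the state is reconstructed as u = tau * xi
xi = rng.standard_normal(n)
u_noncentred = tau * xi
```

Both constructions produce states with standard deviation approximately tau, but in the noncentred form the optimisation variable xi is a priori independent of tau, which is what makes dimension-robust MCMC (and well-defined nonparametric MAP estimation) possible, at the cost of the hyperparameter inconsistency the paper identifies.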
Multimodal nested sampling: an efficient and robust alternative to MCMC methods for astronomical data analysis
In performing a Bayesian analysis of astronomical data, two difficult
problems often emerge. First, in estimating the parameters of some model for
the data, the resulting posterior distribution may be multimodal or exhibit
pronounced (curving) degeneracies, which can cause problems for traditional
MCMC sampling methods. Second, in selecting between a set of competing models,
calculation of the Bayesian evidence for each model is computationally
expensive. The nested sampling method introduced by Skilling (2004) has
greatly reduced the computational expense of calculating evidences and also
produces posterior inferences as a by-product. This method has been applied
successfully in cosmological applications by Mukherjee et al. (2006), but their
implementation was efficient only for unimodal distributions without pronounced
degeneracies. Shaw et al. (2007) recently introduced a clustered nested
sampling method which is significantly more efficient in sampling from
multimodal posteriors and also determines the expectation and variance of the
final evidence from a single run of the algorithm, hence providing a further
increase in efficiency. In this paper, we build on the work of Shaw et al. and
present three new methods for sampling and evidence evaluation from
distributions that may contain multiple modes and significant degeneracies; we
also present an even more efficient technique for estimating the uncertainty on
the evaluated evidence. These methods lead to a further substantial improvement
in sampling efficiency and robustness, and are applied to toy problems to
demonstrate the accuracy and economy of the evidence calculation and parameter
estimation. Finally, we discuss the use of these methods in performing Bayesian
object detection in astronomical datasets.
Comment: 14 pages, 11 figures, submitted to MNRAS; some major additions to the previous version in response to the referee's comments
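The evidence calculation that nested sampling makes cheap can be sketched in a few dozen lines. This is Skilling's basic single-mode loop with naive rejection sampling from the prior to replace the worst live point; the paper's contribution is precisely the efficient constrained sampling that replaces this inner loop for multimodal and degenerate posteriors. The toy likelihood, prior, and run lengths below are assumptions for demonstration only.

```python
import numpy as np

def nested_sampling(log_like, prior_sample, n_live=50, n_iter=400, rng=None):
    """Minimal nested sampling evidence estimator: shrink the prior volume
    by a factor exp(-1/n_live) per iteration, accumulating the evidence as
    the sum of (shell width) x (worst live-point likelihood)."""
    rng = rng or np.random.default_rng(0)
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    logL = np.array([log_like(p) for p in live])
    logZ, logX = -np.inf, 0.0
    for i in range(n_iter):
        worst = int(np.argmin(logL))
        logX_new = -(i + 1) / n_live
        log_w = np.log(np.exp(logX) - np.exp(logX_new))   # shell width
        logZ = np.logaddexp(logZ, log_w + logL[worst])
        # naive constrained replacement: rejection-sample the prior until a
        # point exceeds the current likelihood threshold (the expensive step
        # that clustered/ellipsoidal methods accelerate)
        while True:
            p = prior_sample(rng)
            lp = log_like(p)
            if lp > logL[worst]:
                live[worst], logL[worst] = p, lp
                break
        logX = logX_new
    # add the contribution of the remaining live points
    m = logL.max()
    logZ = np.logaddexp(logZ, logX + m + np.log(np.mean(np.exp(logL - m))))
    return logZ

# toy problem: standard normal likelihood, uniform prior on [-5, 5]
log_like = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)
prior_sample = lambda rng: rng.uniform(-5.0, 5.0)
logZ = nested_sampling(log_like, prior_sample)
# analytic evidence: (1/10) * integral of N(0,1) over [-5,5], so logZ near -2.30
```

Posterior samples fall out as a by-product: the sequence of discarded worst points, weighted by their shell widths, is a weighted posterior sample, which is why the method delivers parameter estimation and evidence from a single run.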