Best-fit quasi-equilibrium ensembles: a general approach to statistical closure of underresolved Hamiltonian dynamics
A new method of deriving reduced models of Hamiltonian dynamical systems is
developed using techniques from optimization and statistical estimation. Given
a set of resolved variables that define a model reduction, the
quasi-equilibrium ensembles associated with the resolved variables are employed
as a family of trial probability densities on phase space. The residual that
results from submitting these trial densities to the Liouville equation is
quantified by an ensemble-averaged cost function related to the information
loss rate of the reduction. From an initial nonequilibrium state, the
statistical state of the system at any later time is estimated by minimizing
the time integral of the cost function over paths of trial densities.
Statistical closure of the underresolved dynamics is obtained at the level of
the value function, which equals the optimal cost of reduction with respect to
the resolved variables, and the evolution of the estimated statistical state is
deduced from the Hamilton-Jacobi equation satisfied by the value function. In
the near-equilibrium regime, or under a local quadratic approximation in the
far-from-equilibrium regime, this best-fit closure is governed by a
differential equation for the estimated state vector coupled to a Riccati
differential equation for the Hessian matrix of the value function. Since
memory effects are not explicitly included in the trial densities, a single
adjustable parameter is introduced into the cost function to capture a
time-scale ratio between resolved and unresolved motions. Apart from this
parameter, the closed equations for the resolved variables are completely
determined by the underlying deterministic dynamics
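The near-equilibrium closure described above — an ODE for the estimated state vector coupled to a matrix Riccati equation for the Hessian of the value function — can be sketched generically. The following is a minimal illustration only: the matrices `A` and `D` are placeholder drift and intensity matrices of our own choosing, whereas the paper derives the actual coefficients from the underlying Hamiltonian dynamics.

```python
import numpy as np

def best_fit_closure_step(x, H, A, D, dt):
    """One explicit Euler step of a generic near-equilibrium closure:
    a linear drift for the estimated state x coupled to a matrix
    Riccati equation for the Hessian H of the value function.
    A and D are placeholder matrices; the paper's coefficients come
    from the underlying deterministic dynamics."""
    dx = (A - D @ H) @ x                   # estimated-state evolution
    dH = -(A.T @ H + H @ A) + H @ D @ H    # generic Riccati right-hand side
    return x + dt * dx, H + dt * dH

# toy usage with 2x2 placeholder matrices
A = np.array([[0.0, 1.0], [-1.0, -0.1]])  # weakly damped oscillator drift
D = 0.2 * np.eye(2)                        # placeholder intensity matrix
x, H = np.array([1.0, 0.0]), np.eye(2)
for _ in range(100):
    x, H = best_fit_closure_step(x, H, A, D, dt=0.01)
```

The coupled structure — state equation feeding on the current Hessian, Hessian evolving by a Riccati flow — is the point of the sketch; any quantitative behavior depends entirely on the placeholder matrices.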
Quasi maximum likelihood estimation for strongly mixing state space models and multivariate Lévy-driven CARMA processes
We consider quasi maximum likelihood (QML) estimation for general
non-Gaussian discrete-time linear state space models and equidistantly observed
multivariate Lévy-driven continuous-time autoregressive moving average
(MCARMA) processes. In the discrete-time setting, we prove strong consistency
and asymptotic normality of the QML estimator under standard moment assumptions
and a strong-mixing condition on the output process of the state space model.
In the second part of the paper, we investigate probabilistic and analytical
properties of equidistantly sampled continuous-time state space models and
apply our results from the discrete-time setting to derive the asymptotic
properties of the QML estimator of discretely recorded MCARMA processes. Under
natural identifiability conditions, the estimators are again consistent and
asymptotically normally distributed for any sampling frequency. We also
demonstrate the practical applicability of our method through a simulation
study and a data example from econometrics
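The essence of QML in this setting is to evaluate a Gaussian likelihood by the Kalman filter even though the true noise is non-Gaussian. A minimal scalar sketch, assuming a toy state space model of our own construction (AR(1) state with Laplace noise, Gaussian observation noise) and a simple grid search in place of a numerical optimizer:

```python
import numpy as np

def kalman_quasi_loglik(y, phi, sigma_e, sigma_u):
    """Gaussian quasi log-likelihood of the scalar state space model
        x_t = phi * x_{t-1} + e_t,   y_t = x_t + u_t,
    evaluated by the Kalman filter. The noises need not be Gaussian:
    that is the 'quasi' in QML."""
    x_pred, P_pred, ll = 0.0, 1.0, 0.0
    for yt in y:
        S = P_pred + sigma_u**2                          # innovation variance
        v = yt - x_pred                                  # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)  # Gaussian increment
        K = P_pred / S                                   # Kalman gain
        x_filt = x_pred + K * v
        P_filt = (1 - K) * P_pred
        x_pred = phi * x_filt                            # one-step prediction
        P_pred = phi**2 * P_filt + sigma_e**2
    return ll

# simulate with non-Gaussian (Laplace) state noise, then estimate phi by QML
rng = np.random.default_rng(0)
phi_true, x, y = 0.8, 0.0, []
for _ in range(2000):
    x = phi_true * x + rng.laplace(scale=0.5)   # state noise variance 0.5
    y.append(x + rng.normal(scale=0.3))
grid = np.linspace(0.5, 0.95, 91)
phi_hat = grid[np.argmax([kalman_quasi_loglik(y, p, np.sqrt(0.5), 0.3)
                          for p in grid])]
```

Consistency of `phi_hat` despite the misspecified (Gaussian) likelihood is exactly the kind of result the paper establishes, under moment and strong-mixing conditions.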
Information, VARs and DSGE Models
How informative is a time series representation of a given vector of observables about the structural shocks and impulse response functions in a DSGE model? In this paper we refer to this econometrician's problem as "E-invertibility" and consider the corresponding information problem of the agents in the assumed DGP, the DSGE model, which we refer to as "A-invertibility". We consider how the general nature of the agents' signal extraction problem under imperfect information impacts on the econometrician's problem of attempting to infer the nature of structural shocks and associated impulse responses from the data. We also examine a weaker condition of recoverability. A general conclusion is that validating a DSGE model by comparing its impulse response functions with those of a data VAR is more problematic when we drop the common assumption in the literature that agents have perfect information as an endowment. We develop measures of approximate fundamentalness for both perfect and imperfect information cases and illustrate our results using analytical and numerical examples
Imperfect Information, Optimal Monetary Policy and Informational Consistency
This paper examines the implications of imperfect information (II) for optimal monetary policy with a consistent set of informational assumptions for the modeller and the private sector, an assumption we term informational consistency. We use an estimated simple NK model from Levine et al. (2012), where the assumption of symmetric II significantly improves the fit of the model to US data, to assess the welfare costs of II under commitment, discretion and simple Taylor-type rules. Our main results are: first, common to all information sets, we find significant welfare gains from commitment only with a zero-lower-bound constraint on the interest rate. Second, optimized rules take the form of a price-level rule, or something very close to it, across all information cases. Third, the combination of limited information and a lack of commitment can be particularly serious for welfare. At the same time, we find that II with lags introduces a 'tying one's hands' effect on the policymaker that may improve welfare under discretion. Finally, the impulse response functions under our most extreme imperfect information assumption (output and inflation observed with a two-quarter delay) exhibit hump-shaped behaviour and the fiscal multiplier is significantly enhanced in this case
A globally convergent matricial algorithm for multivariate spectral estimation
In this paper, we first describe a matricial Newton-type algorithm designed
to solve the multivariable spectrum approximation problem. We then prove its
global convergence. Finally, we apply this approximation procedure to
multivariate spectral estimation, and test its effectiveness through
simulation. Simulation shows that, in the case of short observation records,
this method may provide a valid alternative to standard multivariable
identification techniques such as MATLAB's PEM and MATLAB's N4SID
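The structure of a matricial Newton-type method — iterating a matrix update until a matrix-valued equation is satisfied — can be illustrated on a much simpler problem than the paper's spectrum-approximation functional. The sketch below applies Newton's iteration to the matrix square-root equation X² = A; this is our own illustrative choice, not the paper's algorithm, whose Newton step targets the multivariable spectrum approximation problem instead.

```python
import numpy as np

def newton_matrix_sqrt(A, iters=20):
    """Newton iteration X_{k+1} = (X_k + X_k^{-1} A) / 2 for X^2 = A.
    With X_0 = I all iterates are polynomials in A, so the scheme
    converges for symmetric positive definite A. Illustrates the shape
    of a matricial Newton-type method; the paper applies the same idea
    to a spectrum-approximation functional."""
    X = np.eye(A.shape[0])
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.solve(X, A))  # solve X Y = A for Y = X^{-1} A
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
X = newton_matrix_sqrt(A)
# X @ X recovers A to machine precision
```

The global-convergence proof is the hard part in the paper's setting; for the square-root toy problem above, convergence from the identity is classical.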
An optimization principle for deriving nonequilibrium statistical models of Hamiltonian dynamics
A general method for deriving closed reduced models of Hamiltonian dynamical
systems is developed using techniques from optimization and statistical
estimation. As in standard projection operator methods, a set of resolved
variables is selected to capture the slow, macroscopic behavior of the system,
and the family of quasi-equilibrium probability densities on phase space
corresponding to these resolved variables is employed as a statistical model.
The macroscopic dynamics of the mean resolved variables is determined by
optimizing over paths of these probability densities. Specifically, a cost
function is introduced that quantifies the lack-of-fit of such paths to the
underlying microscopic dynamics; it is an ensemble-averaged, squared-norm of
the residual that results from submitting a path of trial densities to the
Liouville equation. The evolution of the macrostate is estimated by minimizing
the time integral of the cost function. The value function for this
optimization satisfies the associated Hamilton-Jacobi equation, and it
determines the optimal relation between the statistical parameters and the
irreversible fluxes of the resolved variables, thereby closing the reduced
dynamics. The resulting equations for the macroscopic variables have the
generic form of governing equations for nonequilibrium thermodynamics, and they
furnish a rational extension of the classical equations of linear irreversible
thermodynamics beyond the near-equilibrium regime. In particular, the value
function is a thermodynamic potential that extends the classical dissipation
function and supplies the nonlinear relation between thermodynamics forces and
fluxes
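Schematically, and with notation that is our own assumption (λ for the parameters of the quasi-equilibrium family ρ̃_λ, {·,·} the Poisson bracket with the Hamiltonian H, ⟨·⟩ the ensemble average), the construction described above can be written as:

```latex
\begin{aligned}
&\text{lack-of-fit cost (ensemble-averaged squared Liouville residual):}\\
&\qquad \sigma(\lambda,\dot\lambda)
   \;=\; \tfrac{1}{2}\,\big\langle\, \big|\, \partial_t \tilde\rho_\lambda
   + \{\tilde\rho_\lambda, H\} \,\big|^2 \,\big\rangle,\\[4pt]
&\text{value function (optimal time-integrated cost):}\\
&\qquad V(\lambda,t) \;=\; \min_{\lambda(\cdot)} \int_{t_0}^{t}
   \sigma\big(\lambda(s),\dot\lambda(s)\big)\, ds,\\[4pt]
&\text{Hamilton--Jacobi equation closing the reduced dynamics:}\\
&\qquad \partial_t V \;+\; \mathcal{H}\big(\lambda,\nabla_{\lambda} V\big) \;=\; 0,
\qquad \mathcal{H} \;=\; \text{Legendre transform of } \sigma \text{ in } \dot\lambda.
\end{aligned}
```

The gradient ∇_λ V then supplies the irreversible fluxes, which is how the value function plays the role of an extended dissipation function.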
Further Model-Based Estimates of U.S. Total Manufacturing Production Capital and Technology, 1949-2005
Production capital and technology (i.e., total factor productivity) in U.S. manufacturing are fundamental for understanding output and productivity growth of the U.S. economy but are unobserved at this level of aggregation and must be estimated before being used in empirical analysis. Previously, we developed a method for estimating production capital and technology based on an estimated dynamic structural economic model and applied the method using annual SIC data for 1947-1997 to estimate production capital and technology in U.S. total manufacturing. In this paper, we update this work by reestimating the model and production capital and technology using annual SIC data for 1949-2001 and partly overlapping NAICS data for 1987-2005.
Keywords: Kalman filter estimation of latent variables
Estimated U.S. Manufacturing Production Capital and Technology Based on an Estimated Dynamic Structural Economic Model
Production capital and total factor productivity or technology are fundamental to understanding output and productivity growth, but are unobserved except at disaggregated levels and must be estimated before being used in empirical analysis. In this paper, we develop estimates of production capital and technology for U.S. total manufacturing based on an estimated dynamic structural economic model. First, using annual U.S. total manufacturing data for 1947-1997, we estimate by maximum likelihood a dynamic structural economic model of a representative production firm. In the estimation, capital and technology are completely unobserved or latent variables. Then, we apply the Kalman filter to the estimated model and the data to compute estimates of model-based capital and technology for the sample. Finally, we describe and evaluate similarities and differences between the model-based and standard estimates of capital and technology reported by the Bureau of Labor Statistics.
Keywords: Kalman filter estimation of latent variables
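The second step described above — running the Kalman filter through an already-estimated model to recover a latent series — can be sketched on a scalar toy model. Everything below (the AR(1) latent state, the parameter values, the Rauch-Tung-Striebel backward pass) is an illustrative assumption, not the paper's production-firm model:

```python
import numpy as np

def kalman_filter_smoother(y, phi, q, r):
    """Filter, then Rauch-Tung-Striebel smooth, a scalar latent AR(1)
    state observed with noise:
        x_t = phi x_{t-1} + e_t (var q),   y_t = x_t + u_t (var r).
    Returns the smoothed latent path, mirroring the extraction of
    latent capital/technology from an estimated model."""
    n = len(y)
    xf, Pf = np.zeros(n), np.zeros(n)      # filtered mean / variance
    xps, Pps = np.zeros(n), np.zeros(n)    # one-step predictions
    xp, Pp = 0.0, q / (1 - phi**2)         # stationary prior
    for t in range(n):
        xps[t], Pps[t] = xp, Pp
        K = Pp / (Pp + r)                  # Kalman gain
        xf[t] = xp + K * (y[t] - xp)
        Pf[t] = (1 - K) * Pp
        xp, Pp = phi * xf[t], phi**2 * Pf[t] + q
    xs = xf.copy()
    for t in range(n - 2, -1, -1):         # backward RTS pass
        J = Pf[t] * phi / Pps[t + 1]
        xs[t] = xf[t] + J * (xs[t + 1] - xps[t + 1])
    return xs

# simulate a latent path, observe it noisily, and recover it by smoothing
rng = np.random.default_rng(1)
phi, q, r, n = 0.9, 0.25, 1.0, 500
x_true = np.zeros(n)
for t in range(1, n):
    x_true[t] = phi * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
y = x_true + rng.normal(scale=np.sqrt(r), size=n)
xs = kalman_filter_smoother(y, phi, q, r)
```

The smoothed path `xs` is a strictly better estimate of the latent state than the raw observations, which is the rationale for the model-based approach.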
Endogenous Persistence in an Estimated DSGE Model under Imperfect Information
We provide a tool for estimating DSGE models by Bayesian maximum-likelihood methods under very general information assumptions. This framework is applied to a New Keynesian model where we compare the standard approach, which assumes an informational asymmetry between private agents and the econometrician, with an assumption of informational symmetry. For the former, private agents observe all state variables including shocks, whereas the econometrician uses only data for output, inflation and interest rates. For the latter, both agents have the same imperfect information set, and this corresponds to what we term the 'informational consistency principle'. We first assume rational expectations and then generalize the model to allow some households and firms to form expectations adaptively. We find that in terms of model posterior probabilities, impulse responses, second moments and autocorrelations, the assumption of informational symmetry by rational agents significantly improves the model fit. We also find qualified empirical support for the heterogeneous expectations model.
JEL Classification: C11, C52, E12, E32.
Keywords: Imperfect Information, DSGE Model, Rational versus Adaptive Expectations, Bayesian Estimation
Dual estimation of the poles and zeros of an ARMA(p,q) process
"September 1985." Bibliography: p. 33-34. Army Research Office Contract DAAG-29-84-K-0005.
M. Isabel Ribeiro, Jose M.F. Moura
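While the paper's dual estimation method is not reproduced here, the objects it estimates are standard: the poles of an ARMA(p, q) process are the roots of its AR polynomial and the zeros are the roots of its MA polynomial. A minimal sketch of that correspondence, with illustrative coefficients of our own choosing:

```python
import numpy as np

def arma_poles_zeros(ar, ma):
    """Poles and zeros of the ARMA(p, q) model
        x_t - a1 x_{t-1} - ... - ap x_{t-p} = e_t + b1 e_{t-1} + ... + bq e_{t-q}.
    Poles: roots of z^p - a1 z^{p-1} - ... - ap.
    Zeros: roots of z^q + b1 z^{q-1} + ... + bq."""
    poles = np.roots(np.concatenate(([1.0], -np.asarray(ar, dtype=float))))
    zeros = np.roots(np.concatenate(([1.0], np.asarray(ma, dtype=float))))
    return poles, zeros

# illustrative ARMA(2,1): x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + e_t + 0.5 e_{t-1}
poles, zeros = arma_poles_zeros(ar=[1.5, -0.7], ma=[0.5])
# stationarity requires all poles strictly inside the unit circle
```

For the coefficients above, the poles are a complex-conjugate pair of modulus √0.7 ≈ 0.84, so the process is stationary, and the single zero sits at −0.5, so it is also invertible.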
- …