On the Scientific Status of Economic Policy: A Tale of Alternative Paradigms
In recent years, a number of contributions have argued that monetary -- and, more generally, economic -- policy is finally becoming more of a science. According to these authors, the policy rules implemented by central banks are nowadays well supported by a theoretical framework (the New Neoclassical Synthesis) on which a general consensus has emerged in the economics profession. In other words, scientific discussion of economic policy seems to be ultimately confined to either fine-tuning this "consensus" model or assessing the extent to which "elements of art" still exist in the conduct of monetary policy. In this paper, we present a substantially different view, rooted in a critical discussion of the theoretical, empirical, and political-economy pitfalls of the neoclassical approach to policy analysis. Our discussion indicates that we are still far from building a science of economic policy. We suggest that a more fruitful research avenue is to explore alternative theoretical paradigms that can escape the strong theoretical requirements of neoclassical models (e.g., equilibrium, rationality, etc.). We briefly introduce one of the most successful alternative research projects -- known in the literature as agent-based computational economics (ACE) -- and present the way it has been applied to policy analysis. We conclude by discussing the methodological status of ACE, as well as the (many) problems it raises.
Keywords: Economic Policy, Monetary Policy, New Neoclassical Synthesis, New Keynesian Models, DSGE Models, Agent-Based Computational Economics, Agent-Based Models, Post-Walrasian Macroeconomics, Evolutionary Economics.
Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles
We present a canonical way to turn any smooth parametric family of
probability distributions on an arbitrary search space $X$ into a
continuous-time black-box optimization method on $X$, the
\emph{information-geometric optimization} (IGO) method. Invariance as a design
principle minimizes the number of arbitrary choices. The resulting \emph{IGO
flow} conducts the natural gradient ascent of an adaptive, time-dependent,
quantile-based transformation of the objective function. It makes no
assumptions on the objective function to be optimized.
The IGO method produces explicit IGO algorithms through time discretization.
It naturally recovers versions of known algorithms and offers a systematic way
to derive new ones. The cross-entropy method is recovered in a particular case,
and can be extended into a smoothed, parametrization-independent maximum
likelihood update (IGO-ML). For Gaussian distributions on $\mathbb{R}^d$, IGO
is related to natural evolution strategies (NES) and recovers a version of the
CMA-ES algorithm. For Bernoulli distributions on $\{0,1\}^d$, we recover the
PBIL algorithm. From restricted Boltzmann machines, we obtain a novel algorithm
for optimization on $\{0,1\}^d$. All these algorithms are unified under a
single information-geometric optimization framework.
Thanks to its intrinsic formulation, the IGO method achieves invariance under
reparametrization of the search space $X$, under a change of parameters of the
probability distributions, and under increasing transformations of the
objective function.
Theory strongly suggests that IGO algorithms have minimal loss in diversity
during optimization, provided the initial diversity is high. First experiments
using restricted Boltzmann machines confirm this insight. Thus IGO seems to
provide, from information theory, an elegant way to spontaneously explore
several valleys of a fitness landscape in a single run.
Comment: Final published version
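The Bernoulli case mentioned above can be sketched concretely. The following is a minimal, PBIL-style instance of the IGO idea: sample from an independent-Bernoulli model, select an elite quantile of the population, and move the mean parameters toward the elite mean (in this parametrization, the natural-gradient step reduces to exactly that convex combination). The OneMax objective, population size, and learning rate are illustrative assumptions, not values from the paper.

```python
import random

def igo_bernoulli(f, d, pop=20, elite=5, eta=0.1, iters=150, seed=0):
    """Minimal IGO/PBIL-style optimizer for Bernoulli distributions on {0,1}^d.

    Samples from an independent-Bernoulli model, keeps the elite quantile,
    and updates the mean parameters theta toward the elite mean, which is
    the natural-gradient step in the mean parametrization.
    """
    rng = random.Random(seed)
    theta = [0.5] * d                       # Bernoulli mean parameters
    best_x, best_f = None, float("-inf")
    for _ in range(iters):
        xs = [[1 if rng.random() < p else 0 for p in theta] for _ in range(pop)]
        xs.sort(key=f, reverse=True)        # rank population by objective value
        fx = f(xs[0])
        if fx > best_f:
            best_x, best_f = xs[0], fx
        top = xs[:elite]                    # quantile-based selection weights
        for j in range(d):
            elite_mean = sum(x[j] for x in top) / elite
            theta[j] = (1 - eta) * theta[j] + eta * elite_mean
            theta[j] = min(max(theta[j], 0.02), 0.98)  # keep residual diversity
    return best_x, best_f, theta

# Toy objective: OneMax, the number of ones in the bit string.
onemax = sum
x, fx, theta = igo_bernoulli(onemax, d=20)
```

The clipping of theta away from 0 and 1 is a crude stand-in for the diversity preservation that, per the abstract, the theory suggests IGO enjoys intrinsically.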
Linear Regression from Strategic Data Sources
Linear regression is a fundamental building block of statistical data
analysis. It amounts to estimating the parameters of a linear model that maps
input features to corresponding outputs. In the classical setting where the
precision of each data point is fixed, the famous Aitken/Gauss-Markov theorem
in statistics states that generalized least squares (GLS) is a so-called "Best
Linear Unbiased Estimator" (BLUE). In modern data science, however, one often
faces strategic data sources, namely, individuals who incur a cost for
providing high-precision data.
In this paper, we study a setting in which features are public but
individuals choose the precision of the outputs they reveal to an analyst. We
assume that the analyst performs linear regression on this dataset, and
individuals benefit from the outcome of this estimation. We model this scenario
as a game where individuals minimize a cost comprising two components: (a) an
(agent-specific) disclosure cost for providing high-precision data; and (b) a
(global) estimation cost representing the inaccuracy in the linear model
estimate. In this game, the linear model estimate is a public good that
benefits all individuals. We establish that this game has a unique non-trivial
Nash equilibrium. We study the efficiency of this equilibrium and we prove
tight bounds on the price of stability for a large class of disclosure and
estimation costs. Finally, we study the estimator accuracy achieved at
equilibrium. We show that, in general, Aitken's theorem does not hold under
strategic data sources, though it does hold if individuals have identical
disclosure costs (up to a multiplicative factor). When individuals have
non-identical costs, we derive a bound on the improvement of the equilibrium
estimation cost that can be achieved by deviating from GLS, under mild
assumptions on the disclosure cost functions.
Comment: This version (v3) extends the results on the sub-optimality of GLS (Section 6) and improves the writing in multiple places compared to v2. Compared to the initial version v1, it also fixes an error in Theorem 6 (now Theorem 5) and extends many of the results.
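The BLUE property of GLS that the abstract builds on can be checked numerically. Below is a minimal Monte Carlo sketch (not the paper's model): a scalar-slope regression with known heteroskedastic noise, where GLS weights each point by its inverse variance while OLS ignores the precisions. The specific design points and noise levels are assumptions chosen for illustration.

```python
import random

def gls_vs_ols(beta=2.0, trials=2000, seed=0):
    """Monte Carlo comparison of GLS and OLS for y_i = beta * x_i + eps_i,
    with eps_i ~ N(0, sigma_i^2) and the sigma_i known.

    The Aitken/Gauss-Markov theorem says the GLS estimator (inverse-variance
    weighted) has variance no larger than any linear unbiased estimator,
    OLS included; the gap is large when precisions differ a lot.
    """
    rng = random.Random(seed)
    xs     = [1.0, 2.0, 3.0, 4.0, 5.0]
    sigmas = [0.1, 2.0, 0.1, 2.0, 0.1]   # heterogeneous output precisions
    ols_est, gls_est = [], []
    for _ in range(trials):
        ys = [beta * x + rng.gauss(0.0, s) for x, s in zip(xs, sigmas)]
        # OLS: unweighted normal equations for the slope.
        ols = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
        # GLS: weight each observation by 1 / sigma_i^2.
        w = [1.0 / s ** 2 for s in sigmas]
        gls = (sum(wi * x * y for wi, x, y in zip(w, xs, ys))
               / sum(wi * x * x for wi, x in zip(w, xs)))
        ols_est.append(ols)
        gls_est.append(gls)
    def var(v):
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / len(v)
    return var(ols_est), var(gls_est)

v_ols, v_gls = gls_vs_ols()
```

The paper's point is that when the sigma_i are chosen strategically by the individuals rather than fixed, this ordering can fail; the sketch only reproduces the classical fixed-precision baseline.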
Quantum Multiobservable Control
We present deterministic algorithms for the simultaneous control of an
arbitrary number of quantum observables. Unlike optimal control approaches
based on cost function optimization, quantum multiobservable tracking control
(MOTC) is capable of tracking predetermined homotopic trajectories to target
expectation values in the space of multiobservables. The convergence of these
algorithms is facilitated by the favorable critical topology of quantum control
landscapes. Fundamental properties of quantum multiobservable control
landscapes that underlie the efficiency of MOTC, including the multiobservable
controllability Gramian, are introduced. The effects of multiple control
objectives on the structure and complexity of optimal fields are examined. With
minor modifications, the techniques described herein can be applied to general
quantum multiobjective control problems.
Comment: To appear in Physical Review
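The tracking idea — marching an expectation value along a predetermined homotopy path rather than minimizing a terminal cost — can be illustrated on a deliberately tiny toy. The sketch below tracks a single observable for a two-level system with Hamiltonian H = sigma_z + u * sigma_x (my choice, not the paper's setting; MOTC proper handles many observables simultaneously via the controllability Gramian), using a closed-form expectation and a Newton correction at each path point.

```python
import math

def sigma_z_expectation(u, T=1.0):
    """<sigma_z> after evolving |0> for time T under H = sigma_z + u * sigma_x.

    For this two-level toy the expectation has the closed form
    <sigma_z> = 1 - (2 u^2 / (1 + u^2)) * sin^2(sqrt(1 + u^2) * T).
    """
    w2 = 1.0 + u * u
    return 1.0 - (2.0 * u * u / w2) * math.sin(math.sqrt(w2) * T) ** 2

def track_expectation(target, steps=20, tol=1e-10):
    """Follow a linear homotopy of target expectation values, correcting the
    control u by Newton's method at each step, in the spirit of tracking
    control: the path to the target is prescribed, not emergent.

    Start at u = 0.2 rather than u = 0: the expectation is even in u, so
    u = 0 is a stationary point where the Newton slope vanishes.
    """
    u = 0.2
    start = sigma_z_expectation(u)
    for k in range(1, steps + 1):
        goal = start + (target - start) * k / steps   # homotopy path point
        for _ in range(50):                           # Newton correction
            r = sigma_z_expectation(u) - goal
            if abs(r) < tol:
                break
            h = 1e-6                                  # finite-difference slope
            d = (sigma_z_expectation(u + h) - sigma_z_expectation(u - h)) / (2 * h)
            u -= r / d
    return u

u_star = track_expectation(0.8)
```

With several observables the scalar Newton slope becomes a Jacobian (related to the multiobservable controllability Gramian of the abstract), and each correction is a linear solve instead of a division.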
An efficient null space inexact Newton method for hydraulic simulation of water distribution networks
Null space Newton algorithms are efficient in solving the nonlinear equations
arising in hydraulic analysis of water distribution networks. In this article,
we propose and evaluate an inexact Newton method that relies on partial updates
of the network pipes' frictional headloss computations to solve the linear
systems more efficiently and with numerical reliability. The update set
parameters are studied to propose appropriate values. Different null space
basis generation schemes are analysed to choose methods for sparse and
well-conditioned null space bases resulting in a smaller update set. The Newton
steps are computed in the null space by solving sparse, symmetric positive
definite systems with sparse Cholesky factorizations. By using the constant
structure of the null space system matrices, a single symbolic factorization in
the Cholesky decomposition is used multiple times, reducing the computational
cost of linear solves. The algorithms and analyses are validated using medium
to large-scale water network models.
Comment: 15 pages, 9 figures. Preprint extension of Abraham and Stoianov, 2015 (https://dx.doi.org/10.1061/(ASCE)HY.1943-7900.0001089), September 2015. Includes extended exposition, additional case studies, and new simulations and analysis.
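The null space mechanics described above can be sketched on the smallest possible network: two pipes in parallel. The mass-balance constraint is a 1x2 linear system whose null space is one-dimensional, so every Newton step stays feasible and acts only on the loop (energy) residual. The resistances, demand, and Hazen-Williams-style exponent n = 1.852 below are illustrative assumptions, not values from the paper.

```python
def headloss(r, q, n=1.852):
    """Frictional headloss h = r * q * |q|^(n-1) (Hazen-Williams-style)."""
    return r * q * abs(q) ** (n - 1)

def headloss_slope(r, q, n=1.852):
    """Derivative dh/dq = n * r * |q|^(n-1); always nonnegative."""
    return n * r * abs(q) ** (n - 1)

def null_space_newton(r1, r2, demand, iters=30, tol=1e-12):
    """Null space Newton iteration for two pipes in parallel.

    Mass balance q1 + q2 = demand is A q = d with A = (1, 1) and null space
    basis Z = (1, -1)^T. Writing q = q0 + Z * t keeps every iterate feasible,
    and Newton acts on the scalar loop residual phi(t) = Z^T h(q) = h1 - h2,
    which vanishes when the parallel pipes carry equal headloss.
    """
    q1, q2 = demand / 2.0, demand / 2.0    # any feasible starting point
    for _ in range(iters):
        phi = headloss(r1, q1) - headloss(r2, q2)
        if abs(phi) < tol:
            break
        # Reduced Jacobian Z^T diag(dh/dq) Z: here a 1x1 SPD "system".
        dphi = headloss_slope(r1, q1) + headloss_slope(r2, q2)
        t = -phi / dphi
        q1, q2 = q1 + t, q2 - t
    return q1, q2

q1, q2 = null_space_newton(r1=1.0, r2=2.0, demand=10.0)
```

In a real network the reduced Jacobian is a sparse SPD matrix rather than a scalar, which is where the paper's reusable symbolic Cholesky factorization and partial headloss updates pay off.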