Universal Risk Budgeting
I juxtapose Cover's vaunted universal portfolio selection algorithm (Cover
1991) with the modern representation (Qian 2016; Roncalli 2013) of a portfolio
as a certain allocation of risk among the available assets, rather than a mere
allocation of capital. Thus, I define a Universal Risk Budgeting scheme that
weights each risk budget (instead of each capital budget) by its historical
performance record (a la Cover). I prove that my scheme is mathematically
equivalent to a novel type of Cover and Ordentlich 1996 universal portfolio
that uses a new family of prior densities that have hitherto not appeared in
the literature on universal portfolio theory. I argue that my universal risk
budget, so-defined, is a potentially more perspicuous and flexible type of
universal portfolio; it allows the algorithmic trader to incorporate, with
advantage, his prior knowledge (or beliefs) about the particular covariance
structure of instantaneous asset returns. For example, if there is some dispersion in
the volatilities of the available assets, then the uniform (or Dirichlet)
priors that are standard in the literature will generate a dangerously lopsided
prior distribution over the possible risk budgets. In the author's opinion, the
proposed "Garivaltis prior" makes for a nice improvement on Cover's timeless
expert system (Cover 1991), one that is properly agnostic and open (from the very
get-go) to different risk budgets. Inspired by Jamshidian 1992, the universal
risk budget is formulated as a new kind of exotic option in the continuous time
Black and Scholes 1973 market, with all the pleasure, elegance, and convenience
that that entails.
Comment: 25 pages, 8 figures
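To make the weighting scheme concrete, here is a minimal Python sketch of a discretised, Cover-style universal portfolio taken over a grid of risk budgets rather than capital budgets. The two-asset setup, the square-root-over-volatility mapping from risk budgets to capital weights (exact only for uncorrelated assets), and all names are illustrative assumptions, not the paper's continuous-time construction; a uniform prior over the budget grid is used for brevity, whereas the paper's point is precisely that a better (Garivaltis) prior can be swapped in.

```python
import numpy as np

def risk_budget_to_weights(r, sigma):
    """Map a risk budget r (risk shares summing to 1) to capital weights.
    With uncorrelated assets, the risk contribution of asset i scales as
    (w_i * sigma_i)^2, so matching it to r_i gives w_i ~ sqrt(r_i) / sigma_i."""
    w = np.sqrt(r) / sigma
    return w / w.sum()

def universal_risk_budget(returns, sigma, n_grid=101):
    """Weight each candidate risk budget by its historical wealth (a la Cover)
    and return the wealth-averaged capital allocation at each step."""
    grid = np.linspace(0.0, 1.0, n_grid)           # risk share of asset 0
    budgets = np.stack([grid, 1.0 - grid], axis=1)
    weights = np.array([risk_budget_to_weights(r, sigma) for r in budgets])
    wealth = np.ones(n_grid)                       # S_t(r), uniform prior
    allocations = []
    for x in returns:                              # x = gross returns (1 + net)
        # Performance-weighted mixture over risk budgets:
        b_hat = (wealth[:, None] * weights).sum(axis=0) / wealth.sum()
        allocations.append(b_hat)
        wealth *= weights @ x                      # update each budget's wealth
    return np.array(allocations)

# Example: asset 0 twice as volatile as asset 1.
rng = np.random.default_rng(0)
rets = 1.0 + rng.normal([0.001, 0.0005], [0.02, 0.01], size=(250, 2))
alloc = universal_risk_budget(rets, sigma=np.array([0.02, 0.01]))
print(alloc[-1])  # final wealth-weighted capital allocation
```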
Bayesian calibration of the nitrous oxide emission module of an agro-ecosystem model
Nitrous oxide (N2O) is the main biogenic greenhouse gas contributing to the global warming potential
(GWP) of agro-ecosystems. Evaluating the impact of agriculture on climate therefore requires a capacity
to predict N2O emissions in relation to environmental conditions and crop management. Biophysical
models simulating the dynamics of carbon and nitrogen in agro-ecosystems have a unique potential to
explore these relationships, but are fraught with high uncertainties in their parameters due to their
variations over time and space. Here, we used a Bayesian approach to calibrate the parameters of the N2O
submodel of the agro-ecosystem model CERES-EGC. The submodel simulates N2O emissions from the
nitrification and denitrification processes, which are modelled as the product of a potential rate with
three dimensionless factors related to soil water content, nitrogen content and temperature. These
equations involve a total set of 15 parameters, four of which are site-specific and should be measured on
site, while the other 11 are considered global, i.e. invariant over time and space. We first gathered prior
information on the model parameters based on a literature review, and assigned them uniform
probability distributions. A Bayesian method based on the Metropolis–Hastings algorithm was
subsequently developed to update the parameter distributions against a database of seven different
field sites in France. Three parallel Markov chains were run to ensure convergence of the algorithm.
This site-specific calibration significantly reduced the spread of the parameter distributions and the
uncertainty in the N2O simulations. The model’s root mean square error (RMSE) was also abated by 73%
across the field sites compared to the prior parameterization. The Bayesian calibration was subsequently
applied simultaneously to all data sets, to obtain better global estimates for the parameters initially
deemed universal. This made it possible to reduce the RMSE by 33% on average, compared to the
uncalibrated model. These global parameter values may be used to obtain more realistic estimates of
N2O emissions from arable soils at regional or continental scales.
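The calibration step described above can be sketched in a few lines of Python. The toy `emission_model` below (a potential rate times dimensionless water, nitrogen and temperature factors), the parameter bounds, and the Gaussian error scale are placeholders, not the actual CERES-EGC submodel; only the Metropolis–Hastings mechanics with uniform priors follow the abstract.

```python
import numpy as np

def emission_model(theta, env):
    # Stand-in for the N2O submodel: potential rate times dimensionless
    # water, nitrogen and temperature factors (all illustrative forms).
    potential, k_w, k_n, k_t = theta
    w, n, t = env
    return potential * (w ** k_w) * (n ** k_n) * np.exp(k_t * (t - 20.0))

def log_posterior(theta, env, obs, bounds, sigma_obs=1.0):
    if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
        return -np.inf                             # uniform prior support
    resid = obs - emission_model(theta, env)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

def metropolis_hastings(env, obs, bounds, n_iter=50_000, step=0.05, seed=0):
    theta = bounds.mean(axis=1)                    # start at prior midpoint
    lp = log_posterior(theta, env, obs, bounds)
    chain = np.empty((n_iter, theta.size))
    rng = np.random.default_rng(seed)
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_posterior(proposal, env, obs, bounds)
        if np.log(rng.uniform()) < lp_new - lp:    # accept/reject
            theta, lp = proposal, lp_new
        chain[i] = theta
    return chain
```

To mirror the study's convergence check, one would run `metropolis_hastings` three times with different seeds and compare the resulting chains (e.g., with the Gelman–Rubin statistic).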
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory
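As a compact statement of the selection rule just described, the ideal MDL hypothesis choice can be sketched as follows, with K(·) the prefix Kolmogorov complexity and m(H) = 2^{-K(H)} the algorithmic universal probability (equalities up to additive constants; notation standard rather than the paper's exact formalism):

```latex
\[
  H_{\mathrm{MDL}}
  \;=\; \arg\min_{H}\bigl[-\log \mathbf{m}(H) \;-\; \log P(D \mid H)\bigr]
  \;=\; \arg\min_{H}\bigl[K(H) \;-\; \log P(D \mid H)\bigr].
\]
```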
On Universal Prediction and Bayesian Confirmation
The Bayesian framework is a well-studied and successful framework for
inductive reasoning, which includes hypothesis testing and confirmation,
parameter estimation, sequence prediction, classification, and regression. But
standard statistical guidelines for choosing the model class and prior are not
always available or fail, in particular in complex situations. Solomonoff
completed the Bayesian framework by providing a rigorous, unique, formal, and
universal choice for the model class and the prior. We discuss in breadth how
and in which sense universal (non-i.i.d.) sequence prediction solves various
(philosophical) problems of traditional Bayesian sequence prediction. We show
that Solomonoff's model possesses many desirable properties: it satisfies strong
total and weak instantaneous bounds; in contrast to most classical continuous
prior densities, it has no zero p(oste)rior problem, i.e. it can confirm
universal hypotheses; it is reparametrization and regrouping invariant; and it
avoids the old-evidence and updating problems. It even performs well (actually
better) in non-computable environments.
Comment: 24 pages
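For reference, Solomonoff's universal choice of prior mentioned above can be sketched in its standard form: the prior weight of sequences beginning with x is the probability that a universal monotone machine U, fed uniformly random bits, outputs something starting with x (ℓ(p) is the program length; this is the textbook formulation, stated here without the paper's refinements):

```latex
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}.
\]
```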
Is there a physically universal cellular automaton or Hamiltonian?
It is known that both quantum and classical cellular automata (CA) exist that
are computationally universal in the sense that they can simulate, after
appropriate initialization, any quantum or classical computation, respectively.
Here we introduce a different notion of universality: a CA is called physically
universal if every transformation on any finite region can be (approximately)
implemented by the autonomous time evolution of the system after the complement
of the region has been initialized in an appropriate way. We pose the question
of whether physically universal CAs exist. Such CAs would provide a model of
the world where the boundary between a physical system and its controller can
be consistently shifted, in analogy to the Heisenberg cut for the quantum
measurement problem. We propose to study the thermodynamic cost of computation
and control within such a model because implementing a cyclic process on a
microsystem may require a non-cyclic process for its controller, whereas
implementing a cyclic process on system and controller may require the
implementation of a non-cyclic process on a "meta"-controller, and so on.
Physically universal CAs avoid this infinite hierarchy of controllers and the
cost of implementing cycles on a subsystem can be described by mixing
properties of the CA dynamics. We define a physical prior on the CA
configurations by applying the dynamics to an initial state where half of the
CA is in the maximum entropy state and half of it is in the all-zero state
(thus reflecting the fact that life requires non-equilibrium states like the
boundary between a hot and a cold reservoir). As opposed to Solomonoff's
prior, our prior accounts not only for the Kolmogorov complexity but also
for the cost of isolating the system during the state preparation if the
preparation process is not robust.
Comment: 27 pages, 1 figure
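As an illustration of how such a prior could be estimated in practice, the following toy sketch applies the construction to an elementary one-dimensional CA: half the cells start in the maximum entropy (uniform random) state, half in the all-zero state, the dynamics are run, and the induced distribution over configurations is estimated by Monte Carlo. Rule 110, the lattice size, and the step count are stand-in assumptions; the paper's setting is a physically universal CA, which rule 110 is not claimed to be.

```python
import numpy as np

RULE = 110
TABLE = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)

def step(cells):
    """One synchronous update of an elementary CA with periodic boundaries."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return TABLE[4 * left + 2 * cells + right]

def sample_physical_prior(n_cells=64, n_steps=200, n_samples=5_000, seed=0):
    """Monte Carlo estimate of the 'physical prior': evolve the half-random /
    half-zero initial ensemble and count the resulting configurations."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        cells = np.zeros(n_cells, dtype=np.uint8)
        cells[: n_cells // 2] = rng.integers(0, 2, n_cells // 2)  # max-entropy half
        for _ in range(n_steps):
            cells = step(cells)
        key = cells.tobytes()
        counts[key] = counts.get(key, 0) + 1
    return {k: v / n_samples for k, v in counts.items()}
```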
Free Lunch for Optimisation under the Universal Distribution
Function optimisation is a major challenge in computer science. The No Free
Lunch theorems state that if all functions with the same histogram are assumed
to be equally probable then no algorithm outperforms any other in expectation.
We argue against the uniform assumption and suggest a universal prior exists
for which there is a free lunch, but where no particular class of functions is
favoured over another. We also prove upper and lower bounds on the size of the
free lunch.
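For context, the No Free Lunch statement being argued against can be sketched as follows: if P is uniform over all functions f with a given histogram (more generally, any distribution closed under permutations of the domain), then for any two black-box search algorithms A and B and any performance measure Φ,

```latex
\[
  \mathbb{E}_{f \sim P}\,\Phi(A, f) \;=\; \mathbb{E}_{f \sim P}\,\Phi(B, f),
\]
```

so no algorithm outperforms any other in expectation; replacing the uniform P with a universal prior is what opens the door to the free lunch claimed above.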
HypTrails: A Bayesian Approach for Comparing Hypotheses About Human Trails on the Web
When users interact with the Web today, they leave sequential digital trails
on a massive scale. Examples of such human trails include Web navigation,
sequences of online restaurant reviews, or online music playlists.
Understanding the factors that drive the production of these trails can be
useful for, e.g., improving underlying network structures, predicting user
clicks or enhancing recommendations. In this work, we present a general
approach called HypTrails for comparing a set of hypotheses about human trails
on the Web, where hypotheses represent beliefs about transitions between
states. Our approach utilizes Markov chain models with Bayesian inference. The
main idea is to incorporate hypotheses as informative Dirichlet priors and to
leverage the sensitivity of Bayes factors on the prior for comparing hypotheses
with each other. For eliciting Dirichlet priors from hypotheses, we present an
adaptation of the so-called (trial) roulette method. We demonstrate the general
mechanics and applicability of HypTrails by performing experiments with (i)
synthetic trails for which we control the mechanisms that have produced them
and (ii) empirical trails stemming from different domains including website
navigation, business reviews, and online music listening. Our work expands the
repertoire of methods available for studying human trails on the Web.
Comment: Published in the proceedings of WWW'15
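A minimal sketch of the comparison machinery described above (not the full HypTrails tool): each hypothesis is expressed as Dirichlet pseudo-counts over transitions, the marginal likelihood of the observed transition counts is computed under a Dirichlet-multinomial Markov chain, and hypotheses are compared via log Bayes factors. The pseudo-count construction below is schematic rather than the paper's trial roulette elicitation.

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(counts, alpha):
    """Log marginal likelihood of transition counts (S x S) given row-wise
    Dirichlet priors alpha (S x S) on the transition matrix."""
    lml = 0.0
    for c_row, a_row in zip(counts, alpha):
        lml += gammaln(a_row.sum()) - gammaln(a_row.sum() + c_row.sum())
        lml += np.sum(gammaln(a_row + c_row) - gammaln(a_row))
    return lml

# Toy example: two hypotheses over 3 states, expressed with the same total
# concentration k so that their evidences are directly comparable.
counts = np.array([[10, 2, 1],
                   [1, 12, 3],
                   [2, 2, 9]])                    # observed transitions
k = 30.0
h_self_loop = np.eye(3) * (k / 3) + 1.0           # "users tend to stay put"
h_uniform = np.full((3, 3), k / 9) + 1.0          # "all transitions alike"
# (+1.0 keeps each Dirichlet prior proper where the hypothesis assigns zero.)
lbf = log_evidence(counts, h_self_loop) - log_evidence(counts, h_uniform)
print(f"log Bayes factor (self-loop vs. uniform): {lbf:.2f}")
```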
The SWELLS Survey. VI. hierarchical inference of the initial mass functions of bulges and discs
The long-standing assumption that the stellar initial mass function (IMF) is
universal has recently been challenged by a number of observations. Several
studies have shown that a "heavy" IMF (e.g., with a Salpeter-like abundance of
low-mass stars, and thus a Salpeter-like normalisation) is preferred for massive early-type
galaxies, while this IMF is inconsistent with the properties of less massive,
later-type galaxies. These discoveries motivate the hypothesis that the IMF may
vary (possibly very slightly) across galaxies and across components of
individual galaxies (e.g. bulges vs discs). In this paper we use a sample of 19
late-type strong gravitational lenses from the SWELLS survey to investigate the
IMFs of the bulges and discs in late-type galaxies. We perform a joint analysis
of the galaxies' total masses (constrained by strong gravitational lensing) and
stellar masses (constrained by optical and near-infrared colours in the context
of a stellar population synthesis [SPS] model, up to an IMF normalisation
parameter). Using minimal assumptions apart from the physical constraint that
the total stellar mass within any aperture must be less than the total mass
within the aperture, we find that the bulges of the galaxies cannot have IMFs
heavier (i.e. implying a higher mass per unit luminosity) than Salpeter, while the
disc IMFs are not well constrained by this data set. We also discuss the
necessity for hierarchical modelling when combining incomplete information
about multiple astronomical objects. This modelling approach allows us to place
upper limits on the size of any departures from universality. More data,
including spatially resolved kinematics (as in paper V) and stellar population
diagnostics over a range of bulge and disc masses, are needed to robustly
quantify how the IMF varies within galaxies.
Comment: Accepted for publication in MNRAS. 15 pages, 8 figures. Code
available at https://github.com/eggplantbren/SWELLS_Hierarchica
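Schematically, the hierarchical inference described above can be sketched as follows, with α_i the IMF mismatch parameter of lens i (true stellar mass over the SPS estimate), (μ, τ) the population-level hyperparameters, and the indicator enforcing the aperture-mass constraint; the symbols and this factorisation are assumptions for illustration, not the paper's exact parameterization:

```latex
\[
  p(\mu, \tau \mid \{d_i\}) \;\propto\; p(\mu, \tau)
  \prod_{i=1}^{19} \int p(d_i \mid \alpha_i)\,
  p(\alpha_i \mid \mu, \tau)\,
  \mathbb{1}\!\left[\alpha_i M_i^{\mathrm{SPS}} \le M_i^{\mathrm{lens}}\right]
  \mathrm{d}\alpha_i .
\]
```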