The parameterized space complexity of model-checking bounded variable first-order logic
The parameterized model-checking problem for a class of first-order sentences
(queries) asks to decide whether a given sentence from the class holds true in
a given relational structure (database); the parameter is the length of the
sentence. We study the parameterized space complexity of the model-checking
problem for queries with a bounded number of variables. For each bound on the
quantifier alternation rank, the problem becomes complete for the corresponding
level of what we call the tree hierarchy, a hierarchy of parameterized
complexity classes defined via space bounded alternating machines between
parameterized logarithmic space and fixed-parameter tractable time. We observe
that a parameterized logarithmic space model-checker for existential bounded
variable queries would make it possible to improve Savitch's classical simulation of
nondeterministic logarithmic space in deterministic space O(log² n).
Further, we define a highly space-efficient model-checker for queries with a
bounded number of variables and bounded quantifier alternation rank. We study
its optimality under the assumption that Savitch's Theorem is optimal.
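To make the problem concrete, here is a minimal sketch (not the paper's algorithm) of model-checking a first-order sentence over a relational structure, here a directed graph with a single edge relation. The formula encoding and `holds` evaluator are hypothetical illustrations; the key point is that with k variables the environment never holds more than k bindings at once, which is the source of the low space usage the abstract discusses.

```python
# A minimal sketch of model-checking a bounded-variable first-order
# sentence over a relational structure (here: a directed graph).
# Formula encoding (hypothetical, for illustration):
#   ("exists", var, sub), ("forall", var, sub),
#   ("and", f, g), ("or", f, g), ("not", f), ("E", x, y)

def holds(phi, edges, universe, env=None):
    env = env or {}
    op = phi[0]
    if op == "E":                       # atomic edge relation E(x, y)
        return (env[phi[1]], env[phi[2]]) in edges
    if op == "not":
        return not holds(phi[1], edges, universe, env)
    if op == "and":
        return (holds(phi[1], edges, universe, env)
                and holds(phi[2], edges, universe, env))
    if op == "or":
        return (holds(phi[1], edges, universe, env)
                or holds(phi[2], edges, universe, env))
    if op == "exists":                  # try every element for the variable
        return any(holds(phi[2], edges, universe, {**env, phi[1]: a})
                   for a in universe)
    if op == "forall":
        return all(holds(phi[2], edges, universe, {**env, phi[1]: a})
                   for a in universe)
    raise ValueError(f"unknown operator: {op}")

# Existential 2-variable query: does the graph contain a 2-cycle?
# ∃x ∃y (E(x,y) ∧ E(y,x))
universe = {1, 2, 3}
edges = {(1, 2), (2, 1), (2, 3)}
phi = ("exists", "x", ("exists", "y",
       ("and", ("E", "x", "y"), ("E", "y", "x"))))
print(holds(phi, edges, universe))
```

With a bounded number of variables, the evaluator only ever stores a constant number of element bindings plus a pointer into the formula, which is the intuition behind the parameterized logarithmic-space upper bounds studied in the paper.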
Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations
Climate projections continue to be marred by large uncertainties, which
originate in processes that need to be parameterized, such as clouds,
convection, and ecosystems. But rapid progress is now within reach. New
computational tools and methods from data assimilation and machine learning
make it possible to integrate global observations and local high-resolution
simulations in an Earth system model (ESM) that systematically learns from
both. Here we propose a blueprint for such an ESM. We outline how
parameterization schemes can learn from global observations and targeted
high-resolution simulations, for example, of clouds and convection, through
matching low-order statistics between ESMs, observations, and high-resolution
simulations. We illustrate learning algorithms for ESMs with a simple dynamical
system that shares characteristics of the climate system; and we discuss the
opportunities the proposed framework presents and the challenges that remain to
realize it.
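The learning idea in this abstract, tuning uncertain parameters by matching low-order statistics between a model and observations, can be sketched on a toy stochastic system. The system, parameter values, and grid-search learner below are illustrative assumptions, not the paper's method: we fit the damping rate of a discretized Ornstein-Uhlenbeck process by matching the stationary variance of a long "observed" run.

```python
import random
import statistics

def simulate(gamma, sigma=1.0, dt=0.01, n=100_000, seed=0):
    """Euler-Maruyama for dx = -gamma*x dt + sigma dW (toy 'climate' system)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x += -gamma * x * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        xs.append(x)
    return xs

# "Observations": a long run of the true system (true gamma = 2.0,
# treated as unknown by the learner below).
obs_var = statistics.pvariance(simulate(gamma=2.0))

# Learn gamma by matching a low-order statistic (the stationary variance).
# For an OU process the stationary variance is sigma^2 / (2*gamma), so the
# mismatch is minimized near the true value gamma = 2.
candidates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best = min(candidates,
           key=lambda g: abs(statistics.pvariance(simulate(g, seed=1)) - obs_var))
print(best)
```

A real ESM would match many statistics at once (means, variances, covariances of clouds, precipitation, and so on) and use a proper optimizer or data-assimilation scheme rather than a grid search, but the structure of the problem is the same.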
Fast Estimation of True Bounds on Bermudan Option Prices under Jump-diffusion Processes
Fast pricing of American-style options has been a difficult problem since they
were first introduced to financial markets in the 1970s, especially when the
underlying stocks' prices follow jump-diffusion processes. In this paper,
we propose a new algorithm to generate tight upper bounds on the Bermudan
option price without nested simulation, under the jump-diffusion setting. By
exploiting the martingale representation theorem for jump processes on the dual
martingale, we are able to explore the unique structure of the optimal dual
martingale and construct an approximation that preserves the martingale
property. The resulting upper bound estimator avoids the nested Monte Carlo
simulation required by the original primal-dual algorithm and therefore
significantly improves computational efficiency. Theoretical analysis is
provided to guarantee the quality of the martingale approximation. Numerical
experiments are conducted to verify the efficiency of our proposed algorithm.
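The dual bound underlying this line of work states that for any martingale M with M_0 = 0, the Bermudan price sup_tau E[h_tau] is at most E[max_t (h_t - M_t)]. The sketch below illustrates this with the crudest possible choice M ≡ 0 (which gives a valid but loose bound), under plain geometric Brownian motion rather than the paper's jump-diffusion setting; all parameter values are illustrative.

```python
import math
import random

# Dual upper bound for a Bermudan put: for ANY martingale M with M_0 = 0,
#     price = sup_tau E[h_tau]  <=  E[ max_t (h_t - M_t) ].
# Here we take M = 0, so the bound is the mean pathwise maximum of the
# discounted payoff. Assumed toy setting: GBM dynamics, 10 exercise dates.
S0, K, r, vol, T, steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 10, 20_000
dt = T / steps
rng = random.Random(42)

upper = 0.0
for _ in range(n_paths):
    s = S0
    best = max(K - S0, 0.0)             # payoff at t = 0
    for t in range(1, steps + 1):
        s *= math.exp((r - 0.5 * vol ** 2) * dt
                      + vol * math.sqrt(dt) * rng.gauss(0, 1))
        disc_payoff = math.exp(-r * t * dt) * max(K - s, 0.0)
        best = max(best, disc_payoff)   # pathwise max of h_t - M_t with M = 0
    upper += best
upper /= n_paths
print(f"dual upper bound (M = 0): {upper:.3f}")
```

A good approximation of the optimal dual martingale, which the paper constructs via the martingale representation theorem for jump processes, shrinks this bound toward the true price; the point of the algorithm is that it does so without a nested inner simulation.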
Gravitational Wave Tests of General Relativity with the Parameterized Post-Einsteinian Framework
Gravitational wave astronomy has tremendous potential for studying extreme
astrophysical phenomena and exploring fundamental physics. The waves produced
by binary black hole mergers will provide a pristine environment in which to
study strong field, dynamical gravity. Extracting detailed information about
these systems requires accurate theoretical models of the gravitational wave
signals. If gravity is not described by General Relativity, analyses that are
based on waveforms derived from Einstein's field equations could result in
parameter biases and a loss of detection efficiency. A new class of
"parameterized post-Einsteinian" (ppE) waveforms has been proposed to cover
this eventuality. Here we apply the ppE approach to simulated data from a
network of advanced ground-based interferometers (aLIGO/aVirgo) and from a
future space-based interferometer (LISA). Bayesian inference and model
selection are used to investigate parameter biases, and to determine the level
at which departures from general relativity can be detected. We find that in
some cases the parameter biases from assuming the wrong theory can be severe.
We also find that gravitational wave observations will beat the existing bounds
on deviations from general relativity derived from the orbital decay of binary
pulsars by a large margin across a wide swath of parameter space.
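For reference, the simplest ppE deformation of a frequency-domain inspiral waveform (as introduced by Yunes and Pretorius) modifies the GR amplitude and phase by power-law corrections, with GR recovered when the deformation parameters vanish:

```latex
% Leading-order ppE inspiral waveform in the frequency domain.
% (alpha, a) deform the amplitude and (beta, b) the phase;
% General Relativity is recovered at alpha = beta = 0.
% \mathcal{M} denotes the chirp mass of the binary.
\tilde{h}(f) = \tilde{A}_{\mathrm{GR}}(f)\,\bigl(1 + \alpha\, u^{a}\bigr)\,
               e^{\,i\left[\Psi_{\mathrm{GR}}(f) + \beta\, u^{b}\right]},
\qquad u = (\pi \mathcal{M} f)^{1/3}.
```

Bayesian model selection between this family and pure GR waveforms is what determines the detectable level of departure the abstract refers to.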
Implementation of the LANS-alpha turbulence model in a primitive equation ocean model
This paper presents the first numerical implementation and tests of the
Lagrangian-averaged Navier-Stokes-alpha (LANS-alpha) turbulence model in a
primitive equation ocean model. The ocean model in which we work is the Los
Alamos Parallel Ocean Program (POP); we refer to POP and our implementation of
LANS-alpha as POP-alpha. Two versions of POP-alpha are presented: the full
POP-alpha algorithm is derived from the LANS-alpha primitive equations, but
requires a nested iteration that makes it too slow for practical simulations; a
reduced POP-alpha algorithm is proposed, which lacks the nested iteration and
is two to three times faster than the full algorithm. The reduced algorithm
does not follow from a formal derivation of the LANS-alpha model equations.
Despite this, simulations of the reduced algorithm are nearly identical to the
full algorithm, as judged by globally averaged temperature and kinetic energy,
and snapshots of temperature and velocity fields. Both POP-alpha algorithms can
run stably with longer timesteps than standard POP.
Comparisons of the full and reduced POP-alpha algorithms are
made within an idealized test problem that captures some aspects of the
Antarctic Circumpolar Current, a problem in which baroclinic instability is
prominent. Both POP-alpha algorithms produce statistics that resemble
higher-resolution simulations of standard POP.
A linear stability analysis shows that both the full and reduced POP-alpha
algorithms benefit from the way the LANS-alpha equations take into account the
effects of the small scales on the large. Both algorithms (1) are stable; (2)
make the Rossby Radius effectively larger; and (3) slow down Rossby and gravity
waves.
Comment: Submitted to J. Computational Physics, March 21, 200
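For context, the incompressible LANS-alpha equations (before their reduction to the primitive-equation setting used in POP) couple a rough momentum velocity v to a smoothed velocity u through a Helmholtz inversion with length scale alpha:

```latex
% Incompressible LANS-alpha equations (forcing and stratification terms
% omitted for brevity). v is the rough (momentum) velocity; u is the
% smoothed velocity obtained by inverting a Helmholtz operator of
% length scale alpha. Summation over repeated index j is implied.
\partial_t v + (u \cdot \nabla)\, v + v_j \nabla u^{j}
  = -\nabla p + \nu \nabla^{2} v,
\qquad v = \bigl(1 - \alpha^{2} \nabla^{2}\bigr)\, u,
\qquad \nabla \cdot u = 0.
```

The nested iteration in the full POP-alpha algorithm comes from having to invert the Helmholtz operator consistently with the primitive-equation solver; the reduced algorithm sidesteps that iteration at the cost of departing from the formal derivation.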
Fuzzy-rough set models and fuzzy-rough data reduction
Rough set theory is a powerful tool for analyzing information systems. The fuzzy rough set was introduced as a fuzzy generalization of rough sets. This paper reviews the most important contributions to rough set theory, fuzzy rough set theory, and their applications. In many real-world situations, some of the attribute values for an object may be in set-valued form. To handle this problem, we present a more general approach to the fuzzification of rough sets; in particular, we define a broad family of fuzzy rough sets. This paper presents a new development of rough set theory by incorporating the classical rough set theory and interval-valued fuzzy sets. The proposed methods are illustrated by a numerical example on a real case.
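The core construction behind fuzzy rough sets can be sketched directly. Using the standard implicator/t-norm definitions of the fuzzy lower and upper approximations (the similarity relation R and fuzzy set A below are toy illustrative values, not from the paper):

```python
# A minimal sketch of fuzzy-rough lower/upper approximations over a finite
# universe, using the standard min/max (Kleene-Dienes style) definitions:
#   lower(A)(x) = min over y of max(1 - R(x, y), A(y))
#   upper(A)(x) = max over y of min(R(x, y), A(y))
# R is a fuzzy similarity relation and A the fuzzy set being approximated.

def lower_approx(R, A, x, universe):
    return min(max(1.0 - R[x][y], A[y]) for y in universe)

def upper_approx(R, A, x, universe):
    return max(min(R[x][y], A[y]) for y in universe)

universe = ["a", "b", "c"]
R = {  # toy fuzzy similarity relation (reflexive, symmetric)
    "a": {"a": 1.0, "b": 0.7, "c": 0.1},
    "b": {"a": 0.7, "b": 1.0, "c": 0.4},
    "c": {"a": 0.1, "b": 0.4, "c": 1.0},
}
A = {"a": 0.9, "b": 0.5, "c": 0.2}  # fuzzy set to approximate

for x in universe:
    lo = lower_approx(R, A, x, universe)
    up = upper_approx(R, A, x, universe)
    print(x, round(lo, 2), round(up, 2))
```

As expected for a rough approximation, the lower approximation never exceeds the membership degree itself, and the upper approximation never falls below it; the gap between the two reflects how coarsely R distinguishes the elements. Interval-valued variants, as discussed in the paper, replace the scalar memberships with intervals.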