Three Inadequate Models
The connection between operational and denotational semantics is of longstanding
interest in the study of programming languages. One naturally seeks positive results.
For example, in [FP94, Sim99] adequacy results are given for models in a variety of
categories. Again, the failure of full abstraction in the standard models constructed
using complete partial orders and continuous functions [Plo77, Mil77] prompted the
exploration of other categories (see, e.g., [BCL85, FJM96, AM98, AC98]) with varying
degrees of success.
In this paper we interest ourselves in counterexamples in order to make a case that
these natural avenues of research had a degree of necessity. To this end, we construct
inadequate models and investigate whether one can do better than the standard
model, but still stay in the category of complete partial orders. (In contrast, an
inadequate standard model of PCF is given in [Sim99], but in a specially constructed
category.)
Belief Revision in Science: Informational Economy and Paraconsistency
In the present paper, our objective is to examine the application of belief revision models to scientific rationality. We begin by considering the standard AGM model, and along the way a number of problems surface that make it seem inadequate for this specific application. After considering three different heuristics of informational economy that seem fit for science, we consider some possible adaptations of the model and argue informally that, overall, some paraconsistent models better satisfy these principles, following Testa (2015). These models have been worked out in formal detail by Testa, Coniglio, and Ribeiro (2015, 2017).
A numerical code for a three-dimensional magnetospheric MHD equilibrium model
Two-dimensional and three-dimensional MHD equilibrium models of Earth's magnetosphere were developed. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma-physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid-generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.
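The abstract mentions a 3-D Poisson solver at the core of the equilibrium code. As a toy illustration of the kind of elliptic solve involved (a minimal sketch, not the authors' code; the grid size, source, and function names here are purely illustrative, and a 2-D Jacobi iteration stands in for the production scheme):

```python
import numpy as np

def jacobi_poisson_2d(rhs, h, n_iter=2000):
    """Solve laplacian(u) = rhs on a square grid with u = 0 on the
    boundary, using plain Jacobi iteration (uniform grid spacing h)."""
    u = np.zeros_like(rhs)
    for _ in range(n_iter):
        u_new = u.copy()
        # five-point stencil update on interior points only
        u_new[1:-1, 1:-1] = 0.25 * (
            u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - h * h * rhs[1:-1, 1:-1]
        )
        u = u_new
    return u

# point-like source in the middle of the domain (illustrative setup)
n = 33
h = 1.0 / (n - 1)
rhs = np.zeros((n, n))
rhs[n // 2, n // 2] = -1.0 / h**2
u = jacobi_poisson_2d(rhs, h)
```

Production codes would use a faster scheme (multigrid, conjugate gradients) and curvilinear grids fitted to the magnetopause, but the boundary-value structure is the same.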
Models of cuspy triaxial stellar systems. III: The effect of velocity anisotropy on chaoticity
In several previous investigations we presented models of triaxial stellar
systems, both cuspy and non cuspy, that were highly stable and harboured large
fractions of chaotic orbits. All our models had been obtained through cold
collapses of initially spherical N-body systems, a method that necessarily
results in models with strongly radial velocity distributions. Here we
investigate a different method that was reported to yield cuspy triaxial models
with virtually no chaos. We show that such result was probably due to the use
of an inadequate chaos detection technique and that, in fact, models with
significant fractions of chaotic orbits result also from that method. Besides,
starting with one of the models from the first paper in this series, we
obtained three different models by rendering its velocity distribution much
less radially biased (i.e., more isotropic) and by modifying its axial ratios
through adiabatic compression. All three models yielded much higher fractions
of regular orbits than most of those from our previous work. We conclude that
it is possible to obtain stable cuspy triaxial models of stellar systems whose
velocity distribution is more isotropic than that of the models obtained from
cold collapses. Those models still harbour large fractions of chaotic orbits
and, although it is difficult to compare the results from different models, we
can tentatively conclude that chaoticity is reduced by velocity isotropy. Comment: 11 pages, 14 figures. Accepted for publication in MNRAS
Toward connecting core-collapse supernova theory with observations: I. Shock revival in a 15 Msun blue supergiant progenitor with SN 1987A energetics
We study the evolution of the collapsing core of a 15 Msun blue supergiant
supernova progenitor from the core bounce until 1.5 seconds later. We present a
sample of hydrodynamic models parameterized to match the explosion energetics
of SN 1987A.
We find the spatial model dimensionality to be an important contributing
factor in the explosion process. Compared to two-dimensional simulations, our
three-dimensional models require lower neutrino luminosities to produce equally
energetic explosions. We estimate that the convective engine in our models is
4% more efficient in three dimensions than in two dimensions. We propose that
the greater efficiency of the convective engine found in three-dimensional
simulations might be due to the larger surface-to-volume ratio of convective
plumes, which aids in distributing energy deposited by neutrinos.
We do not find evidence of the standing accretion shock instability nor
turbulence being a key factor in powering the explosion in our models. Instead,
the analysis of the energy transport in the post-shock region reveals
characteristics of penetrative convection. The explosion energy decreases
dramatically once the resolution is inadequate to capture the morphology of
convection on large scales. This shows that the role of dimensionality is
secondary to correctly accounting for the basic physics of the explosion.
We also analyze information provided by particle tracers embedded in the
flow, and find that the unbound material has relatively long residency times in
two-dimensional models, while in three dimensions a significant fraction of the
explosion energy is carried by particles with relatively short residency times. Comment: accepted for publication in the Astrophysical Journal
Flavor Physics in SO(10) GUTs with Suppressed Proton decay Due to Gauged Discrete Symmetry
Generic SO(10) GUT models suffer from the problem that Planck scale induced
non-renormalizable proton decay operators require extreme suppression of their
couplings to be compatible with present experimental upper limits. One way to
resolve this problem is to supplement SO(10) by simple gauged discrete
symmetries which can also simultaneously suppress the renormalizable R-parity
violating ones when they occur and make the theory "more natural". Here we
discuss the phenomenological viability of such models. We first show that for
both classes of models, distinguished by the Higgs representation used to
break B-L symmetry, the minimal Higgs content that suffices for proton-decay
suppression is inadequate for explaining fermion masses, despite the
presence of all apparently needed couplings. We then present an extended model, with three {\bf 10}- and three {\bf 45}-Higgs fields, which is free of
this problem. We propose this as a realistic and "natural" model for fermion
unification and discuss the phenomenology of this model, e.g. its predictions
for neutrino mixings and lepton flavor violation. Comment: 21 pages, 2 figures
The importance of distinct modeling strategies for gene and gene-specific treatment effects in hierarchical models for microarray data
When analyzing microarray data, hierarchical models are often used to share
information across genes when estimating means and variances or identifying
differential expression. Many methods utilize some form of the two-level
hierarchical model structure suggested by Kendziorski et al. [Stat. Med. (2003)
22 3899-3914] in which the first level describes the distribution of latent
mean expression levels among genes and among differentially expressed
treatments within a gene. The second level describes the conditional
distribution, given a latent mean, of repeated observations for a single gene
and treatment. Many of these models, including those used in the EBarrays
package of Kendziorski et al. [Stat. Med. (2003) 22 3899-3914], assume that expression
level changes due to treatment effects have the same distribution as expression
level changes from gene to gene. We present empirical evidence that this
assumption is often inadequate and propose three-level hierarchical models as
extensions to the two-level log-normal based EBarrays models to address this
inadequacy. We demonstrate that use of our three-level models dramatically
changes analysis results for a variety of microarray data sets and verify the
validity and improved performance of our suggested method in a series of
simulation studies. We also illustrate the importance of accounting for the
uncertainty of gene-specific error variance estimates when using hierarchical
models to identify differentially expressed genes. Comment: Published at
http://dx.doi.org/10.1214/12-AOAS535 in the Annals of Applied Statistics
(http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics
(http://www.imstat.org)
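The inadequacy this abstract points at is that gene-to-gene variation in mean expression and gene-specific treatment effects can live on very different scales, so a two-level model that gives both the same distribution is mis-specified. A minimal simulation sketch (the scale parameters and variable names are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scales: gene-to-gene spread of latent mean log
# expression is much larger than the spread of treatment effects
# within a gene -- the situation in which a two-level model that
# equates the two distributions is a poor fit.
n_genes = 5000
sigma_gene = 2.0   # SD of latent gene means
sigma_trt = 0.3    # SD of gene-specific treatment effects
sigma_eps = 0.1    # residual SD of repeated observations

gene_mean = rng.normal(0.0, sigma_gene, n_genes)
trt_effect = rng.normal(0.0, sigma_trt, n_genes)

# one observation per gene and treatment (log scale)
y_ctrl = gene_mean + rng.normal(0.0, sigma_eps, n_genes)
y_trt = gene_mean + trt_effect + rng.normal(0.0, sigma_eps, n_genes)

# empirical check: the two sources of variation have very different
# spreads, so modeling them with one shared distribution conflates
# scales that a three-level hierarchy would keep separate
sd_between_genes = y_ctrl.std()            # dominated by sigma_gene
sd_within_gene = (y_trt - y_ctrl).std()    # dominated by sigma_trt
```

Under these assumed scales `sd_between_genes` comes out several times larger than `sd_within_gene`, which is the pattern the authors report finding empirically.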
The political economy of growth and distribution: A theoretical critique
This paper reconsiders the political economy approach to growth and distribution, according to which (1) rising inequality induces more government redistribution; (2) more government redistribution is financed by higher distortionary taxation; and (3) higher distortionary taxes reduce economic growth. We present a variety of theoretical arguments demonstrating that all three propositions may be overturned by changing an assumption in a plausible way or by adding a relevant real-world element to the basic models. The political economy models of growth and distribution, as well as the specific inequality-growth transmission channel they propose, must therefore be assessed as overly simplistic and inadequate with respect to the issues studied. Keywords: political economy, redistribution, inequality, economic growth