Eliminating the Hadronic Uncertainty
The Standard Model Lagrangian requires the values of the fermion masses, the
Higgs mass and three other experimentally well-measured quantities as input in
order to become predictive. These are typically taken to be the fine-structure constant α, the Fermi constant G_F and the Z boson mass M_Z. Using the first of these, however, introduces a hadronic contribution that leads to a significant error. If a quantity could be found that was measured at high energy with sufficient precision, then it could be used to replace α as input. The level of precision required for this to happen is given for a number of precisely-measured observables, among them the W boson mass and the polarization asymmetry, A_LR; the asymmetry would seem to be the most promising candidate. The rôle of renormalized parameters in perturbative calculations is reviewed, and the value of the renormalized electromagnetic coupling constant that is consistent with all experimental data is obtained.
Comment: 8 pages, LaTeX2e
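For orientation (a standard textbook relation, not quoted from the abstract): at tree level the inputs α, G_F and M_Z determine the W boson mass through

    M_W^2 \left( 1 - \frac{M_W^2}{M_Z^2} \right) = \frac{\pi \alpha}{\sqrt{2}\, G_F},

and at one loop the right-hand side is divided by (1 - \Delta r), where \Delta r collects the radiative corrections, including the hadronic contribution to the running of α. A sufficiently precise measurement of M_W, or of an asymmetry that fixes the weak mixing angle, could therefore serve as input in place of α.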
Head-On Collision of Neutron Stars As A Thought Experiment
The head-on collision of identical neutron stars from rest at infinity
requires a numerical simulation in full general relativity for a complete
solution. Undaunted, we provide a relativistic, analytic argument to suggest
that during the collision, sufficient thermal pressure is always generated to
support the hot remnant in quasi-static stable equilibrium against collapse
prior to slow cooling via neutrino emission. Our conclusion is independent of
the total mass of the progenitors and holds even if the remnant greatly exceeds
the maximum mass of a cold neutron star.
Comment: to appear in Physical Review D (revtex, 3 figs, 5 pgs)
Normalizers of Irreducible Subfactors
We consider normalizers of an irreducible inclusion of factors. In the infinite index setting a normalizing unitary may conjugate the subfactor into a proper subalgebra of itself, forcing us to also investigate the semigroup of one-sided normalizers. We relate these normalizers to projections in the basic construction and show that every trace one projection in the relative commutant arises from a unitary one-sided normalizer. This enables us to identify the normalizers and the algebras they generate in several situations. In particular, each normalizer of a tensor product of irreducible subfactors is a tensor product of normalizers modulo a unitary. We also examine normalizers of irreducible subfactors arising from subgroup-group inclusions. Here the normalizers are the normalizing group elements modulo a unitary from the subfactor. We are also able to identify the finite trace bimodules as double cosets which are also finite unions of left cosets.
Comment: 33 pages
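For readers unfamiliar with the terminology, the two kinds of normalizer discussed above can be written as follows for an inclusion B \subseteq M of factors (notation chosen here for illustration, not taken from the abstract):

    \mathcal{N}_M(B) = \{ u \in \mathcal{U}(M) : u B u^* = B \}, \qquad
    \mathcal{ON}_M(B) = \{ u \in \mathcal{U}(M) : u B u^* \subseteq B \}.

The first set forms a group; the second, the one-sided normalizers, is in general only a semigroup, and the two can differ when the index [M : B] is infinite.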
A postmortem investigation of the Type IIb supernova 2001ig
We present images taken with the GMOS instrument on Gemini-South, in
excellent (<0.5 arcsec) seeing, of SN 2001ig in NGC 7424, ~1000 days after
explosion. A point source seen at the site of the SN is shown to have colours
inconsistent with being an H II region or a SN 1993J-like remnant, but can be
matched to a late-B through late-F supergiant with A_V<1. We believe this
object is the massive binary companion responsible for periodic modulation in
mass-loss material around the Wolf-Rayet progenitor, which gave rise to
significant structure in the SN radio light curve.
Comment: 5 pages, 3 figures. Accepted for publication in MNRAS Letters. Fig. 1
resolution degraded to meet size limitations; full resolution version
available from http://www.aao.gov.au/local/www/sdr/pubs/sn2001ig_gmos.ps.g
Are the Earth and the Moon compositionally alike? Inferences on lunar composition and implications for lunar origin and evolution from geophysical modeling
The main objective of the present study is to discuss in detail the results obtained from an inversion of the Apollo lunar seismic data set, lunar mass, and moment of inertia. We inverted directly for lunar chemical composition and temperature using the model system CaO-FeO-MgO-Al2O3-SiO2. Using Gibbs free energy minimization, we calculate the stable mineral phases at the temperatures and pressures of interest, together with their modes and physical properties. We determine the compositional range of the oxide elements, thermal state, Mg#, mineralogy and physical structure of the lunar interior, as well as constraining core size and density. The results indicate a lunar mantle mineralogy that is dominated by olivine and orthopyroxene (~80 vol%), with the remainder being composed of clinopyroxene and an aluminous phase (plagioclase, spinel, and garnet present in the depth ranges 0–150 km, 150–200 km, and >200 km, respectively). This model is broadly
consistent with constraints on mantle mineralogy derived from the experimental and
observational study of the phase relationships and trace element compositions of lunar
mare basalts and picritic glasses. In particular, by melting a typical model mantle
composition using the pMELTS algorithm, we found that a range of batch melts generated
from these models have features in common with low-Ti mare basalts and picritic glasses. Our results also indicate a bulk lunar composition and Mg# different to that of the Earth’s upper mantle, represented by the pyrolite composition. This difference is reflected in a lower bulk lunar Mg# (~0.83). Results also indicate a small iron-like core with a radius around 340 km.
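As background (standard formulas, not part of the abstract): for a spherically symmetric density profile \rho(r), the inversion is constrained by the measured mass and polar moment of inertia,

    M = 4\pi \int_0^R \rho(r)\, r^2\, dr, \qquad
    I = \frac{8\pi}{3} \int_0^R \rho(r)\, r^4\, dr,

so trial compositions and core parameters are admissible only if the resulting density profile reproduces both integrals within their observational uncertainties.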
Importance Sampling: Intrinsic Dimension and Computational Cost
The basic idea of importance sampling is to use independent samples from a
proposal measure in order to approximate expectations with respect to a target
measure. It is key to understand how many samples are required in order to
guarantee accurate approximations. Intuitively, some notion of distance between
the target and the proposal should determine the computational cost of the
method. A major challenge is to quantify this distance in terms of parameters
or statistics that are pertinent for the practitioner. The subject has
attracted substantial interest from within a variety of communities. The
objective of this paper is to overview and unify the resulting literature by
creating an overarching framework. A general theory is presented, with a focus
on the use of importance sampling in Bayesian inverse problems and filtering.
Comment: Statistical Science
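The abstract contains no code; the following minimal sketch illustrates the mechanism it describes, self-normalized importance sampling with an effective-sample-size diagnostic, which is one common way of gauging how many proposal samples are "enough". The densities, sample size and function names are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_target(x):
        # Unnormalized log-density of the target (here: a unit normal centred at 1).
        return -0.5 * (x - 1.0) ** 2

    def log_proposal(x):
        # Log-density of the proposal (a wider normal centred at 0).
        return -0.5 * (x / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

    def sample_proposal(n):
        # Independent draws from the proposal N(0, 4).
        return 2.0 * rng.standard_normal(n)

    n = 10_000
    x = sample_proposal(n)                     # independent samples from the proposal
    logw = log_target(x) - log_proposal(x)     # unnormalized log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()                               # self-normalized weights

    estimate = np.sum(w * x)                   # approximates the target mean
    ess = 1.0 / np.sum(w ** 2)                 # effective sample size diagnostic
    print(f"estimate = {estimate:.3f}, ESS = {ess:.0f} of {n}")

A small effective sample size relative to n signals that the proposal is far from the target and many more samples are needed, which is the kind of distance-versus-cost question the paper formalizes.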
Well-Posedness And Accuracy Of The Ensemble Kalman Filter In Discrete And Continuous Time
The ensemble Kalman filter (EnKF) is a method for combining a dynamical model
with data in a sequential fashion. Despite its widespread use, there has been
little analysis of its theoretical properties. Many of the algorithmic
innovations associated with the filter, which are required to make a useable
algorithm in practice, are derived in an ad hoc fashion. The aim of this paper
is to initiate the development of a systematic analysis of the EnKF, in
particular to do so in the small ensemble size limit. The perspective is to
view the method as a state estimator, and not as an algorithm which
approximates the true filtering distribution. The perturbed observation version
of the algorithm is studied, without and with variance inflation. Without
variance inflation well-posedness of the filter is established; with variance
inflation, accuracy of the filter, with respect to the true signal underlying
the data, is established. The algorithm is considered in discrete time, and
also for a continuous time limit arising when observations are frequent and
subject to large noise. The underlying dynamical model, and assumptions about
it, are sufficiently general to include the Lorenz '63 and '96 models, together
with the incompressible Navier-Stokes equation on a two-dimensional torus. The
analysis is limited to the case of complete observation of the signal with
additive white noise. Numerical results are presented for the Navier-Stokes
equation on a two-dimensional torus for both complete and partial observations
of the signal with additive white noise.
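No code accompanies the abstract; the sketch below shows a generic textbook form of the perturbed-observation analysis step with multiplicative variance inflation, the two ingredients referred to above, for a linear observation operator H and observation noise covariance R. The toy dimensions, parameter values and the function name enkf_analysis are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def enkf_analysis(X, y, H, R, inflation=1.0):
        # X : (d, N) forecast ensemble, y : (p,) observation,
        # H : (p, d) observation operator, R : (p, p) observation error covariance,
        # inflation : multiplicative variance inflation factor (>= 1).
        d, N = X.shape
        m = X.mean(axis=1, keepdims=True)
        X = m + inflation * (X - m)                    # inflate spread about the mean
        A = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(N - 1)
        C = A @ A.T                                    # ensemble covariance estimate
        S = H @ C @ H.T + R
        K = C @ H.T @ np.linalg.inv(S)                 # Kalman gain
        # Perturbed observations: each member sees the data plus fresh noise.
        Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), N))
        return X + K @ (Y - H @ X)

    # Tiny usage example: fully observed 3-dimensional state.
    d, N = 3, 20
    X = rng.standard_normal((d, N))
    H = np.eye(d)
    R = 0.1 * np.eye(d)
    y = np.array([1.0, -0.5, 0.2])
    Xa = enkf_analysis(X, y, H, R, inflation=1.05)
    print(Xa.mean(axis=1))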