Massive Higher Spin Fields Coupled to a Scalar: Aspects of Interaction and Causality
We consider in detail the most general cubic Lagrangian which describes an
interaction between two identical higher spin fields in a triplet formulation
with a scalar field, all fields having the same mass. After
performing the gauge fixing procedure we find that for the case of massive
fields the gauge invariance does not guarantee the preservation of the correct
number of propagating physical degrees of freedom. In order to get the correct
number of degrees of freedom for the massive higher spin field one should
impose some additional conditions on parameters of the vertex. Further
independent constraints are provided by the causality analysis, indicating that
the requirement of causality should be imposed in addition to the requirement
of gauge invariance in order to have a consistent propagation of massive higher
spin fields.
A generalized Fellner-Schall method for smoothing parameter estimation with application to Tweedie location, scale and shape models
We consider the estimation of smoothing parameters and variance components in
models with a regular log likelihood subject to quadratic penalization of the
model coefficients, via a generalization of the method of Fellner (1986) and
Schall (1991). In particular: (i) we generalize the original method to the case
of penalties that are linear in several smoothing parameters, thereby covering
the important cases of tensor product and adaptive smoothers; (ii) we show why
the method's steps increase the restricted marginal likelihood of the model
and that it tends to converge faster than the EM algorithm or obvious
accelerations of it, and investigate its relation to Newton optimization;
(iii) we generalize the method to any Fisher regular likelihood. The method
represents a considerable simplification over existing methods of estimating
smoothing parameters in the context of regular likelihoods, without sacrificing
generality: for example, it is only necessary to compute with the same first
and second derivatives of the log-likelihood required for coefficient
estimation, and not with the third or fourth order derivatives required by
alternative approaches. Examples are provided which would have been impossible
or impractical with pre-existing Fellner-Schall methods, along with an example
of a Tweedie location, scale and shape model which would be a challenge for
alternative methods.
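The multiplicative fixed-point character of the update can be illustrated on the simplest possible case. The sketch below is only a toy, not the paper's implementation: a Gaussian model with a single full-rank ridge penalty, known unit noise variance, and simulated data, iterating a Fellner-Schall-type update lam_new = lam * [tr(S_lam^- S) - tr((X'X + S_lam)^{-1} S)] / (b'Sb).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy penalized regression: y = X b + noise, penalty lam * b' S b,
# with S = I (ridge) and known unit noise variance.
n, p = 200, 10
X = rng.standard_normal((n, p))
S = np.eye(p)
b_true = rng.standard_normal(p) * 0.3
y = X @ b_true + rng.standard_normal(n)

lam = 1.0
for _ in range(100):
    A = X.T @ X + lam * S
    beta = np.linalg.solve(A, X.T @ y)        # penalized coefficient estimate
    # Multiplicative Fellner-Schall-type update:
    #   lam_new = lam * [tr(S_lam^- S) - tr(A^{-1} S)] / (beta' S beta),
    # where tr(S_lam^- S) = p / lam for the full-rank S used here.
    num = p / lam - np.trace(np.linalg.solve(A, S))
    lam_new = lam * num / (beta @ S @ beta)
    if abs(lam_new - lam) < 1e-10 * lam:
        break
    lam = lam_new

print(lam)   # converged smoothing parameter (positive by construction)
```

Note that only quantities already needed for coefficient estimation (the penalized Hessian and the coefficients themselves) appear in the update, which is the simplification the abstract emphasizes.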
X-ray Lighthouses of the High-Redshift Universe. II. Further Snapshot Observations of the Most Luminous z>4 Quasars with Chandra
We report on Chandra observations of a sample of 11 optically luminous
(M_B < -28.5) quasars at z=3.96-4.55 selected from the Palomar Digital Sky Survey
and the Automatic Plate Measuring Facility Survey. These are among the most
luminous z>4 quasars known and hence represent ideal witnesses of the end of
the "dark age". Nine quasars are detected by Chandra, with ~2-57 counts in
the observed 0.5-8 keV band. These detections increase the number of X-ray
detected AGN at z>4 to ~90; overall, Chandra has detected ~85% of the
high-redshift quasars observed with snapshot (few kilosecond) observations. PSS
1506+5220, one of the two X-ray undetected quasars, displays a number of
notable features in its rest-frame ultraviolet spectrum, the most prominent
being broad, deep SiIV and CIV absorption lines. The average optical-to-X-ray
spectral index for the present sample (alpha_ox = -1.88 +/- 0.05) is steeper than
that typically found for z>4 quasars but consistent with the expected value
from the known dependence of this spectral index on quasar luminosity.
We present joint X-ray spectral fitting for a sample of 48 radio-quiet
quasars in the redshift range 3.99-6.28 for which Chandra observations are
available. The X-ray spectrum (~870 counts) is well parameterized by a power
law with Gamma=1.93+0.10/-0.09 in the rest-frame ~2-40 keV band, and a tight
upper limit of N_H~5x10^21 cm^-2 is obtained on any average intrinsic X-ray
absorption. There is no indication of any significant evolution in the X-ray
properties of quasars between redshifts zero and six, suggesting that the
physical processes of accretion onto massive black holes have not changed over
the bulk of cosmic time.
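The optical-to-X-ray spectral index quoted above has a standard definition: the slope of a nominal power law between rest-frame 2500 Angstrom and 2 keV, alpha_ox = log10(f_2keV/f_2500) / log10(nu_2keV/nu_2500) ~ 0.384 log10(f_2keV/f_2500). A minimal sketch of the computation (the flux densities below are illustrative, not values from the sample):

```python
import math

H_PLANCK = 6.62607015e-34   # J s
KEV = 1.602176634e-16       # J
C = 2.99792458e8            # m / s

nu_2kev = 2.0 * KEV / H_PLANCK      # frequency of 2 keV, ~4.84e17 Hz
nu_2500 = C / 2500e-10              # frequency of 2500 Angstrom, ~1.20e15 Hz

def alpha_ox(f_2kev, f_2500):
    """Slope of a nominal power law between rest-frame 2500 A and 2 keV."""
    return math.log10(f_2kev / f_2500) / math.log10(nu_2kev / nu_2500)

# Illustrative flux densities in erg s^-1 cm^-2 Hz^-1 (not from the sample);
# a more negative alpha_ox means relatively weaker X-ray emission.
print(alpha_ox(1e-31, 1e-26))
```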
A Critical Examination of Quartz, Tridymite and Cristobalite: The Estimation of Free Silica
1. The introduction records the various theories on the aetiology of silicosis and pneumokoniosis with reference to the pathological and chemical aspects of the changes brought about in the lungs by small silica particles. A survey is also given of physical and chemical analytical techniques for the estimation of the free silica content of rocks and dusts.
2. Detailed investigation of one chemical method, that of Trostel and Wynne, has shown it to be suitable, after modification, for the analysis of Stirlingshire coal-measure rocks. However, the high experimental losses occurring during the analysis of small particles preclude its use for the determination of the quartz content of airborne dusts.
3. The examination of the physical method of Differential Thermal Analysis has shown it to be influenced by the presence of a layer of non-quartz silica, which is not estimated, on ground quartz particles. The properties of this layer have been investigated and it has been concluded that it is of relatively constant thickness and density, independent of the particle size of the quartz. The proportion of layer to quartz in airborne dusts is considerable and prevents the use of this method, as it stands, for their analysis. During rock analysis this method suffers some loss in accuracy due to the presence of the non-quartz layer, but suggestions are made to produce at least a partial recovery of the loss and also to make the method practicable for dust analysis.
4. The presence of this non-quartz layer has been shown to interfere with the X-ray analysis for quartz in dusts.
Designing a Belief Function-Based Accessibility Indicator to Improve Web Browsing for Disabled People
The purpose of this study is to provide an accessibility measure of
web-pages, in order to draw disabled users to the pages that have been designed
to be accessible to them. Our approach is based on the theory of belief
functions, using data which are supplied by reports produced by automatic web
content assessors that test the validity of criteria defined by the WCAG 2.0
guidelines proposed by the World Wide Web Consortium (W3C) organization. These
tools detect errors with gradual degrees of certainty and their results do not
always converge. For these reasons, to fuse information coming from the
reports, we choose to use an information fusion framework which can take into
account the uncertainty and imprecision of information as well as divergences
between sources. Our accessibility indicator covers four categories of
deficiencies. To validate the theoretical approach in this context, we propose
an evaluation conducted on a corpus of the 100 most visited French news websites
and two evaluation tools. The results obtained demonstrate the value of our
accessibility indicator.
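The fusion framework invoked here is the theory of belief functions, whose core conjunctive operation is Dempster's rule of combination. It can be sketched as follows; the frame, the mass assignments, and the two assessor outputs are illustrative stand-ins, not the paper's actual model.

```python
from itertools import product

# Frame of discernment {ok, ko}: "page accessible" vs "not accessible".
# Mass on the whole frame encodes an assessor's ignorance.  The numbers
# below are illustrative, not taken from the paper.
OK, KO = frozenset({"ok"}), frozenset({"ko"})
THETA = frozenset({"ok", "ko"})

def combine(m1, m2):
    """Dempster's rule: conjunctive combination, renormalized by conflict."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in out.items()}

m1 = {OK: 0.6, THETA: 0.4}            # assessor 1: fairly sure the page passes
m2 = {OK: 0.3, KO: 0.3, THETA: 0.4}   # assessor 2: divided, partly ignorant

m = combine(m1, m2)
print(m[OK], m[KO], m[THETA])
```

The renormalization by the conflict mass is what lets the rule absorb the divergences between assessors that the abstract mentions.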
Evidential-EM Algorithm Applied to Progressively Censored Observations
The Evidential-EM (E2M) algorithm is an effective approach for computing maximum
likelihood estimations under finite mixture models, especially when there is
uncertain information about data. In this paper we present an extension of the
E2M method in a particular case of incomplete data, where the loss of
information is due to both mixture models and censored observations. The prior
uncertain information is expressed by belief functions, while the
pseudo-likelihood function is derived based on imprecise observations and prior
knowledge. The E2M method is then invoked to maximize the generalized likelihood
function to obtain the optimal estimation of parameters. Numerical examples
show that the proposed method can effectively integrate the uncertain prior
information with the current imprecise knowledge conveyed by the observed
data.
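For orientation, the classical EM algorithm that E2M generalizes can be sketched on the simplest censored-data case: exponential lifetimes with deterministic right censoring. This is not the evidential version, which would replace the hard censoring indicator with belief-function masses; all names and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Exponential lifetimes, right-censored at a fixed cutoff: for censored
# units only "x > cutoff" is known.  (Illustrative data, not the paper's.)
rate_true = 2.0
x = rng.exponential(1.0 / rate_true, size=500)
cutoff = 0.8
observed = np.minimum(x, cutoff)
censored = x > cutoff

rate = 1.0
for _ in range(200):
    # E-step: expected lifetime of a censored unit under the current rate;
    # by memorylessness, E[x | x > c] = c + 1/rate.
    filled = np.where(censored, cutoff + 1.0 / rate, observed)
    # M-step: exponential MLE on the completed data.
    rate = 1.0 / filled.mean()

# Closed-form censored-data MLE for comparison: events / total exposure.
mle = (~censored).sum() / observed.sum()
print(rate, mle)
```

The fixed point of the iteration coincides with the closed-form censored-data MLE, which is the sanity check the E2M literature also uses in its crisp-data limit.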
Toward the estimation of background fluctuations under newly-observed signals in particle physics
When the number of events associated with a signal process is estimated in particle physics, it is common practice to extrapolate background distributions from control regions to a predefined signal window. This allows accurate estimation of the expected, or average, number of background events under the signal. However, in general, the actual number of background events can deviate from the average due to fluctuations in the data. Such a difference can be sizable when compared to the number of signal events in the early stages of data analysis following the observation of a new particle, as well as in the analysis of rare decay channels. We report on the development of a data-driven technique that aims to estimate the actual, as opposed to the expected, number of background events in a predefined signal window. We discuss results on toy Monte Carlo data and provide a preliminary estimate of systematic uncertainty.
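The distinction between the expected and the actual background can be made concrete with a toy Monte Carlo of the kind mentioned above. The sketch below, with a flat background shape, window boundaries, and yields all invented for illustration, extrapolates sideband counts into a signal window and compares the extrapolation with what actually fluctuated in.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy spectrum: a flat background over [0, 10]; the signal window is [4, 6].
# All shapes, ranges and yields are invented for illustration.
n_bkg_total = 10000
mass = rng.uniform(0.0, 10.0, size=n_bkg_total)

lo, hi = 4.0, 6.0
in_window = (mass >= lo) & (mass < hi)
sidebands = ~in_window

# Expected (average) background in the window, extrapolated from the
# sidebands; for a flat shape the transfer factor is the width ratio.
transfer = (hi - lo) / (10.0 - (hi - lo))
expected = sidebands.sum() * transfer

actual = in_window.sum()   # what this particular data set actually contains
print(expected, actual, actual - expected)
```

The residual `actual - expected` is exactly the fluctuation a data-driven estimate of the actual background would target.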
Application of Monte Carlo Algorithms to the Bayesian Analysis of the Cosmic Microwave Background
Power spectrum estimation and evaluation of associated errors in the presence
of incomplete sky coverage; non-homogeneous, correlated instrumental noise; and
foreground emission is a problem of central importance for the extraction of
cosmological information from the cosmic microwave background. We develop a
Monte Carlo approach for the maximum likelihood estimation of the power
spectrum. The method is based on an identity for the Bayesian posterior as a
marginalization over unknowns. Maximization of the posterior involves the
computation of expectation values as a sample average from maps of the cosmic
microwave background and foregrounds given some current estimate of the power
spectrum or cosmological model, and some assumed statistical characterization
of the foregrounds. Maps of the CMB are sampled by a linear transform of a
Gaussian white noise process, implemented numerically with conjugate gradient
descent. For time series data with N_{t} samples, and N pixels on the sphere,
the method has a computational expense $KO[N^{2} + N_{t}\log N_{t}]$,
where K is a prefactor determined by the convergence rate of conjugate gradient
descent. Preconditioners for conjugate gradient descent are given for scans
close to great circle paths, and the method allows partial sky coverage for
these cases by numerically marginalizing over the unobserved, or removed,
region.
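The central linear-algebra step, drawing a Gaussian sample by solving a linear system with conjugate gradient, can be sketched in a toy one-dimensional setting. Here both covariances are diagonal in the same basis, which makes CG overkill (in the real problem signal and noise covariances are sparse in different bases, which is why an iterative solver is needed); all sizes and spectra are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)

# Toy 1D analogue: signal s with diagonal covariance S, noise with diagonal
# covariance N, data d = s + n.  A constrained Gaussian sample of s given d
# solves (S^-1 + N^-1) s = N^-1 d + S^-1/2 w1 + N^-1/2 w2 with w1, w2 white
# noise, i.e. the sample is a linear transform of white noise.
npix = 256
S = 1.0 / (1.0 + np.arange(npix)) ** 1.5    # falling "power spectrum"
N = np.full(npix, 0.1)                      # homogeneous noise variance

s_true = rng.standard_normal(npix) * np.sqrt(S)
d = s_true + rng.standard_normal(npix) * np.sqrt(N)

A = LinearOperator((npix, npix), matvec=lambda v: v / S + v / N,
                   dtype=np.float64)
b = (d / N
     + rng.standard_normal(npix) / np.sqrt(S)
     + rng.standard_normal(npix) / np.sqrt(N))

sample, info = cg(A, b)     # info == 0 signals CG convergence
```

Dropping the two white-noise terms from the right-hand side would instead yield the Wiener-filter mean; the preconditioners discussed in the abstract accelerate exactly this solve.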