Robust Henderson III estimators of variance components in the nested error model
Common methods for estimating variance components in Linear Mixed Models include Maximum Likelihood (ML) and Restricted Maximum Likelihood (REML). These methods rest on the strong assumption of a multivariate normal distribution, and it is well known that they are very sensitive to observations that are outlying with respect to any of the random components. Several robust alternatives to these methods have been proposed (e.g. Fellner 1986, Richardson and Welsh 1995). In this work we present several robust alternatives based on Henderson's method III which do not rely on the normality assumption and provide explicit solutions for the variance component estimators. These estimators can later be used to derive robust estimators of the regression coefficients. Finally, we describe an application of this procedure to small area estimation, in which the main target is the estimation of the means of areas or domains when the within-area sample sizes are small.
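As a concrete illustration of the moment-based (fitting-of-constants) idea behind Henderson's method III, the sketch below estimates the two variance components of the simplest one-way nested error model y_ij = mu + u_i + e_ij from within- and between-area sums of squares. This is a simplified sketch under those assumptions, not the robust estimators proposed in the paper; the function name and data are illustrative.

```python
def henderson_one_way(groups):
    """ANOVA-type (method-of-moments) variance component estimates for the
    one-way nested error model y_ij = mu + u_i + e_ij.
    `groups` is a list of per-area observation lists."""
    m = len(groups)
    sizes = [len(g) for g in groups]
    N = sum(sizes)
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / len(g) for g in groups]
    # Within-area sum of squares estimates the residual variance sigma_e^2.
    ssw = sum((y - mu_i) ** 2 for g, mu_i in zip(groups, means) for y in g)
    sigma_e2 = ssw / (N - m)
    # Between-area mean square has expectation sigma_e^2 + n0 * sigma_u^2.
    msb = sum(n * (mu_i - grand) ** 2 for n, mu_i in zip(sizes, means)) / (m - 1)
    n0 = (N - sum(n * n for n in sizes) / N) / (m - 1)
    sigma_u2 = max(0.0, (msb - sigma_e2) / n0)  # truncate at zero
    return sigma_u2, sigma_e2

sigma_u2, sigma_e2 = henderson_one_way([[1, 2, 3], [4, 5, 6]])
# sigma_e2 = 1.0, sigma_u2 = 12.5/3
```

Unlike ML and REML, these closed-form moment estimators require no normality assumption, which is the property the robust variants build on.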
Functional Description of the EBR-II Digital Data Acquisition System.
Spatial updating in narratives.
Across two experiments we investigated spatial updating in
environments encoded through narratives. In Experiment 1, in which
participants were given visualization instructions to imagine the protagonist's
movement, they formed an initial representation during learning but did not
update it during subsequent described movement. In Experiment 2, in which
participants were instructed to physically move in space towards the directions
of the described objects prior to testing, there was evidence for spatial updating.
Overall, findings indicate that physical movement can cause participants to link
a spatial representation of a remote environment to a sensorimotor framework
and update the locations of remote objects while they move.
Towards Machine Wald
The past century has seen a steady increase in the need of estimating and
predicting complex systems and making (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when
faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models has yet to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios along with the space of
relevant information, assumptions, and/or beliefs, tend to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
To this end, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification, and Information-Based Complexity. (37 pages.)
First Observation of Coherent pi0 Production in Neutrino-Nucleus Interactions with E(nu) < 2 GeV
The MiniBooNE experiment at Fermilab has amassed the largest sample to date
of pi0s produced in neutral current (NC) neutrino-nucleus interactions at
low energy. This paper reports a measurement of the momentum distribution of
pi0s produced in mineral oil (CH2) and the first observation of coherent pi0
production below 2 GeV. In the forward direction, the yield of events
observed above the expectation for resonant production is attributed primarily
to coherent production off carbon, but may also include a small contribution
from diffractive production on hydrogen. Integrated over the MiniBooNE neutrino
flux, the sum of the NC coherent and diffractive modes is found to be (19.5
+/- 1.1 (stat) +/- 2.5 (sys))% of all exclusive NC pi0 production at
MiniBooNE. These measurements are of immediate utility because they quantify an
important background to MiniBooNE's search for nu(mu) to nu(e)
oscillations. (Submitted to Physics Letters.)
CANDELS: constraining the AGN-merger connection with host morphologies at z ~ 2
Using Hubble Space Telescope/WFC3 imaging taken as part of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey, we examine the role that major galaxy mergers play in triggering active galactic nucleus (AGN) activity at z ~ 2. Our sample consists of 72 moderate-luminosity (L_X ~ 10^42-10^44 erg s^-1) AGNs at 1.5 < z < 2.5 that are selected using the 4 Ms Chandra observations in the Chandra Deep Field South, the deepest X-ray observations to date. Employing visual classifications, we have analyzed the rest-frame optical morphologies of the AGN host galaxies and compared them to a mass-matched control sample of 216 non-active galaxies at the same redshift. We find that most of the AGNs reside in disk galaxies ((51.4 +5.8/-5.9)%), while a smaller percentage are found in spheroids ((27.8 +5.8/-4.6)%). Roughly (16.7 +5.3/-3.5)% of the AGN hosts have highly disturbed morphologies and appear to be involved in a major merger or interaction, while most of the hosts ((55.6 +5.6/-5.9)%) appear relatively relaxed and undisturbed. These fractions are statistically consistent with the fraction of control galaxies that show similar morphological disturbances. These results suggest that the hosts of moderate-luminosity AGNs are no more likely to be involved in an ongoing merger or interaction relative to non-active galaxies of similar mass at z ~ 2. The high disk fraction observed among the AGN hosts also appears to be at odds with predictions that merger-driven accretion should be the dominant AGN fueling mode at z ~ 2, even at moderate X-ray luminosities. Although we cannot rule out that minor mergers are responsible for triggering these systems, the presence of a large population of relatively undisturbed disk-like hosts suggests that the stochastic accretion of gas plays a greater role in fueling AGN activity at z ~ 2 than previously thought.
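The claim that the AGN and control disturbance fractions are statistically consistent can be sanity-checked with a standard pooled two-proportion z-test. The counts below (12 of 72 AGN hosts, roughly 16.7%, vs. 33 of 216 controls) are illustrative stand-ins, not the paper's exact control tally.

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided Gaussian tail
    return z, p_value

z, p = two_proportion_z(12, 72, 33, 216)  # |z| << 2 and p >> 0.05: consistent
```

With uncertainties this large, even sizeable differences in the raw fractions fall well short of significance, which is why the study cannot attribute AGN triggering to major mergers.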
An Integrated TCGA Pan-Cancer Clinical Data Resource to Drive High-Quality Survival Outcome Analytics
For a decade, The Cancer Genome Atlas (TCGA) program collected clinicopathologic annotation data along with multi-platform molecular profiles of more than 11,000 human tumors across 33 different cancer types. TCGA clinical data contain key features representing the democratized nature of the data collection process. To ensure proper use of this large clinical dataset associated with genomic features, we developed a standardized dataset named the TCGA Pan-Cancer Clinical Data Resource (TCGA-CDR), which includes four major clinical outcome endpoints. In addition to detailing major challenges and statistical limitations encountered during the effort of integrating the acquired clinical data, we present a summary that includes endpoint usage recommendations for each cancer type. These TCGA-CDR findings appear to be consistent with cancer genomics studies independent of the TCGA effort and provide opportunities for investigating cancer biology using clinical correlates at an unprecedented scale. Analysis of clinicopathologic annotations for over 11,000 cancer patients in the TCGA program leads to the generation of the TCGA Clinical Data Resource, which provides recommendations on clinical outcome endpoint usage for 33 cancer types.
Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV
Results are presented from a search for a W' boson using a dataset
corresponding to 5.0 inverse femtobarns of integrated luminosity collected
during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s)=7 TeV.
The W' boson is modeled as a heavy W boson, but different scenarios for the
couplings to fermions are considered, involving both left-handed and
right-handed chiral projections of the fermions, as well as an arbitrary
mixture of the two. The search is performed in the decay channel W' to t b,
leading to a final state signature with a single lepton (e, mu), missing
transverse energy, and jets, at least one of which is tagged as a b-jet. A W'
boson that couples to fermions with the same coupling constant as the W, but to
the right-handed rather than left-handed chiral projections, is excluded for
masses below 1.85 TeV at the 95% confidence level. For the first time using LHC
data, constraints on the W' gauge coupling for a set of left- and right-handed
coupling combinations have been placed. These results represent a significant
improvement over previously published limits. (Submitted to Physics Letters B; replaced with the published version.)
Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV
A search for a Higgs boson decaying into two photons is described. The
analysis is performed using a dataset recorded by the CMS experiment at the LHC
from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an
integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross
section of the standard model Higgs boson decaying to two photons. The expected
exclusion limit at 95% confidence level is between 1.4 and 2.4 times the
standard model cross section in the mass range between 110 and 150 GeV. The
analysis of the data excludes, at 95% confidence level, the standard model
Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The
largest excess of events above the expected standard model background is
observed for a Higgs boson mass hypothesis of 124 GeV with a local significance
of 3.1 sigma. The global significance of observing an excess with a local
significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is
estimated to be 1.8 sigma. More data are required to ascertain the origin of
this excess. (Submitted to Physics Letters.)
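The local and global significances quoted above map to one-sided tail probabilities via the Gaussian error function, p = erfc(z / sqrt(2)) / 2. This is the generic conversion, not the collaboration's full look-elsewhere calculation; it shows how a 3.1 sigma local excess corresponds to p ~ 1e-3 while the 1.8 sigma global figure corresponds to p ~ 0.04.

```python
import math

def sigma_to_pvalue(z):
    """One-sided tail probability of a standard normal beyond z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_local = sigma_to_pvalue(3.1)   # about 1.0e-3
p_global = sigma_to_pvalue(1.8)  # about 3.6e-2
```

The gap between the two numbers reflects the trials penalty for scanning the whole 110-150 GeV mass range rather than a single mass hypothesis.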
Measurement of the Lambda(b) cross section and the anti-Lambda(b) to Lambda(b) ratio with Lambda(b) to J/Psi Lambda decays in pp collisions at sqrt(s) = 7 TeV
The Lambda(b) differential production cross section and the cross section
ratio anti-Lambda(b)/Lambda(b) are measured as functions of transverse momentum
pt(Lambda(b)) and rapidity abs(y(Lambda(b))) in pp collisions at sqrt(s) = 7
TeV using data collected by the CMS experiment at the LHC. The measurements are
based on Lambda(b) decays reconstructed in the exclusive final state J/Psi
Lambda, with the subsequent decays J/Psi to an opposite-sign muon pair and
Lambda to proton pion, using a data sample corresponding to an integrated
luminosity of 1.9 inverse femtobarns. The product of the cross section times
the branching ratio for Lambda(b) to J/Psi Lambda versus pt(Lambda(b)) falls
faster than that of b mesons. The measured value of the cross section times the
branching ratio for pt(Lambda(b)) > 10 GeV and abs(y(Lambda(b))) < 2.0 is 1.06
+/- 0.06 +/- 0.12 nb, and the integrated cross section ratio for
anti-Lambda(b)/Lambda(b) is 1.02 +/- 0.07 +/- 0.09, where the uncertainties are
statistical and systematic, respectively. (Submitted to Physics Letters.)
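For quoted results of the form value +/- stat +/- syst, the two uncertainties are conventionally combined in quadrature when treated as independent; applied to the ratio 1.02 +/- 0.07 +/- 0.09 above, this gives a total uncertainty of about 0.11. A minimal sketch:

```python
import math

def combine_in_quadrature(*sigmas):
    """Total uncertainty from independent error sources."""
    return math.sqrt(sum(s * s for s in sigmas))

total = combine_in_quadrature(0.07, 0.09)  # about 0.114
```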