Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abundant in
machine learning. Some examples include regression models with norm constraints
(e.g., Lasso), probit, many copula models, and latent Dirichlet allocation
(LDA). Bayesian inference involving probability distributions confined to
constrained domains could be quite challenging for commonly used sampling
algorithms. In this paper, we propose a novel augmentation technique that
handles a wide range of constraints by mapping the constrained domain to a
sphere in the augmented space. By moving freely on the surface of this sphere,
sampling algorithms handle constraints implicitly and generate proposals that
remain within boundaries when mapped back to the original space. Our proposed
method, called Spherical Augmentation, provides a mathematically natural and
computationally efficient framework for sampling from constrained probability
distributions. We show the advantages of our method over state-of-the-art
sampling algorithms, such as exact Hamiltonian Monte Carlo, using several
examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian
bridge regression, reconstruction of quantized stationary Gaussian process, and
LDA for topic modeling.
Comment: 41 pages, 13 figures
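As a toy illustration of the augmentation idea described in this abstract (not the paper's Hamiltonian samplers; the function names and step size are assumptions), the sketch below embeds the unit ball into a sphere one dimension higher. A walker that is perturbed and renormalized moves freely on the sphere, yet every point maps back inside the ball, so the constraint is handled implicitly:

```python
import numpy as np

def ball_to_sphere(theta):
    # Embed a point of the unit ball B^D into the sphere S^D in R^(D+1)
    # by appending the auxiliary coordinate sqrt(1 - ||theta||^2).
    return np.append(theta, np.sqrt(max(0.0, 1.0 - theta @ theta)))

def sphere_to_ball(q):
    # Drop the auxiliary coordinate; both hemispheres land inside the ball.
    return q[:-1]

rng = np.random.default_rng(0)
q = ball_to_sphere(np.array([0.3, -0.4]))
for _ in range(1000):
    step = rng.normal(scale=0.3, size=q.size)
    q = (q + step) / np.linalg.norm(q + step)  # move freely on the sphere
    theta = sphere_to_ball(q)
    assert theta @ theta <= 1.0 + 1e-12        # always within the constraint
```

A real sampler would additionally weight the target density by the Jacobian of this change of variables and use informed (e.g., Hamiltonian) proposals; the point here is only that unconstrained motion on the sphere never produces an out-of-bounds proposal.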
A mathematical model for breath gas analysis of volatile organic compounds with special emphasis on acetone
Recommended standardized procedures for determining exhaled lower respiratory
nitric oxide and nasal nitric oxide have been developed by task forces of the
European Respiratory Society and the American Thoracic Society. These
recommendations have paved the way for the measurement of nitric oxide to
become a diagnostic tool for specific clinical applications. It would be
desirable to develop similar guidelines for the sampling of other trace gases
in exhaled breath, especially volatile organic compounds (VOCs) which reflect
ongoing metabolism. The concentrations of water-soluble, blood-borne substances
in exhaled breath are influenced by: (i) breathing patterns affecting gas
exchange in the conducting airways; (ii) the concentrations in the
tracheo-bronchial lining fluid; (iii) the alveolar and systemic concentrations
of the compound. The classical Farhi equation takes only the alveolar
concentrations into account. Real-time measurements of acetone in end-tidal
breath under an ergometer challenge show characteristics which cannot be
explained within the Farhi setting. Here we develop a compartment model that
reliably captures these profiles and is capable of relating breath to the
systemic concentrations of acetone. By comparison with experimental data it is
inferred that the major part of variability in breath acetone concentrations
(e.g., in response to moderate exercise or altered breathing patterns) can be
attributed to airway gas exchange, with minimal changes of the underlying blood
and tissue concentrations. Moreover, it is deduced that measured end-tidal
breath concentrations of acetone determined during resting conditions and free
breathing will be rather poor indicators for endogenous levels. Particularly,
the current formulation includes the classical Farhi and the Scheid series
inhomogeneity model as special limiting cases.
Comment: 38 pages
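For reference, the classical Farhi relation mentioned above is usually written as follows (standard notation assumed here, not taken from this abstract): the steady-state alveolar concentration of an inert gas depends only on the mixed venous concentration, the blood:air partition coefficient, and the ventilation-perfusion ratio.

```latex
% Farhi's equation: alveolar concentration C_A in terms of the mixed
% venous concentration C_{\bar v}, the blood:air partition coefficient
% \lambda_{b:air}, and the ventilation-perfusion ratio \dot{V}_A/\dot{Q}_c.
C_A \;=\; \frac{C_{\bar v}}{\lambda_{b:air} + \dot{V}_A / \dot{Q}_c}
```

The abstract's argument is that this expression involves only alveolar exchange, whereas the observed end-tidal acetone profiles additionally require airway gas-exchange terms.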
Evidence for an excess of B -> D(*) Tau Nu decays
Based on the full BaBar data sample, we report improved measurements of the
ratios R(D(*)) = B(B -> D(*) Tau Nu)/B(B -> D(*) l Nu), where l is either e or
mu. These ratios are sensitive to new physics contributions in the form of a
charged Higgs boson. We measure R(D) = 0.440 +- 0.058 +- 0.042 and R(D*) =
0.332 +- 0.024 +- 0.018, which exceed the Standard Model expectations by 2.0
sigma and 2.7 sigma, respectively. Taken together, our results disagree with
these expectations at the 3.4 sigma level. This excess cannot be explained by a
charged Higgs boson in the type II two-Higgs-doublet model. We also report the
observation of the decay B -> D Tau Nu, with a significance of 6.8 sigma.
Comment: Expanded section on systematics, text corrections, improved the
format of Figure 2 and included the effect of the change of the Tau
polarization due to the charged Higgs
A search for the decay modes B+/- to h+/- tau l
We present a search for the lepton flavor violating decay modes B+/- to h+/-
tau l (h= K,pi; l= e,mu) using the BaBar data sample, which corresponds to 472
million BBbar pairs. The search uses events where one B meson is fully
reconstructed in one of several hadronic final states. Using the momenta of the
reconstructed B, h, and l candidates, we are able to fully determine the tau
four-momentum. The resulting tau candidate mass is our main discriminant
against combinatorial background. We see no evidence for B+/- to h+/- tau l
decays and set a 90% confidence level upper limit on each branching fraction at
the level of a few times 10^-5.
Comment: 15 pages, 7 figures, submitted to Phys. Rev.
Study of the reaction e^{+}e^{-} -->J/psi\pi^{+}\pi^{-} via initial-state radiation at BaBar
We study the process e^+e^- --> J/psi pi^+ pi^- with initial-state-radiation
events produced at the PEP-II asymmetric-energy collider. The data were
recorded with the BaBar detector at center-of-mass energies 10.58 and 10.54
GeV, and correspond to an integrated luminosity of 454 fb^-1. We investigate
the J/psi pi^+ pi^- mass distribution in the region from 3.5 to 5.5 GeV/c^2.
Below 3.7 GeV/c^2 the signal dominates, and above 4 GeV/c^2 there is a
significant peak due to the Y(4260). A fit to the data in the range
3.74 -- 5.50 GeV/c^2 yields mass and width values (with statistical and
systematic uncertainties) for this state. We do not confirm the report from
the Belle collaboration of a broad structure at 4.01 GeV/c^2. In addition,
we investigate the system which results from Y(4260) decay
Moderation in management research: What, why, when and how.
Many theories in management, psychology, and other disciplines rely on moderating variables: those which affect the strength or nature of the relationship between two other variables. Despite the near-ubiquitous nature of such effects, the methods for testing and interpreting them are not always well understood. This article introduces the concept of moderation and describes how moderator effects are tested and interpreted for a series of model types, beginning with straightforward two-way interactions with Normal outcomes, moving to three-way and curvilinear interactions, and then to models with non-Normal outcomes including binary logistic regression and Poisson regression. In particular, methods of interpreting and probing these latter model types, such as simple slope analysis and slope difference tests, are described. The article concludes by answering twelve frequently asked questions about testing and interpreting moderator effects
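As a minimal sketch of the two-way interaction and simple-slope analysis described above (synthetic data and the statsmodels formula API; the variable names and effect sizes are illustrative, not from the article), one fits a model with a product term and evaluates the slope of the focal predictor at low, mean, and high values of the moderator:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with a genuine x*z interaction (illustrative only).
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(scale=1.0, size=n)
df = pd.DataFrame({"x": x, "z": z, "y": y})

# Fit y ~ x + z + x:z; the x:z coefficient is the moderation effect.
m = smf.ols("y ~ x * z", data=df).fit()

# Simple slopes: the effect of x at -1 SD, the mean, and +1 SD of z.
for z0 in (-df.z.std(), 0.0, df.z.std()):
    slope = m.params["x"] + m.params["x:z"] * z0
    print(f"slope of x at z = {z0:+.2f}: {slope:.3f}")
```

Because the moderator was simulated with a positive interaction coefficient, the simple slope of x grows as z increases; a slope difference test would compare such slopes formally via the coefficient covariance matrix.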
Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV
The performance of muon reconstruction, identification, and triggering in CMS
has been studied using 40 inverse picobarns of data collected in pp collisions
at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection
criteria covering a wide range of physics analysis needs have been examined.
For all considered selections, the efficiency to reconstruct and identify a
muon with a transverse momentum pT larger than a few GeV is above 95% over the
whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4,
while the probability to misidentify a hadron as a muon is well below 1%. The
efficiency to trigger on single muons with pT above a few GeV is higher than
90% over the full eta range, and typically substantially better. The overall
momentum scale is measured to a precision of 0.2% with muons from Z decays. The
transverse momentum resolution varies from 1% to 6% depending on pseudorapidity
for muons with pT below 100 GeV and, using cosmic rays, it is shown to be
better than 10% in the central region up to pT = 1 TeV. Observed distributions
of all quantities are well reproduced by the Monte Carlo simulation.
Comment: Replaced with published version. Added journal reference and DOI
E4 ligase–specific ubiquitination hubs coordinate DNA double-strand-break repair and apoptosis
Multiple protein ubiquitination events at DNA double-strand breaks (DSBs) regulate damage recognition, signaling and repair. It has remained poorly understood how the repair process of DSBs is coordinated with the apoptotic response. Here, we identified the E4 ubiquitin ligase UFD-2 as a mediator of DNA-damage-induced apoptosis in a genetic screen in Caenorhabditis elegans. We found that, after initiation of homologous recombination by RAD-51, UFD-2 forms foci that contain substrate-processivity factors including the ubiquitin-selective segregase CDC-48 (p97), the deubiquitination enzyme ATX-3 (Ataxin-3) and the proteasome. In the absence of UFD-2, RAD-51 foci persist, and DNA damage-induced apoptosis is prevented. In contrast, UFD-2 foci are retained until recombination intermediates are removed by the Holliday-junction-processing enzymes GEN-1, MUS-81 or XPF-1. Formation of UFD-2 foci also requires proapoptotic CEP-1 (p53) signaling. Our findings establish a central role of UFD-2 in the coordination between the DNA-repair process and the apoptotic response
Search for the decay modes D^0 → e^+e^-, D^0 → μ^+μ^-, and D^0 → e^±μ∓
We present searches for the rare decay modes D^0→e^+e^-, D^0→μ^+μ^-, and D^0→e^±μ^∓ in continuum e^+e^-→cc events recorded by the BABAR detector in a data sample that corresponds to an integrated luminosity of 468 fb^(-1). These decays are highly Glashow–Iliopoulos–Maiani suppressed but may be enhanced in several extensions of the standard model. Our observed event yields are consistent with the expected backgrounds. An excess is seen in the D^0→μ^+μ^- channel, although the observed yield is consistent with an upward background fluctuation at the 5% level. Using the Feldman–Cousins method, we set the following 90% confidence level intervals on the branching fractions: B(D^0→e^+e^-)<1.7×10^(-7), B(D^0→μ^+μ^-) within [0.6,8.1]×10^(-7), and B(D^0→e^±μ^∓)<3.3×10^(-7)
