Measuring the ratio of the Higgs couplings to the W and Z bosons
For a generic Higgs boson, measuring the relative sign and magnitude of its
couplings to the W and Z bosons is essential for determining its origin. Such
a test is also indispensable for the 125-GeV Higgs boson. We propose that the
ratio of the two couplings can be determined directly through a production
process in which they interfere at tree level. While this is impractical at
the LHC due to the limited sensitivity, it can be done at future colliders,
such as a 500-GeV ILC with polarized beams. The discovery potential for a
general coupling ratio and the power to discriminate it from the SM value are
studied in detail. Combining this cross-section measurement with the coupling
measurements at the HL-LHC further improves the sensitivity.
Comment: 24 pages, 10 figures, 2 tables
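A toy calculation illustrates why tree-level interference gives access to the relative sign of two couplings: the rate contains the cross term between the two amplitudes, not only their squares. The partial amplitudes a_w and a_z below are made-up numbers for illustration, not matrix elements from the paper.

```python
# Toy model: a process whose amplitude is a coherent sum of two diagrams,
# one proportional to a W-type Higgs coupling and one to a Z-type coupling.
a_w, a_z = 1.0, 0.6  # hypothetical partial amplitudes (arbitrary units)

def cross_section(lam):
    """Rate ~ |a_w + lam * a_z|^2 for a coupling ratio lam."""
    return abs(a_w + lam * a_z) ** 2

# The interference term 2 * lam * a_w * a_z flips sign with lam, so the
# rate distinguishes lam = +1 from lam = -1, whereas a measurement of
# either coupling's magnitude alone would not.
print(cross_section(+1.0))  # constructive interference
print(cross_section(-1.0))  # destructive interference
```

Because the cross term is linear in the ratio, a single rate measurement resolves the sign ambiguity that squared-coupling measurements leave open.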
Search for a heavy dark photon at future e+e- colliders
A coupling of a dark photon with standard model (SM) particles can be
generated through kinetic mixing, represented by a mixing parameter. A
non-zero kinetic mixing also induces a mass mixing between the dark photon
and the Z boson if the dark photon mass is non-zero. This mixing can be large
when the dark photon mass is close to the Z boson mass, even if the mixing
parameter is small. Many efforts have been made to constrain the mixing
parameter for dark photon masses well below the Z boson mass. We study the
search for a dark photon with a mass as large as kinematically allowed at
future e+e- colliders. For a large dark photon mass, care must be taken to
properly treat the possibly large mixing between the dark photon and the Z
boson. We obtain sensitivities to the mixing parameter for a wide range of
dark photon masses at planned colliders, such as the Circular Electron
Positron Collider (CEPC), the International Linear Collider (ILC), and the
Future Circular Collider (FCC-ee), and present the resulting exclusion limits
as a function of the dark photon mass. The CEPC and the FCC-ee become more
sensitive than the constraint from the current LHCb measurement beyond a
certain dark photon mass. At still larger masses, the sensitivity at the
FCC-ee is better than that at the 13 TeV LHC, while the sensitivity at the
CEPC can be even better over part of the mass range.
Comment: 21 pages, 5 figures, 2 tables
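The resonant enhancement of the dark photon-Z mixing near mass degeneracy can be illustrated with a toy 2x2 mass-squared matrix. The off-diagonal entry eps * mz**2 below is a stand-in for the kinetic-mixing-induced term, not the exact expression of the full model.

```python
import numpy as np

def mixing_angle(m_ap, mz=91.19, eps=1e-3):
    """Mixing angle of a toy Z / dark-photon mass-squared matrix.

    m_ap: dark photon mass (GeV); eps: kinetic-mixing-like parameter.
    The off-diagonal term eps*mz**2 is illustrative only.
    """
    m2 = np.array([[mz**2,       eps * mz**2],
                   [eps * mz**2, m_ap**2]])
    # Rotation angle that diagonalizes the symmetric 2x2 matrix:
    return 0.5 * np.arctan2(2 * m2[0, 1], m2[0, 0] - m2[1, 1])

# Far from the Z pole the mixing is of order eps...
print(mixing_angle(30.0))
# ...but near degeneracy it becomes large even for tiny eps, which is why
# the heavy-mass region requires a careful treatment of the mixing.
print(mixing_angle(91.0))
```

The angle scales like eps * mz**2 / (mz**2 - m_ap**2), so the denominator, not eps itself, controls the size of the mixing near the pole.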
Nonparametric Inference via Bootstrapping the Debiased Estimator
In this paper, we propose to construct confidence bands by bootstrapping the
debiased kernel density estimator (for density estimation) and the debiased
local polynomial regression estimator (for regression analysis). The idea of
using a debiased estimator was recently employed by Calonico et al. (2018b) to
construct a confidence interval of the density function (and regression
function) at a given point by explicitly estimating stochastic variations. We
extend their ideas of using the debiased estimator and further propose a
bootstrap approach for constructing simultaneous confidence bands. This
modified method has the advantage that the smoothing bandwidth can be chosen
with conventional bandwidth selectors while the confidence band remains
asymptotically valid. We prove the validity of the bootstrap confidence band
and generalize it to density level sets and inverse regression problems.
Simulation studies confirm the validity of the proposed confidence bands/sets.
We apply our approach to an astronomy dataset to show its applicability.
Comment: Accepted to the Electronic Journal of Statistics. 64 pages, 6 tables,
11 figures
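A minimal sketch of the band construction for density estimation, assuming a Gaussian kernel; the sample, grid, bandwidth, and bootstrap size are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde(grid, data, h):
    """Gaussian kernel density estimate evaluated on grid with bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def debiased_kde(grid, data, h):
    """Debiased KDE: subtract the estimated leading bias term (h^2/2) f''(x),
    with f'' estimated from the second derivative of the same KDE."""
    u = (grid[:, None] - data[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    f2 = ((u**2 - 1) * phi).mean(axis=1) / h**3   # KDE estimate of f''
    return kde(grid, data, h) - 0.5 * h**2 * f2

# Bootstrap a simultaneous band: resample the data, recompute the debiased
# estimate, and take a high quantile of the sup-deviation from the original.
data = rng.normal(size=500)
grid = np.linspace(-3.0, 3.0, 61)
h = 0.4                           # any conventional bandwidth selector works
f_hat = debiased_kde(grid, data, h)
sup_dev = [np.abs(debiased_kde(grid, rng.choice(data, size=data.size), h)
                  - f_hat).max()
           for _ in range(200)]
half_width = np.quantile(sup_dev, 0.95)
band_lo, band_hi = f_hat - half_width, f_hat + half_width
```

The point of the debiasing step is that the band is centered on an estimator whose bias is of smaller order, so an ordinary bandwidth can be used without undersmoothing.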
Forecast Combination Under Heavy-Tailed Errors
Forecast combination has proven to be a very important technique for
obtaining accurate predictions. In many applications, forecast errors exhibit
heavy-tailed behavior for various reasons. Unfortunately, to our knowledge,
little has been done to deal with forecast combination for such situations. The
familiar forecast combination methods such as simple average, least squares
regression, or those based on variance-covariance of the forecasts, may perform
very poorly. In this paper, we propose two nonparametric forecast combination
methods to address the problem. One is tailored to situations in which the
forecast errors are strongly believed to have heavy tails that can be modeled
by a scaled Student's t-distribution; the other is designed for more general
situations in which there is no strong or consistent evidence on the tail
behavior of the forecast errors, due to a shortage of data and/or an evolving
data-generating process. Adaptive risk bounds for both methods are developed.
Simulations and a real example show the superior performance of the new
methods.
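One way to realize the t-distribution idea is to weight forecasters by their cumulative Student-t log-likelihood on past errors instead of their squared errors. The sketch below illustrates that idea under assumptions made here (the function names, the fixed degrees of freedom and scale, and the exponential weighting rule are choices of this sketch, not the paper's exact algorithm).

```python
import numpy as np
from math import lgamma, log, pi

def t_logpdf(e, df=3.0, scale=1.0):
    """Log density of a scaled Student-t: a heavy-tailed error model."""
    c = lgamma((df + 1) / 2) - lgamma(df / 2) - 0.5 * log(df * pi) - log(scale)
    return c - (df + 1) / 2 * np.log1p((e / scale) ** 2 / df)

def combine(forecasts, y, df=3.0):
    """Sequentially combine K forecasters (columns of `forecasts`).

    Each forecaster's weight is proportional to the exponential of its
    cumulative t-log-likelihood on past errors, so a single outlier does
    not destroy its weight the way squared-error weighting would."""
    T, K = forecasts.shape
    loglik = np.zeros(K)
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(loglik - loglik.max())   # weights use past data only
        w /= w.sum()
        combined[t] = w @ forecasts[t]
        loglik += t_logpdf(y[t] - forecasts[t], df=df)
    return combined
```

With Gaussian log-likelihood this reduces to squared-error weighting; the heavy t tails are what make the weights robust to occasional huge errors.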
Nucleation of membrane adhesions
Recent experimental and theoretical studies of biomimetic membrane adhesions [Bruinsma et al., Phys. Rev. E 61, 4253 (2000); Boulbitch et al., Biophys. J. 81, 2743 (2001)] suggested that adhesion mediated by receptor interactions is due to the interplay between membrane undulations and a double-well adhesion potential, and should be a first-order transition. We study the nucleation of membrane adhesion by finding the minimum-energy path on the free energy surface constructed from the bending free energy of the membrane and the double-well adhesion potential. We find a nucleation free energy barrier of around 20 k_BT for the adhesion of flexible membranes, which corresponds to fast nucleation kinetics on a time scale of the order of seconds. For cell membranes with a larger bending rigidity due to the actin network, the nucleation barrier is higher, and overcoming it may require active processes such as the reorganization of the cortex network. Our scaling analysis suggests that the geometry of the membrane shapes at the adhesion contact is controlled by an adhesion length determined by the membrane rigidity, the barrier height, and the length scale of the double-well potential, while the energetics of adhesion is determined by the depths of the adhesion potential. These results are verified by numerical calculations.
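The link between the quoted barrier and second-scale kinetics follows from an Arrhenius-type estimate, tau ~ tau0 * exp(DeltaF / k_BT). Only the ~20 k_BT barrier comes from the abstract; the microscopic attempt time tau0 below is an assumed illustrative value.

```python
import math

# Arrhenius-type estimate of the nucleation time from the barrier height.
barrier_kBT = 20.0   # nucleation barrier in units of k_B T (from the text)
tau0 = 1e-8          # assumed microscopic attempt time in seconds (illustrative)

tau = tau0 * math.exp(barrier_kBT)
print(f"nucleation time ~ {tau:.1f} s")   # of the order of seconds
```

Raising the barrier by even a few k_BT multiplies the time exponentially, which is why stiffer cell membranes are argued to need active processes to nucleate adhesion.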