The Beylkin-Cramer Summation Rule and A New Fast Algorithm of Cosmic Statistics for Large Data Sets
Based on the Beylkin-Cramer summation rule, we introduce a new fast algorithm
that enables us to explore high-order statistics efficiently in large data
sets. Central to this technique is the decomposition of both fields and
operators within the framework of multi-resolution analysis (MRA) and the
realization of their discrete representations. Accordingly, a homogeneous point
process can be described equivalently by the operation of a Toeplitz matrix on
a vector, which is accomplished by means of the fast Fourier transform. The
algorithm can be applied widely in cosmic statistics to tackle large data
sets. In particular, we demonstrate this novel technique using spherical,
cubic and cylindrical counts in cells. Numerical tests show that the algorithm
produces excellent agreement with the expected results.
Moreover, the algorithm naturally introduces a sharp filter, which is capable
of suppressing shot noise in weak signals. In its numerical procedures, the
algorithm is somewhat similar to the particle-mesh (PM) methods used in N-body
simulations. Thanks to its favorable scaling, it is significantly faster than
current particle-based methods, and its computational cost does not depend on
the shape or size of the sampling cells. In addition, based on this technique,
we further propose a simple fast scheme to compute second-order statistics of
cosmic density fields and validate it using simulation samples. Hopefully, the
technique developed here will allow a comprehensive study of the
non-Gaussianity of cosmic fields in high-precision cosmology. A specific
implementation of the algorithm is publicly available upon request to the
author.
Comment: 27 pages, 9 figures included. Revised version; changes include (a) a
new fast algorithm for second-order statistics, (b) more numerical tests,
including counts in asymmetric cells, the two-point correlation function and
second-order variances, and (c) an expanded discussion of the technique
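A minimal sketch of the core trick above, namely treating counts in cells as a Toeplitz (convolution) operator acting on a gridded density vector and evaluating it with FFTs, is given below in Python/NumPy. It is not the author's MRA-based implementation: the nearest-grid-point assignment, the grid size n_grid, the top-hat radius, and the function name counts_in_spherical_cells are illustrative assumptions.

import numpy as np

def counts_in_spherical_cells(positions, box_size, n_grid=64, radius=8.0):
    """Counts in spherical cells of the given radius, centred on every grid
    point, obtained as an FFT convolution of the gridded point process with
    a top-hat window (a circulant/Toeplitz operation)."""
    # Nearest-grid-point assignment of the particles onto a cubic mesh.
    cell = box_size / n_grid
    idx = np.floor(positions / cell).astype(int) % n_grid
    field = np.zeros((n_grid,) * 3)
    np.add.at(field, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)

    # Spherical top-hat window sampled on the same mesh, using
    # minimum-image (periodic) distances from the origin.
    d = np.arange(n_grid) * cell
    d = np.minimum(d, box_size - d)
    x, y, z = np.meshgrid(d, d, d, indexing="ij")
    window = (x**2 + y**2 + z**2 <= radius**2).astype(float)

    # One forward/backward FFT pair replaces an explicit loop over cells;
    # this is what makes the scheme fast for large samples.
    counts = np.fft.irfftn(np.fft.rfftn(field) * np.fft.rfftn(window), s=field.shape)
    return counts  # counts-in-cells at every grid point

Changing the cell geometry (a cube or a cylinder instead of a sphere) only changes the window array while the cost remains a fixed number of FFTs, which mirrors the claim that the computational cost is independent of the shape or size of the sampling cells; moments of the counts then follow directly from the returned array.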
Short-range force between two Higgs bosons
The S-wave scattering length and the effective range of the Higgs boson in the
Standard Model are studied using an effective-field-theory approach. After
incorporating the first-order electroweak correction, the short-range force
between two Higgs bosons remains weakly attractive for a Higgs boson mass of
about 125 GeV. Interestingly, the range of this force is about two orders of
magnitude larger than the Compton wavelength of the Higgs boson, almost
comparable with the typical length scale of the strong interaction.
Comment: v2, 11 pages, 2 figures; the version accepted for publication in
Phys. Lett.
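For reference, the scattering length $a$ and effective range $r_0$ quoted above are defined through the standard S-wave effective-range expansion; the relations below are textbook material rather than results of the paper, with $m_H$ the Higgs boson mass and natural units ($\hbar=c=1$) understood:
\[
  k\cot\delta_0(k) \;=\; -\frac{1}{a} \;+\; \frac{1}{2}\,r_0\,k^{2} \;+\; \mathcal{O}(k^{4}),
  \qquad
  \lambda_C \;=\; \frac{1}{m_H}.
\]
For $m_H\simeq 125$ GeV one has $\lambda_C\simeq 1.6\times10^{-3}$ fm, so a force range about two orders of magnitude larger is of order 0.1 fm, within an order of magnitude of the roughly 1 fm scale of the strong interaction, which is the comparison the abstract draws.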
Reconciling the nonrelativistic QCD prediction and the $J/\psi \to 3\gamma$ data
It has been a long-standing problem that the rare electromagnetic decay
process $J/\psi \to 3\gamma$ is plagued with both large and negative radiative
and relativistic corrections. To date it remains futile to make a definite
prediction to confront with the branching fraction of $J/\psi \to 3\gamma$
recently measured by the \textsf{CLEO-c} and \textsf{BESIII} Collaborations. In
this work, we investigate the joint perturbative and relativistic correction
(i.e., the $\mathcal{O}(\alpha_s v^2)$ correction, where $v$ denotes the
characteristic velocity of the charm quark inside the $J/\psi$) for this decay
process, which turns out to be very significant. After incorporating the
contribution from this new ingredient, and with a reasonable choice of the input
parameters, we are able to account for the measured decay rates to a
satisfactory degree.
Comment: 7 pages, 1 figure; version accepted for publication in PRD R
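Schematically, the "joint perturbative and relativistic correction" is the mixed term in a double expansion of the decay width in the strong coupling $\alpha_s$ and the squared charm-quark velocity $v^{2}$; the expression below merely illustrates that structure, with generic coefficients $a_{ij}$ that are not taken from the paper:
\[
  \Gamma \;=\; \Gamma_{0}\Bigl[\,1 \;+\; a_{10}\,\frac{\alpha_s}{\pi} \;+\; a_{01}\,\langle v^{2}\rangle
  \;+\; a_{11}\,\frac{\alpha_s}{\pi}\,\langle v^{2}\rangle \;+\; \cdots \Bigr].
\]
The first two correction terms are the separately known (large and negative) radiative and relativistic corrections, while the $a_{11}$ piece is the $\mathcal{O}(\alpha_s v^{2})$ contribution investigated in this work.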
Can NRQCD explain the $\gamma\gamma^* \to \eta_c$ transition form factor data?
Unlike the bewildering situation for the $\pi^0$ transition form factor,
a widespread view is that perturbative QCD can decently account for the recent
\textsc{BaBar} measurement of the $\gamma\gamma^* \to \eta_c$ transition form
factor. The next-to-next-to-leading-order (NNLO) perturbative correction to the
$\gamma\gamma^* \to \eta_c$ form factor is investigated in the NRQCD
factorization framework for the first time. As a byproduct, we obtain by far
the most precise order-$\alpha_s^2$ NRQCD matching coefficient for this
process. After including the substantial negative
order-$\alpha_s^2$ correction, the good agreement between the NRQCD prediction
and the measured form factor is completely ruined over a wide range of momentum
transfer squared. This eminent discrepancy casts some doubt on the
applicability of the NRQCD approach to hard exclusive reactions involving
charmonium.
Comment: 6 pages, 3 figures and 1 table; adding Eqs. (8) and (9) as well as
some references, correcting errors in Table 1, updating Fig. 3 to include the
"light-by-light" contributions, devoting a paragraph to discuss why our
strategy of interpreting the NNLO corrections is justified; Accepted by PR
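As a rough guide to what an order-$\alpha_s^{2}$ NRQCD matching coefficient means, NRQCD factorization writes the transition form factor as a perturbative short-distance coefficient multiplying a nonperturbative NRQCD matrix element; the schematic form below uses generic symbols ($F$, $C$, $\langle\mathcal{O}\rangle_{\eta_c}$) and is not the normalization adopted in the paper:
\[
  F(Q^{2}) \;\simeq\; C(Q^{2},\mu)\,\langle\mathcal{O}\rangle_{\eta_c},
  \qquad
  C \;=\; C^{(0)}\Bigl[\,1 \;+\; c_{1}\,\frac{\alpha_s(\mu)}{\pi}
  \;+\; c_{2}\,\Bigl(\frac{\alpha_s(\mu)}{\pi}\Bigr)^{2} \;+\; \cdots\Bigr].
\]
The NNLO calculation supplies the two-loop coefficient $c_{2}$, which the abstract reports to be substantial and negative.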
Next-to-next-to-leading-order QCD corrections to $e^+e^- \to J/\psi + \eta_c$ at $B$ factories
Within the nonrelativistic QCD (NRQCD) factorization framework, we compute
the long-awaited $\mathcal{O}(\alpha_s^2)$ correction for the exclusive double
charmonium production process at $B$ factories, i.e.,
$e^+e^- \to J/\psi + \eta_c$ at $\sqrt{s} = 10.58$ GeV. For the first time, we
confirm that NRQCD factorization does hold at next-to-next-to-leading order
(NNLO) for exclusive double charmonium production. It is found that including
the NNLO QCD correction greatly reduces the renormalization-scale dependence
and also indicates reasonable perturbative convergence for this process. Our
state-of-the-art prediction is consistent with the BaBar measurement.
Comment: 6 pages, 2 figures, 1 table
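The reduced renormalization-scale dependence follows from the generic structure of a truncated perturbative series: the residual $\mu$ dependence of a fixed-order result is formally of the first neglected order, so including the NNLO term pushes it one order higher. The expansion below is schematic, with illustrative coefficients $r_{i}(\mu)$:
\[
  \sigma(\mu) \;=\; \sigma_{\rm LO}(\mu)\Bigl[\,1 \;+\; r_{1}(\mu)\,\alpha_s(\mu) \;+\; r_{2}(\mu)\,\alpha_s^{2}(\mu)\Bigr],
  \qquad
  \mu\,\frac{d\sigma}{d\mu} \;=\; \mathcal{O}\bigl(\alpha_s^{3}\bigr)\times\sigma_{\rm LO},
\]
so the scale variation of the NNLO prediction is parametrically smaller than that of an NLO truncation.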