7,008 research outputs found
Quantum Electroweak Symmetry Breaking Through Loop Quadratic Contributions
Based on two postulates, (i) that the Higgs boson has a large bare mass at the characteristic energy scale which defines the standard model (SM) in the ultraviolet region, and (ii) that quadratic contributions of Feynman loop diagrams in quantum field theories are physically meaningful, we show that SM electroweak symmetry breaking is induced by the quadratic contributions from loop effects. As the quadratic running of the Higgs mass parameter leads to an additive renormalization, in contrast to the logarithmic running with its multiplicative renormalization, the symmetry breaking occurs once the sliding energy scale moves down to a transition scale at which the additively renormalized Higgs mass parameter changes sign. With the input of current experimental data, this symmetry-breaking energy scale is determined, providing another basic energy scale for the SM besides the characteristic scale. Studying such a symmetry breaking mechanism could play an important role in understanding both the hierarchy problem and the naturalness problem. It also provides a possible way to explore the experimental implications of the quadratic contributions, as the transition scale lies within the probing reach of the LHC and the future Great Collider.
Comment: 10 pages, 2 figures, published version
End-to-end Learning for Short Text Expansion
Effectively making sense of short texts is a critical task for many real-world applications such as search engines, social media services, and
recommender systems. The task is particularly challenging as a short text
contains very sparse information, often too sparse for a machine learning
algorithm to pick up useful signals. A common practice for analyzing short text
is to first expand it with external information, which is usually harvested
from a large collection of longer texts. In the literature, short text expansion has been done with a variety of heuristics. We propose an end-to-end solution
that automatically learns how to expand short text to optimize a given learning
task. A novel deep memory network is proposed to automatically find relevant
information from a collection of longer documents and reformulate the short
text through a gating mechanism. Using short text classification as a demonstration task, we show that the deep memory network significantly outperforms classical text expansion methods in comprehensive experiments on real-world data sets.
Comment: KDD'201
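The gated reformulation step can be pictured with a minimal sketch; the attention-over-memory plus scalar-gate formulation, names, and dimensions below are assumptions for illustration, not the authors' architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expand_short_text(q, memory, w_g, b_g):
    """Sketch of memory-based short text expansion with a gate.

    q      : (d,)   embedding of the short text
    memory : (n, d) embeddings of candidate long documents
    w_g    : (2d,)  gate weights (illustrative), b_g: gate bias
    """
    scores = memory @ q                 # relevance of each long document to q
    attn = softmax(scores)              # attention weights over the memory
    read = attn @ memory                # weighted read vector from the memory
    # Scalar gate: how much external information to mix into the short text.
    g = 1.0 / (1.0 + np.exp(-(w_g @ np.concatenate([q, read]) + b_g)))
    return g * read + (1.0 - g) * q     # expanded representation

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
d, n = 8, 5
q = rng.normal(size=d)
memory = rng.normal(size=(n, d))
expanded = expand_short_text(q, memory, rng.normal(size=2 * d), 0.0)
print(expanded.shape)  # (8,)
```

In the end-to-end setting described in the abstract, parameters such as the gate weights would be learned jointly with the downstream classifier rather than fixed by hand.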
Determinations of form factors for semileptonic decays and leptoquark constraints
By analyzing all existing measurements of these semileptonic decays, we find that determinations of both the vector form factor and the scalar form factor from these measurements are feasible. Taking a first-order series-expansion parameterization of the two form factors, their normalization and shape parameters are determined. Combining the averaged measurements with the lattice calculation of the form-factor normalization, the relevant CKM matrix element is extracted, where the first error is experimental and the second theoretical. Alternatively, the form-factor normalization is extracted by taking the CKM matrix element from the global fit with the unitarity constraint of the CKM matrix. Moreover, using the form factors obtained from lattice QCD, we re-analyze these measurements in the context of new physics. Constraints on scalar leptoquarks are obtained for different final states of the semileptonic decays.
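For reference, a first-order series ("z") expansion of a form factor is conventionally written as below; the conformal variable is the standard definition, while the thresholds $t_+$, $t_0$ and the exact convention (e.g. any pole prefactor) depend on the decay channel and are not specified here:

\[
z(q^2, t_0) \;=\; \frac{\sqrt{t_+ - q^2} - \sqrt{t_+ - t_0}}{\sqrt{t_+ - q^2} + \sqrt{t_+ - t_0}},
\qquad
f(q^2) \;\approx\; f(0)\,\Bigl[\,1 + b_1\bigl(z(q^2, t_0) - z(0, t_0)\bigr)\Bigr],
\]

where $f(0)$ is the normalization and $b_1$ plays the role of the shape parameter extracted from the fit.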
Topological and Algebraic Properties of Chernoff Information between Gaussian Graphs
In this paper, we identify the factors that determine the Chernoff information for distinguishing a set of Gaussian graphs. We find that the Chernoff information of two Gaussian graphs is determined by the generalized eigenvalues of their covariance matrices. A unit generalized eigenvalue does not affect the Chernoff information, and its corresponding dimension provides no information for classification purposes. In addition, we provide a partial ordering, based on Chernoff information, over a series of Gaussian trees connected by independent grafting operations. Using the relationship between generalized eigenvalues and Chernoff information, we can perform optimal linear dimensionality reduction with minimal loss of information for classification.
Comment: Submitted to Allerton 2018; this version contains proofs of the propositions in the paper
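As a minimal numerical sketch of the zero-mean Gaussian case discussed above (the expression is the standard Chernoff-information formula for Gaussians, rewritten in terms of generalized eigenvalues; the function name and the bounded scalar optimizer are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar

def chernoff_information(sigma0, sigma1):
    """Chernoff information between N(0, sigma0) and N(0, sigma1)."""
    # Generalized eigenvalues mu_i solving det(sigma0 - mu * sigma1) = 0.
    mu = eigh(sigma0, sigma1, eigvals_only=True)

    def neg_exponent(lam):
        # Minus the Chernoff exponent at mixing parameter lam in (0, 1):
        #   0.5 * sum_i [ lam*log(mu_i) + log(lam/mu_i + 1 - lam) ]
        # A mu_i = 1 contributes nothing, matching the observation above.
        return -0.5 * np.sum(lam * np.log(mu) + np.log(lam / mu + 1.0 - lam))

    # The Chernoff information is the maximum of the exponent over lam.
    res = minimize_scalar(neg_exponent, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return -res.fun

# Toy usage with two 3x3 covariance matrices.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3)); sigma0 = a @ a.T + np.eye(3)
b = rng.normal(size=(3, 3)); sigma1 = b @ b.T + np.eye(3)
print(chernoff_information(sigma0, sigma1))
```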