Correlation between the Charged Current Interactions of Light and Heavy Majorana Neutrinos
The evidence for neutrino oscillations implies that the three neutrino flavors
(\nu_e, \nu_\mu, \nu_\tau) must be mixtures of three mass eigenstates (\nu_1, \nu_2,
\nu_3). The most popular way to generate the tiny masses of \nu_i is to
introduce three heavy Majorana neutrinos N_i (for i = 1, 2, 3) into the
standard model and implement the seesaw mechanism. In this approach the
neutrino mixing matrix V appearing in the charged current interactions of \nu_i
is not unitary, and the strength of unitarity violation of V is associated with
the matrix R which describes the strength of charged current interactions of
N_i. We present an explicit parametrization of the correlation between V and R
in terms of nine rotation angles and nine phase angles, which can be measured
or constrained in precision neutrino oscillation experiments and by
exploring possible signatures of N_i at the LHC and ILC. Two special but viable
scenarios, the Type-I seesaw model with two heavy Majorana neutrinos and the
Type-II seesaw model with one heavy Majorana neutrino and one Higgs triplet,
are taken into account to illustrate the simplified V-R correlation. The
implications of R \neq 0 on the low-energy neutrino phenomenology are also
discussed. In particular, we demonstrate that the non-unitarity of V can
give rise to an appreciable CP-violating asymmetry between \nu_\mu
-> \nu_\tau and \bar{\nu}_\mu -> \bar{\nu}_\tau oscillations at short or
medium baselines.
Comment: RevTex 13 pages (1 figure). Some minor corrections and changes. Accepted for publication in Phys. Lett.
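For orientation, the structure behind this V-R correlation can be sketched in standard seesaw notation. The Dirac and heavy Majorana mass matrices M_D and M_R are not named in the abstract; this is the textbook Type-I relation, not the paper's full nine-angle parametrization:

```latex
% Type-I seesaw suppression of the light-neutrino masses:
M_\nu \;\simeq\; -\, M_D\, M_R^{-1}\, M_D^{T}
% Diagonalizing the full 6x6 mass matrix splits the charged-current
% mixing into a light block V and a light-heavy block R, related by
V V^\dagger + R R^\dagger = \mathbf{1},
% so V is unitary only in the limit R -> 0, which is why the strength
% of unitarity violation of V is tied to the couplings of the N_i.
```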
Text Generation with Efficient (Soft) Q-Learning
Maximum likelihood estimation (MLE) is the predominant algorithm for training
text generation models. This paradigm relies on direct supervision examples,
which are not available in many applications, such as generating adversarial
attacks or generating prompts to control language models. Reinforcement
learning (RL), on the other hand, offers a more flexible solution by allowing
users to plug in arbitrary task metrics as rewards. Yet previous RL algorithms
for text generation, such as policy gradient (on-policy RL) and Q-learning
(off-policy RL), are often notoriously inefficient or unstable to train due to
the large sequence space and the sparse reward received only at the end of
sequences. In this paper, we introduce a new RL formulation for text generation
from the soft Q-learning perspective. It further enables us to draw from the
latest RL advances, such as path consistency learning, to combine the best of
on-/off-policy updates, and learn effectively from sparse reward. We apply the
approach to a wide range of tasks, including learning from noisy/negative
examples, adversarial attacks, and prompt generation. Experiments show our
approach consistently outperforms both task-specialized algorithms and the
previous RL methods. On standard supervised tasks where MLE prevails, our
approach also achieves competitive performance and stability by training text
generation from scratch.
Comment: Code available at https://github.com/HanGuo97/soft-Q-learning-for-text-generatio
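The soft Q-learning view treats the generator's per-token logits as Q-values over the vocabulary, and the training target at each step is a soft Bellman backup. A tabular toy sketch of that backup under a sparse terminal reward (the tiny two-step task, the vocabulary, and all names here are illustrative, not from the paper's code):

```python
import numpy as np

def soft_backup(q_next, reward, gamma=1.0, tau=1.0):
    """One soft Bellman backup: target = r + gamma * tau * logsumexp(Q'/tau).

    q_next: Q-values over the vocabulary at the next step.
    In soft Q-learning the induced policy is softmax(Q / tau), so the
    soft value of a state is tau * logsumexp(Q / tau) rather than max(Q).
    """
    v_next = tau * np.log(np.sum(np.exp(q_next / tau)))  # soft state value
    return reward + gamma * v_next

# Toy 2-step generation task with a 3-token vocabulary and a sparse
# reward delivered only after the final token, as in text generation.
terminal_rewards = np.array([1.0, 0.0, 0.0])  # reward for each final token

# At the terminal step the regression targets are just the rewards.
q_last = terminal_rewards.copy()  # pretend the Q-network already fits them

# The first step's target backs up the *soft* value of the last step,
# propagating the sparse end-of-sequence reward to earlier tokens.
target_first = soft_backup(q_last, reward=0.0)

# Sampling policy implied by the learned Q-values at the last step.
policy_last = np.exp(q_last) / np.exp(q_last).sum()
```

Path consistency learning, mentioned in the abstract, generalizes this one-step backup to a consistency constraint over multi-step sub-paths, which is what lets the method mix on- and off-policy updates.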
Overlapping and Distinct Roles of HAM Family Genes in Arabidopsis Shoot Meristems
In Arabidopsis shoot apical meristems (SAMs), a well-characterized regulatory loop between WUSCHEL (WUS) and CLAVATA3 (CLV3) maintains stem cell homeostasis by regulating the balance between cell proliferation and cell differentiation. WUS proteins, translated in deep cell layers, move into the overlying stem cells to activate CLV3. The secreted peptide CLV3 then regulates WUS levels through a ligand-receptor-mediated signaling cascade. CLV3 is specifically expressed in the stem cells and repressed in the deep cell layers despite the presence of the WUS activator, forming an apical-basal polarity along the axis of the SAM. Previously, we proposed and validated the hypothesis that the HAIRY MERISTEM (HAM) family genes regulate this polarity, keeping CLV3 expression off in interior cells of the SAM. However, the specific role of each individual member of the HAM family in this process remains to be elucidated. Combining live imaging and molecular genetics, we have dissected the conserved and distinct functions of different HAM family members in the control of CLV3 patterning in the SAMs as well as in de novo shoot stem cell niches.
catena-Poly[[diaquazinc(II)]-μ-4,4′-(methylenedioxy)dibenzoato]
In the title complex, [Zn(C15H10O6)(H2O)2]n, the Zn(II) atom is located on a twofold rotation axis and exhibits a distorted tetrahedral coordination environment defined by two O atoms from two 4,4′-(methylenedioxy)dibenzoate ligands and two O atoms from two coordinated water molecules. In the crystal structure, molecules are linked into a three-dimensional framework by O—H⋯O hydrogen bonds and C—H⋯π interactions.
MPCFormer: fast, performant and private Transformer inference with MPC
Enabling private inference is crucial for many cloud inference services that
are based on Transformer models. However, existing private inference solutions
for Transformers can increase the inference latency by more than 60x or
significantly compromise the quality of inference results. In this paper, we
design the framework MPCFormer using secure multi-party computation (MPC) and
knowledge distillation (KD). It can be used in tandem with many specifically
designed MPC-friendly approximations and trained Transformer models. MPCFormer
significantly speeds up Transformer model inference in MPC settings while
achieving ML performance similar to that of the input model. We evaluate
MPCFormer with various settings in MPC. On the IMDb dataset, it achieves
performance similar to BERT-Base while being 5.3x faster. On the GLUE
benchmark, it achieves 97% of BERT-Base's performance with a 2.2x speedup.
We show that MPCFormer remains effective with different trained Transformer
weights such as RoBERTa-Base and with larger models including BERT-Large. In
particular, it achieves performance similar to BERT-Large while being 5.93x
faster on the IMDb dataset.
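The MPC-friendly approximations mentioned here replace non-linearities that are expensive under secret sharing with cheap polynomial surrogates, then use KD to recover accuracy. One sketch in that spirit, a quadratic stand-in for softmax (the exact form and the shift constant are illustrative assumptions, not taken from the MPCFormer code):

```python
import numpy as np

def softmax(x):
    """Standard softmax; exp() is costly to evaluate under MPC."""
    e = np.exp(x - x.max())
    return e / e.sum()

def quad_softmax(x, c=5.0):
    """Quadratic softmax surrogate: (x + c)^2 / sum((x + c)^2).

    Squaring and one division map to cheap MPC primitives
    (multiplications), avoiding the exponential entirely. The shift c
    is a hypothetical constant that keeps the numerator positive on
    typical attention-score ranges; in practice such a surrogate is
    paired with knowledge distillation from the original model to
    recover the accuracy lost to the approximation.
    """
    z = (x + c) ** 2
    return z / z.sum()

scores = np.array([2.0, 1.0, 0.1])  # toy attention scores
p_exact = softmax(scores)
p_approx = quad_softmax(scores)
```

The surrogate preserves the normalization and the ranking of the scores while flattening the distribution, which is exactly the kind of gap KD is used to close.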