    Evolution of the RALF Gene Family in Plants: Gene Duplication and Selection Patterns

    Rapid alkalinization factors (RALFs) are small plant peptides that induce a rapid pH increase in the medium of plant cell suspension cultures and play a critical role in plant development. The evolutionary history of the RALF gene family remains unclear. To detail the phylogeny of these genes, this study characterized RALF genes in Arabidopsis, rice, poplar and maize. Phylogenetic trees, evolutionary patterns and molecular evolutionary rates were used to elucidate the evolutionary process of this gene family. In addition, the signatures of selection, expression patterns, and subcellular localization of RALFs were analyzed. We found that the RALF gene family underwent a rapid birth process after the separation of the eudicot and monocot lineages about 145 million years ago, that tandem duplication played a dominant role in the expansion of the Arabidopsis and rice RALF gene families, and that RALFs are under purifying selection according to estimates of the substitution rates of these genes. We also identified diverse expression patterns of RALF genes and a predominantly extracellular localization of RALF proteins. Our findings shed light on several key differences in RALF gene family evolution among plant species, which may provide a scaffold for future functional analysis of this family.
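    The purifying-selection inference above rests on comparing non-synonymous and synonymous substitution rates: the ratio omega = dN/dS falls below 1 under purifying selection, sits near 1 under neutral evolution, and exceeds 1 under positive selection. A minimal sketch of that decision rule follows; the tolerance band and example values are illustrative, not taken from the study.

        # Classify a selection regime from substitution rates (illustrative sketch).
        def selection_regime(dn: float, ds: float, tol: float = 0.05) -> str:
            """dn: non-synonymous rate (dN); ds: synonymous rate (dS), must be > 0;
            tol: assumed half-width of the band around omega = 1 treated as neutral."""
            if ds <= 0:
                raise ValueError("dS must be positive to form omega = dN/dS")
            omega = dn / ds
            if omega < 1 - tol:
                return "purifying selection"   # deleterious changes are removed
            if omega > 1 + tol:
                return "positive selection"    # amino-acid changes are favoured
            return "neutral evolution"         # omega is close to 1

        # Illustrative values only (not from the paper): omega = 0.08 / 0.40 = 0.2.
        print(selection_regime(0.08, 0.40))    # -> "purifying selection"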

    Coupling the valley degree of freedom to antiferromagnetic order

    Conventional electronics are invariably based on the intrinsic degrees of freedom of an electron, namely its charge and spin. The exploration of novel electronic degrees of freedom has important implications for both basic quantum physics and advanced information technology. Valley, as a new electronic degree of freedom, has received considerable attention in recent years. In this paper, we develop the theory of spin and valley physics in an antiferromagnetic honeycomb lattice. We show that by coupling the valley degree of freedom to antiferromagnetic order, an electronic degree of freedom emerges that is characterized by the product of the spin and valley indices, leading to a spin-valley dependent optical selection rule and Berry curvature-induced topological quantum transport. These properties enable optical polarization in the spin-valley space, and electrical detection and manipulation through the induced spin, valley and charge fluxes. The domain walls of an antiferromagnetic honeycomb lattice harbor valley-protected edge states that support spin-dependent transport. Finally, we employ first-principles calculations to show that the proposed optoelectronic properties can be realized in antiferromagnetic manganese chalcogenophosphates (MnPX_3, X = S, Se) in monolayer form.
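    A common way to exhibit the product index is a massive Dirac sketch: near each valley, the staggered antiferromagnetic exchange field acts as a Dirac mass whose sign is set by the product of the valley index tau = +/-1 and the spin index s = +/-1. The low-energy model below is a generic illustration under that assumption, not the paper's exact Hamiltonian:

        % Generic massive-Dirac sketch (assumed form, not the paper's model);
        % sigma acts on the sublattice, tau labels the valley, s the spin.
        \begin{align}
          H_{\tau s}(\mathbf{k}) &= \hbar v_F \left( \tau k_x \sigma_x + k_y \sigma_y \right)
              + \Delta_{\tau s}\, \sigma_z,
            \qquad \Delta_{\tau s} = \tau s\, m, \\
          \Omega_{\tau s}(\mathbf{k}) &= -\,\tau\, \frac{\hbar^2 v_F^2\, \Delta_{\tau s}}
              {2 \left( \hbar^2 v_F^2 k^2 + \Delta_{\tau s}^2 \right)^{3/2}}
            \qquad \text{(conduction band; sign conventions vary).}
        \end{align}

    Because the mass carries the factor tau*s, the optical selection rule and the sign of the Berry curvature are tied to the product of the spin and valley indices rather than to either index alone, which is the emergent degree of freedom described above.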

    SkillNet-X: A Multilingual Multitask Model with Sparsely Activated Skills

    Traditional multitask learning methods can generally exploit common knowledge only task-wise or language-wise, and therefore lose either cross-language or cross-task knowledge. This paper proposes a general multilingual multitask model, named SkillNet-X, which enables a single model to tackle many different tasks in different languages. To this end, we define several language-specific skills and task-specific skills, each of which corresponds to a skill module. SkillNet-X sparsely activates the skill modules that are relevant to either the target task or the target language. Acting as knowledge transit hubs, skill modules are capable of absorbing task-related knowledge and language-related knowledge consecutively. Building on the Transformer, we modify the multi-head attention layer and the feed-forward network layer to accommodate skill modules. We evaluate SkillNet-X on eleven natural language understanding datasets in four languages. Results show that SkillNet-X performs better than task-specific baselines and two multitask learning baselines (i.e., a dense joint model and a Mixture-of-Experts model). Furthermore, skill pre-training improves the performance of SkillNet-X on almost all datasets. To investigate the generalization of our model, we conduct experiments on two new tasks and find that SkillNet-X significantly outperforms the baselines.
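    The abstract describes the mechanism but not the implementation. The core idea, a feed-forward sublayer replaced by a set of skill modules of which only those matching the target task and target language are activated, can be sketched as follows; module names, sizes, and the output-averaging rule for combining active skills are assumptions for illustration, not the paper's actual design.

        # A minimal PyTorch-style sketch of a sparsely activated skill layer.
        import torch
        import torch.nn as nn

        class SkillFFN(nn.Module):
            """Feed-forward sublayer holding one module per skill; only the
            modules listed in `active_skills` run on a given batch."""

            def __init__(self, skills, d_model=768, d_hidden=3072):
                super().__init__()
                self.skills = nn.ModuleDict({
                    name: nn.Sequential(
                        nn.Linear(d_model, d_hidden),
                        nn.GELU(),
                        nn.Linear(d_hidden, d_model),
                    )
                    for name in skills
                })

            def forward(self, x, active_skills):
                # Run only the task- and language-relevant skill modules and
                # average their outputs (an assumed combination rule).
                outputs = [self.skills[name](x) for name in active_skills]
                return torch.stack(outputs, dim=0).mean(dim=0)

        # Usage: activate the skills for the target task and the target language.
        layer = SkillFFN(skills=["task_nli", "task_qa", "lang_en", "lang_zh"])
        h = torch.randn(2, 16, 768)                       # (batch, seq, d_model)
        out = layer(h, active_skills=["task_nli", "lang_zh"])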