8 research outputs found

    Attention-Guided Contrastive Role Representations for Multi-Agent Reinforcement Learning

    Full text link
    Real-world multi-agent tasks usually involve dynamic team composition with the emergence of roles, which should also be key to efficient cooperation in multi-agent reinforcement learning (MARL). Drawing inspiration from the correlation between roles and agents' behavior patterns, we propose a novel framework of **A**ttention-guided **CO**ntrastive **R**ole representation learning for **M**ARL (**ACORM**) to promote behavior heterogeneity, knowledge transfer, and skillful coordination across agents. First, we introduce mutual information maximization to formalize role representation learning, derive a contrastive learning objective, and concisely approximate the distribution of negative pairs. Second, we leverage an attention mechanism to prompt the global state to attend to learned role representations in value decomposition, implicitly guiding agent coordination in a skillful role space to yield more expressive credit assignment. Experiments on challenging StarCraft II micromanagement and Google Research Football tasks demonstrate the state-of-the-art performance of our method and its advantages over existing approaches. Our code is available at [https://github.com/NJU-RL/ACORM](https://github.com/NJU-RL/ACORM).
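    As a concrete illustration of the contrastive objective described in this abstract, below is a minimal InfoNCE-style loss over role embeddings, where agents with similar behavior patterns form positive pairs. This is an illustrative sketch, not the authors' implementation: the tensor shapes, the temperature, and the use of discrete role labels to define positives are assumptions (ACORM derives its objective from mutual information maximization and approximates the negative-pair distribution differently).

```python
import torch
import torch.nn.functional as F

def contrastive_role_loss(role_emb, role_ids, temperature=0.1):
    """InfoNCE-style loss: agents sharing a role are positives, the rest negatives.

    role_emb: (n_agents, d) role representations; role_ids: (n_agents,) role labels.
    """
    z = F.normalize(role_emb, dim=-1)                # unit-norm embeddings
    sim = z @ z.t() / temperature                    # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    pos = ((role_ids.unsqueeze(0) == role_ids.unsqueeze(1)) & ~self_mask).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # mean log-likelihood of each agent's positive pairs; the clamp avoids a
    # zero division when an agent is the only member of its role
    n_pos = pos.sum(dim=1).clamp(min=1)
    return -(pos * log_prob).sum(dim=1).div(n_pos).mean()

# toy usage: 4 agents, 8-dim role embeddings, two behavior clusters
emb = torch.randn(4, 8, requires_grad=True)
contrastive_role_loss(emb, torch.tensor([0, 0, 1, 1])).backward()
```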

    TextBox 2.0: A Text Generation Library with Pre-trained Language Models

    Full text link
    To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers 13 common text generation tasks and their corresponding 83 datasets, and further incorporates 45 PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement 4 efficient training strategies and provide 4 generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite its rich functionality, the library is easy to use, either through the friendly Python API or the command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at: https://github.com/RUCAIBox/TextBox. Comment: Accepted by EMNLP 2022.
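    As a quick-start illustration of the unified pipeline described in this abstract (data loading, training, and evaluation in one call), below is a hypothetical usage sketch. The entry point and argument names are assumptions modeled on the library's described style, not a verified API; consult https://github.com/RUCAIBox/TextBox for the documented interface.

```python
# Hypothetical quick-start sketch; names here are illustrative assumptions,
# not TextBox's verified API.
from textbox import run_textbox  # assumed single entry point

run_textbox(config_dict={
    'model': 'BART',                     # one of the 45 supported PLMs
    'model_path': 'facebook/bart-base',  # Hugging Face checkpoint to fine-tune
    'dataset': 'samsum',                 # one of the 83 bundled datasets
})
```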

    Unveiling the roles of Sertoli cells lineage differentiation in reproductive development and disorders: a review

    Get PDF
    In mammals, gonadal somatic cell lineage differentiation determines the development of the bipotential gonad into either the ovary or testis. Sertoli cells, the only somatic cells in the spermatogenic tubules, support spermatogenesis during gonadal development. During embryonic Sertoli cell lineage differentiation, relevant genes, including WT1, GATA4, SRY, SOX9, AMH, PTGDS, SF1, and DMRT1, are expressed at specific times and in specific locations to ensure the correct differentiation of the embryo toward the male phenotype. Dysregulated development of Sertoli cells leads to gonadal malformations and male fertility disorders. Nevertheless, the molecular pathways underlying the embryonic origin of Sertoli cells remain elusive. By reviewing recent advances in research on embryonic Sertoli cell genesis and its key regulators, this review provides novel insights into sex determination in male mammals as well as the molecular mechanisms underlying the lineage differentiation of Sertoli cells in the male reproductive ridge.

    A Survey of Large Language Models

    Full text link
    Language is essentially a complex, intricate system of human expressions governed by grammatical rules. Developing capable AI algorithms for comprehending and mastering a language therefore poses a significant challenge. As a major approach, language modeling has been widely studied for language understanding and generation over the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling can lead to performance improvement, they have further studied the scaling effect by increasing model size even further. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve significant performance improvements but also exhibit special abilities that are not present in small-scale language models. To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size. Recently, research on LLMs has been greatly advanced by both academia and industry, and a remarkable milestone is the launch of ChatGPT, which has attracted widespread attention from society. The technical evolution of LLMs has been making an important impact on the entire AI community, and it may revolutionize the way we develop and use AI algorithms. In this survey, we review recent advances in LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs, namely pre-training, adaptation tuning, utilization, and capacity evaluation. We also summarize the available resources for developing LLMs and discuss remaining issues for future directions. Comment: ongoing work; 51 pages.

    Mn-single-atom nano-multizyme enabled NIR-II photoacoustically monitored, photothermally enhanced ROS storm for combined cancer therapy

    No full text
    Rationale: Realizing imaging-guided multi-modality cancer therapy with minimal side effects remains highly challenging. Methods: We devised a bioinspired hollow nitrogen-doped carbon sphere anchored with individually dispersed Mn atoms (Mn/N-HCN) via oxidation polymerization with a Triton micelle as a soft template, followed by carbonization and annealing. Enzyme kinetic analyses and optical characterization were performed to evaluate the imaging-guided, photothermally synergized nanocatalytic therapy. Results: Simultaneously mimicking several natural enzymes, namely peroxidase (POD), catalase (CAT), oxidase (OXD), and glutathione peroxidase (GPx), this nano-multizyme is able to produce highly cytotoxic hydroxyl radicals (•OH) and singlet oxygen (¹O₂) without external energy input, through parallel and serial catalytic reactions, and to suppress the upregulated antioxidant (glutathione) in tumors. Furthermore, the NIR-II-absorbing Mn/N-HCN permits photothermal therapy (PTT), enhancement of CAT activity, and photoacoustic (PA) imaging to monitor the accumulation kinetics of the nanozyme and the catalytic process in situ. Both in vitro and in vivo experiments demonstrate that near-infrared-II (NIR-II) PA-imaging-guided, photothermally enhanced and synergized nanocatalytic therapy is efficient at inducing apoptosis of cancerous cells and eradicating tumor tissue. Conclusions: This study not only demonstrates a new method for effective cancer diagnosis and therapy but also provides new insights into designing multi-functional nanozymes. This work was financially supported by the National Nature Science Foundation of China (U22A20349, 82120108016, 82071987, and 82001962), the Central Government Guided Local Science and Technology Development Fund Research Project (YDZJSX20231A055), the Research Project Supported by Shanxi Scholarship Council of China (No. 2020–177), the Fund Program for the Scientific Activities of Selected Returned Overseas Professionals in Shanxi Province (No. 20200006), Four Batches of Scientific Research Projects of Shanxi Provincial Health Commission (No. 2020TD11, 2020SYS15, 2020XM10), the Key Laboratory of Nano-imaging and Drug-loaded Preparation of Shanxi Province (No. 202104010910010), and the Singapore Ministry of Education (AcRF Tier-2 grant, MOE2019-T2-2–004).

    Structural insights into DNA N6-adenine methylation by the MTA1 complex

    No full text
    N6-methyldeoxyadenine (6mA) has recently been reported as a prevalent DNA modification in eukaryotes. The Tetrahymena thermophila MTA1 complex, consisting of four subunits, namely MTA1, MTA9, p1, and p2, is the first identified eukaryotic 6mA methyltransferase (MTase) complex. Unlike the prokaryotic 6mA MTases, which have been biochemically and structurally characterized, the operation mode of the MTA1 complex remains largely elusive. Here, we report the cryogenic electron microscopy structures of the quaternary MTA1 complex in S-adenosyl methionine (SAM)-bound (2.6 Å) and S-adenosyl homocysteine (SAH)-bound (2.8 Å) states. Using an AI-empowered integrative approach based on AlphaFold prediction and chemical cross-linking mass spectrometry, we further modeled a near-complete structure of the quaternary complex. Coupled with biochemical characterization, we revealed that MTA1 serves as the catalytic core; that MTA1, MTA9, and p1 likely accommodate the substrate DNA; and that p2 may facilitate the stabilization of MTA1. Together, these results offer insights into the molecular mechanism underpinning methylation by the MTA1 complex and the potential diversification of MTases for N6-adenine methylation.