43 research outputs found

    A Long-Acting Curcumin Nanoparticle/In Situ Hydrogel Composite for the Treatment of Uveal Melanoma

    Uveal melanoma (UM) is the most common primary intraocular tumor in adults and carries high mortality. To improve the prognosis and survival of UM patients, it is critical to inhibit tumor progression and metastasis as early as possible after the initial presentation and diagnosis of the disease. Sustained local delivery of antitumor therapeutics to the posterior region can potentially achieve long-term UM inhibition, improve targeted therapeutic delivery to the posterior segment, and reduce injection frequency, thereby improving patient compliance. To address this highly unmet medical need in UM therapy, a bioinspired in situ gelling hydrogel system composed of the naturally occurring biopolymers collagen and hyaluronic acid was developed in the present research. Curcumin, with its anti-cancer, anti-metastatic effects and good ocular safety, was chosen as the model therapeutic. The developed in situ gelling delivery system gelled at 37 °C within two minutes and demonstrated excellent biocompatibility and slow degradation. The curcumin-loaded nanoparticle/hydrogel composite sustained payload release for up to four weeks. The optimized nanoparticle/hydrogel composite showed effective inhibition of human UM cell proliferation. This novel nanoparticle/in situ hydrogel composite demonstrated great potential for the treatment of this rare and devastating intraocular cancer.

    DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion Recognition

    This paper presents our pioneering effort for emotion recognition in conversation (ERC) with pre-trained language models. Unlike regular documents, conversational utterances appear alternately from different parties and are usually organized as hierarchical structures in previous work. Such structures are not conducive to the application of pre-trained language models such as XLNet. To address this issue, we propose an all-in-one XLNet model, namely DialogXL, with enhanced memory to store longer historical context and dialog-aware self-attention to deal with multi-party structures. Specifically, we first modify the recurrence mechanism of XLNet from segment-level to utterance-level in order to better model conversational data. Second, we introduce dialog-aware self-attention to replace the vanilla self-attention in XLNet and capture useful intra- and inter-speaker dependencies. Extensive experiments are conducted on four ERC benchmarks with mainstream models presented for comparison. The experimental results show that the proposed model outperforms the baselines on all the datasets. Several other experiments, such as an ablation study and error analysis, are also conducted, and the results confirm the role of the critical modules of DialogXL. (Comment: Accepted at the AAAI 2021 main conference.)
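
    The intra-/inter-speaker dependency idea in the abstract can be illustrated with attention masks derived from per-token speaker ids. The sketch below is a minimal NumPy illustration of the masking concept, not DialogXL's actual implementation (which builds the masks inside XLNet's attention heads):

```python
import numpy as np

def speaker_masks(speaker_ids):
    """Build boolean attention masks from per-token speaker ids.

    intra[i, j] is True when tokens i and j come from the same speaker,
    inter[i, j] when they come from different speakers. Separate attention
    heads restricted by each mask can then capture intra- and inter-speaker
    dependencies, as in dialog-aware self-attention.
    """
    ids = np.asarray(speaker_ids)
    same = ids[:, None] == ids[None, :]  # pairwise same-speaker comparison
    return same, ~same

# Six tokens from a two-party dialog: A A B B A B
intra, inter = speaker_masks([0, 0, 1, 1, 0, 1])
# Under the intra mask, token 0 (speaker A) may attend only to tokens 0, 1, 4.
```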

    Joint Generator-Ranker Learning for Natural Language Generation

    Generate-then-rank is a widely used mechanism for text generation, where a generator produces multiple text candidates and a ranker chooses the best one among them. However, existing methods usually train the generator and the ranker individually, neglecting the mutual feedback that could further enhance generation quality. To tackle this limitation, we propose JGR, a novel joint training algorithm that integrates the generator and the ranker in a single framework. JGR optimizes the generator with a hybrid objective that combines data likelihood and ranker reward, and trains the ranker with a contrastive loss that compares the generator outputs. By iteratively updating the generator and the ranker, JGR can effectively harmonize their learning and enhance their quality jointly. We evaluate JGR on various text generation tasks and demonstrate that it surpasses existing methods on four public datasets across three common generation scenarios. Our code and models are publicly available at https://github.com/microsoft/ProphetNet/tree/master/JGR.
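
    The two losses named in the abstract (a hybrid likelihood-plus-reward objective for the generator, a contrastive loss for the ranker) can be sketched in a few lines. This is an illustrative form only; the weighting `alpha`, the mean-reward baseline, and the hinge margin are assumptions, not JGR's exact formulation:

```python
import numpy as np

def generator_loss(nll, logp_samples, rewards, alpha=0.5):
    """Hybrid objective: a maximum-likelihood term (nll) plus a
    policy-gradient-style term that upweights sampled candidates the
    ranker rewards highly. A mean-reward baseline reduces variance."""
    baseline = rewards.mean()
    rl_term = -((rewards - baseline) * logp_samples).mean()
    return nll + alpha * rl_term

def ranker_loss(scores, best_idx, margin=1.0):
    """Contrastive (hinge) loss over generator outputs: the ranker should
    score the best candidate at least `margin` above every other one."""
    gaps = margin - (scores[best_idx] - np.delete(scores, best_idx))
    return np.maximum(0.0, gaps).mean()
```

    Alternating these two updates is what lets the ranker's feedback shape the generator while fresh generator samples keep the ranker's training distribution current.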

    Small LLMs Are Weak Tool Learners: A Multi-LLM Agent

    Large language model (LLM) agents significantly extend the capabilities of standalone LLMs, empowering them to interact with external tools (e.g., APIs, functions) and complete various tasks in a self-directed fashion. The challenge of tool use demands that LLMs not only understand user queries and generate answers accurately but also excel at task planning, tool invocation, and result summarization. While traditional works focus on training a single LLM with all of these capabilities, performance limitations become apparent, particularly with smaller models. To overcome these challenges, we propose a novel approach that decomposes the aforementioned capabilities into a planner, a caller, and a summarizer. Each component is implemented by a single LLM that focuses on a specific capability and collaborates with the others to accomplish the task. This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability. To effectively train this framework, we introduce a two-stage training paradigm. First, we fine-tune a backbone LLM on the entire dataset without discriminating between sub-tasks, providing the model with a comprehensive understanding of the task. Second, the fine-tuned LLM is used to instantiate the planner, caller, and summarizer, respectively, which are continually fine-tuned on their respective sub-tasks. Evaluation across various tool-use benchmarks illustrates that our proposed multi-LLM framework surpasses the traditional single-LLM approach, highlighting its efficacy and advantages in tool learning. (Comment: Work in progress; GitHub repo: https://github.com/X-PLUG/Multi-LLM-Agen)
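
    The planner/caller/summarizer decomposition amounts to a simple control loop. The sketch below shows the loop's structure under the assumption that each role is a callable (in the paper each would be a separately fine-tuned LLM; plain functions stand in for them here, and the step/observation formats are illustrative):

```python
def run_agent(query, planner, caller, summarizer, tools):
    """Decomposed tool-use loop: the planner picks the next tool (or
    decides to finish), the caller produces the concrete invocation
    arguments, and the summarizer turns the collected observations
    into the final answer."""
    observations = []
    while True:
        step = planner(query, observations)  # e.g. {"tool": "add"} or {"finish": True}
        if step.get("finish"):
            break
        name = step["tool"]
        args = caller(query, name, observations)   # concrete invocation
        observations.append((name, tools[name](**args)))
    return summarizer(query, observations)

# Toy stand-ins: the planner calls one tool, then finishes.
planner = lambda q, obs: {"finish": True} if obs else {"tool": "add"}
caller = lambda q, tool, obs: {"a": 2, "b": 3}
summarizer = lambda q, obs: f"answer: {obs[-1][1]}"
answer = run_agent("what is 2 + 3?", planner, caller, summarizer,
                   {"add": lambda a, b: a + b})
```

    Because the three roles only communicate through the query and the observation list, each can be swapped for a smaller or larger model independently, which is the modularity argument the abstract makes.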

    ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models

    Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and exhibit planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent frameworks that equip LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library with a customizable engine design that supports model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of the ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1,000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library (https://github.com/modelscope/modelscope-agent) and an online demo (https://modelscope.cn/studios/damo/ModelScopeGPT/summary) are now publicly available.
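
    Two of the pipeline stages named above, tool registration and tool retrieval, can be sketched together. This is a minimal stand-in, not ModelScope-Agent's API: a real system would use a learned retriever, whereas the keyword-overlap scoring here is purely illustrative:

```python
class ToolRegistry:
    """Minimal register/retrieve sketch for a tool-use pipeline."""

    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        """Register a callable tool under a name with a natural-language
        description that the retriever can match queries against."""
        self._tools[name] = {"description": description, "fn": fn}

    def retrieve(self, query, top_k=1):
        """Rank registered tools by word overlap between the query and
        each tool's description, returning the top_k tool names."""
        words = set(query.lower().split())
        ranked = sorted(
            self._tools.items(),
            key=lambda item: -len(words & set(item[1]["description"].lower().split())),
        )
        return [name for name, _ in ranked[:top_k]]

registry = ToolRegistry()
registry.register("translate", "translate text between languages", lambda s: s)
registry.register("weather", "get the current weather forecast", lambda city: "sunny")
best = registry.retrieve("translate this text")
```

    Retrieval before invocation matters at the scale the abstract describes (1,000+ connected models): the controller LLM cannot hold every tool description in context, so only the retrieved candidates are surfaced to it.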

    The World Federation of Democratic Youth and Bruno Bernini's encounter with Mao's China

    This paper examines the role played by adult-led youth groups in providing avenues for early encounters between Italian and Chinese Communists in the '50s. In particular, it focuses on the links built up within international organisations tied to the Soviet-sponsored peace movement at a time when direct exchange between the Italian and Chinese Communist parties had yet to start. Relying on a large variety of primary and secondary sources, some of which have never been used before, I provide evidence of how participation in Soviet-led international organisations made early political contacts and interactions possible. The focus is on Bruno Bernini, whose personal experience in China is examined within the context of the World Federation of Democratic Youth's policies and initiatives in the early and mid-'50s.

    Local Delivery Strategies for Peptides and Proteins into the CNS: Status Quo, Challenges, and Future Perspectives

    Over the past decades, peptides and proteins have become increasingly important in the treatment of various human diseases and conditions owing to their specificity, potency, and minimal off-target toxicity. However, the practically impermeable blood-brain barrier (BBB) limits the entry of macromolecular therapeutics into the central nervous system (CNS). Consequently, clinical translation of peptide/protein therapeutics for the treatment of CNS diseases has been limited. Developing effective delivery strategies for peptides and proteins has therefore gained extensive attention, in particular localized delivery strategies, which circumvent this physiological barrier to introduce macromolecular therapeutics directly into the CNS, improving therapeutic effects and reducing systemic side effects. Here, we discuss various local administration and formulation strategies that have shown success in the treatment of CNS diseases using peptide/protein therapeutics. Lastly, we discuss the challenges and future perspectives of these approaches.

    Passenger satisfaction evaluation model for Urban rail transit: A structural equation modeling based on partial least squares

    Urban rail transit plays an important role in the economic vitality of urban areas. Providing high-quality service is essential both to promote public transportation among its users and to reduce traffic congestion by shifting people away from private car use. For this reason, it is essential to understand passenger satisfaction with urban rail transit from a quantitative and systematic perspective. This paper borrows the fundamental concept of the American Customer Satisfaction Index (ACSI) model to establish a passenger satisfaction index (PSI) evaluation model for urban rail transit in China. A structural equation modeling (SEM) method and its parameter estimation method, partial least squares (PLS), are applied to estimate the proposed model. An evaluation indicator system comprising three levels of indicators is established to measure passengers' satisfaction with the services offered by rail transit operating companies. The satisfaction index is computed to quantify the degree of passenger satisfaction, and an importance-performance analysis (IPA) matrix is used as an auxiliary tool to show the strengths and weaknesses of the rail transit services. Suzhou Rail Transit Line 1 was used as a case study: four models with different latent constructs or estimation methods were built and compared, demonstrating that the proposed PSI model based on the PLS estimation method was reliable and that the signs and magnitudes of its parameters were reasonable. The causality between passenger satisfaction and its influencing factors was confirmed by the path coefficients of the model.
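
    The IPA step the abstract mentions is a simple quadrant classification: each indicator is placed by its importance (which, in a PLS-SEM setting, would come from path coefficients) and its performance (observed satisfaction scores), split at the respective means. The indicator names and scores below are illustrative, not the paper's data:

```python
def ipa_quadrants(indicators):
    """Classify (name, importance, performance) triples into the four
    standard importance-performance analysis quadrants, splitting at
    the mean importance and mean performance."""
    imp_mean = sum(i for _, i, _ in indicators) / len(indicators)
    perf_mean = sum(p for _, _, p in indicators) / len(indicators)
    labels = {
        (True, True): "keep up the good work",
        (True, False): "concentrate here",
        (False, True): "possible overkill",
        (False, False): "low priority",
    }
    return {name: labels[(imp >= imp_mean, perf >= perf_mean)]
            for name, imp, perf in indicators}

result = ipa_quadrants([
    ("punctuality", 0.9, 0.8),
    ("comfort", 0.4, 0.9),
    ("crowding", 0.8, 0.3),
    ("signage", 0.3, 0.2),
])
```

    High-importance, low-performance indicators ("concentrate here") are where an operator's improvement effort pays off most, which is how the IPA matrix surfaces service strengths and weaknesses.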