80 research outputs found

    Continual Instruction Tuning for Large Multimodal Models

    Instruction tuning is now a widely adopted approach to aligning large multimodal models (LMMs) with human intent. It unifies the data format of vision-language tasks, enabling multi-task joint training. In practice, however, new vision-language tasks are constantly being created. Instead of re-training LMMs every time a new task arrives, continual learning offers the flexibility to continually and efficiently exploit the evolving data. This work explores two questions: 1) Do LMMs still suffer from catastrophic forgetting in continual instruction tuning? 2) Are the existing three classes of continual learning methods still applicable to the continual instruction tuning of LMMs? We conduct an extensive study to address these questions. First, we establish the first benchmark in this setting and show that catastrophic forgetting is still observed when LMMs are continually instruction-tuned; however, multi-task joint instruction tuning can improve the model's continual learning ability and mitigate forgetting. Second, we integrate and adapt classic continual learning methods to our context, demonstrating the efficacy of data replay and model expansion strategies across diverse scenarios, whereas regularization-based methods perform well only on models that have been jointly instruction-tuned on multiple tasks. Third, we analyze the correlation and forgetting dynamics between vision-language task pairs and propose task-similarity-informed regularization and model expansion methods for continual instruction tuning of LMMs. Experimental results show that our approach consistently improves the model's performance.
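    For readers unfamiliar with the regularization-based class of continual learning methods mentioned above, the following is a minimal, illustrative sketch of an EWC-style quadratic penalty added to the new-task loss. It is not the paper's task-similarity-informed variant, and all names (model, old_params, fisher, lambda_reg) are assumptions made for illustration.

    # Minimal sketch of a regularization-based continual learning step (EWC-style).
    # Shown only to illustrate the class of methods discussed in the abstract.
    import torch

    def ewc_penalty(model, old_params, fisher):
        # Quadratic penalty that discourages drift from parameters learned on
        # previous tasks, weighted by an (approximate) Fisher information term.
        penalty = 0.0
        for name, p in model.named_parameters():
            if name in old_params:
                penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return penalty

    def training_step_loss(batch_loss, model, old_params, fisher, lambda_reg=0.1):
        # Total loss = new-task instruction-tuning loss + regularization toward
        # the parameter snapshot taken after the previous task.
        return batch_loss + lambda_reg * ewc_penalty(model, old_params, fisher)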

    Effidit: Your AI Writing Assistant

    In this technical report, we introduce Effidit (Efficient and Intelligent Editing), a digital writing assistant that uses artificial intelligence (AI) technologies to help users write higher-quality text more efficiently. Previous writing assistants typically provide error checking (detecting and correcting spelling and grammatical errors) and limited text-rewriting functionality. With the emergence of large-scale neural language models, some systems can automatically complete a sentence or a paragraph. In Effidit, we significantly expand the capabilities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME). In the text completion category, Effidit supports generation-based sentence completion, retrieval-based sentence completion, and phrase completion, whereas many other writing assistants so far provide only one or two of these three functions. For text polishing, we offer three functions: (context-aware) phrase polishing, sentence paraphrasing, and sentence expansion, whereas many other writing assistants support only one or two functions in this category. The main contents of this report include the major modules of Effidit, the methods for implementing these modules, and evaluation results for some key methods. Comment: Technical report for Effidit. arXiv admin note: text overlap with arXiv:2202.0641
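    As a generic illustration of the generation-based sentence completion listed above, the short sketch below continues a partial sentence with an off-the-shelf causal language model. Effidit's actual models and serving stack are not described in the abstract, so the model choice and API used here are assumptions, not the system's implementation.

    # Generic generation-based sentence completion with an off-the-shelf model;
    # "gpt2" is a placeholder and is not claimed to be what Effidit uses.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    def complete_sentence(prefix, max_new_tokens=20):
        # Continue the user's partial sentence with a language-model generation.
        out = generator(prefix, max_new_tokens=max_new_tokens, num_return_sequences=1)
        return out[0]["generated_text"]

    print(complete_sentence("With the emergence of large-scale neural language models,"))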

    Symmetry induced selective excitation of topological states in SSH waveguide arrays

    The investigation of topological state transitions in carefully designed photonic lattices is of high interest for fundamental research, as well as for applied studies such as manipulating light flow in on-chip photonic systems. Here, we report on a topological phase transition between symmetric topological zero modes (TZMs) and antisymmetric TZMs in mirror-symmetric Su-Schrieffer-Heeger (SSH) waveguide arrays. The transition of the TZMs is realized by adjusting the coupling ratio between neighboring waveguide pairs, which is enabled by selective modulation of the refractive index in the waveguide gaps. Bi-directional topological transitions between symmetric and antisymmetric TZMs can be achieved with our proposed switching strategy. Selective excitation of the topological edge modes is demonstrated owing to the symmetry characteristics of the TZMs. This flexible manipulation of topological states is promising for on-chip light-flow control and may spark further investigations of symmetric/antisymmetric TZM transitions in other photonic topological frameworks.
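    To make the "coupling ratio" language concrete, the short numerical sketch below builds the standard SSH tight-binding coupling matrix and locates its near-zero-energy edge modes. It illustrates only the textbook SSH model; the paper's mirror-symmetric waveguide design and refractive-index modulation scheme are not modeled here.

    # Standard SSH chain: alternating couplings t1 (intra-cell) and t2 (inter-cell);
    # the ratio t1/t2 controls whether topological zero modes appear at the edges.
    import numpy as np

    def ssh_hamiltonian(n_cells, t1, t2):
        n = 2 * n_cells
        H = np.zeros((n, n))
        for i in range(n - 1):
            t = t1 if i % 2 == 0 else t2
            H[i, i + 1] = H[i + 1, i] = t
        return H

    H = ssh_hamiltonian(n_cells=20, t1=0.5, t2=1.0)   # t1 < t2: topological phase
    vals, vecs = np.linalg.eigh(H)
    zero_modes = np.where(np.abs(vals) < 1e-2)[0]     # near-zero-energy edge modes
    print("near-zero eigenvalues:", vals[zero_modes])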

    Novel Natural Inhibitors of CYP1A2 Identified by in Silico and in Vitro Screening

    Inhibition of cytochrome P450 (CYP) enzymes is a major cause of herb–drug interactions. The CYP1A2 enzyme plays a major role in drug metabolism in humans. Its broad substrate specificity, as well as its inhibition by a vast array of structurally diverse herbal active ingredients, points to the possibility of metabolic herb–drug interactions. Searching for CYP1A2 inhibitors among herbal medicines has therefore attracted growing attention from biological, chemical, and pharmacological scientists. In this work, a pharmacophore model combined with molecular docking is used to screen for inhibitors among herbal ingredients. First, different pharmacophore models were constructed and then validated and refined using 202 herbal ingredients. Second, the best pharmacophore model was used to virtually screen a curated database of 989 herbal compounds. The resulting hits (147 herbal compounds) were further filtered by a docking step and then tested in vitro. Finally, five of eighteen candidate compounds (272, 284, 300, 616 and 817) were found to inhibit CYP1A2 activity. The model developed in this study is efficient for in silico screening of large herbal databases to identify CYP1A2 inhibitors, and it can help prevent the risk of herb–drug interactions at an early stage of the drug development process.
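    The screening workflow above (pharmacophore filter, then docking, then in vitro testing) amounts to a staged filter over a compound library. The sketch below summarizes that structure with hypothetical placeholder scores; the actual study relied on dedicated pharmacophore-modeling and docking software rather than these stand-in functions.

    # Schematic two-stage virtual-screening filter; pharmacophore_fit and
    # dock_score are hypothetical fields standing in for real tool outputs.
    def pharmacophore_match(compound):
        # Placeholder: keep compounds that fit the pharmacophore model well.
        return compound.get("pharmacophore_fit", 0.0) >= 0.7

    def docking_score(compound):
        # Placeholder: lower (more negative) scores mean stronger predicted binding.
        return compound.get("dock_score", 0.0)

    def screen(library, dock_cutoff=-7.0):
        hits = [c for c in library if pharmacophore_match(c)]        # stage 1
        return [c for c in hits if docking_score(c) <= dock_cutoff]  # stage 2

    library = [
        {"id": "cmpd_272", "pharmacophore_fit": 0.85, "dock_score": -8.1},
        {"id": "cmpd_300", "pharmacophore_fit": 0.60, "dock_score": -9.0},
    ]
    print([c["id"] for c in screen(library)])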
