688 research outputs found

    Pulmonary alveolar type I cell population consists of two distinct subtypes that differ in cell fate.

    Pulmonary alveolar type I (AT1) cells cover more than 95% of the alveolar surface and are essential for the air-blood barrier function of the lungs. AT1 cells have been shown to retain developmental plasticity during alveolar regeneration. However, the development and heterogeneity of AT1 cells remain largely unknown. Here, we conducted a single-cell RNA-seq analysis to characterize postnatal AT1 cell development and identified insulin-like growth factor-binding protein 2 (Igfbp2) as a genetic marker specifically expressed in postnatal AT1 cells. The proportion of AT1 cells expressing Igfbp2 increases during alveologenesis and in alveoli newly formed after pneumonectomy (PNX). We found that the adult AT1 cell population contains both Hopx+Igfbp2+ and Hopx+Igfbp2- AT1 cells, which have distinct cell fates during alveolar regeneration. Using an Igfbp2-CreER mouse model, we demonstrate that Hopx+Igfbp2+ AT1 cells represent terminally differentiated AT1 cells that are unable to transdifferentiate into AT2 cells during post-PNX alveolar regeneration. Our study provides tools and insights that will guide future investigations into the molecular and cellular mechanisms underlying AT1 cell fate during lung development and regeneration.

    Continual Learning with Strong Experience Replay

    Continual Learning (CL) aims at incrementally learning new tasks without forgetting the knowledge acquired from old ones. Experience Replay (ER) is a simple and effective rehearsal-based strategy that optimizes the model with the current training data and a subset of old samples stored in a memory buffer. To further reduce forgetting, recent approaches extend ER with various techniques, such as model regularization and memory sampling. However, the prediction consistency between the new model and the old one on the current training data has seldom been explored, so less knowledge is preserved when few previous samples are available. To address this issue, we propose a CL method with Strong Experience Replay (SER), which, besides distilling past experience from the memory buffer, additionally utilizes future experiences mimicked on the current training data. In our method, the updated model produces outputs that approximate its original ones, which effectively preserves the acquired knowledge. Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
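    As a rough illustration of the consistency idea described above, the following PyTorch sketch combines standard experience replay with two distillation terms against a frozen copy of the old model. The loss weights, temperature, and exact distillation form are assumptions, not the paper's specification.

```python
# Hedged sketch of a Strong-Experience-Replay-style training step (PyTorch).
import torch
import torch.nn.functional as F

def ser_step(model, old_model, x_cur, y_cur, x_buf, y_buf,
             alpha=1.0, beta=1.0, temperature=2.0):
    """One step: plain ER plus consistency (distillation) terms between the
    new model and the frozen old model on buffer AND current data."""
    logits_cur = model(x_cur)
    logits_buf = model(x_buf)

    # Standard experience replay: cross-entropy on new data and buffer samples.
    loss = F.cross_entropy(logits_cur, y_cur) + F.cross_entropy(logits_buf, y_buf)

    with torch.no_grad():  # frozen snapshot of the model before this task
        old_cur = old_model(x_cur) / temperature
        old_buf = old_model(x_buf) / temperature

    # "Past experience": distill the old model's predictions on buffer samples.
    loss += alpha * F.kl_div(F.log_softmax(logits_buf / temperature, dim=-1),
                             F.softmax(old_buf, dim=-1), reduction="batchmean")
    # "Future experience": keep the new model consistent with the old one
    # on the current task's data, which the old model never trained on.
    loss += beta * F.kl_div(F.log_softmax(logits_cur / temperature, dim=-1),
                            F.softmax(old_cur, dim=-1), reduction="batchmean")
    return loss
```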

    Edge-level multi-constraint graph pattern matching with lung cancer knowledge graph

    Introduction: Traditional Graph Pattern Matching (GPM) research mainly focuses on improving the accuracy and efficiency of complex network analysis and fast subgraph retrieval. Despite their ability to return subgraphs quickly and accurately, these methods have seen little application to medical data. Methods: To overcome this limitation, building on existing research on GPM with the lung cancer knowledge graph, this paper introduces the Monte Carlo method and proposes TEM, an edge-level multi-constraint graph pattern matching algorithm for the lung cancer knowledge graph. Furthermore, we apply the Monte Carlo method to both nodes and edges and propose THM, a multi-constraint hologram pattern matching algorithm for the lung cancer knowledge graph. Results: The experiments verify the effectiveness and efficiency of the TEM algorithm. Discussion: This method effectively handles the uncertainty in the lung cancer knowledge graph and is significantly more efficient than existing algorithms.
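    The abstract does not spell out the TEM/THM procedures, so the Python sketch below only illustrates the general Monte Carlo idea of checking an uncertain knowledge-graph edge against several constraints; every name, distribution, and threshold in it is hypothetical.

```python
# Illustrative sketch only: Monte Carlo acceptance of an uncertain edge
# against multiple constraints. All attribute names and values are made up.
import random

def edge_satisfies(edge_attrs, constraints, n_samples=1000, threshold=0.9):
    """Estimate P(edge meets every constraint) when attribute values are
    uncertain (given as (mean, stddev) pairs); accept if the estimate
    exceeds a confidence threshold."""
    hits = 0
    for _ in range(n_samples):
        sampled = {k: random.gauss(mu, sd) for k, (mu, sd) in edge_attrs.items()}
        if all(check(sampled) for check in constraints):
            hits += 1
    return hits / n_samples >= threshold

# Example: an "evidence" score and a "cooccurrence" score on a knowledge-graph
# edge, each with uncertainty, checked against two lower-bound constraints.
edge = {"evidence": (0.8, 0.1), "cooccurrence": (0.6, 0.2)}
constraints = [lambda a: a["evidence"] > 0.5, lambda a: a["cooccurrence"] > 0.3]
print(edge_satisfies(edge, constraints))
```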

    Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning

    Translation-tailored large language models (LLMs) exhibit remarkable translation capabilities, even competing with supervised commercial translation systems. However, off-target translation remains an unsolved problem, especially for low-resource languages, hindering the development of accurate LLM-based translation models. To mitigate the off-target translation problem and enhance the translation performance of LLMs, recent works have either designed advanced prompting strategies to highlight the functionality of translation instructions or exploited the in-context learning ability of LLMs by feeding few-shot demonstrations. However, these methods do not fundamentally improve the LLMs' ability to follow translation instructions, especially the language-direction information. In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability (especially the translation direction) of LLMs. Specifically, we first tune LLMs with a maximum likelihood estimation loss on the translation dataset to elicit basic translation capabilities. In the second stage, we construct instruction-conflicting samples by randomly replacing the translation direction in the instruction with a wrong one, and then introduce an extra unlikelihood loss to learn from those samples. Experiments on IWSLT and WMT benchmarks with the LLaMA model, spanning 16 zero-shot directions, show that, compared to the competitive baseline of translation-finetuned LLaMA, our method effectively reduces the off-target translation ratio (by 53.3% on average), improving translation quality by an average of +5.7 SacreBLEU and +16.4 BLEURT. Analysis shows that our method preserves the model's general task performance on AlpacaEval. Code and models will be released at https://github.com/alphadl/LanguageAware_Tuning.
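    A minimal sketch of the second stage, assuming a Hugging Face-style model interface that returns .loss and .logits and uses -100 to mask padded label positions: instruction-conflicting samples are built by swapping the stated translation direction, and an unlikelihood term pushes down the probability of the reference translation under the wrong instruction. The loss weight and data format are assumptions.

```python
# Hedged sketch of instruction-conflicting samples + unlikelihood loss.
import random
import torch
import torch.nn.functional as F

def make_conflicting_instruction(instruction, src_lang, tgt_lang, languages):
    """Replace the target language in the instruction with a wrong one,
    e.g. 'Translate from English to German:' -> '... to French:'."""
    wrong = random.choice([l for l in languages if l not in (src_lang, tgt_lang)])
    return instruction.replace(tgt_lang, wrong)

def two_stage_losses(model, likely_batch, conflict_batch, lam=1.0):
    """MLE loss on correct (instruction, translation) pairs plus an
    unlikelihood loss on the same references paired with conflicting
    instructions."""
    mle = model(**likely_batch).loss  # standard cross-entropy / MLE

    out = model(**conflict_batch)
    logp = F.log_softmax(out.logits, dim=-1)
    labels = conflict_batch["labels"]
    tok_logp = logp.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    mask = labels != -100
    # Unlikelihood: -log(1 - p(token)) for tokens that should NOT be produced.
    unlikelihood = -(torch.log1p(-tok_logp.exp().clamp(max=1 - 1e-6)) * mask).sum() / mask.sum()
    return mle + lam * unlikelihood
```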

    Optimization of nano coating to reduce the thermal deformation of ball screws

    To reduce the thermal deformation of ball screws, this article discusses the preparation of a nano coating applied to ball screws to lower temperature rise and thereby thermal deformation. The cooling mechanism is also presented. Thermal channels and a relatively even distribution of graphene in the nano coating were observed in scanning electron microscope images. For the preparation of the nano coating, an optimization design was carried out to obtain the optimized material ratio and nozzle flow through an orthogonal experiment. The influence of the nano coating's design parameters on reducing thermal deformation was also discussed. The experimental results show that the maximum temperature rise, thermal deformation, and time to reach thermal balance decreased by 12.5%, 69.1%, and 46.3%, respectively. The effectiveness of the nano coating in reducing thermal deformation was validated experimentally.

    Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation

    For multilingual sequence-to-sequence pretrained language models (multilingual Seq2Seq PLMs), e.g., mBART, the self-supervised pretraining task is trained on a wide range of monolingual languages, e.g., 25 languages from CommonCrawl, while the downstream cross-lingual tasks generally operate on a bilingual language subset, e.g., English-German. This creates a cross-lingual data discrepancy, namely a domain discrepancy, and a cross-lingual learning-objective discrepancy, namely a task discrepancy, between the pretraining and finetuning stages. To bridge these cross-lingual domain and task gaps, we extend the vanilla pretrain-finetune pipeline with an extra code-switching restore task. Specifically, the first stage employs the self-supervised code-switching restore task as a pretext task, allowing the multilingual Seq2Seq PLM to acquire some in-domain alignment information. In the second stage, we fine-tune the model on labeled data as usual. Experiments on a variety of cross-lingual NLG tasks, including 12 bilingual translation tasks, 36 zero-shot translation tasks, and cross-lingual summarization tasks, show that our model consistently outperforms the strong baseline mBART. Comprehensive analyses indicate that our approach narrows the cross-lingual sentence representation distance and improves low-frequency word translation, with trivial computational cost.
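    A minimal sketch of the code-switching restore pretext task, assuming a bilingual dictionary is available for the corruption step; the switching ratio, dictionary format, and example entries are illustrative choices, not the paper's.

```python
# Hedged sketch: corrupt a sentence by code-switching some words into the
# target language and train the Seq2Seq PLM to restore the original.
import random

def code_switch(tokens, bilingual_dict, ratio=0.3):
    """Replace a fraction of source tokens with target-language translations;
    the model's restore objective is to map the result back to `tokens`."""
    corrupted = []
    for tok in tokens:
        if tok in bilingual_dict and random.random() < ratio:
            corrupted.append(random.choice(bilingual_dict[tok]))
        else:
            corrupted.append(tok)
    return corrupted

# Example (hypothetical English-German dictionary entries):
dictionary = {"house": ["Haus"], "green": ["grün"]}
src = ["the", "green", "house", "is", "old"]
noisy = code_switch(src, dictionary)
# Training pair for the restore task: input = noisy, target = src.
print(noisy, "->", src)
```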

    Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation

    Zero-shot translation (ZST), which is generally based on a multilingual neural machine translation model, aims to translate between language pairs unseen in the training data. The common practice to guide the zero-shot language mapping during inference is to deliberately insert source and target language IDs, e.g., one for English and one for German. Recent studies have shown that language IDs sometimes fail to navigate the ZST task, so it suffers from the off-target problem (non-target-language words appear in the generated translation), making it difficult to apply current multilingual translation models to a broad range of zero-shot language scenarios. To understand when and why the navigation capability of language IDs is weakened, we compare two extreme decoder-input cases in the ZST directions: the Off-Target (OFF) and On-Target (ON) cases. By contrastively visualizing the contextual word representations (CWRs) of these cases with teacher forcing, we show that 1) the CWRs of different languages are effectively distributed in separate regions when the sentence and ID are matched (ON setting), and 2) if the sentence and ID are unmatched (OFF setting), the CWRs of different languages are chaotically distributed. Our analyses suggest that although language IDs work well in ideal ON settings, they become fragile and lose their navigation ability when faced with off-target tokens, which commonly exist during inference but are rare in training. In response, we employ unlikelihood tuning on the negative (OFF) samples to minimize their probability, so that the language IDs can discriminate between on- and off-target tokens during training. Experiments spanning 40 ZST directions show that our method reduces the off-target ratio by 48.0% on average, leading to a +9.1 BLEU improvement with only an extra +0.3% tuning cost.
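    A minimal sketch of how the negative (OFF) samples might be constructed, assuming a "<2xx>"-style language ID token; the ID format, data layout, and sampling scheme are assumptions. ON pairs keep the usual MLE objective, while OFF pairs receive the unlikelihood penalty (maximize log(1 - p) over the off-target continuation), as in the unlikelihood sketch given earlier in this listing.

```python
# Hedged sketch: build matched (ON) and mismatched (OFF) decoder inputs so an
# unlikelihood loss can teach the language ID to reject off-target continuations.
import random

def build_on_off_pairs(example, corpora, languages):
    """example = (src_sentence, tgt_lang, tgt_sentence); corpora maps each
    language code to a pool of sentences in that language."""
    src, tgt_lang, tgt = example
    on = {"src": src, "dec_prefix": f"<2{tgt_lang}>", "target": tgt,
          "positive": True}

    off_lang = random.choice([l for l in languages if l != tgt_lang])
    off_sent = random.choice(corpora[off_lang])
    # Same target-language ID, but the continuation is in the wrong language:
    off = {"src": src, "dec_prefix": f"<2{tgt_lang}>", "target": off_sent,
           "positive": False}
    return on, off
```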