
    Immunochemotherapy achieved a complete response for metastatic adenocarcinoma of unknown primary based on gene expression profiling: a case report and review of the literature

    Background: Cancer of unknown primary (CUP) is a malignant, aggressive tumor whose primary origin remains unknown despite thorough evaluation. CUP can be life-threatening, with a median overall survival of less than 1 year under empirical chemotherapy. Advances in gene detection technology enable identification of driver genes in malignant tumors and selection of appropriate precision therapy. Immunotherapy has ushered in a new era in cancer therapy, changing the way advanced tumors, including CUP, are treated. Combined with comprehensive clinical and pathological investigations, molecular analysis of the original tissue and detection of potential driver mutations may provide therapeutic recommendations for CUP.
    Case presentation: A 52-year-old female was admitted to hospital for dull abdominal pain, with peripancreatic lesions below the caudate lobe of the liver and enlarged posterior peritoneal lymph nodes. Conventional biopsy under endoscopic ultrasonography and laparoscopic biopsy both revealed poorly differentiated adenocarcinoma on immunohistochemical panels. To help identify the tumor origin and molecular characteristics, a 90-gene expression assay, tumor gene expression profiling by next-generation sequencing (NGS), and immunohistochemical (IHC) assessment of PD-L1 expression were employed. Although gastroenteroscopy revealed no gastroesophageal lesions, the 90-gene expression assay yielded a similarity score indicating that the most likely primary site was gastric/esophageal cancer. NGS revealed a high tumor mutational burden (TMB; 19.3 mutations/Mb), but no druggable driver genes were identified. The Dako PD-L1 22C3 IHC assay showed a tumor proportion score (TPS) of 35%. Given the presence of negative predictive biomarkers for immunotherapy, including an adenomatous polyposis coli (APC) c.646C>T mutation at exon 7 and a Janus kinase 1 (JAK1) mutation, the patient received immunochemotherapy rather than immunotherapy alone. She was successfully treated with six cycles of nivolumab plus carboplatin and albumin-bound nanoparticle paclitaxel followed by nivolumab maintenance, which achieved a complete response (CR) maintained for 2 years without severe adverse events.
    Conclusions: This case highlights the value of multidisciplinary diagnosis and individualized precision treatment in CUP. Further investigation is needed, as an individualized treatment approach combining immunotherapy and chemotherapy based on tumor molecular characteristics and immunotherapy predictors is expected to improve the outcome of CUP therapy.

    Interaction of autophagy with microRNAs and their potential therapeutic implications in human cancers

    Autophagy is a tightly regulated intracellular self-digestive process involving the lysosomal degradation of cytoplasmic organelles and proteins. A number of studies have shown that autophagy is dysregulated during cancer initiation and progression, and in cancer cells under various stress conditions. As a catabolic pathway conserved among eukaryotes, autophagy is regulated by autophagy-related genes and pathways. MicroRNAs (miRNAs) are small, non-coding endogenous RNAs that may regulate almost every cellular process, including autophagy; autophagy, in turn, is also involved in regulating miRNA expression and homeostasis. Here we review the literature on the interaction of miRNAs with autophagy and the application of miRNA-mediated autophagic networks as a promising target in preclinical cancer models. Furthermore, strategies for miRNA delivery in miRNA-based anti-cancer therapy are also summarized and discussed.

    Thrust: Adaptively Propels Large Language Models with External Knowledge

    Although large-scale pre-trained language models (PTLMs) have been shown to encode rich knowledge in their model parameters, that inherent knowledge can be opaque or static, making external knowledge necessary. However, existing information retrieval techniques can be costly and may even introduce noisy or misleading knowledge. To address these challenges, we propose instance-level adaptive propulsion of external knowledge (IAPEK), in which retrieval is performed only when necessary. To achieve this goal, we propose measuring whether a PTLM contains enough knowledge to solve an instance with a novel metric, Thrust, which leverages the representation distribution of a small number of seen instances. Extensive experiments demonstrate that Thrust is a good measure of PTLMs' instance-level knowledgeability. Moreover, using the Thrust score as the retrieval indicator achieves significantly higher cost-efficiency than naive use of external knowledge on 88% of the evaluated tasks, with a 26% average performance improvement. Such findings shed light on the real-world practice of knowledge-enhanced LMs under a limited knowledge-seeking budget due to computation latency or cost. Comment: 13 pages, 6 figures.
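    The abstract describes Thrust as a score computed from the representation distribution of a small number of seen instances, used to decide whether to retrieve at all. A minimal sketch of that gating idea follows; the inverse-square centroid weighting and the names thrust_score, answer, and threshold are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def thrust_score(query_vec, centroids, cluster_sizes):
    """Score how 'known' a query looks, using cluster centroids built from
    hidden representations of a few seen instances. Higher scores mean the
    query sits close to large, dense knowledge clusters (assumed weighting)."""
    vecs = centroids - query_vec                        # (C, d) vectors toward each centroid
    dists = np.linalg.norm(vecs, axis=1) + 1e-8         # distances to centroids
    # inverse-square weighting: nearby, large clusters pull harder (illustrative choice)
    pull = (cluster_sizes / dists**2)[:, None] * (vecs / dists[:, None])
    return np.linalg.norm(pull.mean(axis=0))

def answer(query_vec, centroids, cluster_sizes, threshold,
           solve_directly, solve_with_retrieval):
    """IAPEK-style gate: only pay for retrieval when the score suggests the
    model's own knowledge is insufficient."""
    if thrust_score(query_vec, centroids, cluster_sizes) >= threshold:
        return solve_directly()
    return solve_with_retrieval()
```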

    Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models

    Retrieval-augmented language models (RALMs) represent a substantial advancement in the capabilities of large language models, notably in reducing factual hallucination by leveraging external knowledge sources. However, the reliability of the retrieved information is not always guaranteed: retrieving irrelevant data can lead to misguided responses and may cause the model to overlook its inherent knowledge, even when it possesses adequate information to address the query. Moreover, standard RALMs often struggle to assess whether they possess adequate knowledge, both intrinsic and retrieved, to provide an accurate answer. In situations where knowledge is lacking, these systems should ideally respond with "unknown" when the answer is unattainable. In response to these challenges, we introduce Chain-of-Noting (CoN), a novel approach aimed at improving the robustness of RALMs when facing noisy, irrelevant documents and when handling unknown scenarios. The core idea of CoN is to generate sequential reading notes for the retrieved documents, enabling a thorough evaluation of their relevance to the given question and integrating this information to formulate the final answer. We employed ChatGPT to create training data for CoN, which was subsequently used to fine-tune a LLaMa-2 7B model. Our experiments across four open-domain QA benchmarks show that RALMs equipped with CoN significantly outperform standard RALMs. Notably, CoN achieves an average improvement of +7.9 in EM score given entirely noisy retrieved documents and +10.5 in rejection rate for real-time questions that fall outside the pre-training knowledge scope. Comment: Preprint.
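    The core mechanism, writing a reading note per retrieved document before committing to an answer, can be expressed as a single prompt template. The sketch below assumes a generic text-in/text-out callable named generate; the exact prompt wording used to build CoN's training data is not given in the abstract, so this phrasing is only illustrative.

```python
def chain_of_note_answer(question, documents, generate):
    """CoN-style prompting sketch: ask the model to write a short reading note
    per retrieved document, judge its relevance, then answer (or say 'unknown').
    `generate` is any hypothetical LLM callable taking a prompt string."""
    doc_block = "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(documents))
    prompt = (
        f"Question: {question}\n\n{doc_block}\n\n"
        "For each document, write a brief reading note stating whether it is "
        "relevant to the question and what it contributes. Then, using the "
        "relevant notes (or your own knowledge if it suffices), give the final "
        "answer. If neither the documents nor your knowledge suffice, answer "
        "'unknown'.\nNotes:"
    )
    return generate(prompt)
```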

    PIVOINE: Instruction Tuning for Open-world Information Extraction

    We consider the problem of open-world information extraction (Open-world IE), which extracts comprehensive entity profiles from unstructured texts. Unlike the conventional closed-world setting of information extraction (IE), Open-world IE considers a more general situation where entities and relations may lie beyond a predefined ontology. More importantly, we seek to develop a large language model (LLM) able to perform Open-world IE and extract desirable entity profiles characterized by (possibly fine-grained) natural language instructions. We achieve this by finetuning LLMs using instruction tuning. In particular, we construct INSTRUCTOPENWIKI, a substantial instruction-tuning dataset for Open-world IE enriched with a comprehensive corpus, extensive annotations, and diverse instructions. We finetune the pretrained BLOOM models on INSTRUCTOPENWIKI and obtain PIVOINE, an LLM for Open-world IE with strong instruction-following capabilities. Our experiments demonstrate that PIVOINE significantly outperforms traditional closed-world methods and other LLM baselines, displaying impressive generalization on both unseen instructions and out-of-ontology cases. Consequently, PIVOINE emerges as a promising solution for effectively tackling the open-world challenge in IE.
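    Since the abstract frames open-world IE as instruction tuning over (instruction, passage, entity-profile) records, the sketch below shows what one such training record might look like. The field names and the JSON target format are assumptions made for illustration; the actual schema of INSTRUCTOPENWIKI is not described in the abstract.

```python
import json

def build_ie_example(instruction, passage, entity_profiles):
    """Format one hypothetical instruction-tuning record for open-world IE:
    a natural-language instruction, the source passage, and the target entity
    profiles serialized as JSON that the model should learn to emit."""
    return {
        "prompt": f"{instruction}\n\nText: {passage}\n\nEntities:",
        "completion": json.dumps(entity_profiles, ensure_ascii=False),
    }

example = build_ie_example(
    instruction="Extract all entities with their types and short descriptions.",
    passage="Marie Curie was a physicist and chemist who discovered polonium.",
    entity_profiles=[
        {"mention": "Marie Curie", "type": "person",
         "description": "physicist and chemist"},
        {"mention": "polonium", "type": "chemical element",
         "description": "element discovered by Marie Curie"},
    ],
)
```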

    Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models

    Fully-parametric language models generally require a huge number of model parameters to store the knowledge needed to solve multiple natural language tasks in zero/few-shot settings. In addition, it is hard for them to adapt to evolving world knowledge without costly model re-training. In this paper, we develop a novel semi-parametric language model architecture, Knowledge-in-Context (KiC), which empowers a parametric text-to-text language model with a knowledge-rich external memory. Specifically, the external memory contains six different types of knowledge: entity, dictionary, commonsense, event, script, and causality knowledge. For each input instance, the KiC model adaptively selects a knowledge type and retrieves the most helpful pieces of knowledge. The input instance, along with its knowledge augmentation, is fed into a text-to-text model (e.g., T5) to generate the output answer, where both the input and the output are in natural-language form after prompting. Interestingly, we find that KiC can be viewed as a special mixture-of-experts (MoE) model, where the knowledge selector plays the role of the router that determines the sequence-to-expert assignment. This key observation inspires us to develop a novel algorithm for training KiC with an instance-adaptive knowledge selector. As a knowledge-rich semi-parametric language model, KiC needs only a much smaller parametric part to achieve superior zero-shot performance on unseen tasks. Evaluating on 40+ different tasks, we show that KiC_Large, with 770M parameters, outperforms large language models (LMs) that are 4-39x larger by a large margin. We also demonstrate that KiC exhibits emergent abilities at a much smaller model scale than fully-parametric models.
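    The described flow, routing an instance to one of six knowledge types, retrieving the most helpful pieces, and feeding the augmented text to a text-to-text model, can be sketched as follows. The callables select_type, retrieve, and t5_generate stand in for the paper's learned selector, memory lookup, and T5 backbone and are purely illustrative, as is the prompt format.

```python
KNOWLEDGE_TYPES = ["entity", "dictionary", "commonsense", "event", "script", "causality"]

def kic_forward(instance_text, select_type, retrieve, t5_generate):
    """KiC-style flow sketch: a selector routes the instance to one of six
    knowledge memories, the most helpful pieces are retrieved, and the
    augmented input goes to a text-to-text model. All three callables are
    placeholders for learned components, not a real API."""
    k_type = select_type(instance_text, KNOWLEDGE_TYPES)    # instance-adaptive routing
    knowledge = retrieve(instance_text, k_type)              # top pieces from that memory
    augmented = f"knowledge [{k_type}]: {knowledge}\n\ninput: {instance_text}"
    return t5_generate(augmented)                             # natural-language answer
```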

    From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited

    Graph-based semi-supervised learning (GSSL) has long been a hot research topic. Traditional methods are generally shallow learners based on the cluster assumption. Recently, graph convolutional networks (GCNs) have become the predominant techniques owing to their promising performance. In this paper, we theoretically discuss the relationship between these two types of methods in a unified optimization framework. One of the most intriguing findings is that, unlike traditional methods, typical GCNs may not jointly consider the graph structure and the label information at each layer. Motivated by this, we further propose three simple but powerful graph convolution methods. The first is a supervised method, OGC, which guides the graph convolution process with labels. The other two are unsupervised methods, GGC and its multi-scale version GGCM, both of which aim to preserve graph structure information during the convolution process. Finally, we conduct extensive experiments to show the effectiveness of our methods.
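    As a rough illustration of what "guiding the graph convolution process with labels" can mean, the sketch below alternates neighborhood averaging with re-anchoring of labeled nodes. This is closer to classic label propagation than to the paper's actual OGC update rule and is offered only as a hypothetical baseline under that assumption.

```python
import numpy as np

def label_guided_smoothing(adj_norm, labels_onehot, labeled_mask, steps=16, clamp=0.9):
    """Hypothetical sketch of coupling graph smoothing with label supervision:
    soft labels are repeatedly averaged over neighbors (graph structure) and
    labeled nodes are re-anchored to their known classes (label information).
    adj_norm: (N, N) normalized adjacency; labels_onehot: (N, C);
    labeled_mask: boolean (N,). Not the paper's OGC update."""
    z = np.zeros_like(labels_onehot, dtype=float)
    z[labeled_mask] = labels_onehot[labeled_mask]
    for _ in range(steps):
        z = adj_norm @ z                                   # smooth over the graph
        z[labeled_mask] = (clamp * labels_onehot[labeled_mask]
                           + (1 - clamp) * z[labeled_mask])  # keep labeled nodes anchored
    return z.argmax(axis=1)                                # predicted class per node
```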

    Numerical simulation and experimental investigation of diesel fuel reforming over a Pt/CeO2-Al2O3 catalyst

    In order to benefit from a realistic hydrogen production device mounted on a vehicle, the effects of the process parameters on H2 and CO yield need to be resolved. In this study, a reduced mechanism for reforming n-heptane (as a diesel surrogate) over a Pt/CeO2-Al2O3 catalyst is adopted to investigate the effects of the process parameters on H2 and CO yield, and preferred process parameters are identified. In addition, reforming bench tests of diesel fuel and n-heptane under typical diesel engine operating conditions are compared. The n-heptane reforming simulations show that the maximum H2 and CO yields approach unity as the gas hourly space velocity (GHSV) decreases and the reaction temperature increases, and that a GHSV of 10,000 1/h, an O2/C ratio of 0.6, and a reaction temperature of 500 °C are preferable. The comparative experiments reveal that the trends in H2 and CO yield are consistent between the two fuels, although the average H2 and CO yields differ noticeably. To some extent, the characteristics of n-heptane reforming can therefore represent the H2 and CO yield features of diesel fuel reforming at typical reaction temperatures.