8,258 research outputs found

    Neurotropin suppresses inflammatory cytokine expression and cell death through suppression of NF-κB and JNK in hepatocytes.

    Get PDF
    Inflammatory response and cell death in hepatocytes are hallmarks of chronic liver disease and can therefore be effective therapeutic targets. Neurotropin® (NTP) is a drug widely used in Japan and China to treat chronic pain. Although NTP has been demonstrated to suppress chronic pain through the descending pain inhibitory system, its mechanism of action remains elusive. We hypothesize that NTP suppresses inflammatory pathways, thereby attenuating disease progression. In the present study, we investigated whether NTP suppresses inflammatory signaling and cell death pathways induced by interleukin-1β (IL-1β) and tumor necrosis factor-α (TNFα) in hepatocytes. NTP suppressed nuclear factor-κB (NF-κB) activation induced by IL-1β and TNFα, as assessed using hepatocytes isolated from NF-κB-green fluorescent protein (GFP) reporter mice and an NF-κB-luciferase reporter system. The expression of the NF-κB target genes Il6, Nos2, Cxcl1, Ccl5 and Cxcl2 induced by IL-1β and TNFα was suppressed after NTP treatment. We also found that NTP suppressed JNK phosphorylation induced by IL-1β and TNFα. Because JNK activation contributes to hepatocyte death, we further determined that NTP treatment suppressed hepatocyte death induced by IL-1β and TNFα in combination with actinomycin D. Taken together, our data demonstrate that NTP attenuates IL-1β- and TNFα-mediated inflammatory cytokine expression and cell death in hepatocytes through the suppression of NF-κB and JNK. The results from the present study suggest that NTP may become a preventive or therapeutic strategy for alcoholic and non-alcoholic fatty liver disease, in which NF-κB and JNK are thought to take part.

    The Impact of Online Word-of-mouth and Negative Media Exposure on Consumer Habitual Skepticism: The Mediating Effect of Attribution

    Get PDF
    How does habitual skepticism come into being? In this research, the causes of consumer habitual skepticism are explored from the perspective of attribution. We put forward two important antecedent variables: negative online word-of-mouth and negative media exposure. The results show that the higher the perceived negative word-of-mouth, the higher the stability and controllability of consumer attribution, and the higher the degree of consumer habitual skepticism. Likewise, the higher the intensity of negative media exposure, the higher the stability and controllability of consumer attribution, and the higher the degree of consumer habitual skepticism. We test this framework through two experiments. Study 1 investigates the influence of negative word-of-mouth spread and media exposure on consumer habitual skepticism. Study 2 investigates the effect of the two independent variables on consumer habitual skepticism from an overall point of view and explores the mediating effect of attribution.

    LPNL: Scalable Link Prediction with Large Language Models

    Full text link
    Exploring the application of large language models (LLMs) to graph learning is an emerging endeavor. However, the vast amount of information inherent in large graphs poses significant challenges to this process. This work focuses on the link prediction task and introduces LPNL (Link Prediction via Natural Language), a framework based on large language models designed for scalable link prediction on large-scale heterogeneous graphs. We design novel prompts for link prediction that articulate graph details in natural language. We propose a two-stage sampling pipeline to extract crucial information from the graphs, and a divide-and-conquer strategy to keep the input tokens within predefined limits, addressing the challenge of overwhelming information. We fine-tune a T5 model with a self-supervised objective designed for link prediction. Extensive experimental results demonstrate that LPNL outperforms multiple advanced baselines in link prediction tasks on large-scale graphs.
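The divide-and-conquer strategy described above can be sketched as follows. This is a hypothetical illustration, not the paper's released code: the function names, the chunk size, and the toy scorer are assumptions; in LPNL the per-chunk decision would be made by the fine-tuned LLM over a natural-language prompt.

```python
# Hypothetical sketch of a divide-and-conquer candidate ranking under a
# prompt budget: when the candidate set would exceed the token limit,
# split it into small chunks, pick a winner per chunk, then recurse on
# the winners until a single candidate remains.

def rank_candidates(source, candidates, score_fn, max_per_prompt=4):
    """Return the best candidate node to link to `source`.

    `score_fn(source, chunk)` stands in for an LLM call that receives a
    natural-language prompt describing `source` together with a small
    `chunk` of candidates, and returns the index of the preferred one.
    """
    while len(candidates) > 1:
        winners = []
        for i in range(0, len(candidates), max_per_prompt):
            chunk = candidates[i:i + max_per_prompt]
            winners.append(chunk[score_fn(source, chunk)])
        candidates = winners  # recurse on per-chunk winners
    return candidates[0]

# Toy scorer: prefer the lexicographically smallest name (a real LLM
# would instead judge relevance from the verbalized graph context).
best = rank_candidates("paper_42", [f"node_{i}" for i in range(10)],
                       lambda s, chunk: chunk.index(min(chunk)))
```

Because each round shrinks the candidate set by roughly the chunk-size factor, no single prompt ever has to describe more than `max_per_prompt` candidates, which is how the token limit is respected regardless of graph size.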

    SLANG: New Concept Comprehension of Large Language Models

    Full text link
    The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of large language models (LLMs). Traditionally anchored to static datasets, these models often struggle to keep up with the rapid linguistic evolution characteristic of online communities. This research aims to bridge this gap by enhancing LLMs' comprehension of evolving new concepts on the Internet, without the high cost of continual retraining. In pursuit of this goal, we introduce SLANG, a benchmark designed to autonomously integrate novel data and assess LLMs' ability to comprehend emerging concepts, alongside FOCUS, an approach that uses causal inference to enhance LLMs' understanding of new phrases and their colloquial context. Our benchmark and approach involve understanding real-world instances of linguistic shifts, which serve as contextual beacons, to form more precise and contextually relevant connections between newly emerging expressions and their meanings. The empirical analysis shows that our causal inference-based approach outperforms the baseline methods in terms of precision and relevance in the comprehension of Internet slang and memes.

    Characterization of ovarian clear cell carcinoma using target drug-based molecular biomarkers: implications for personalized cancer therapy

    Get PDF
    Information on antibodies used in immunohistochemistry. Table S2A. Relationship with clinicopathological factors, HGSC. Table S2B. Relationship with clinicopathological factors, CCC. Table S3. Association of molecular biomarker expression and platinum-based chemotherapeutic response. Table S4. Comparison of molecular biomarkers between recurrent and disease-free patients. (DOCX 42 kb)

    A Multi-Granularity-Aware Aspect Learning Model for Multi-Aspect Dense Retrieval

    Full text link
    Dense retrieval methods have mostly focused on unstructured text, and less attention has been drawn to structured data with various aspects, e.g., products with aspects such as category and brand. Recent work has proposed two approaches to incorporate the aspect information into item representations for effective retrieval by predicting the values associated with the item aspects. Despite their efficacy, they treat the values as isolated classes (e.g., "Smart Homes", "Home, Garden & Tools", and "Beauty & Health") and ignore their fine-grained semantic relations. Furthermore, they either enforce the learning of aspects into the CLS token, which could interfere with its designated use of representing the entire content semantics, or learn extra aspect embeddings only with the value prediction objective, which could be insufficient, especially when there are no annotated values for an item aspect. Aware of these limitations, we propose a MUlti-granulaRity-aware Aspect Learning model (MURAL) for multi-aspect dense retrieval. It leverages aspect information across various granularities to capture both coarse- and fine-grained semantic relations between values. Moreover, MURAL incorporates separate aspect embeddings as input to transformer encoders so that the masked language model objective can assist implicit aspect learning even without aspect-value annotations. Extensive experiments on two real-world datasets of products and mini-programs show that MURAL outperforms state-of-the-art baselines significantly.

    Comment: Accepted by WSDM2024, updat
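The abstract's point that aspect values are not isolated classes can be made concrete with a crude word-level measure. This is an illustration only, assuming nothing about MURAL's implementation: where MURAL captures fine-grained relations with learned embeddings, the sketch below uses simple word overlap (with a naive plural-stripping stem) on the abstract's own example values.

```python
# Illustration: at word granularity, the class labels "Smart Homes" and
# "Home, Garden & Tools" share semantics ("home") that a class-ID
# prediction objective treating them as isolated labels would ignore.
import re

def word_overlap(a, b):
    """Jaccard overlap between crudely stemmed word sets; a stand-in
    for the fine-grained relations a learned embedding would capture."""
    wa = {w.rstrip("s") for w in re.findall(r"[a-z]+", a.lower())}
    wb = {w.rstrip("s") for w in re.findall(r"[a-z]+", b.lower())}
    return len(wa & wb) / len(wa | wb)

sim_related = word_overlap("Smart Homes", "Home, Garden & Tools")
sim_unrelated = word_overlap("Smart Homes", "Beauty & Health")
```

Here `sim_related` is positive while `sim_unrelated` is zero, even though a one-hot class encoding would put all three labels at equal distance from one another.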