
    Roles of Pro- and Anti-Inflammatory Cytokines in the Pathogenesis of SLE

    SLE is an autoimmune inflammatory disease in which various pro- and anti-inflammatory cytokines, including TGF-β, IL-10, BAFF, IL-6, IFN-α, IFN-γ, IL-17, and IL-23, play crucial pathogenic roles. Virtually all of these cytokines can be produced by both innate and adaptive immune cells and exert different effects depending on the specific local microenvironment. They can also interact with one another, forming a complex network that maintains a delicate immune homeostasis. In this paper, we elaborate on the abnormal secretion and functions of these cytokines in SLE, analyze their potential pathogenic roles, and explore their potential as therapeutic targets.

    Application of Image Thresholding for Object Segmentation

    One operation in image analysis is image segmentation, that is, separating an object from its background or from other objects that are not of interest. A simple segmentation method is the thresholding operation. Thresholding produces a binary image, in which the object of interest is set to white and the background is set to black (or vice versa, depending on the case). This paper presents the use of thresholding to perform object segmentation. Experiments were carried out using the MATLAB toolset. The experimental results show that choosing an appropriate threshold value is decisive for the success of the segmentation.
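    The paper's experiments use MATLAB; as a rough illustration (not the authors' code), the thresholding operation it describes can be sketched in Python with NumPy, where pixels above the threshold become the white object and the rest become the black background:

```python
import numpy as np

def threshold_segment(image, t):
    """Binarize a grayscale image: pixels brighter than threshold t
    become the object (white, 255); all others become background (black, 0)."""
    return np.where(image > t, 255, 0).astype(np.uint8)

# Synthetic 4x4 grayscale image: a bright "object" on a dark background.
img = np.array([[ 10,  20,  30,  15],
                [ 12, 200, 210,  18],
                [ 14, 220, 205,  22],
                [ 11,  25,  19,  13]], dtype=np.uint8)

binary = threshold_segment(img, t=128)
print(binary)
```

    As the abstract notes, the choice of `t` determines success: too low and background noise joins the object, too high and parts of the object are lost.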

    A predator-prey interaction between a marine Pseudoalteromonas sp. and Gram-positive bacteria

    Predator-prey interactions play important roles in the cycling of marine organic matter. Here we show that a Gram-negative bacterium isolated from marine sediments (Pseudoalteromonas sp. strain CF6-2) can kill Gram-positive bacteria of diverse peptidoglycan (PG) chemotypes by secreting the metalloprotease pseudoalterin. Secretion of the enzyme requires a Type II secretion system. Pseudoalterin binds to the glycan strands of Gram-positive bacterial PG and degrades the PG peptide chains, leading to cell death. The released nutrients, including PG-derived D-amino acids, can then be utilized by strain CF6-2 for growth. Pseudoalterin synthesis is induced by PG degradation products such as glycine and glycine-rich oligopeptides. Genes encoding putative pseudoalterin-like proteins are found in many other marine bacteria. This study reveals a new microbial interaction in the ocean.

    Exploring Universal Intrinsic Task Subspace via Prompt Tuning

    Why can pre-trained language models (PLMs) learn universal representations and effectively adapt to a broad range of NLP tasks that differ widely on the surface? In this work, we find empirical evidence that the adaptation of PLMs to various few-shot tasks can be reparameterized as optimizing only a few free parameters in a unified, low-dimensional intrinsic task subspace, which may help explain why PLMs adapt so easily to various NLP tasks with small-scale data. To find such a subspace and examine its universality, we propose an analysis pipeline called intrinsic prompt tuning (IPT). Specifically, building on the recent success of prompt tuning, we decompose the soft prompts of multiple NLP tasks into the same low-dimensional nonlinear subspace, and then adapt the PLM to unseen data or tasks by tuning only the parameters in this subspace. In experiments, we study diverse few-shot NLP tasks and, surprisingly, find that in a 250-dimensional subspace found with 100 tasks, tuning only 250 free parameters recovers 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, demonstrating the strong generalization ability of the found intrinsic task subspace. Besides serving as an analysis tool, IPT could also bring practical benefits, such as improving prompt tuning stability.
    Comment: Withdrawn from Findings of ACL 202
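    A loose sketch of the reparameterization the abstract describes (all sizes and names here are illustrative, not taken from the paper): instead of tuning a full soft prompt, each task tunes only a low-dimensional intrinsic vector, which a projection shared across tasks maps back into the soft-prompt space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 20 prompt tokens, hidden size 768,
# and a 250-dimensional intrinsic task subspace.
n_tokens, hidden, d_intrinsic = 20, 768, 250

# A projection (learned across many tasks, then frozen) that maps the
# intrinsic vector back into the full soft-prompt space.
projection = rng.standard_normal((n_tokens * hidden, d_intrinsic)) * 0.02

# Per task, only this 250-dimensional vector would be tuned.
z = np.zeros(d_intrinsic)

# Reconstruct the soft prompt fed to the frozen PLM.
soft_prompt = (projection @ z).reshape(n_tokens, hidden)
print(soft_prompt.shape, z.size)
```

    The point of the construction is the parameter count: the full prompt has 20 × 768 = 15,360 entries, but only the 250 entries of `z` are free per task. (The paper's actual decomposition is nonlinear; a linear map is used here only to keep the sketch short.)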

    Deep Learning in Single-Cell Analysis

    Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, and heterogeneous, and have complicated dependency structures, making analyses with conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we present a comprehensive survey of deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning, including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications, noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline: multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. For each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists and encourage collaborations.
    Comment: 77 pages, 11 figures, 15 tables; keywords: deep learning, single-cell analysis

    Survival effect of PDGF-CC rescues neurons from apoptosis in both brain and retina by regulating GSK3β phosphorylation

    Platelet-derived growth factor CC (PDGF-CC) is the third member of the PDGF family, discovered after more than two decades of studies on the original members of the family, PDGF-AA and PDGF-BB. The biological function of PDGF-CC remains largely unexplored. We report the novel finding that PDGF-CC is a potent neuroprotective factor that acts by modulating glycogen synthase kinase 3β (GSK3β) activity. In several different animal models of neuronal injury, such as axotomy-induced neuronal death, neurotoxin-induced neuronal injury, 6-hydroxydopamine–induced Parkinson's dopaminergic neuronal death, and ischemia-induced stroke, PDGF-CC protein or gene delivery protected different types of neurons from apoptosis in both the retina and brain. Conversely, loss-of-function assays using PDGF-C null mice, a neutralizing antibody, or short hairpin RNA showed that PDGF-CC deficiency/inhibition exacerbated neuronal death in different neuronal tissues in vivo. Mechanistically, we revealed that the neuroprotective effect of PDGF-CC was achieved by regulating GSK3β phosphorylation and expression. Our data demonstrate that PDGF-CC is critically required for neuronal survival and may potentially be used to treat neurodegenerative diseases. Inhibition of the PDGF-CC–PDGF receptor pathway for different clinical purposes should be conducted with caution to preserve normal neuronal functions.

    AgentBench: Evaluating LLMs as Agents

    Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there is an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess the reasoning and decision-making abilities of LLMs as agents in a multi-turn, open-ended generation setting. Our extensive tests over 27 API-based and open-sourced (OSS) LLMs show that, while top commercial LLMs demonstrate a strong ability to act as agents in complex environments, there is a significant performance disparity between them and their OSS competitors. We identify the typical causes of failure in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction-following abilities are the main obstacles to developing usable LLM agents. Training on code and high-quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.
    Comment: 55 pages