    The ribosomal protein L32-2 (RPL32-2) of S. pombe exhibits a novel extraribosomal function by acting as a potential transcriptional regulator

    Ribosomal proteins play important roles in stabilizing the rRNA structure to facilitate protein synthesis in the ribosome. In the present study, we analyzed the potential extraribosomal function of the ribosomal protein L32-2 (RPL32-2), expressed from a gene clone isolated from a cDNA library of Schizosaccharomyces pombe (S. pombe). RPL32-2 fused with either the GAL4 DNA-binding domain or the GAL4 transcriptional activation domain activated transcription of reporter genes in yeast strain AH109. Truncation of either the N- or the C-terminal domain of RPL32-2 abolished this regulatory effect. The DNA binding site for RPL32-2 of S. pombe was identified using a random oligonucleotide selection strategy, and a gel mobility shift assay together with Western blotting confirmed its binding specificity. Moreover, we found that RPL32-2 was also able to interact with an as-yet-unidentified AT-sequence-binding protein. These data suggest that RPL32-2 of S. pombe, besides its ribosomal function, may also act as a potential transcriptional regulator in the nucleus.

    Potential Roles of Matrix Metalloproteinases in Malignant Mesothelioma

    Malignant mesothelioma (MM) is a rare, aggressive, and highly lethal cancer that is primarily induced by exposure to asbestos fibers. Matrix metalloproteinases (MMPs) are a family of zinc-dependent endopeptidases involved in metastasis; their overexpression correlates with tumor cell invasion and metastasis because they degrade the extracellular matrix (ECM) and process adhesion and cytoskeletal proteins, growth factors, chemokines, and cytokines. Recent evidence has shown that MMPs participate in MM progression, indicating that they are potential novel biomarkers and attractive targets for cancer therapy. In this chapter, we describe the roles of MMPs in carcinogenic mechanisms based on in vivo and in vitro experimental evidence, outline the clinical findings, and speculate on the possible roles of MMPs in MM.

    Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

    Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models. However, with the exponential growth of model sizes, conventional full fine-tuning, which needs to store an individual network copy for each task, leads to increasingly huge storage and transmission overhead. Adapter-based Parameter-Efficient Tuning (PET) methods address this challenge by tuning lightweight adapters inserted into the frozen pre-trained models. In this paper, we investigate how to make adapters even more efficient, reaching a new minimum size required to store a task-specific fine-tuned network. Inspired by the observation that the parameters of adapters converge at flat local minima, we find that adapters are resistant to noise in parameter space, which means they are also resistant to low numerical precision. To train low-precision adapters, we propose a computationally efficient quantization method that minimizes the quantization error. Through extensive experiments, we find that low-precision adapters exhibit minimal performance degradation, and even 1-bit precision is sufficient for adapters. The experimental results demonstrate that 1-bit adapters outperform all other PET methods on both the VTAB-1K benchmark and few-shot FGVC tasks, while requiring the smallest storage size. Our findings show, for the first time, the significant potential of quantization techniques in PET, providing a general solution to enhance the parameter efficiency of adapter-based PET methods. Code: https://github.com/JieShibo/PETL-ViT. Comment: Accepted to ICCV 2023.
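    To make the 1-bit idea concrete, the sketch below shows the classic sign-plus-scale binarization, where a single per-tensor scale alpha = mean(|w|) minimizes the L2 quantization error for fixed signs. This is a minimal PyTorch illustration, not the paper's actual quantization method; the shapes and names are assumptions.

        import torch

        def quantize_1bit(w: torch.Tensor):
            # Keep only the sign of each weight (1 bit) plus one scale.
            # For fixed signs s in {-1, +1}, the alpha minimizing
            # ||w - alpha * s||^2 is sum(w_i * s_i) / n = mean(|w|).
            alpha = w.abs().mean()
            return alpha, torch.sign(w)

        def dequantize(alpha: torch.Tensor, signs: torch.Tensor) -> torch.Tensor:
            # Reconstruct an approximate weight tensor for inference.
            return alpha * signs

        # Toy example: the down-projection of a bottleneck adapter.
        w = torch.randn(768, 64)
        alpha, signs = quantize_1bit(w)
        w_hat = dequantize(alpha, signs)
        print(f"relative error: {(w - w_hat).norm() / w.norm():.3f}")

    Storing one bit per adapter weight plus a single scale is what drives the storage savings the abstract reports.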

    Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes

    Tensor decomposition is an important tool for multiway data analysis. In practice, the data is often sparse yet associated with rich temporal information. Existing methods, however, often under-use the time information and ignore the structural knowledge within the sparsely observed tensor entries. To overcome these limitations and to better capture the underlying temporal structure, we propose Dynamic EMbedIngs fOr dynamic Tensor dEcomposition (DEMOTE). We develop a neural diffusion-reaction process to estimate dynamic embeddings for the entities in each tensor mode. Specifically, based on the observed tensor entries, we build a multi-partite graph to encode the correlation between the entities. We construct a graph diffusion process to co-evolve the embedding trajectories of the correlated entities, and use a neural network to construct a reaction process for each individual entity. In this way, our model can capture both the commonalities and the individual dynamics in the evolution of the embeddings for different entities. We then use a neural network to model the entry value as a nonlinear function of the embedding trajectories. For model estimation, we develop a stochastic mini-batch learning algorithm built on ODE solvers. We propose a stratified sampling method to balance the cost of processing each mini-batch so as to improve the overall efficiency. We show the advantage of our approach in both simulation studies and real-world applications. The code is available at https://github.com/wzhut/Dynamic-Tensor-Decomposition-via-Neural-Diffusion-Reaction-Processes.
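    As a rough illustration of the diffusion-reaction idea, the sketch below evolves entity embeddings Z under dZ/dt = -L Z + f(Z): the Laplacian term diffuses information between entities connected in the multi-partite graph, while a small neural network supplies a per-entity reaction term. The normalized Laplacian, the shared MLP, and the explicit Euler integration (in place of a proper ODE solver) are simplifying assumptions of this sketch, not details taken from the paper.

        import torch
        import torch.nn as nn

        class DiffusionReaction(nn.Module):
            def __init__(self, adj: torch.Tensor, dim: int):
                super().__init__()
                # Symmetrically normalized Laplacian L = I - D^-1/2 A D^-1/2.
                deg = adj.sum(-1).clamp(min=1.0)
                d = deg.pow(-0.5)
                self.register_buffer(
                    "lap", torch.eye(adj.size(0)) - d[:, None] * adj * d[None, :]
                )
                # Reaction network applied to each entity's embedding.
                self.react = nn.Sequential(
                    nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim)
                )

            def forward(self, z0: torch.Tensor, t1: float = 1.0, steps: int = 20):
                # Explicit Euler integration of dZ/dt = -L Z + f(Z).
                z, dt = z0, t1 / steps
                for _ in range(steps):
                    z = z + dt * (-self.lap @ z + self.react(z))
                return z

        # 5 entities with 8-dim embeddings on a small random undirected graph.
        adj = (torch.rand(5, 5) > 0.6).float()
        adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
        z_t = DiffusionReaction(adj, dim=8)(torch.randn(5, 8))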

    ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings

    Augmenting large language models (LLMs) with external tools has emerged as a promising approach to solving complex problems. However, traditional methods, which fine-tune LLMs with tool demonstration data, can be both costly and restricted to a predefined set of tools. The recent in-context learning paradigm alleviates these issues, but the limited context length only allows for a few shots of demonstrations, leading to a suboptimal understanding of the tools. Moreover, when there are numerous tools to choose from, in-context learning can fail completely. In this paper, we propose an alternative approach, ToolkenGPT, which combines the benefits of both sides. Our approach represents each tool as a token (a "toolken") and learns an embedding for it, enabling tool calls in the same way as generating a regular word token. Once a toolken is triggered, the LLM is prompted to complete arguments for the tool to execute. ToolkenGPT offers the flexibility to plug in an arbitrary number of tools by expanding the set of toolkens on the fly. In addition, it improves tool use by allowing extensive demonstration data for learning the toolken embeddings. In diverse domains, including numerical reasoning, knowledge-based question answering, and embodied plan generation, our approach effectively augments LLMs with tools and substantially outperforms various recent baselines. ToolkenGPT demonstrates the promising ability to use relevant tools from a large tool set in complex scenarios.
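    One way to picture the toolken mechanism is as trainable rows appended to the language model's output head, so that tools compete with ordinary word tokens in a single softmax. The sketch below is a hypothetical minimal rendering of that idea; the class name, shapes, and frozen-head setup are assumptions rather than the authors' code.

        import torch
        import torch.nn as nn

        class ToolkenHead(nn.Module):
            def __init__(self, word_head: nn.Linear, num_tools: int):
                super().__init__()
                self.word_head = word_head          # frozen LM head: hidden -> vocab
                for p in self.word_head.parameters():
                    p.requires_grad = False
                # One trainable embedding per tool; the only new parameters.
                self.toolkens = nn.Parameter(
                    torch.randn(num_tools, word_head.in_features) * 0.02
                )

            def forward(self, hidden: torch.Tensor) -> torch.Tensor:
                word_logits = self.word_head(hidden)      # (..., vocab)
                tool_logits = hidden @ self.toolkens.t()  # (..., num_tools)
                # Tools and words share one softmax; a predicted id >= vocab
                # size means a toolken fired, after which the LM would be
                # prompted to fill in the tool's arguments.
                return torch.cat([word_logits, tool_logits], dim=-1)

        head = ToolkenHead(nn.Linear(64, 1000, bias=False), num_tools=10)
        logits = head(torch.randn(2, 7, 64))              # shape (2, 7, 1010)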