
    Mass Spectrometry-based Methods for Phosphorylation Site Mapping of Hyperphosphorylated Proteins Applied to Net1, a Regulator of Exit from Mitosis in Yeast

    Prior to anaphase in Saccharomyces cerevisiae, the Cdc14 protein phosphatase is sequestered within the nucleolus and inhibited by Net1, a component of the RENT complex in budding yeast. During anaphase the RENT complex disassembles, allowing Cdc14 to migrate to the nucleus and cytoplasm, where it catalyzes exit from mitosis. The mechanism of Cdc14 release appears to involve the Polo-like kinase Cdc5, which can promote the dissociation of a recombinant Net1·Cdc14 complex in vitro by phosphorylating Net1. We report here the phosphorylation site mapping of recombinant Net1 (Net1N) and a mutant Net1N allele (Net1N-19m) with 19 serines or threonines mutated to alanine. A variety of chromatography- and mass spectrometry-based strategies were used, including immobilized metal-affinity chromatography, alkaline phosphatase treatment, matrix-assisted laser desorption post-source decay, and a multidimensional electrospray mass spectrometry-based approach. No single approach identified all phosphopeptides in the tryptic digests of these proteins. Most notably, the presence of a basic residue near the phosphorylated residue significantly hampered the ability of alkaline phosphatase to hydrolyze the phosphate moiety. A major goal of proteomics research is to identify all proteins together with their interactions and post-translational modification states. The failure of any single method to identify all sites in highly phosphorylated Net1N, however, raises significant concerns about the feasibility of mapping phosphorylation sites throughout the proteome with existing technologies.

    Enhancing Diffusion Models with Text-Encoder Reinforcement Learning

    Text-to-image diffusion models are typically trained to optimize the log-likelihood objective, which presents challenges in meeting specific requirements for downstream tasks, such as image aesthetics and image-text alignment. Recent research addresses this issue by refining the diffusion U-Net using human rewards through reinforcement learning or direct backpropagation. However, much of this work overlooks the importance of the text encoder, which is typically pretrained and kept fixed during training. In this paper, we demonstrate that by finetuning the text encoder through reinforcement learning, we can enhance the text-image alignment of the results, thereby improving visual quality. Our primary motivation comes from the observation that the current text encoder is suboptimal, often requiring careful prompt adjustment. While fine-tuning the U-Net can partially improve performance, it still suffers from the suboptimal text encoder. We therefore propose to use reinforcement learning with low-rank adaptation to finetune the text encoder based on task-specific rewards, a method we refer to as TexForce. We first show that finetuning the text encoder can improve the performance of diffusion models. Then, we illustrate that TexForce can simply be combined with existing finetuned U-Net models to obtain much better results without additional training. Finally, we showcase the adaptability of our method in diverse applications, including the generation of high-quality face and hand images.
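    A minimal sketch of the structural idea, using publicly documented Hugging Face diffusers/peft APIs rather than the authors' released code: attach low-rank (LoRA) adapters to the text encoder of a Stable Diffusion pipeline while keeping the U-Net and VAE frozen. The reward-driven update itself is only outlined in comments, and the model id, hyperparameters, and reward choice are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

# Load a standard Stable Diffusion pipeline; any SD checkpoint with a CLIP
# text encoder would work the same way (the model id here is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach LoRA adapters to the attention projections of the text encoder only;
# the base encoder, the U-Net, and the VAE all stay frozen.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
)
pipe.text_encoder = get_peft_model(pipe.text_encoder, lora_cfg)

# Only the LoRA parameters are trainable.
trainable = [p for p in pipe.text_encoder.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5)

# Reward-driven finetuning loop (outline only):
#   1. sample prompts and generate images with the current LoRA-augmented encoder
#   2. score the images with a task-specific reward model
#      (e.g. image-text alignment or aesthetics)
#   3. update the LoRA parameters with a policy-gradient / reward-weighted step
# At inference, the tuned text encoder can be paired with an existing finetuned
# U-Net, which is how the abstract describes combining TexForce with prior
# U-Net finetuning without extra training.
```

    Restricting the trainable parameters to the LoRA adapters keeps the update cheap and makes the tuned encoder easy to swap into other checkpoints.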

    Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision

    The rapid evolution of Multi-modality Large Language Models (MLLMs) has catalyzed a shift in computer vision from specialized models to general-purpose foundation models. Nevertheless, the abilities of MLLMs on low-level visual perception and understanding remain inadequately assessed. To address this gap, we present Q-Bench, a holistic benchmark crafted to systematically evaluate the potential abilities of MLLMs in three realms: low-level visual perception, low-level visual description, and overall visual quality assessment. a) To evaluate low-level perception ability, we construct the LLVisionQA dataset, consisting of 2,990 diverse-sourced images, each paired with a human-asked question focusing on its low-level attributes. We then measure the correctness of MLLMs in answering these questions. b) To examine the description ability of MLLMs on low-level information, we propose the LLDescribe dataset, consisting of long, expert-labelled golden low-level text descriptions for 499 images, together with a GPT-involved comparison pipeline between the outputs of MLLMs and the golden descriptions. c) Beyond these two tasks, we further measure their visual quality assessment ability to align with human opinion scores. Specifically, we design a softmax-based strategy that enables MLLMs to predict quantifiable quality scores, and evaluate them on various existing image quality assessment (IQA) datasets. Our evaluation across the three abilities confirms that MLLMs possess preliminary low-level visual skills. However, these skills are still unstable and relatively imprecise, indicating the need for specific enhancements to MLLMs in these abilities. We hope that our benchmark can encourage the research community to delve deeper to discover and enhance these untapped potentials of MLLMs. Project page: https://vqassessment.github.io/Q-Bench. Comment: 25 pages, 14 figures, 9 tables, preprint version.
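    As an illustration of what a softmax-based quality scoring strategy can look like (a hedged sketch; the exact prompt wording and rating tokens used by Q-Bench are assumptions here), one can compare the logits an MLLM assigns to opposing rating words and convert them into a scalar score:

```python
import torch

def softmax_quality_score(next_token_logits: torch.Tensor, tokenizer) -> float:
    """Turn the logits an MLLM assigns to the next token of a prompt such as
    'The quality of this image is ...' into a scalar quality score.

    next_token_logits: 1-D tensor of shape [vocab_size].
    tokenizer: the model's tokenizer (Hugging Face-style API assumed).
    """
    good_id = tokenizer.convert_tokens_to_ids("good")   # assumed positive anchor word
    poor_id = tokenizer.convert_tokens_to_ids("poor")   # assumed negative anchor word
    pair = next_token_logits[[good_id, poor_id]]
    probs = torch.softmax(pair, dim=-1)
    # Probability mass on the positive anchor serves as a quantifiable score in
    # [0, 1] that can be correlated with human opinion scores on IQA datasets.
    return probs[0].item()
```

    Because the output is a continuous score rather than a discrete answer, it can be evaluated with the standard IQA correlation metrics (e.g. SRCC/PLCC) against human opinion scores.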

    Cdc5 influences phosphorylation of Net1 and disassembly of the RENT complex

    BACKGROUND: In S. cerevisiae, the mitotic exit network (MEN) proteins, including the Polo-like protein kinase Cdc5 and the protein phosphatase Cdc14, are required for exit from mitosis. In pre-anaphase cells, Cdc14 is sequestered in the nucleolus by Net1 as part of the RENT complex. When cells are primed to exit mitosis, the RENT complex is disassembled and Cdc14 is released from the nucleolus. RESULTS: Here, we show that Cdc5 is necessary to free nucleolar Cdc14 in late mitosis, that elevated Cdc5 activity provokes ectopic release of Cdc14 in pre-anaphase cells, and that the phosphorylation state of Net1 is regulated by Cdc5 during anaphase. Furthermore, recombinant Cdc5 and Xenopus Polo-like kinase can disassemble the RENT complex in vitro by phosphorylating Net1 and thereby reducing its affinity for Cdc14. Surprisingly, although RENT complexes containing Net1 mutants (Net1(7m) and Net1(19m')) lacking sites phosphorylated by Cdc5 in vitro are refractory to disassembly by Polo-like kinases in vitro, net1(7m) and net1(19m') cells grow normally and exhibit only minor defects in releasing Cdc14 during anaphase. However, net1(19m') cells exhibit a synergistic growth defect when combined with mutations in CDC5 or DBF2 (another MEN gene). CONCLUSIONS: We propose that although Cdc5 can potentially disassemble RENT by directly phosphorylating Net1, it mediates exit from mitosis primarily by phosphorylating other targets. Our study suggests that Cdc5/Polo is unusually promiscuous and highlights the need to validate Cdc5/Polo in vitro phosphorylation sites by direct in vivo mapping experiments.

    Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models

    Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, in which a single model can respond to a broad range of natural human instructions. While existing foundation models have shown exciting potential on low-level visual tasks, their related abilities are still preliminary and need to be improved. To enhance these models, we conduct a large-scale subjective experiment collecting a vast amount of real human feedback on low-level vision. Each feedback item follows a pathway that starts with a detailed description of the low-level visual appearance (*e.g.*, clarity, color, brightness) of an image and ends with an overall conclusion, with an average length of 45 words. The constructed **Q-Pathway** dataset includes 58K detailed human feedbacks on 18,973 images with diverse low-level appearance. Moreover, to enable foundation models to respond robustly to diverse types of questions, we design a GPT-participated conversion to process these feedbacks into 200K instruction-response pairs in diverse formats, forming the **Q-Instruct** dataset. Experimental results indicate that **Q-Instruct** consistently elevates low-level perception and understanding abilities across several foundation models. We anticipate that our datasets can pave the way for a future in which general intelligence can perceive and understand low-level visual appearance and evaluate visual quality like a human. Our dataset, model zoo, and demo are published at: https://q-future.github.io/Q-Instruct. Comment: 16 pages, 11 figures, pages 12-16 as appendix.
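    A purely illustrative sketch of how one Q-Pathway-style feedback item might be expanded into instruction-response pairs of different formats (the field names and question templates below are assumptions, not the released schema):

```python
# One human feedback item: a detailed low-level description plus a conclusion.
feedback = {
    "image": "example_0001.jpg",
    "pathway": ("The image is slightly blurry with muted colors and low "
                "brightness; overall, its quality is poor."),
}

# Two instruction-response pairs derived from it, mimicking the diverse
# formats (free-form and multiple-choice) that an instruction dataset may use.
instruction_pairs = [
    {   # free-form description task taken directly from the pathway text
        "image": feedback["image"],
        "instruction": "Describe the low-level visual appearance of this image.",
        "response": feedback["pathway"],
    },
    {   # multiple-choice question distilled from the overall conclusion
        "image": feedback["image"],
        "instruction": ("How would you rate the overall quality of this image? "
                        "(A) good (B) poor"),
        "response": "(B) poor",
    },
]
```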