The UBA domain of conjugating enzyme Ubc1/Ube2K facilitates assembly of K48/K63-branched ubiquitin chains
The assembly of a specific polymeric ubiquitin chain on a target protein is a key event in the regulation of numerous cellular processes. Yet, the mechanisms that govern the selective synthesis of particular polyubiquitin signals remain enigmatic. The homologous ubiquitin-conjugating (E2) enzymes Ubc1 (budding yeast) and Ube2K (mammals) exclusively generate polyubiquitin linked through lysine 48 (K48). Uniquely among E2 enzymes, Ubc1 and Ube2K harbor a ubiquitin-binding UBA domain with unknown function. We found that this UBA domain preferentially interacts with ubiquitin chains linked through lysine 63 (K63). Based on structural modeling, in vitro ubiquitination experiments, and NMR studies, we propose that the UBA domain aligns Ubc1 with K63-linked polyubiquitin and facilitates the selective assembly of K48/K63-branched ubiquitin conjugates. Genetic and proteomics experiments link the activity of the UBA domain, and hence the formation of this unusual ubiquitin chain topology, to the maintenance of cellular proteostasis.
Linking disaster risk reduction, climate change, and the sustainable development goals
PURPOSE:
The purpose of this paper is to better link the parallel processes yielding international agreements on climate change, disaster risk reduction, and sustainable development.
DESIGN/METHODOLOGY/APPROACH:
This paper explores how the Paris Agreement for climate change relates to disaster risk reduction and sustainable development, demonstrating that the topics are kept too separate. A resolution is offered by placing climate change within the wider contexts of disaster risk reduction and sustainable development.
FINDINGS:
No reason exists for climate change to be separated from wider disaster risk reduction and sustainable development processes.
RESEARCH LIMITATIONS/IMPLICATIONS:
Based on the research, a conceptual approach for policy and practice is provided. Owing to the entrenched territories of the separate international processes, the proposed approach is unlikely to be implemented.
ORIGINALITY/VALUE:
Using a scientific basis, the paper proposes an end to the silos separating the international processes for climate change, disaster risk reduction, and sustainable development.
Natural proteome diversity links aneuploidy tolerance to protein turnover
Accessing the natural genetic diversity of species unveils hidden genetic traits, clarifies gene functions and allows the generalizability of laboratory findings to be assessed. One notable discovery made in natural isolates of Saccharomyces cerevisiae is that aneuploidy (an imbalance in chromosome copy numbers) is frequent [1,2] (around 20%), which seems to contradict the substantial fitness costs and transient nature of aneuploidy when it is engineered in the laboratory [3-5]. Here we generate a proteomic resource and merge it with genomic [1] and transcriptomic [6] data for 796 euploid and aneuploid natural isolates. We find that natural and lab-generated aneuploids differ specifically at the proteome. In lab-generated aneuploids, some proteins, especially subunits of protein complexes, show reduced expression, but the overall protein levels correspond to the aneuploid gene dosage. By contrast, in natural isolates, more than 70% of proteins encoded on aneuploid chromosomes are dosage compensated, and average protein levels are shifted towards the euploid state chromosome-wide. At the molecular level, we detect an induction of structural components of the proteasome and increased levels of ubiquitination, and reveal an interdependency of protein turnover rates and attenuation. Our study thus highlights the role of protein turnover in mediating aneuploidy tolerance, and shows the utility of exploiting the natural diversity of species to attain generalizable molecular insights into complex biological processes.
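Dosage compensation as described above can be expressed as the fraction of the expected gene-dosage effect that is attenuated at the protein level. The helper below is a hypothetical illustration of that arithmetic only, not the authors' analysis pipeline; the function name and the example ratios are invented:

```python
from math import log2

def attenuation(copy_ratio, protein_ratio):
    """Fraction of the expected gene-dosage effect that is buffered away.

    copy_ratio:    chromosome copies relative to euploid (e.g. 1.5 for one
                   extra chromosome copy in a diploid).
    protein_ratio: observed protein abundance relative to euploid.
    Returns 0.0 if the protein scales fully with gene dosage and 1.0 if it
    is fully buffered back to the euploid level. Ratios are compared in
    log2 space.
    """
    return 1.0 - log2(protein_ratio) / log2(copy_ratio)

# Illustrative values for a protein encoded on a 1.5x chromosome:
attenuation(1.5, 1.5)   # 0.0   -> follows gene dosage (lab-aneuploid-like)
attenuation(1.5, 1.12)  # ~0.72 -> strongly buffered (natural-isolate-like)
```

Under this convention, the abstract's ">70% of proteins dosage compensated" corresponds to attenuation values well above zero chromosome-wide.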
Cargo-specific recruitment in clathrin- and dynamin-independent endocytosis
Spatially controlled, cargo-specific endocytosis is essential for development, tissue homeostasis and cancer invasion. Unlike cargo-specific clathrin-mediated endocytosis, the clathrin- and dynamin-independent endocytic pathway (CLIC-GEEC, CG pathway) is considered a bulk internalization route for the fluid phase, glycosylated membrane proteins and lipids. While the core molecular players of CG-endocytosis have been recently defined, evidence of cargo-specific adaptors or selective uptake of proteins for the pathway is lacking. Here we identify the actin-binding protein Swiprosin-1 (Swip1, EFHD2) as a cargo-specific adaptor for CG-endocytosis. Swip1 couples active Rab21-associated integrins with key components of the CG-endocytic machinery (Arf1, IRSp53 and actin) and is critical for integrin endocytosis. Through this function, Swip1 supports integrin-dependent cancer-cell migration and invasion, and is a negative prognostic marker in breast cancer. Our results demonstrate a previously unknown cargo selectivity for the CG pathway and a role for specific adaptors in recruitment into this endocytic route. Moreno-Layseca et al. identify Swip1 as an integrin-specific endocytic adaptor controlling the dynamics of integrin adhesion complexes as well as the migration and invasion of breast cancer cells.
Kinetic analysis of protein stability reveals age-dependent degradation
Do young and old protein molecules have the same probability of being degraded? We addressed this question using metabolic pulse-chase labeling and quantitative mass spectrometry to obtain degradation profiles for thousands of proteins. We find that >10% of proteins are degraded non-exponentially. Specifically, proteins are less stable in the first few hours of their life and stabilize with age. Degradation profiles are conserved and similar in two cell types. Many non-exponentially degraded (NED) proteins are subunits of complexes that are produced in super-stoichiometric amounts relative to their exponentially degraded (ED) counterparts. Within complexes, NED proteins have larger interaction interfaces and assemble earlier than ED subunits. Amplifying genes encoding NED proteins increases their initial degradation. Consistently, decay profiles can predict protein level attenuation in aneuploid cells. Together, our data show that non-exponential degradation is common, conserved, and has important consequences for complex formation and regulation of protein abundance.
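The ED/NED distinction above can be illustrated numerically: a single-exponential decay is linear in log space, so the residual of a log-linear fit separates ED-like from NED-like profiles. The Python sketch below uses simulated, not experimental, profiles; the rate constants and the two-pool model for the NED curve are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def single_exponential_rss(t, signal):
    """Residual sum of squares of a log-linear (single-exponential) fit.

    A pure exponential decay gives log(signal) = log(A) - k*t, so the
    residual is ~0; non-exponential profiles leave a larger residual.
    """
    logs = np.log(signal)
    slope, intercept = np.polyfit(t, logs, 1)
    return float(np.sum((logs - (slope * t + intercept)) ** 2))

t = np.linspace(0, 24, 9)  # chase time points in hours (illustrative)

# ED-like protein: one pool, one degradation rate
ed = np.exp(-0.10 * t)

# NED-like protein: a fast-decaying "young" pool plus a stable "old" pool,
# mimicking proteins that are less stable early in their life
ned = 0.6 * np.exp(-0.8 * t) + 0.4 * np.exp(-0.05 * t)

rss_ed = single_exponential_rss(t, ed)
rss_ned = single_exponential_rss(t, ned)
# The NED profile deviates from a single exponential far more than the ED one.
```

A classifier along these lines (flagging profiles whose single-exponential fit is poor) is one simple way to operationalize "non-exponentially degraded".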
Proteomics Wants cRacker: Automated Standardized Data Analysis of LC-MS Derived Proteomic Data
The large-scale analysis of thousands of proteins under various experimental conditions or in mutant lines has gained more and more importance in hypothesis-driven scientific research and systems biology in the past years. Quantitative analysis by large-scale proteomics using modern mass spectrometry usually results in long lists of peptide ion intensities. The main interest for most researchers, however, is to draw conclusions at the protein level. Postprocessing and combining peptide intensities of a proteomic data set requires expert knowledge, and the often repetitive and standardized manual calculations can be time-consuming. The analysis of complex samples can result in very large data sets (lists with several thousand to 100,000 entries of different peptides) that cannot easily be analyzed using standard spreadsheet programs. To improve the speed and consistency of the data analysis of LC-MS derived proteomic data, we developed cRacker. cRacker is an R-based program for automated downstream proteomic data analysis, including data normalization strategies for metabolic labeling and label-free quantitation. In addition, cRacker includes basic statistical analysis, such as clustering of data, or ANOVA and t tests for comparison between treatments. Results are presented in editable graphic formats and in list files.
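The abstract describes cRacker's peptide-to-protein postprocessing only at a high level. As a rough illustration of the kind of computation involved (not cRacker's actual R implementation; the data layout and the total-intensity normalization scheme are assumptions), a minimal label-free-style sketch in Python:

```python
import numpy as np

# Hypothetical peptide-level input: (protein ID, intensity per sample).
peptides = [
    ("P1", [4.0, 8.0]),
    ("P1", [6.0, 12.0]),
    ("P2", [2.0, 4.0]),
]

# 1) Normalize each sample by its total ion intensity, so samples of
#    different overall loading become comparable (label-free style).
mat = np.array([row for _, row in peptides])
mat = mat / mat.sum(axis=0)

# 2) Aggregate peptide intensities to protein-level values by summing
#    the normalized intensities of each protein's peptides.
proteins = {}
for (name, _), row in zip(peptides, mat):
    proteins[name] = proteins.get(name, 0) + row
```

Downstream steps such as clustering, ANOVA, or t tests would then operate on the resulting protein-by-sample matrix.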