A co-translational ubiquitination pathway for quality control of newly synthesized proteins
Previous studies indicated that 6%-30% of newly synthesized proteins are rapidly degraded by the ubiquitin-proteasome system. This has generally been assumed to occur post-translationally, following failure of chaperone-assisted folding mechanisms. However, the extent and significance of co-translational quality control remain largely unknown. In investigations of ISG15, an interferon-induced ubiquitin-like protein, our lab found that ISG15 is conjugated to a very broad spectrum of newly synthesized proteins. The major ligase for ISG15, Herc5, co-fractionated with polysomes, and further studies indicated that the processes of translation and ISGylation are closely coupled. Here, I employ an in vitro run-off translation system and puromycin labeling experiments to demonstrate that nascent polypeptides are ISGylated within active translation complexes, providing direct support for a co-translational mechanism of ISG15 conjugation. Approaches developed for studying co-translational ISGylation were subsequently used to examine co-translational ubiquitination (CTU), which we hypothesized might be important in quality control of newly synthesized proteins. Consistent with this, I found that the pathway for degradation of newly synthesized proteins is initiated while proteins are being translated, with ubiquitination of actively translating nascent polypeptides. CTU is a conserved and robust pathway from yeast to mammals, with 5-6% of total nascent polypeptides being ubiquitinated in S. cerevisiae and 12-15% in human cells. CTU products contain primarily K48-linked polyubiquitin chains, consistent with a proteasomal targeting function. Although nascent chains have previously been shown to be ubiquitinated within stalled and defective translation complexes (referred to here as CTU^S), nascent chain ubiquitination also occurred within active translation complexes (CTU^A). CTU^A accounted for approximately two-thirds of total CTU (CTU^T) in human cells and approximately half of CTU^T in yeast cells. CTU^A increased in response to agents that induce protein misfolding, whereas CTU^S increased in response to agents that cause translational misreading or stalling. These results indicate that ubiquitination of nascent chains occurs in two contexts and define CTU^A as a component of a quality control system that marks proteins for destruction before their synthesis is complete. Finally, decreased translation fidelity is thought to lead to the accumulation of misfolded proteins and to hasten the aging process. As CTU is a pathway for quality control of newly synthesized proteins, I explored whether CTU plays a protective role during replicative aging in budding yeast. Consistent with previous reports using human cells, I found that newly synthesized proteins are a major source of proteasome substrates under non-stressed conditions. Transient proteasome inhibition (using MG132) shortened the yeast replicative life span (RLS), whereas simultaneous treatment with cycloheximide, a translation inhibitor, suppressed this effect. Deletion of Ltn1, the major E3 ligase of the CTU^S pathway, also shortened the RLS of yeast. Together, these results provide preliminary evidence supporting the hypothesis that the quality of newly synthesized proteins is an important determinant of aging.
Essays on Causal Inference with Endogeneity and Missing Data
This dissertation strives to devise novel yet easy-to-implement estimation and inference procedures for economists to solve complicated real-world problems. It provides solutions for situations in which sample selection is entangled with missing data problems and in which treatment effects are heterogeneous but instruments have only limited variation. In the first chapter, we investigate the problem of missing instruments and develop a generated instrument approach to address it. Specifically, when the missingness of instruments is endogenous, dropping observations can cause biased estimation. This chapter proposes a methodology that uses all the data to perform instrumental variables (IV) estimation. The methodology provides consistent estimation under endogenous missingness of instruments. It first forms a generated instrument for every observation in the data sample: a) for observations without instruments, the new instrument is an imputation; b) for observations with instruments, the new instrument is an inverse propensity score weighted combination of the original instrument and an imputation. The estimation then proceeds by using the generated instruments. Asymptotic theorems are established. The new estimator attains the semiparametric efficiency bound, and it is less biased than existing procedures in simulations. As an illustrative example, we use the NLSYM data set, in which IQ scores are partially missing, and demonstrate that with the new methodology the return to education is larger and more precisely estimated than with standard complete-case methods. In the second chapter, we provide Lasso-type procedures for reduced form regression with many missing instruments. The methodology takes two steps. In the first step, we generate a rich instrument set from the many missing instruments and other observed data. In the second step, IV estimation is conducted based on the generated instrument set. Specifically, the (very) many generated instruments are used to approximate a "pseudo" optimal instrument in the reduced form regression. The approach is shown to have efficiency gains over the generated instrument estimator developed in the first chapter. We also compare the finite sample behavior of the new estimator with other Lasso estimators and demonstrate the good performance of the proposed estimator in Monte Carlo experiments. The third chapter estimates individual treatment effects in a triangular model with binary-valued endogenous treatments. This chapter is based on previous joint work with Quang Vuong and Haiqing Xu. Following the identification strategy established in Vuong and Xu (forthcoming), we propose a two-stage estimation approach. First, we estimate the counterfactual outcome and hence the individual treatment effect (ITE) for every observational unit in the sample. Second, we estimate the density of individual treatment effects in the population. Our estimation method does not suffer from the ill-posed inverse problem associated with inverting a nonlinear functional. Asymptotic properties of the proposed method are established. We study its finite sample properties in Monte Carlo experiments. We also illustrate our approach with an empirical application assessing the effects of 401(k) retirement programs on personal savings. Our results show that a small but statistically significant proportion of individuals experience negative effects, although the majority of ITEs are positive.
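A minimal numerical sketch of the first chapter's idea, as read from the abstract: observations with a missing instrument receive an imputed instrument, observations with an observed instrument receive an inverse-propensity-weighted combination of the instrument and its imputation, and standard IV estimation follows. The simulated design, the exact weighting formula, and all names below are assumptions made for illustration, not the dissertation's estimator.

```python
# Illustrative sketch, not the chapter's estimator: build a "generated
# instrument" when the instrument Z is missing for part of the sample,
# then run 2SLS with it. Design and weighting are toy assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                              # exogenous covariate
z = 0.8 * x + rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                              # unobserved confounder
d = z + 0.5 * x + u + rng.normal(size=n)            # endogenous regressor
y = 2.0 * d + x + u + rng.normal(size=n)            # true effect of d is 2.0

# Instrument observed with probability depending on x (a simplified
# missing-at-random setup; the chapter handles endogenous missingness).
obs = (rng.random(n) < 1.0 / (1.0 + np.exp(-x))).astype(int)

# Step 1a: estimated propensity that Z is observed.
X = x.reshape(-1, 1)
p_hat = LogisticRegression().fit(X, obs).predict_proba(X)[:, 1]

# Step 1b: imputation model for Z, fitted on the observed subsample.
z_imp = LinearRegression().fit(X[obs == 1], z[obs == 1]).predict(X)

# Step 1c: generated instrument -- the imputation where Z is missing, an
# inverse-propensity-weighted combination of Z and the imputation otherwise.
z_gen = np.where(obs == 1, z_imp + (z - z_imp) / p_hat, z_imp)

# Step 2: just-identified IV / 2SLS using the generated instrument.
W = np.column_stack([np.ones(n), x, d])        # regressors
Zm = np.column_stack([np.ones(n), x, z_gen])   # instruments
beta = np.linalg.solve(Zm.T @ W, Zm.T @ y)
print("estimated effect of d:", round(beta[2], 3))   # close to 2.0
```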
Infrared nano-spectroscopy via molecular expansion force detection
Mid-infrared absorption spectroscopy in the "molecular fingerprint" region (λ = 2.5-15 μm) is widely used for in situ analysis of chemical and biological samples. Because of the diffraction limit, traditional far-field techniques such as Fourier-transform infrared spectroscopy cannot acquire sample spectra with nanometer spatial resolution. Photoexpansion nano-spectroscopy performs nanoscale infrared measurements by using an atomic force microscope cantilever as a light-absorption detector: the cantilever is deflected in proportion to the localized sample heating and expansion induced by infrared pulses. Previous studies of this opto-mechanical technique demonstrated its power and simplicity, but they relied on high-power laser pulses to produce a detectable cantilever deflection signal, and ultra-thin samples below ~100 nm were difficult to measure. In addition, the spatial resolution, though improved, was limited by the thermal diffusion length inside the sample.
This dissertation presents a set of experiments that substantially improve photoexpansion nano-spectroscopy in terms of sensitivity and spatial resolution, and that explore other aspects of the technique. For the first time, high-quality photoexpansion spectra have been obtained from molecular monolayers using low-power infrared pulses from a tunable quantum cascade laser. The orders-of-magnitude improvement in sensitivity is due to two methods we implemented: mechanical enhancement by the cantilever resonance, and optical enhancement by the metalized cantilever tip. The spatial resolution is also improved and is determined only by the locally enhanced field below the tip. The dissertation then shows that the spectral background signal, which comes from infrared absorption by the substrate and tip, can be suppressed using a second laser. We have also investigated the nonlinearity of the tip-sample interaction and are able to detect the sample photoexpansion force at a heterodyne frequency. In the last part of this dissertation, we use our technique to image the local optical energy distribution and ohmic heat dissipation of metal nanoantennas.
Coherent phonon dynamics in semiconductors
Ultrafast pump-probe spectroscopy is a powerful experimental technique for studying light-matter interaction and ultrafast dynamics in solids. In many semiconductors, phonons (quantized lattice vibrations) with both temporal and spatial coherence can be generated conveniently under ultrafast laser irradiation. When a strong pump pulse excites coherent phonons, they modulate the refractive index, and thus the reflectivity, of the material, so the time-dependent phonon dynamics can be detected by a delayed probe pulse. The generation and detection of coherent phonons provide an opportunity to understand the fundamental physics of light-matter interaction, as well as a path to manipulate other physical processes, with applications such as sound amplification by stimulated emission of radiation (SASER), phonon mode manipulation, ultrafast phase switching, superconductivity enhancement, and manipulation of magnetism [1-5]. This thesis presents a series of time-resolved studies of coherent phonons in three semiconductor systems: bulk CdSe, a Bi₂Te₃/Sb₂Te₃ superlattice, and a GaAs/AlAs superlattice. In bulk CdSe, a material extensively studied for quantum dot photoelectronics, coherent phonons serve as the probe for reversible ultrafast melting. In the Bi₂Te₃/Sb₂Te₃ superlattice, a material system used for thermoelectrics, coherent thermal phonons are excited directly and are found to be selectively filtered in the superlattice structure compared with the bulk materials. In the GaAs/AlAs superlattice, a quantum well structure used for photodetectors and lasers, a strong quantum coherent coupling among different phonon modes is observed. A similar coherent coupling between photons and phonons has been used to induce and enhance superconductivity [6,7] and to mimic a magnetic field [8], but direct observation of nonlinear phonon coupling is rare. Moreover, a novel technique based on surface plasmon resonance has been implemented in the pump-probe spectrometer to improve detection efficiency.
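As an illustration of how a coherent-phonon signature is typically read out of such a measurement, the short sketch below simulates a transient reflectivity trace (an electronic decay plus a damped oscillation), removes the slowly varying background, and extracts the oscillation frequency by Fourier transform. The 2 THz frequency, decay times, and amplitudes are arbitrary illustrative values, not results from this thesis.

```python
# Illustrative analysis of a pump-probe transient reflectivity trace:
# extract a coherent-phonon frequency from a damped oscillation riding
# on an electronic background. All parameters are made up.
import numpy as np

dt = 20e-15                                   # 20 fs probe-delay step
t = np.arange(0, 20e-12, dt)                  # 0-20 ps delay scan
f_phonon = 2.0e12                             # hypothetical 2 THz phonon
signal = (1e-4 * np.exp(-t / 5e-12)                       # electronic decay
          + 3e-5 * np.exp(-t / 8e-12) * np.cos(2 * np.pi * f_phonon * t))
signal += 2e-6 * np.random.default_rng(1).normal(size=t.size)   # noise

# Remove the slowly varying background with a polynomial fit (time in ps
# for conditioning), then FFT the oscillatory residue.
t_ps = t * 1e12
background = np.polyval(np.polyfit(t_ps, signal, 6), t_ps)
osc = signal - background
freqs = np.fft.rfftfreq(t.size, d=dt)
spectrum = np.abs(np.fft.rfft(osc * np.hanning(t.size)))
mask = freqs > 0.5e12                         # ignore residual low-f leakage
peak = freqs[mask][spectrum[mask].argmax()]
print(f"phonon peak near {peak / 1e12:.2f} THz")   # ~2 THz
```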
Program synthesis using statistical models and logical reasoning
Complex APIs in new frameworks (Spark, R, TensorFlow, etc.) have imposed steep learning curves on everyone, especially people with limited programming backgrounds. For instance, due to the messy nature of data in different application domains, data scientists spend close to 80% of their time on data wrangling tasks, which are considered the "janitor work" of data science. Similarly, software engineers spend hours or even days learning how to use APIs through official documentation or examples from online forums. Program synthesis has the potential to automate complex tasks that involve API usage by providing powerful search algorithms that look for executable programs satisfying a given specification (input-output examples, partial programs, formal specs, etc.). However, the biggest barrier to a practical synthesizer is the size of the search space, which grows strikingly fast with the complexity of the programs and the size of the targeted APIs. To address this issue, this dissertation focuses on developing algorithms that push the frontiers of program synthesis. First, we propose a type-directed graph reachability algorithm in SyPet, a synthesizer for assembling programs from complex APIs. Second, we show how to combine enumerative search with lightweight constraint-based deduction in Morpheus, a synthesizer for automating real-world data wrangling tasks from input-output examples. Finally, we generalize the previous approaches to develop a novel conflict-driven synthesis algorithm that can learn from past mistakes.
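To make the enumerative-search-plus-deduction idea concrete, here is a toy synthesizer over a small invented list-manipulation DSL: it enumerates operator pipelines in order of increasing length, prunes candidates with a cheap length-based deduction, and returns the first pipeline consistent with the input-output examples. This is only a sketch of the general technique; it is not SyPet or Morpheus code, and the DSL and pruning rule are illustrative assumptions.

```python
# Toy enumerative synthesis from input-output examples with a simple
# deduction-style prune. DSL and pruning rule are invented for illustration.
from itertools import product

# Tiny DSL: unary list -> list operators composed into a pipeline.
DSL = {
    "reverse": lambda xs: list(reversed(xs)),
    "sort":    lambda xs: sorted(xs),
    "tail":    lambda xs: xs[1:],
    "dedup":   lambda xs: sorted(set(xs)),
}

def consistent(program, examples):
    """Run the candidate pipeline on every example input and check the output."""
    for inp, out in examples:
        val = inp
        for op in program:
            val = DSL[op](val)
        if val != out:
            return False
    return True

def plausible(program, examples):
    """Cheap deduction: 'tail' strictly shrinks a list, so a pipeline using it
    more times than the input/output length gap allows cannot succeed."""
    drops = sum(op == "tail" for op in program)
    return all(len(inp) - drops >= len(out) for inp, out in examples)

def synthesize(examples, max_depth=3):
    """Enumerate pipelines in order of increasing length, pruning early."""
    for depth in range(1, max_depth + 1):
        for program in product(DSL, repeat=depth):
            if plausible(program, examples) and consistent(program, examples):
                return program
    return None

# Example spec: keep distinct elements, largest first.
examples = [([3, 1, 3, 2], [3, 2, 1]), ([5, 5, 4], [5, 4])]
print(synthesize(examples))   # e.g. ('dedup', 'reverse')
```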
Heavy fermions and two loop electroweak corrections to
Applying the effective Lagrangian method and the on-shell scheme, we analyze the electroweak corrections to the rare decay from some special two-loop diagrams in which a closed heavy fermion loop is attached to the virtual charged gauge bosons or Higgs. In the decoupling limit, where the virtual fermions in the inner loop are much heavier than the electroweak scale, we verify explicitly that the final results satisfy the decoupling theorem when the interactions among the Higgs and the heavy fermions do not contain nondecoupling couplings. Adopting universal assumptions on the relevant couplings and mass spectrum of new physics, we find that the relative corrections from those two-loop diagrams to the SM theoretical prediction for the branching ratio of this decay can reach 5% when the energy scale of new physics is GeV.
Comment: 30 pages, 4 figures
Evolution of Iron K Line Emission in the Black Hole Candidate GX 339-4
GX 339-4 was regularly monitored with RXTE during a period in 1999 when its X-ray flux in the 3-20 keV band decreased significantly, as the source settled into the "off state". Our spectral analysis revealed the presence of a prominent iron K line in the observed spectrum of the source for all observations. The line shows an interesting evolution: it is centered at 6.4 keV at higher fluxes but shifts to 6.7 keV at lower fluxes. The equivalent width of the line appears to increase significantly toward lower fluxes, although it is likely to be sensitive to calibration uncertainties. While fluorescent emission of neutral or mildly ionized iron atoms in the accretion disk can perhaps account for the 6.4 keV line, as is often invoked for black hole candidates, it seems difficult to explain the 6.7 keV line with this mechanism, because the disk should be less ionized at lower fluxes (unless its density changes drastically). On the other hand, the 6.7 keV line might be due to recombination cascades of hydrogen- or helium-like iron ions in an optically thin, highly ionized plasma. We discuss the results in the context of proposed accretion models.
Comment: 18 pages, 2 figures, accepted for publication in the ApJ (May 10, 2001 issue)
Emission of photon echoes in a strongly scattering medium
We observe two- and three-pulse photon echo emission from a scattering powder obtained by grinding a Pr³⁺:Y₂SiO₅ rare-earth-doped single crystal. We show that the collective emission is coherently constructed over several grains. A well-defined atomic coherence can therefore be created between randomly placed particles. Observation of photon echoes in powders, as opposed to bulk materials, opens the way to faster material development. More generally, time-domain resonant four-wave mixing offers an attractive approach to investigating coherent propagation in scattering media.
Bridging adaptive estimation and control with modern machine learning: a quorum sensing inspired algorithm for dynamic clustering
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 89-92).
Quorum sensing is a decentralized biological process by which a community of bacterial cells with no global awareness can coordinate their functional behaviors based only on local decisions and cell-medium interactions. This thesis draws inspiration from quorum sensing to study the data clustering problem, in both the time-invariant and the time-varying cases. Borrowing ideas from both adaptive estimation and control and modern machine learning, we propose an algorithm that estimates an "influence radius" for each cell (each cell represents a single data point), which is similar to kernel tuning in classical machine learning. We then use knowledge of local connectivity and neighborhoods to cluster the data into multiple colonies simultaneously. The entire process consists of two steps: first, the algorithm spots sparsely distributed "core cells" and determines an influence radius for each cell; then, associated "influence molecules" are secreted from the core cells and diffuse into the whole environment. The density distribution in the environment eventually determines the colony associated with each cell. We integrate the two steps into a dynamic process, which gives the algorithm flexibility for problems with time-varying data, such as dynamic grouping of swarms of robots. Finally, we demonstrate the algorithm on several applications, including benchmark dataset testing, allele information matching, and dynamic system grouping and identification. We hope our algorithm can shed light on the idea that biological inspiration can help design computational algorithms, as it provides a natural bond bridging adaptive estimation and control with modern machine learning.
by Feng Tan. S.M.
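The following is a minimal, static sketch of one reading of the two-step process described above: (1) give every point ("cell") a local influence radius and mark locally dense points as core cells, then (2) let each colony of core cells deposit distance-decaying "influence" and assign every point to the colony whose accumulated influence is largest. The radius rule, the decay kernel, and the function names are assumptions for illustration, not the thesis' actual algorithm or code.

```python
# Illustrative quorum-sensing-style clustering sketch (static case only).
import numpy as np

def quorum_cluster(points, k=5, core_quantile=0.5):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

    # Step 1: influence radius = distance to the k-th nearest neighbour
    # (a kernel-bandwidth-style tuning); core cells are the locally dense
    # points whose radius falls below the chosen quantile.
    radius = np.sort(d, axis=1)[:, k]
    core = np.flatnonzero(radius <= np.quantile(radius, core_quantile))

    # Group core cells into colonies: two cores share a colony if they lie
    # within each other's influence radii (simple union-find over cores).
    parent = {c: c for c in core}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in core:
        for j in core:
            if d[i, j] <= radius[i] + radius[j]:
                parent[find(i)] = find(j)
    colonies = {}
    for c in core:
        colonies.setdefault(find(c), []).append(c)

    # Step 2: each colony's cores secrete "influence molecules" that decay
    # with distance; every point joins the colony with the largest influence.
    labels = np.empty(n, dtype=int)
    for p in range(n):
        scores = [sum(np.exp(-d[p, c] / radius[c]) for c in members)
                  for members in colonies.values()]
        labels[p] = int(np.argmax(scores))
    return labels

# Tiny usage example with two well-separated blobs.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
print(quorum_cluster(pts))   # first 30 labels should differ from the last 30
```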
Driving segments analysis for energy and environmental impacts of worsening traffic
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 143-145).
During the last two decades, traffic congestion in the U.S. has increased from 30% to 67% of peak-period travel. Further, current research shows that measures taken within transportation systems, such as adding capacity, improving operations, and managing demand, are not enough to keep congestion from growing worse. With worsening traffic, vehicles' fuel consumption and pollutant emissions will inevitably increase. This thesis therefore aims to quantitatively evaluate the energy and environmental impacts of worsening traffic on individual vehicles and on the U.S. light-duty vehicle fleet, as well as to design feasible measures beyond transportation systems to offset these impacts. The fuel consumption and emissions of different vehicle types under different driving situations provide the basis for analyzing the energy and environmental impacts of worsening traffic. This thesis defines the concept of "driving segments" to represent all possible driving situations, which consist of vehicle speed, operation patterns, and road types. For each vehicle type, its fuel consumption and emissions in different driving segments can be developed into a matrix with ADVISOR 2004, the vehicle simulation tool. Combining the driving-segments vehicle performance matrices with a model for traffic congestion, the energy and environmental impacts of worsening traffic on individual vehicles can be examined. Based on these impacts, this thesis compares the performance of different vehicle types under both today's and tomorrow's traffic situations. Meanwhile, the on-road fuel economy of each vehicle type has also been calculated to update EPA's fuel economy rating by taking worsening traffic into consideration. Combining the driving-segments vehicle performance matrices with a set of models for fleet population, vehicle technology, driving behavior, and traffic congestion, the energy and environmental impacts of worsening traffic on the U.S. light-duty vehicle fleet can also be examined. Through sensitivity analysis, this thesis investigates the effects of altering vehicle choice, developing vehicle technology, and changing driving behavior on offsetting the fuel consumption and emissions of the U.S. light-duty vehicle fleet caused by worsening traffic through 2030. It is concluded that promoting the market share of advanced vehicle technologies (mainly hybrids) is the most effective and most feasible method.
by Wen Feng. S.M.
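The sketch below illustrates the "driving segments" bookkeeping the abstract describes: a per-segment fuel-consumption matrix (one row per vehicle type, one column per speed/operation/road-type bin) is weighted by the share of driving that falls in each segment under a given congestion scenario. All numbers are made up for illustration; the thesis derives its matrices with ADVISOR 2004 and its own congestion model.

```python
# Illustrative driving-segments calculation with hypothetical numbers.
import numpy as np

vehicle_types = ["conventional", "hybrid"]
segments = ["free-flow highway", "congested highway",
            "free-flow urban", "congested urban"]

# Hypothetical fuel consumption per segment, in litres per 100 km.
fuel = np.array([
    [7.5, 10.5, 9.0, 14.0],   # conventional
    [6.0,  6.5, 5.0,  6.0],   # hybrid
])

# Hypothetical shares of vehicle-kilometres per segment, today vs. a more
# congested future; each vector sums to 1.
shares_today  = np.array([0.35, 0.15, 0.35, 0.15])
shares_future = np.array([0.25, 0.25, 0.25, 0.25])

for name, row in zip(vehicle_types, fuel):
    today = row @ shares_today
    future = row @ shares_future
    print(f"{name}: {today:.2f} -> {future:.2f} L/100km "
          f"({100 * (future / today - 1):+.1f}% from worsening traffic)")
```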