108 research outputs found
Quantitative analysis of the cytokine-mediated apoptosis-survival cell decision process
Thesis (Ph.D.)--Massachusetts Institute of Technology, Biological Engineering Division, 2005. Includes bibliographical references (p. 119-134). How do cells sense their environment and decide whether to live or to die? This question has drawn considerable interest since 1972, when it was first discovered that cells have an intrinsic ability to self-destruct through a process called apoptosis. Since then, apoptosis has been shown to play a critical role in both normal physiology and disease. In addition, many of the basic molecular mechanisms that control apoptosis have been revealed. Yet despite the known list of interactions and regulators, it remains difficult to inspect the network of apoptosis-related proteins and predict how cells will behave. The challenge is even greater when one considers interactions with other networks that are anti-apoptotic, such as growth-factor networks. In this thesis, we develop an approach to measure, analyze, and predict how complex intracellular signaling networks transduce extracellular stimuli into cellular fates. This approach entails three interrelated aims: 1) to develop high-throughput, quantitative techniques that measure key nodes in the intracellular network; 2) to characterize the quantitative changes in network state and cell behavior by exposing cells to diverse fate-changing stimuli; and 3) to use data-driven modeling approaches that analyze large signaling-response datasets to suggest new biological hypotheses. These aims were focused on an apoptosis-survival cell-fate decision process controlled by one prodeath cytokine, tumor necrosis factor (TNF), and two prosurvival stimuli, epidermal growth factor (EGF) and insulin. We first developed radioactive- and fluorescence-based high-throughput assays for quantifying activity changes in the kinases that catalyze key phosphorylation events downstream of TNF, EGF, and insulin. By combining these assays with techniques measuring other important posttranslational modifications, we then compiled over 7000 individual protein measurements of the cytokine-induced network. The signaling measurements were combined with over 1400 measurements of apoptotic responses by using partial least squares (PLS) regression approaches. These signaling-apoptosis regression models predicted apoptotic responses from cytokine-induced signaling patterns alone. Furthermore, the models helped to reveal the importance of previously unrecognized autocrine cytokines in controlling cell fate. This thesis has therefore shown how cell decisions, like apoptosis-versus-survival, can be understood and predicted from the quantitative information contained in the upstream signaling network. by Kevin A. Janes. Ph.D.
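The core computational step described above, regressing measured apoptotic responses onto measured signaling patterns with partial least squares, can be sketched in a few lines. The sketch below uses scikit-learn's PLSRegression on randomly generated placeholder matrices; the array sizes, two-component choice, and cross-validation scheme are illustrative assumptions, not the thesis's actual pipeline.

```python
# Minimal sketch of a partial least squares (PLS) signaling -> apoptosis model.
# X and Y are hypothetical placeholders (conditions x signaling metrics, and
# conditions x apoptotic readouts); not the thesis's actual data or settings.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 19))   # e.g., 10 cytokine conditions x 19 signaling metrics
Y = rng.normal(size=(10, 4))    # e.g., 10 conditions x 4 apoptotic readouts

pls = PLSRegression(n_components=2, scale=True)  # column z-scoring is built in
pls.fit(X, Y)

# Leave-one-condition-out prediction of apoptotic responses from signaling alone
Y_pred = cross_val_predict(pls, X, Y, cv=LeaveOneOut())
r2 = 1 - ((Y - Y_pred) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
print(f"cross-validated R^2: {r2:.2f}")  # near zero here because the data are random
```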
Cytokine-Induced Signaling Networks Prioritize Dynamic Range over Signal Strength
Signaling networks respond to diverse stimuli, but how the state of the signaling network is relayed to downstream cellular responses is unclear. We modeled how incremental activation of signaling molecules is transmitted to control apoptosis as a function of signal strength and dynamic range. A linear relationship between signal input and response output, with the dynamic range of signaling molecules uniformly distributed across activation states, most accurately predicted cellular responses. When nonlinearized signals with compressed dynamic range relay network activation to apoptosis, we observed catastrophic, stimulus-specific prediction failures. We develop a general computational technique, "model-breakpoint analysis," to analyze the mechanism of these failures, identifying new time- and stimulus-specific roles for Akt, ERK, and MK2 kinase activity in apoptosis, which were experimentally verified. Dynamic range is rarely measured in signal-transduction studies, but our experiments using model-breakpoint analysis suggest it may be a greater determinant of cell fate than measured signal strength.
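The intuition behind the dynamic-range result can be illustrated with a toy calculation: compressing a continuous signal onto a few activation states degrades how well a linear model recovers the response. The sketch below is not the paper's model-breakpoint analysis; the data, quantization scheme, and linear fit are placeholders meant only to show the effect.

```python
# Toy illustration (not the paper's model-breakpoint analysis): compressing a
# signal's dynamic range by coarse quantization degrades a linear signal->response fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
signal = rng.uniform(0, 1, size=(200, 3))                  # hypothetical activation levels
response = signal @ np.array([0.5, 1.0, -0.8]) + rng.normal(0, 0.05, 200)

def compress(x, n_states):
    """Collapse a [0, 1] signal onto n_states evenly spaced activation levels."""
    return np.round(x * (n_states - 1)) / (n_states - 1)

for n_states in (2, 4, 16):
    x = compress(signal, n_states)
    score = cross_val_score(LinearRegression(), x, response, cv=5).mean()
    print(f"{n_states:>2} activation states: mean CV R^2 = {score:.3f}")
```

With only two activation states the fit deteriorates markedly, mirroring the paper's point that compressed dynamic range, not raw signal strength, limits predictive power.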
A Gyrochronology and Microvariability Survey of the Milky Way's Older Stars Using Kepler's Two-Wheels Program
Even with the diminished precision possible with only two reaction wheels, the Kepler spacecraft can obtain mmag-level, time-resolved photometry of tens of thousands of sources. The presence of such a rich, large data set could be transformative for stellar astronomy. In this white paper, we discuss how rotation periods for a large ensemble of single and binary main-sequence dwarfs can yield a quantitative understanding of the evolution of stellar spin-down over time. This will allow us to calibrate rotation-based ages beyond ~1 Gyr, which is the oldest benchmark that exists today apart from the Sun. Measurement of rotation periods of M dwarfs past the fully convective boundary will enable extension of gyrochronology to the end of the stellar main sequence, yielding precise ages (σ ~ 10%) for the vast majority of nearby stars. It will also help set constraints on the angular momentum evolution and magnetic field generation in these stars. Our Kepler-based study would be supported by a suite of ongoing and future ground-based observations. Finally, we briefly discuss two ancillary science cases, detection of long-period low-mass eclipsing binaries and microvariability in white dwarfs and hot subdwarf B stars, that the Kepler Two-Wheels Program would facilitate. Comment: Kepler white paper
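Rotation periods of the kind discussed above are typically extracted from unevenly sampled light curves with a periodogram. The sketch below uses astropy's Lomb-Scargle implementation on a simulated spotted-star light curve; the period, amplitude, and sampling are invented, and the white paper does not prescribe this particular method.

```python
# Sketch: recover a rotation period from a noisy, unevenly sampled light curve
# with a Lomb-Scargle periodogram (illustrative only; not the white paper's pipeline).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 80.0, 2000))          # days of uneven sampling
p_true = 17.3                                    # hypothetical rotation period, days
flux = 1.0 + 0.002 * np.sin(2 * np.pi * t / p_true) + rng.normal(0, 0.001, t.size)

frequency, power = LombScargle(t, flux).autopower(minimum_frequency=1 / 100.0,
                                                  maximum_frequency=1 / 1.0)
p_best = 1 / frequency[np.argmax(power)]
print(f"recovered rotation period: {p_best:.1f} d (true {p_true} d)")
```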
TNF-insulin crosstalk at the transcription factor GATA6 is revealed by a model that links signaling and transcriptomic data tensors
Signal-transduction networks coordinate transcriptional programs activated by diverse extracellular stimuli, such as growth factors and cytokines. Cells receive multiple stimuli simultaneously, and mapping how activation of the integrated signaling network affects gene expression is a challenge. We stimulated colon adenocarcinoma cells with various combinations of the cytokine tumor necrosis factor (TNF) and the growth factors insulin and epidermal growth factor (EGF) to investigate signal integration and transcriptional crosstalk. We quantitatively linked the proteomic and transcriptomic data sets by implementing a structured computational approach called tensor partial least squares regression. This statistical model accurately predicted transcriptional signatures from signaling arising from single and combined stimuli and also predicted time-dependent contributions of signaling events. Specifically, the model predicted that an early-phase, Akt-associated signal downstream of insulin repressed a set of transcripts induced by TNF. Through bioinformatics and cell-based experiments, we identified the Akt-repressed signal as glycogen synthase kinase 3 (GSK3)–catalyzed phosphorylation of Ser37 on the long form of the transcription factor GATA6. Phosphorylation of GATA6 on Ser37 promoted its degradation, thereby preventing GATA6 from repressing transcripts that are induced by TNF and attenuated by insulin. Our analysis showed that predictive tensor modeling of proteomic and transcriptomic data sets can uncover pathway crosstalk that produces specific patterns of gene expression in cells receiving multiple stimuli.
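The basic idea of relating a three-way signaling data tensor (conditions × signals × time) to transcript measurements can be approximated by unfolding the tensor into a matrix and applying ordinary PLS. The sketch below shows only that unfolding idea with made-up dimensions and random data; it is not the authors' tensor PLS implementation, which retains the multiway structure rather than flattening it.

```python
# Sketch of the "unfold then regress" approximation to linking a signaling tensor
# to transcriptomic data. The paper uses true tensor PLS; here a three-way array
# (conditions x signals x time) is flattened and fit with ordinary PLS.
# Dimensions and data are hypothetical placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_conditions, n_signals, n_times, n_transcripts = 10, 12, 6, 50

signaling = rng.normal(size=(n_conditions, n_signals, n_times))   # proteomic tensor
transcripts = rng.normal(size=(n_conditions, n_transcripts))      # transcriptomic matrix

# Unfold the tensor so each condition becomes one row of signal-by-time features
X = signaling.reshape(n_conditions, n_signals * n_times)

model = PLSRegression(n_components=3).fit(X, transcripts)
predicted = model.predict(X)
print("predicted transcript matrix shape:", predicted.shape)
```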
A biological approach to computational models of proteomic networks.
Computational modeling is useful as a means to assemble and test what we know about proteins and networks. Models can help address key questions about the measurement, definition and function of proteomic networks. Here, we place these biological questions at the forefront in reviewing the computational strategies that are available to analyze proteomic networks. Recent examples illustrate how models can extract more information from proteomic data, test possible interactions between network proteins and link networks to cellular behavior. No single model can achieve all these goals, however, which is why it is critical to prioritize biological questions before specifying a particular modeling approach.

Introduction
Our current understanding of the proteins, interactions and pathways that comprise signaling networks is detailed, yet it remains incomplete. Recent experimental techniques for unraveling intricate signaling networks have become increasingly quantitative and multiplex. New approaches are now needed to compile the existing quantitative biological knowledge and to maximize the information extracted from large-scale signaling and proteomic datasets. Computational models formalize a complex biological or experimental process mathematically, which can be useful for assembling and analyzing quantitative data. Modeling is thus critical for fields such as proteomics, genomics and systems biology. As a discipline, biology thrives on clarity through consensus (take, for instance, the central dogma). To model biological networks, however, we and others have argued against a consensus 'one size fits all' philosophy, favoring instead a spectrum of computational techniques.

Anchoring model sophistication with experimental data
Proteomics research is clearly directed at uncovering more biological detail within networks: new proteins, new interactions, new complexes. How does increasing the level of model detail decrease believability? With model detail come parameters. In a model, these parameters might define a signaling protein's starting concentration, rate of turnover or diffusivity through the cytoplasm. Model parameters are frequently unknown and must therefore be estimated from data, which reduces believability. Importantly, the number of required parameters multiplies as more biological detail is added.

Network-measurement models
Modeling has played an increasingly important role in the measurement of proteomes and networks. Measurement models are a useful way to condense different methodological considerations into a single quantitative description of an experiment. In the field of 'global' proteomics, no other experimental method has had as significant an impact as mass spectrometry (MS). (Figure: Two distinct perspectives on computational models of proteomic networks.) One influential measurement model estimates the presence of proteins in a complex starting mixture [11]; it used prior MS measurements as training data to fit an initial frequency distribution of peptides whose assignments were correct and incorrect. The initial distributions were used as prior information in a model that calculates the probability of correct peptide assignment given an MS spectrum. Running the model through all of the spectra in an MS experiment generates a new distribution of (probably) correct and (probably) incorrect peptides, which can update the prior information for the next iteration through the model. In this way, the model learns the most likely peptide assignments from the spectra itself, with initialization provided by a high-quality training dataset. The resulting peptide probabilities can then be fed into downstream models that calculate protein assignments from a set of likely peptides.

Measurement models are also useful for analyzing data quality itself. Often, quality is synonymous with information. Gunsalus et al. [14] selected for high-quality proteomic data by calculating the intersection of large-scale phenotypic, transcriptional and interaction datasets in Caenorhabditis elegans. Using the overlap among the measured networks, the Gunsalus et al. model was shown to be enriched in proteins sharing common biological functions. Gaudet et al. [15] used the predictive ability of a model to quantify network information content directly from a proteomic measurement set. A key conclusion from this work was the importance of measurement combinations. Different types of assays (kinase activity assays, quantitative western blotting, etc.) used over a range of time points were critical to accurately predicting the response of cells treated with multiple experimental stimuli. As quantitative MS-based experiments evolve, measurement models will need to keep pace.

Network-definition models
An important goal for computational models is to define mathematically the proteins and pathways that constitute a signaling network. Modeling strategies for addressing the question of network definition can be subdivided into two categories: reconstruction models, which build networks from previously reported mechanisms; and inference models, which deduce network structure from large-scale datasets. What can be learned from these complex models founded on highly parameterized systems of differential equations? Many biological networks lack the in-depth mechanistic understanding needed for a plausible network reconstruction. With these networks, inferential modeling approaches can be used to suggest connections between molecules.

Network-function models
Proteomic networks are important because they ultimately control cellular functions. Diverse extracellular stimuli converge upon a common intracellular network, which can mediate an array of cellular responses. One study [38] used decision-tree modeling (Box 1) to characterize cell migration based on the phosphorylation levels of five key intracellular proteins. The resulting 'branches' of the decision tree identified the sequence of conditional molecular statements that best predicted low, medium or high cellular speed; for instance, IF extracellular-regulated kinase phosphorylation is low AND IF myosin light-chain phosphorylation is high THEN migration speed is high. It would be interesting to use this approach in larger networks while constraining decision-tree branchpoints based on the approximate positions of molecules in the network (first membrane transducers, then initiators, then effectors, etc.). Prediction of cellular functions can also be achieved more quantitatively by training models on measurements of the upstream signaling network. We have used partial least squares modeling to predict 12 measured apoptotic responses from 19 time-dependent signaling profiles [39]. This particular modeling approach calculates the most informative combinations of signals that together predict cellular functions. The combinations of stress, prodeath and prosurvival signals identified by the model were consistent with known mechanisms but could not have been predicted by inspection. Focusing on intracellular signals with recognized but complex roles in cell death thus allowed the model to identify new mechanisms of apoptosis control within the currently understood network. Very recently, we have found that this approach to modeling network function could effectively capture cytokine-induced apoptotic responses that differ between diverse cell types (K Miller-Jensen, KA Janes, DA Lauffenburger, unpublished data). This suggests that different cell types might share a common network that converts signals into cellular responses. If true, then reconstruction models of this shared network could be applied broadly across cell types.

Conclusions
The measurement, definition and function of proteomic networks must be addressed in such a way that models complement experiment. These networks remain too unconstrained to study by mathematics alone yet have become too complex to understand completely by intuition. We believe that insights here will come about by approaching models through biological questions. In line with this view, our review has focused on less detailed models with strong foundations in data.

Update
Recent work has provided new examples of how existing reconstruction models can be further explored and refined to aid biological discovery. Cheong et al. [53] used an NF-κB signaling model.
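The IF/THEN migration-speed example in the review maps directly onto a classification tree. The sketch below fits scikit-learn's DecisionTreeClassifier to invented phosphoprotein levels governed by a toy rule; the feature names, thresholds, and data are hypothetical and not taken from the cited study.

```python
# Sketch of the decision-tree idea described above: predict a migration-speed
# class (low/medium/high) from phosphoprotein levels. Data, feature names, and
# thresholds are invented for illustration; this is not the cited study's model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
features = ["pERK", "pMLC", "pPKCdelta", "pFAK", "pPaxillin"]  # hypothetical phospho-signals
X = rng.uniform(0, 1, size=(300, len(features)))

# Toy rule in the spirit of "IF pERK low AND pMLC high THEN speed high"
speed = np.where((X[:, 0] < 0.4) & (X[:, 1] > 0.6), "high",
                 np.where(X[:, 0] > 0.7, "low", "medium"))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, speed)
print(export_text(tree, feature_names=features))  # human-readable IF/THEN branches
```

The printed branches recover the conditional statements that generated the labels, which is the sense in which such trees make network-to-function logic readable.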
Using a complete spectroscopic survey to find red quasars and test the KX method
We present an investigation of quasar colour-redshift parameter space in order to search for radio-quiet red quasars and to test the ability of a variant of the KX quasar selection method to detect quasars over a full range of colour without bias. This is achieved by combining IRIS2 imaging with the complete Fornax Cluster Spectroscopic Survey to probe parameter space unavailable to other surveys. We construct a new sample of 69 quasars with measured bJ - K colours. We show that the colour distribution of these quasars is significantly different from that of the Large Bright Quasar Survey's quasars at a 99.9% confidence level. We find 11 of our sample of 69 quasars have significantly red colours (bJ - K >= 3.5) and from this, we estimate the red quasar fraction of the K <= 18.4 quasar population to be 31%, and robustly constrain it to be at least 22%. We show that the KX method variant used here is more effective than the UVX selection method, and has less colour bias than optical colour-colour selection methods. Comment: 11 pages, 14 figures, accepted for publication in MNRAS
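The abstract reports that the two colour distributions differ at the 99.9% confidence level but does not state which statistic was used; a two-sample Kolmogorov-Smirnov test is one standard way to make such a comparison. The sketch below runs that test on simulated bJ - K samples and applies the bJ - K >= 3.5 red cut; the sample parameters are placeholders, not survey data.

```python
# Illustrative two-sample comparison of quasar colour distributions (the exact
# test used in the paper is not named in the abstract; a KS test is one common
# choice). The bJ - K samples below are simulated placeholders, not survey data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
bj_k_sample = rng.normal(2.6, 0.9, 69)     # stand-in for the 69 IRIS2/Fornax quasars
bj_k_lbqs = rng.normal(2.2, 0.6, 600)      # stand-in for an LBQS comparison sample

stat, p_value = ks_2samp(bj_k_sample, bj_k_lbqs)
red_fraction = np.mean(bj_k_sample >= 3.5)  # fraction redward of the bJ - K >= 3.5 cut
print(f"KS statistic = {stat:.3f}, p = {p_value:.4f}, red fraction = {red_fraction:.2f}")
```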
Dual expression and anatomy lines allow simultaneous visualization of gene expression and anatomy
Studying the developmental genetics of plant organs requires following gene expression in specific tissues. To facilitate this, we have developed the Dual Expression Anatomy Lines (DEAL), which incorporate a red plasma membrane marker alongside a fluorescent reporter for a gene of interest in the same vector. Here, we adapted the GreenGate cloning vectors to create two destination vectors showing strong marking of cell membranes in either the whole root or specifically in the lateral roots. This system can also be used in both embryos and whole seedlings. As proof of concept, we follow both gene expression and anatomy in Arabidopsis (Arabidopsis thaliana) during lateral root organogenesis over a period of more than 24 h. Coupled with the development of a flow cell and perfusion system, we also follow changes in activity of the DII auxin sensor following application of auxin.
Representation of genomic intratumor heterogeneity in multi-region non-small cell lung cancer patient-derived xenograft models
Patient-derived xenograft (PDX) models are widely used in cancer research. To investigate the genomic fidelity of non-small cell lung cancer PDX models, we established 48 PDX models from 22 patients enrolled in the TRACERx study. Multi-region tumor sampling increased successful PDX engraftment, and most models were histologically similar to their parent tumor. Whole-exome sequencing enabled comparison of tumors and PDX models, and we provide an adapted mouse reference genome for improved removal of NOD scid gamma (NSG) mouse-derived reads from sequencing data. PDX model establishment caused a genomic bottleneck, with models often representing a single tumor subclone. While distinct tumor subclones were represented in independent models from the same tumor, individual PDX models did not fully recapitulate intratumor heterogeneity. Ongoing genomic evolution in mice contributed modestly to the genomic distance between tumors and PDX models. Our study highlights the importance of considering primary tumor heterogeneity when using PDX models and emphasizes the benefit of comprehensive tumor sampling.
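Removing mouse-derived reads from PDX sequencing data, as mentioned above, is often done by comparing how well each read aligns to the human versus the mouse reference. The sketch below shows that disambiguation idea with pysam; the file names, the reliance on the aligner's AS (alignment score) tag, and the assumption of one primary alignment per read per BAM are illustrative assumptions, not the study's actual pipeline, which instead uses an adapted mouse reference genome.

```python
# Sketch of one common way to flag mouse-derived reads in PDX sequencing data:
# compare each read's alignment score (AS tag) against human and mouse references.
# Illustration only; file names are placeholders and both BAMs are assumed to
# contain primary alignments of the same read set.
import pysam

def best_scores(path):
    """Map read name -> best primary-alignment score from a BAM file."""
    scores = {}
    with pysam.AlignmentFile(path, "rb") as bam:
        for read in bam:
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            score = read.get_tag("AS")
            scores[read.query_name] = max(score, scores.get(read.query_name, score))
    return scores

human = best_scores("sample_vs_human.bam")   # placeholder file names
mouse = best_scores("sample_vs_mouse.bam")

mouse_like = {name for name, s in mouse.items() if s > human.get(name, float("-inf"))}
print(f"{len(mouse_like)} reads align better to mouse and would be removed")
```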
- …