Cellular Decision Making by Non-Integrative Processing of TLR Inputs
Cells receive a multitude of signals from the environment, but how they process simultaneous signaling inputs is not well understood. Response to infection, for example, involves parallel activation of multiple Toll-like receptors (TLRs) that converge on the nuclear factor κB (NF-κB) pathway. Although we increasingly understand inflammatory responses to isolated signals, it is not clear how cells process multiple signals that co-occur in physiological settings. We therefore examined a bacterial infection scenario involving co-stimulation of TLR4 and TLR2. Independent stimulation of these receptors induced distinct NF-κB dynamic profiles. Surprisingly, under co-stimulation, single cells continued to show ligand-specific dynamic responses characteristic of TLR2 or TLR4 signaling rather than a mixed response, a cellular decision that we term "non-integrative" processing. Iterating between modeling and microfluidic experiments revealed that non-integrative processing arises through the interplay of switch-like NF-κB activation, receptor-specific processing timescales, cell-to-cell variability, and TLR cross-tolerance mediated by multilayer negative feedback.
Search algorithms as a framework for the optimization of drug combinations
Combination therapies are often needed for effective clinical outcomes in the
management of complex diseases, but presently they are generally based on
empirical clinical experience. Here we suggest a novel application of search
algorithms, originally developed for digital communication, modified to
optimize combinations of therapeutic interventions. In biological experiments
measuring the restoration of the decline with age in heart function and
exercise capacity in Drosophila melanogaster, we found that search algorithms
correctly identified optimal combinations of four drugs with only one third of
the tests performed in a fully factorial search. In experiments identifying
combinations of three doses of up to six drugs for selective killing of human
cancer cells, search algorithms resulted in a highly significant enrichment of
selective combinations compared with random searches. In simulations using a
network model of cell death, we found that the search algorithms identified the
optimal combinations of 6-9 interventions in 80-90% of tests, compared with
15-30% for an equivalent random search. These findings suggest that modified
search algorithms from information theory have the potential to enhance the
discovery of novel therapeutic drug combinations. This report also helps to
frame a biomedical problem that will benefit from an interdisciplinary effort
and suggests a general strategy for its solution.
Comment: 36 pages, 10 figures, revised version
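As a simplified illustration of the idea (not the paper's actual algorithm, which is adapted from digital-communication search techniques), a greedy forward-selection search over drug combinations evaluates far fewer conditions than a full factorial screen. The response surface below is entirely synthetic:

```python
import itertools
import random

random.seed(0)
N_DRUGS = 6

# Hypothetical response surface: additive single-drug effects plus
# random pairwise interactions, standing in for a real biological assay.
effects = [random.uniform(-1, 1) for _ in range(N_DRUGS)]
interactions = {(i, j): random.uniform(-0.5, 0.5)
                for i, j in itertools.combinations(range(N_DRUGS), 2)}

def response(combo):
    """Simulated outcome for a combination (tuple of 0/1 inclusion flags)."""
    score = sum(e for e, on in zip(effects, combo) if on)
    score += sum(w for (i, j), w in interactions.items()
                 if combo[i] and combo[j])
    return score

def greedy_search(n_drugs):
    """Forward selection: repeatedly add the single drug that most
    improves the response; needs at most ~n**2/2 assays versus 2**n."""
    combo = [0] * n_drugs
    best = response(tuple(combo))
    tests = 1
    while True:
        best_i, best_gain = None, 0.0
        for i in range(n_drugs):
            if combo[i]:
                continue
            combo[i] = 1          # tentatively add drug i
            score = response(tuple(combo))
            tests += 1
            combo[i] = 0          # revert until the best is chosen
            if score - best > best_gain:
                best_i, best_gain = i, score - best
        if best_i is None:
            break
        combo[best_i] = 1
        best += best_gain
    return tuple(combo), best, tests

best_combo, best_score, n_tests = greedy_search(N_DRUGS)
full_optimum = max(response(c)
                   for c in itertools.product([0, 1], repeat=N_DRUGS))
print(n_tests, "tests vs", 2 ** N_DRUGS, "in a full factorial search")
```

Like any local search, this sketch can miss the global optimum when interactions are strongly non-additive, which is why the paper benchmarks its algorithms against both exhaustive and random searches.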
Observational-Interventional Priors for Dose-Response Learning
Controlled interventions provide the most direct source of information for
learning causal effects. In particular, a dose-response curve can be learned by
varying the treatment level and observing the corresponding outcomes. However,
interventions can be expensive and time-consuming. Observational data, where
the treatment is not controlled by a known mechanism, is sometimes available.
Under some strong assumptions, observational data allows for the estimation of
dose-response curves. Estimating such curves nonparametrically is hard: sample
sizes for controlled interventions may be small, while in the observational
case a large number of measured confounders may need to be marginalized. In
this paper, we introduce a hierarchical Gaussian process prior that constructs
a distribution over the dose-response curve by learning from observational
data, and reshapes the distribution with a nonparametric affine transform
learned from controlled interventions. This function composition from different
sources is shown to speed up learning, which we demonstrate with a thorough
sensitivity analysis and an application to modeling the effect of therapy on
cognitive skills of premature infants.
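The composition idea can be caricatured in a few lines: learn a (biased) curve estimate from plentiful observational data, then fit a reshaping transform from a handful of controlled interventions. The sketch below uses a plain least-squares affine fit rather than the paper's hierarchical Gaussian process, and the curves are synthetic:

```python
import math

# Hypothetical ground-truth dose-response curve.
def true_curve(dose):
    return 1 / (1 + math.exp(-(dose - 5)))

# Observational estimate, distorted by unmeasured confounding
# (here the bias happens to be exactly affine, so the fix is exact).
def observational_curve(dose):
    return 0.6 * true_curve(dose) + 0.3

# A few expensive controlled interventions.
doses = [2.0, 5.0, 8.0]
outcomes = [true_curve(d) for d in doses]

# Learn y ~ a * f_obs(x) + b by ordinary least squares, mirroring the
# idea of reshaping the observational prior with interventional data.
xs = [observational_curve(d) for d in doses]
n = len(xs)
mx, my = sum(xs) / n, sum(outcomes) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, outcomes))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx

def corrected_curve(dose):
    return a * observational_curve(dose) + b

# Worst-case error of the corrected curve over the dose range.
err = max(abs(corrected_curve(d / 10) - true_curve(d / 10))
          for d in range(0, 101))
print(round(a, 3), round(b, 3), err)
```

In the paper the transform is nonparametric and uncertainty is propagated through the Gaussian process prior; the affine case above only shows why three interventional points can suffice when the observational bias is simple.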
Incorporating Deep Learning Techniques into Outcome Modeling in Non-Small Cell Lung Cancer Patients after Radiation Therapy
Radiation therapy (radiotherapy), together with surgery, chemotherapy, and immunotherapy, is a common modality in cancer treatment. In radiotherapy, patients are given high doses of ionizing radiation aimed at killing cancer cells and shrinking tumors. Conventional radiotherapy usually gives a standard prescription to all patients; however, because patients are likely to have heterogeneous responses to treatment due to multiple prognostic factors, personalization of radiotherapy treatment is desirable. Outcome models can serve as clinical decision-support tools in personalized treatment, helping evaluate patients' treatment options before or during fractionated treatment. They can further provide insights into the design of new clinical protocols. In outcome modeling, two indices are usually investigated: tumor control probability (TCP) and normal tissue complication probability (NTCP).
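For context, textbook analytical forms of these two indices are the Lyman-Kutcher-Burman (LKB) NTCP and the Poisson TCP. These are not necessarily the models used in this work, and the parameter values below are illustrative placeholders:

```python
import math

def ntcp_lkb(dose, td50=24.5, m=0.36):
    """LKB NTCP: standard normal CDF of the normalized dose.
    td50 = uniform dose giving 50% complication risk; m sets the slope.
    Values here are placeholders, not fitted clinical parameters."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def tcp_poisson(dose, n0=1e7, alpha=0.35):
    """Poisson TCP: probability that none of n0 clonogenic cells
    (linear radiosensitivity alpha, per Gy) survives the dose."""
    return math.exp(-n0 * math.exp(-alpha * dose))

print(round(ntcp_lkb(24.5), 3))   # 0.5 at TD50 by construction
print(round(tcp_poisson(60), 3))
```

Both curves are sigmoidal in dose, which is why outcome modeling is often framed as estimating where an individual patient sits on these curves.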
Current outcome models, e.g., analytical models and data-driven models, either fail to account for complex interactions between physical and biological variables or require complicated feature-selection procedures. Therefore, in our studies, deep learning (DL) techniques are incorporated into outcome modeling for prediction of local control (LC), the TCP endpoint in our case, and radiation pneumonitis (RP), the NTCP endpoint in our case, in non-small-cell lung cancer (NSCLC) patients after radiotherapy. These techniques can improve the prediction performance of outcomes and simplify model development procedures. Additionally, longitudinal data association, actuarial prediction, and multi-endpoint prediction are considered in our models. These were carried out in three consecutive studies.
In the first study, a composite architecture consisting of a variational auto-encoder (VAE) and a multi-layer perceptron (MLP) was investigated and applied to RP prediction. The architecture enabled simultaneous dimensionality reduction and prediction. The novel VAE-MLP joint architecture, with an area under the receiver operating characteristic (ROC) curve (AUC) [95% CI] of 0.781 [0.737-0.808], outperformed a strategy involving separate VAEs and classifiers (AUC 0.624 [0.577-0.658]).
In the second study, composite architectures consisting of a 1D convolutional layer or locally connected layer and an MLP, which take longitudinal associations into account, were applied to predict LC. The composite convolutional neural network (CNN)-MLP architecture, which can model both longitudinal and non-longitudinal data, yielded an AUC of 0.832 [0.807-0.841], while a plain MLP yielded an AUC of only 0.785 [0.752-0.792] in LC prediction.
In the third study, rather than binary classification, time-to-event information was also incorporated for actuarial prediction. DL architectures ADNN-DVH, which considers dosimetric information; ADNN-com, which further combines biological and imaging data; and ADNN-com-joint, which realizes multi-endpoint prediction, were investigated. Analytical models were also built for comparison purposes. Among all the models, ADNN-com-joint performed best, yielding c-indices of 0.705 [0.676-0.734] for RP2 and 0.740 [0.714-0.765] for LC, and an AU-FROC of 0.720 [0.671-0.801] for joint prediction. The performance of the proposed models was also tested on a cohort of newly treated patients and on the multi-institutional RTOG0617 datasets.
These studies taken together indicate that DL techniques can be utilized to improve the performance of outcome models and potentially provide guidance to physicians during decision making. Specifically, a VAE-MLP joint architecture can realize simultaneous dimensionality reduction and prediction, boosting the performance of conventional outcome models. A 1D CNN-MLP joint architecture can utilize temporally associated variables generated during the span of radiotherapy. A DL model, ADNN-com-joint, can realize multi-endpoint prediction, which allows competing risk factors to be considered. All of these contribute to a step toward enabling outcome models as real clinical decision-support tools.
Ph.D., Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/162923/1/sunan_1.pd
A new diagnostic algorithm for Burkitt and diffuse large B-cell lymphomas based on the expression of CSE1L and STAT3 and on MYC rearrangement predicts outcome
Background Aggressive mature B-cell non-Hodgkin's lymphomas (BCL) sharing features of Burkitt's lymphoma (BL) and diffuse large B-cell lymphoma (DLBCL) (intermediate BL/DLBCL) but deviating with respect to one or more characteristics are increasingly recognized. The limited knowledge about these biologically heterogeneous lymphomas hampers their assignment to a known entity, raising uncertainty about optimal treatment approaches. We therefore searched for discriminative, prognostic, and predictive factors for their better characterization. Patients and methods We analyzed 242 cytogenetically defined aggressive mature BCL for differential protein expression. Marker selection was based on recent gene-expression profiling studies. Predictive models for diagnosis were established and validated on a different set of lymphomas. Results Overexpression of CSE1L and inhibitor of DNA binding-3 (ID3) was associated with the diagnosis of BL, and signal transducer and activator of transcription 3 (STAT3) with DLBCL (P<0.001 for all markers). All three markers were associated with patient outcome in DLBCL. A new algorithm discriminating BL from DLBCL emerged, incorporating the expression of CSE1L and STAT3 and MYC translocation. This "new classifier" enabled the identification of patients with intermediate BL/DLBCL who benefited from intensive chemotherapy regimens. Conclusion The proposed algorithm, which is based on markers with reliable staining properties for routine diagnostics, represents a novel and valid tool for separating BL from DLBCL. Most interestingly, it allows segregating intermediate BL/DLBCL into groups with different treatment requirements.
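The abstract names the three inputs (CSE1L, STAT3, MYC rearrangement) but not the exact branch structure of the published algorithm. A purely hypothetical decision rule combining these inputs might look like the following; the branching and the "high expression" thresholds are illustrative, not the validated classifier:

```python
def classify_lymphoma(cse1l_high, stat3_high, myc_rearranged):
    """Hypothetical three-marker decision rule (not the published
    algorithm): CSE1L overexpression and MYC rearrangement point
    toward BL, STAT3 overexpression toward DLBCL, and discordant
    patterns fall into the intermediate BL/DLBCL group."""
    if myc_rearranged and cse1l_high and not stat3_high:
        return "BL"
    if stat3_high and not myc_rearranged:
        return "DLBCL"
    return "intermediate BL/DLBCL"

print(classify_lymphoma(True, False, True))    # BL-like pattern
print(classify_lymphoma(False, True, False))   # DLBCL-like pattern
print(classify_lymphoma(False, False, False))  # discordant pattern
```

The value of such a rule in the study is precisely the third branch: cases the rule cannot confidently assign are the intermediate BL/DLBCL group whose treatment benefit the authors then stratify.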
Cancer Drug Screening Scale-up: Combining Biomimetic Microfluidic Platforms and Deep Learning Image Analysis
The development of cancer drugs is usually costly and time-consuming, mainly due to the growing complexity of screening large numbers of candidate compounds and high failure rates in translation from preclinical trials to clinical approval. Despite great efforts, preclinical screening platforms combining good clinical relevance with high throughput for large-scale drug testing are still lacking. In addition, accumulating evidence suggests that cancer drug response can be altered by the tumor microenvironment (TME), which includes not only cancer cells but also physical and biochemical cues in their niches. To improve current cancer drug screening assays, it is important to mimic the local TME to achieve better physiological relevance. In the first part of this dissertation, three TME-mimicking microfluidic platforms are introduced for three different in vitro tumor sphere models: spheres in matrix, self-aggregated spheres, and single-cell clonal spheres. First, a 3D gel-island chip was used to investigate the heterogeneity of single-cell drug responses in a biomimetic extracellular matrix (ECM). With 1,500 isolated single-cell chambers containing ECM, it was demonstrated that ECM support was favorable for some populations of cancer cells to maintain stemness and develop drug resistance. This result suggests the importance of drug screening at single-cell resolution in TME-mimicking platforms. Second, a drug combination screening chip enabling high-throughput, scalable combinatorial drug screening was demonstrated for the aggregated sphere model. Instead of screening a single drug on each of the tumors, this chip allows the screening of all pairwise drug combinations from eight different cancer drugs, in total 172 different treatment conditions and 1,032 tested samples in a single microfluidic chip. The presented design approach is easily scalable to incorporate an arbitrary number of drugs for large-scale drug screening.
Finally, a single-cell Hi-Sphere chip enabled high-throughput clonal sphere culture and selective retrieval. Combining this with in situ fluorescent dye staining techniques, we identified a rare cancer stem-like cell population and confirmed its location at the leading edge of spheres.
Advances in experimental throughput generate massive amounts of data, which demand corresponding automated analysis and intelligent interpretation capabilities. The second part of this dissertation focuses on the application of computer vision and machine learning algorithms to automated biomedical data processing. Image analysis with a convolutional neural network was applied for drug efficacy evaluation in a fast and label-free manner. The estimated drug efficacy is highly correlated with the experimental ground truth (R-value > 0.93), while the predicted half-maximal inhibitory concentration is within an 8% error range. In addition, metastatic fast-moving cells could be identified by extracting morphological features from microscope images and applying a deep learning algorithm for image analysis, achieving over 99% accuracy for cell movement direction prediction and 91% for speed prediction. In summary, this dissertation presents high-throughput TME-mimicking microfluidics and deep learning image analysis as large-scale drug screening solutions.
Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/163039/1/zhangzx_1.pd
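As a sketch of how a half-maximal inhibitory concentration (IC50) is read off a dose-response curve: the dissertation's pipeline estimates viability from label-free images with deep learning, but the final IC50 step reduces to locating the 50% viability crossing. The example below uses a generic Hill curve on synthetic data, not the dissertation's model:

```python
import math

def hill(dose, ic50, slope=1.0):
    """Simple Hill-type viability curve (top = 1, bottom = 0)."""
    return 1 / (1 + (dose / ic50) ** slope)

# Synthetic log-spaced dose series; the true IC50 is 2.0 (arbitrary units).
doses = [0.1 * (1.5 ** k) for k in range(15)]
viability = [hill(d, ic50=2.0) for d in doses]

def estimate_ic50(doses, viability):
    """Log-interpolate the dose at which viability crosses 50%."""
    points = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= 0.5 >= v1:
            f = (v0 - 0.5) / (v0 - v1)
            return math.exp(math.log(d0)
                            + f * (math.log(d1) - math.log(d0)))
    return None  # 50% crossing not bracketed by the dose range

ic50 = estimate_ic50(doses, viability)
print(round(ic50, 3))
```

Real screens fit the full four-parameter curve to noisy replicates instead of interpolating two points, but the interpolation makes the "within 8% error" claim concrete: it is the relative error of exactly this estimated crossing point.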