
    Cell fate reprogramming by control of intracellular network dynamics

    Identifying control strategies for biological networks is paramount for practical applications that involve reprogramming a cell's fate, such as disease therapeutics and stem cell reprogramming. Here we develop a novel network control framework that integrates the structural and functional information available for intracellular networks to predict control targets. Formulated in a logical dynamic scheme, our approach drives any initial state to the target state with 100% effectiveness and needs to be applied only transiently for the network to reach and stay in the desired state. We illustrate our method's potential to find intervention targets for cancer treatment and cell differentiation by applying it to a leukemia signaling network and to the network controlling the differentiation of helper T cells. We find that the predicted control targets are effective in a broad dynamic framework. Moreover, several of the predicted interventions are supported by experiments. Comment: 61 pages (main text, 15 pages; supporting information, 46 pages) and 12 figures (main text, 6 figures; supporting information, 6 figures). In review.
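
    The control scheme described above is logical (Boolean): nodes are ON or OFF, and interventions are applied only transiently. As a minimal sketch of that idea, using a made-up three-node toy network rather than the leukemia or T-cell models from the paper, the Python snippet below pins one node OFF for a few update steps and shows that the system then stays in the desired fixed point after the intervention is released.

    ```python
    # Toy illustration (not the paper's networks): transient control of a
    # Boolean network.  Pinning node "A" OFF for a few steps steers the
    # system into a target fixed point that persists once the pin is lifted.

    def step(state, pinned=None):
        """One synchronous update of a hypothetical 3-node Boolean network."""
        pinned = pinned or {}
        a, b, c = state["A"], state["B"], state["C"]
        nxt = {
            "A": a or c,          # A is self-sustaining and activated by C
            "B": not a,           # A represses B
            "C": a and not b,     # A activates C, B represses C
        }
        nxt.update(pinned)        # transient intervention overrides the update
        return nxt

    state = {"A": True, "B": False, "C": True}   # undesired fixed point: A stays ON
    for _ in range(3):                           # apply the control transiently
        state = step(state, pinned={"A": False})
    for _ in range(5):                           # release control, run freely
        state = step(state)
    print(state)   # remains in {'A': False, 'B': True, 'C': False}
    ```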

    Mathematical model of a telomerase transcriptional regulatory network developed by cell-based screening: analysis of inhibitor effects and telomerase expression mechanisms

    Cancer cells depend on transcription of telomerase reverse transcriptase (TERT). Many transcription factors affect TERT, though regulation occurs in the context of a broader network. Network effects on telomerase regulation have not been investigated, though a deeper understanding of TERT transcription requires a systems view. However, control over individual interactions in complex networks is not easily achievable. Mathematical modelling provides an attractive approach for the analysis of complex systems, and some models may prove useful in systems pharmacology approaches to drug discovery. In this report, we used transfection screening to test interactions among 14 TERT regulatory transcription factors and their respective promoters in ovarian cancer cells. The results were used to generate a network model of TERT transcription and to implement a dynamic Boolean model whose steady states were analysed. Modelled effects of signal transduction inhibitors successfully predicted TERT repression by the Src-family inhibitor SU6656 and the lack of repression by the ERK inhibitor FR180204, results confirmed by RT-qPCR analysis of endogenous TERT expression in treated cells. Modelled effects of the GSK3 inhibitor 6-bromoindirubin-3′-oxime (BIO) predicted unstable TERT repression dependent on noise and expression of JUN, corresponding with observations from a previous study. MYC expression is critical for TERT activation in the model, consistent with its well-known function in endogenous TERT regulation. Loss of MYC caused complete TERT suppression in our model, substantially rescued only by co-suppression of AR. Interestingly, expression was easily rescued under modelled Ets-factor gain of function, as occurs in TERT promoter mutation. RNAi targeting AR, JUN, MXD1, SP3, or TP53 showed that AR suppression does rescue endogenous TERT expression following MYC knockdown in these cells, and that SP3 or TP53 siRNA also causes partial recovery. The model therefore successfully predicted several aspects of TERT regulation, including previously unknown mechanisms. An extrapolation suggests that a dominant stimulatory system may programme TERT for transcriptional stability.
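
    The steady-state analysis mentioned above amounts to finding the fixed points of a Boolean model. The sketch below is a generic illustration of that step, assuming a made-up three-node fragment with placeholder update rules (MYC and SP3 treated as inputs, a toy rule for TERT); it is not the 14-factor network or the rules from the paper. It enumerates all states and reports those left unchanged by a synchronous update.

    ```python
    # Hedged sketch: exhaustive fixed-point (steady-state) search for a small
    # Boolean model.  The update rules are illustrative placeholders only.
    from itertools import product

    NODES = ["MYC", "SP3", "TERT"]

    def update(s):
        """One synchronous step of a hypothetical toy model."""
        return {
            "MYC":  s["MYC"],                      # treated as an input
            "SP3":  s["SP3"],                      # treated as an input
            "TERT": s["MYC"] and not s["SP3"],     # placeholder rule
        }

    def fixed_points():
        for bits in product([False, True], repeat=len(NODES)):
            s = dict(zip(NODES, bits))
            if update(s) == s:
                yield s

    for fp in fixed_points():
        print({k: int(v) for k, v in fp.items()})
    ```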

    CodNN -- Robust Neural Networks From Coded Classification

    Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution, and yet their intrinsic properties remain a mystery. In particular, it is widely known that DNNs are highly sensitive to noise, whether adversarial or random. This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving. In this paper we construct robust DNNs via error correcting codes. By our approach, either the data or internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed. Since DNNs can be seen as a layered concatenation of classification tasks, our research begins with the core task of classifying noisy coded inputs, and progresses towards robust DNNs. We focus on binary data and linear codes. Our main result is that the prevalent parity code can guarantee robustness for a large family of DNNs, which includes the recently popularized binarized neural networks. Further, we show that the coded classification problem has a deep connection to Fourier analysis of Boolean functions. In contrast to existing solutions in the literature, our results do not rely on altering the training process of the DNN, and provide mathematically rigorous guarantees rather than experimental evidence. Comment: To appear in ISIT '2
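
    The link to Fourier analysis of Boolean functions can be made concrete numerically. The snippet below is only a generic illustration, not the paper's construction: it computes the Fourier coefficients f_hat(S) = E_x[f(x) * prod_{i in S} x_i] of a Boolean function over {-1, +1}^n by brute force, and checks that the n-bit parity (XOR) function places all of its Fourier weight on the single top character.

    ```python
    # Generic illustration (not the paper's construction): brute-force Fourier
    # coefficients of a Boolean function f : {-1, +1}^n -> {-1, +1}.
    from itertools import product, combinations
    from math import prod

    def fourier_coefficients(f, n):
        """Return {S: f_hat(S)} with f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ]."""
        cube = list(product([-1, 1], repeat=n))
        coeffs = {}
        for r in range(n + 1):
            for S in combinations(range(n), r):
                coeffs[S] = sum(f(x) * prod(x[i] for i in S) for x in cube) / len(cube)
        return coeffs

    def parity(x):
        # Parity (XOR) in the +/-1 convention is the product of all coordinates.
        return prod(x)

    n = 3
    for S, c in fourier_coefficients(parity, n).items():
        if abs(c) > 1e-12:
            print(S, c)   # only S = (0, 1, 2) survives, with coefficient 1.0
    ```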

    Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography

    We study maximal identifiability, a measure recently introduced in Boolean Network Tomography to characterize a network's capability to localize failure nodes from end-to-end path measurements. We prove tight upper and lower bounds on the maximal identifiability of failure nodes for specific classes of network topologies, such as trees and d-dimensional grids, in both directed and undirected cases. We prove that directed d-dimensional grids with support n have maximal identifiability d using 2d(n-1)+2 monitors; and in the undirected case we show that 2d monitors suffice to get identifiability d-1. We then study identifiability under embeddings: we establish relations between maximal identifiability, embeddability, and graph dimension when network topologies are modelled as DAGs. Our results suggest the design of networks over N nodes with maximal identifiability Ω(log N) using O(log N) monitors, and a heuristic to boost maximal identifiability on a given network by simulating d-dimensional grids. We provide positive evidence for this heuristic through data obtained by exact computation of maximal identifiability on examples of small real networks.
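
    On small instances, maximal identifiability can be checked by brute force. The sketch below follows the usual reading of the definition (a network is k-identifiable if any two distinct sets of at most k failed nodes produce different sets of failing monitoring paths) and uses a made-up four-node topology; it is not the grid constructions or DAG embeddings from the paper.

    ```python
    # Illustrative brute-force check of k-identifiability in Boolean network
    # tomography: distinct failure sets of size <= k must yield distinct
    # path-failure syndromes.  Only feasible for very small topologies.
    from itertools import combinations

    def syndrome(failed, paths):
        """Indices of monitoring paths that fail (contain a failed node)."""
        return frozenset(i for i, p in enumerate(paths) if failed & p)

    def is_k_identifiable(nodes, paths, k):
        seen = {}
        for r in range(k + 1):
            for combo in combinations(sorted(nodes), r):
                failed = frozenset(combo)
                sig = syndrome(failed, paths)
                if sig in seen and seen[sig] != failed:
                    return False
                seen[sig] = failed
        return True

    def maximal_identifiability(nodes, paths):
        k = 0
        while k < len(nodes) and is_k_identifiable(nodes, paths, k + 1):
            k += 1
        return k

    # Toy topology: 4 nodes, 4 end-to-end paths (given as node sets).
    nodes = {1, 2, 3, 4}
    paths = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
    print(maximal_identifiability(nodes, paths))   # 1: {1,3} and {2,4} fail the same paths
    ```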
