
    Towards an exact reconstruction of a time-invariant model from time series data

    Dynamic processes in biological systems may be profiled by measuring system properties over time. One way of representing such time series data is through weighted interaction networks, where the nodes in the network represent the measurables and the weighted edges represent interactions between any pair of nodes. Construction of these network models from time series data may involve seeking a robust, data-consistent and time-invariant model to approximate and describe system dynamics. Many problems in mathematics, systems biology and physics can be recast into this form and may require finding the most consistent solution to a set of first-order differential equations. This is especially challenging in cases where the number of data points is less than or equal to the number of measurables. We present a novel computational method for network reconstruction with limited time series data. To test our method, we use artificial time series data generated from known network models. We then attempt to reconstruct the original network from the time series data alone. We find good agreement between the original and predicted networks.
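
    A minimal sketch of the generic idea behind such reconstructions (not the authors' exact algorithm): fit a linear time-invariant model dx/dt = A·x to a short time series by finite differences and ridge-regularised least squares, so the problem stays well-posed even when the number of samples is close to the number of measurables. The function and variable names below are illustrative, and the linear-model and ridge assumptions are mine.

        # Recover a weighted interaction matrix A for dx/dt = A @ x from a short
        # time series, via finite differences and ridge-regularised least squares
        # (an assumption of this sketch, not necessarily the paper's formulation).
        import numpy as np

        def reconstruct_network(X, dt, ridge=1e-2):
            """X: (T, n) time series of n measurables; returns an (n, n) weight matrix."""
            dXdt = np.gradient(X, dt, axis=0)            # approximate time derivatives
            # Solve min_A ||dXdt - X @ A.T||^2 + ridge * ||A||^2 via normal equations.
            G = X.T @ X + ridge * np.eye(X.shape[1])
            return np.linalg.solve(G, X.T @ dXdt).T

        # Test on artificial data from a known 3-node network, as the abstract describes.
        rng = np.random.default_rng(0)
        A_true = np.array([[-1.0, 0.5, 0.0],
                           [0.0, -0.8, 0.3],
                           [0.2, 0.0, -0.5]])
        dt, T = 0.05, 40
        X = np.zeros((T, 3))
        X[0] = rng.normal(size=3)
        for t in range(1, T):                            # simple Euler simulation
            X[t] = X[t - 1] + dt * (A_true @ X[t - 1])
        print(np.round(reconstruct_network(X, dt), 2))   # should resemble A_true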

    Improvements in the reconstruction of time-varying gene regulatory networks: dynamic programming and regularization by information sharing among genes

    Method: Dynamic Bayesian networks (DBNs) have been applied widely to reconstruct the structure of regulatory processes from time series data, and they have established themselves as a standard modelling tool in computational systems biology. The conventional approach is based on the assumption of a homogeneous Markov chain, and many recent research efforts have focused on relaxing this restriction. An approach that enjoys particular popularity is based on a combination of a DBN with a multiple changepoint process, and the application of a Bayesian inference scheme via reversible jump Markov chain Monte Carlo (RJMCMC). In the present article, we expand this approach in two ways. First, we show that a dynamic programming scheme allows the changepoints to be sampled from the correct conditional distribution, which results in improved convergence over RJMCMC. Second, we introduce a novel Bayesian clustering and information sharing scheme among nodes, which provides a mechanism for automatic model complexity tuning. Results: We evaluate the dynamic programming scheme on expression time series for Arabidopsis thaliana genes involved in circadian regulation. In a simulation study we demonstrate that the regularization scheme improves the network reconstruction accuracy over that obtained with recently proposed inhomogeneous DBNs. For gene expression profiles from a synthetically designed Saccharomyces cerevisiae strain under switching carbon metabolism, we show that the combination of dynamic programming and regularization yields an inference procedure that outperforms two alternative established network reconstruction methods from the biology literature.
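
    To illustrate the dynamic-programming idea in isolation (outside the DBN setting, and not the authors' code): the sketch below draws changepoint positions from their exact posterior by a forward recursion over possible previous changepoints followed by backward sampling, instead of proposing single changepoint moves as RJMCMC does. The model assumptions are mine and deliberately simple: a piecewise-constant Gaussian mean with known noise variance, a conjugate Gaussian prior on each segment mean, and an independent Bernoulli prior on changepoint positions; all names are illustrative.

        import numpy as np
        from scipy.special import logsumexp

        def seg_loglik(y, s, t, sigma2=1.0, tau2=10.0):
            """Log marginal likelihood of y[s:t] as one segment, mean integrated out."""
            seg = y[s:t]
            m, S, SS = len(seg), seg.sum(), (seg ** 2).sum()
            return (-0.5 * m * np.log(2 * np.pi * sigma2)
                    - 0.5 * np.log1p(m * tau2 / sigma2)
                    - SS / (2 * sigma2)
                    + tau2 * S ** 2 / (2 * sigma2 * (sigma2 + m * tau2)))

        def sample_changepoints(y, p=0.05, seed=None):
            """One exact posterior draw of changepoints: forward DP pass + backward sampling."""
            rng = np.random.default_rng(seed)
            n = len(y)
            log_cp, log_nc = np.log(p), np.log1p(-p)     # changepoint / no-changepoint prior
            # F[s] = log P(y[0:s], a new segment starts at index s); F[0] = 0 by convention.
            F = np.full(n, -np.inf)
            F[0] = 0.0
            for t in range(1, n):
                F[t] = logsumexp([F[s] + seg_loglik(y, s, t) + (t - s - 1) * log_nc + log_cp
                                  for s in range(t)])
            # Backward pass: from the end of the series, sample the previous changepoint exactly.
            cps, t = [], n
            while t > 0:
                w = np.array([F[s] + seg_loglik(y, s, t) + (t - s - 1) * log_nc
                              for s in range(t)])
                s = int(rng.choice(t, p=np.exp(w - logsumexp(w))))
                if s > 0:
                    cps.append(s)
                t = s
            return sorted(cps)

        # A single mean shift at index 50 should be recovered (up to posterior uncertainty).
        rng = np.random.default_rng(1)
        y = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
        print(sample_changepoints(y, p=0.02, seed=2))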

    Learning Regularization Parameter-Maps for Variational Image Reconstruction using Deep Neural Networks and Algorithm Unrolling

    We introduce a method for fast estimation of data-adapted, spatio-temporally dependent regularization parameter-maps for variational image reconstruction, focusing on total variation (TV) minimization. Our approach is inspired by recent developments in algorithm unrolling using deep neural networks (NNs), and relies on two distinct sub-networks. The first sub-network estimates the regularization parameter-map from the input data. The second sub-network unrolls T iterations of an iterative algorithm which approximately solves the corresponding TV-minimization problem incorporating the previously estimated regularization parameter-map. The overall network is trained end-to-end in a supervised fashion using pairs of clean and corrupted data, but, crucially, without the need for access to labels for the optimal regularization parameter-maps. We prove consistency of the unrolled scheme by showing that the unrolled energy functional used for the supervised learning Γ-converges, as T tends to infinity, to the corresponding functional that incorporates the exact solution map of the TV-minimization problem. We apply and evaluate our method on a variety of large-scale and dynamic imaging problems in which the automatic computation of such parameters has so far been challenging: 2D dynamic cardiac MRI reconstruction, quantitative brain MRI reconstruction, low-dose CT and dynamic image denoising. The proposed method consistently improves on TV reconstructions obtained with scalar parameters, and the obtained parameter-maps adapt well to each imaging problem and data set, preserving detailed features. Although the choice of the regularization parameter-maps is data-driven and based on NNs, the proposed algorithm is entirely interpretable, since it inherits the properties of the respective iterative reconstruction method from which the network is implicitly defined.
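
    A small numpy sketch of what the second sub-network unrolls, under simplifying assumptions of my own: a smoothed TV denoising energy minimised by plain gradient descent rather than the paper's solver, with a hand-crafted spatially varying parameter-map standing in for the one the first sub-network would predict. All names are illustrative.

        import numpy as np

        def grad2d(x):
            """Forward differences with Neumann boundary conditions."""
            gx = np.zeros_like(x); gx[:-1, :] = x[1:, :] - x[:-1, :]
            gy = np.zeros_like(x); gy[:, :-1] = x[:, 1:] - x[:, :-1]
            return gx, gy

        def div2d(px, py):
            """Discrete divergence, the negative adjoint of grad2d."""
            dx = np.zeros_like(px)
            dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
            dy = np.zeros_like(py)
            dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
            return dx + dy

        def unrolled_tv_denoise(y, lam, T=300, eps=0.05):
            """T unrolled gradient steps on 0.5*||x - y||^2 + sum_ij lam_ij*sqrt(|grad x|_ij^2 + eps^2)."""
            x = y.copy()
            step = 1.0 / (1.0 + 8.0 * lam.max() / eps)   # conservative 1/Lipschitz step size
            for _ in range(T):
                gx, gy = grad2d(x)
                norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
                x = x - step * ((x - y) - div2d(lam * gx / norm, lam * gy / norm))
            return x

        # Piecewise-constant test image; the parameter-map smooths the right half more heavily.
        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
        noisy = clean + rng.normal(0.0, 0.1, clean.shape)
        lam_map = np.full(clean.shape, 0.05); lam_map[:, 32:] = 0.2
        recon = unrolled_tv_denoise(noisy, lam_map)
        print("RMSE noisy:", float(np.sqrt(np.mean((noisy - clean) ** 2))),
              "RMSE recon:", float(np.sqrt(np.mean((recon - clean) ** 2))))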