
    Kinetic behavior of the general modifier mechanism of Botts and Morales with non-equilibrium binding

    In this paper, we perform a complete analysis of the kinetic behavior of the general modifier mechanism of Botts and Morales in both equilibrium steady states and non-equilibrium steady states (NESS). Inspired by the non-equilibrium theory of Markov chains, we introduce the net flux into the discussion and derive an expression for the product rate in NESS that has clear biophysical significance. It has long been a general belief that being an activator or an inhibitor is an intrinsic property of the modifier. However, we reveal that this traditional point of view rests on the equilibrium assumption: a modifier may no longer be an overall activator or inhibitor when the reaction system is out of equilibrium. Based on how the modifier concentration regulates enzyme activity, we classify the kinetic behavior of the modifier into three categories, named hyperbolic behavior, bell-shaped behavior, and switching behavior, respectively. We show that the switching phenomenon, in which a modifier may convert between an activator and an inhibitor as the modifier concentration varies, occurs only in NESS. Effects of drugs on the Pgp ATPase activity, where drugs may convert from activators to inhibitors as the drug concentration increases, are taken as a typical example to demonstrate the occurrence of the switching phenomenon. Comment: 19 pages, 10 figures
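    The three-way classification in the abstract can be sketched numerically: given any steady-state rate curve v([M]), compare it against the unmodified rate v(0) and inspect its monotonicity. The rational rate functions in the usage example below are hypothetical illustrations, not the paper's actual Botts-Morales expressions.

```python
import numpy as np

def classify_modifier_behavior(rate, m_grid):
    """Classify a steady-state rate curve v([M]) relative to the
    unmodified rate v(0): 'hyperbolic' (monotone), 'bell-shaped'
    (interior extremum, never crossing v(0)), or 'switching'
    (crosses v(0), so the modifier flips between activator and
    inhibitor as [M] varies)."""
    v = np.array([rate(m) for m in m_grid])
    v0 = rate(0.0)
    # switching: v > v0 at some concentrations and v < v0 at others
    rel = np.sign(v - v0)
    rel = rel[rel != 0]
    if rel.size and np.any(rel[:-1] != rel[1:]):
        return "switching"
    # bell-shaped: the slope of v changes sign but v never crosses v0
    slope = np.sign(np.diff(v))
    slope = slope[slope != 0]
    if slope.size and np.any(slope[:-1] != slope[1:]):
        return "bell-shaped"
    # hyperbolic: monotone, overall activator or overall inhibitor
    return "hyperbolic"

# Hypothetical rate curves exhibiting the three behaviors:
grid = np.linspace(0.0, 10.0, 400)
rate_hyp = lambda m: (1 + 2 * m) / (1 + m)              # monotone rise
rate_bell = lambda m: (1 + 5 * m + 2 * m**2) / (1 + m + m**2)  # peak, stays above v(0)
rate_switch = lambda m: (1 + 3 * m) / (1 + m + m**2)    # activator then inhibitor
```

    Here the switching curve crosses the baseline v(0) = 1 at [M] = 2, so the same drug-like modifier activates at low concentration and inhibits at high concentration, mirroring the Pgp ATPase example in the abstract.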

    Unifying and Merging Well-trained Deep Neural Networks for Inference Stage

    We propose a novel method to merge convolutional neural networks for the inference stage. Given two well-trained networks that may have different architectures and handle different tasks, our method aligns the layers of the original networks and merges them into a unified model by sharing the representative codes of weights. The shared weights are further re-trained to fine-tune the performance of the merged model. The proposed method effectively produces a compact model that can run the original tasks simultaneously on resource-limited devices. As it preserves the general architectures and leverages the co-used weights of well-trained networks, a substantial training overhead can be avoided, shortening the system development time. Experimental results demonstrate satisfactory performance and validate the effectiveness of the method. Comment: To appear in the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, 2018 (IJCAI-ECAI 2018)
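    The idea of "sharing representative codes of weights" across two networks can be sketched as quantizing both layers' weights against one shared codebook, so each layer is stored as small integer indices into a common table. This is a minimal illustration using plain k-means over scalar weights; the function name and the clustering details are assumptions, not the paper's actual procedure.

```python
import numpy as np

def shared_codebook(weights_a, weights_b, k=16, iters=20, seed=0):
    """Build one shared codebook of k representative weight values for
    two layers (simple 1-D k-means / Lloyd iterations), then encode
    each layer as indices into that common codebook."""
    w = np.concatenate([weights_a.ravel(), weights_b.ravel()])
    rng = np.random.default_rng(seed)
    codebook = rng.choice(w, size=k, replace=False)  # init from data
    for _ in range(iters):
        # assign every weight to its nearest codebook entry
        idx = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
        # move each entry to the mean of its assigned weights
        for j in range(k):
            members = w[idx == j]
            if members.size:
                codebook[j] = members.mean()

    def encode(x):
        flat = np.argmin(np.abs(x.ravel()[:, None] - codebook[None, :]), axis=1)
        return flat.reshape(x.shape)

    return codebook, encode(weights_a), encode(weights_b)
```

    Decoding is just `codebook[indices]`; the merged model keeps one copy of the k codebook values while both layers reference it, which is where the storage saving comes from. In the paper's setting the shared values would then be re-trained to recover accuracy.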