Modeling sRNA-regulated Plasmid Maintenance
We study a theoretical model for the toxin-antitoxin (hok/sok) mechanism for
plasmid maintenance in bacteria. Toxin-antitoxin systems enforce the
maintenance of a plasmid through post-segregational killing of cells that have
lost the plasmid. Key to their function is the tight regulation of expression
of a protein toxin by an sRNA antitoxin. Here, we focus on the nonlinear nature
of the regulatory circuit dynamics of the toxin-antitoxin mechanism. The
mechanism relies on a transient increase in protein concentration rather than
on the steady state of the genetic circuit. Through a systematic analysis of
the parameter dependence of this transient increase, we confirm some known
design features of this system and identify new ones: for an efficient
toxin-antitoxin mechanism, the synthesis rate of the toxin's mRNA template
should be lower than that of the sRNA antitoxin, the mRNA template should be more
stable than the sRNA antitoxin, and the mRNA-sRNA complex should be more stable
than the sRNA antitoxin. Moreover, a short half-life of the protein toxin is
also beneficial to the function of the toxin-antitoxin system. In addition, we
study a therapeutic scenario in which a competitor mRNA is introduced to
sequester the sRNA antitoxin, causing the toxic protein to be expressed.

Comment: 25 pages, 8 figures
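The transient toxin burst described above can be illustrated with a minimal rate-equation sketch. This is not the paper's model: the species, rate laws, and all parameter values below are illustrative assumptions, chosen only to respect the stated design rules (toxin mRNA transcribed more slowly than the sRNA, mRNA and mRNA-sRNA complex more stable than the sRNA, short-lived toxin).

```python
# Illustrative sketch only: deterministic rate equations for an
# sRNA-regulated toxin-antitoxin circuit. All parameters are assumptions
# chosen to satisfy the design rules in the abstract, not fitted values.
def simulate(alpha_m=1.0, alpha_s=5.0, t_loss=60.0, t_end=160.0, dt=0.001):
    """Euler-integrate the circuit; transcription stops at t_loss (plasmid loss)."""
    beta_m, beta_s, beta_c = 0.1, 2.0, 0.05   # mRNA and complex outlive the sRNA
    k_on, k_off = 10.0, 0.1                   # complex formation / dissociation
    k_p, beta_p = 2.0, 1.0                    # toxin translation / fast turnover
    m = s = c = p = 0.0                       # mRNA, sRNA, complex, toxin
    p_before = p_peak = 0.0
    for i in range(int(t_end / dt)):
        on = i * dt < t_loss                  # plasmid (hence synthesis) present?
        dm = (alpha_m if on else 0.0) - beta_m * m - k_on * m * s + k_off * c
        ds = (alpha_s if on else 0.0) - beta_s * s - k_on * m * s + k_off * c
        dc = k_on * m * s - (k_off + beta_c) * c
        dp = k_p * m - beta_p * p
        m += dm * dt; s += ds * dt; c += dc * dt; p += dp * dt
        if on:
            p_before = p                      # toxin level while plasmid present
        else:
            p_peak = max(p_peak, p)           # transient peak after loss
    return p_before, p_peak

p_ss, p_burst = simulate()
```

After plasmid loss the free sRNA decays quickly, while the stable complex keeps releasing mRNA that is now translated unopposed, so the toxin transiently rises well above its repressed steady-state level.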
A slave mode expansion for obtaining ab-initio interatomic potentials
Here we propose a new approach for performing a Taylor series expansion of
the first-principles computed energy of a crystal as a function of the nuclear
displacements. We enlarge the dimensionality of the existing displacement space
and form new variables (i.e., slave modes) which transform like irreducible
representations of the space group and satisfy homogeneity of free space.
Standard group theoretical techniques can then be applied to deduce the
non-zero expansion coefficients a priori. At a given order, the translation
group can be used to contract the products and eliminate terms which are not
linearly independent, resulting in a final set of slave mode products. While
the expansion coefficients can be computed in a variety of ways, we demonstrate
that finite difference is effective up to fourth order. We demonstrate the
power of the method in the strongly anharmonic system PbTe. All anharmonic
terms within an octahedron are computed up to fourth order. A proper unitary
transformation demonstrates that the vast majority of the anharmonicity can be
attributed to just two terms, indicating that a minimal model of phonon
interactions is achievable. The ability to straightforwardly generate
polynomial potentials will allow precise simulations at length and time scales
which were previously unrealizable.
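The finite-difference extraction of expansion coefficients can be sketched in one dimension. The toy energy function below stands in for a first-principles calculation and is not the PbTe potential; the group-theoretical machinery for multi-mode terms is omitted.

```python
# Illustrative sketch: recovering Taylor-expansion coefficients of an energy
# surface by central finite differences, here for a single toy mode.
def toy_energy(u):
    # quadratic + quartic toy potential: E(u) = a2*u^2 + a4*u^4
    return 3.0 * u**2 + 0.25 * u**4

def fd_coefficient(E, order, h=1e-2):
    """Central-difference n-th derivative at u=0; Taylor coefficient = D/n!."""
    if order == 2:
        d = (E(h) - 2 * E(0.0) + E(-h)) / h**2
    elif order == 4:
        d = (E(2*h) - 4*E(h) + 6*E(0.0) - 4*E(-h) + E(-2*h)) / h**4
    else:
        raise ValueError("only even orders 2 and 4 in this sketch")
    return d / {2: 2, 4: 24}[order]   # divide by n!

a2 = fd_coefficient(toy_energy, 2)   # recovers ~3.0
a4 = fd_coefficient(toy_energy, 4)   # recovers ~0.25
```

The same stencils, applied along symmetrized slave-mode directions rather than a single coordinate, give the anharmonic coefficients up to fourth order.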
NARX-based nonlinear system identification using orthogonal least squares basis hunting
An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting to select regressors in an orthogonal forward selection procedure. Experimental results obtained using this OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
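The core of any OLS procedure, selecting regressors greedily by error-reduction ratio under orthogonalization, can be sketched compactly. As a simplifying assumption (the paper tunes each center and covariance by continuous optimization), candidates here come from a discrete pool of (center, width) pairs, and the target is a synthetic noisy function rather than a NARX system.

```python
import numpy as np

# Simplified sketch of orthogonal forward selection for a sparse RBF model.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = np.sinc(x) + 0.05 * rng.standard_normal(200)    # noisy target

# candidate pool (an assumption: grid of centers x a few widths)
pool = [(c, w) for c in np.linspace(-3, 3, 25) for w in (0.3, 0.6, 1.2)]
P = np.column_stack([np.exp(-(x - c)**2 / (2 * w**2)) for c, w in pool])

def ols_select(P, y, n_terms):
    """Greedy orthogonal forward regression: pick columns maximizing
    the error-reduction ratio of the residual."""
    resid = y.astype(float).copy()
    chosen, Q = [], []
    for _ in range(n_terms):
        best, best_err, best_q = None, -1.0, None
        for j in range(P.shape[1]):
            if j in chosen:
                continue
            q = P[:, j].copy()
            for qk in Q:                        # orthogonalize vs selected
                q -= (qk @ P[:, j]) / (qk @ qk) * qk
            if q @ q < 1e-12:                   # linearly dependent, skip
                continue
            err = (q @ resid)**2 / (q @ q)      # error reduction of candidate
            if err > best_err:
                best, best_err, best_q = j, err, q
        g = (best_q @ resid) / (best_q @ best_q)
        resid = resid - g * best_q              # deflate the residual
        chosen.append(best); Q.append(best_q)
    return chosen, resid

chosen, resid = ols_select(P, y, n_terms=8)
mse_initial = float(y @ y) / len(y)
mse_final = float(resid @ resid) / len(y)
```

Eight selected regressors already remove most of the target's energy, which is the parsimony the abstract refers to.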
Probability density estimation with tunable kernels using orthogonal forward regression
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact density estimates with high accuracy.
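The flavor of a multiplicative, constraint-preserving weight update can be illustrated without the full MNQP machinery. As a stand-in, the sketch below uses the closely related EM weight update for a mixture of fixed Gaussian kernels: it is likewise multiplicative, so weights stay nonnegative and sum to one, and weights of poorly placed kernels shrink toward zero. Kernels here are fixed (centers and width are illustrative), unlike the tunable kernels of the abstract.

```python
import numpy as np

# Stand-in sketch: EM reweighting of fixed Gaussian kernels, illustrating a
# multiplicative update that preserves nonnegativity and unity constraints.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])

centers = np.linspace(-4, 4, 9)
sigma = 0.8                                   # common width (assumption)
# K[n, i] = kernel i evaluated at data point n
K = np.exp(-(data[:, None] - centers[None, :])**2 / (2 * sigma**2))
K /= sigma * np.sqrt(2 * np.pi)

w = np.full(len(centers), 1.0 / len(centers))  # uniform initial weights
loglik0 = float(np.log(K @ w).sum())
for _ in range(200):
    dens = K @ w                               # density at each data point
    resp = (K * w) / dens[:, None]             # responsibilities (rows sum to 1)
    w = resp.mean(axis=0)                      # multiplicative weight update
loglik = float(np.log(K @ w).sum())
```

Each update multiplies every weight by a positive factor and renormalizes implicitly, so the simplex constraints never need to be enforced separately, which is the property the MNQP step is used for.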
Elastic net prefiltering for two class classification
A two-stage linear-in-the-parameter model construction algorithm is proposed aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then, two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of the proposed approach for noisy classification problems.
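The analytic LOO computation that avoids splitting the data can be demonstrated on the L2 part of the elastic net alone, solved via SVD; the L1 part and the PSO search over the two regularization parameters are omitted here as simplifications. For any least-squares fit with hat matrix H, the exact leave-one-out residual is (y_i - y_hat_i)/(1 - H_ii), so a single fit yields all n held-out errors.

```python
import numpy as np

# Sketch: exact LOO residuals for ridge regression via SVD, checked
# against brute-force refitting with each point held out in turn.
rng = np.random.default_rng(2)
n, d = 60, 5
X = rng.standard_normal((n, d))
beta_true = np.array([1.5, -2.0, 0.0, 0.5, 0.0])
y = X @ beta_true + 0.3 * rng.standard_normal(n)
lam = 1.0

def ridge_loo_mse(X, y, lam):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    shrink = s**2 / (s**2 + lam)                  # spectral filter factors
    y_hat = U @ (shrink * (U.T @ y))              # fitted values H @ y
    h = np.einsum('ij,j,ij->i', U, shrink, U)     # diagonal of hat matrix H
    e_loo = (y - y_hat) / (1.0 - h)               # exact LOO residuals
    return float(np.mean(e_loo**2))

# brute force: refit with point i removed, predict it, repeat
brute = []
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d),
                        X[mask].T @ y[mask])
    brute.append(y[i] - X[i] @ b)
brute_mse = float(np.mean(np.square(brute)))
fast_mse = ridge_loo_mse(X, y, lam)
```

Both routes give the same LOO mean square error, but the SVD route costs one decomposition instead of n refits, which is the "minimal computational cost" claimed above.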