
    A level-set method for the evolution of cells and tissue during curvature-controlled growth

    Most biological tissues grow by the synthesis of new material close to the tissue's interface, where spatial interactions can exert strong geometric influences on the local rate of growth. These geometric influences may be mechanistic or cell-behavioural in nature. The control of geometry on tissue growth has been evidenced in many in vivo and in vitro experiments, including bone remodelling, wound healing, and tissue engineering scaffolds. In this paper, we propose a generalisation of a mathematical model that captures the mechanistic influence of curvature on the joint evolution of cell density and tissue shape during tissue growth. This generalisation allows us to simulate abrupt topological changes, such as tissue fragmentation and tissue fusion, as well as three-dimensional cases, through a level-set-based method. The level-set method developed here introduces a second Eulerian field in addition to the level-set function. This additional field represents the surface density of tissue-synthesising cells, anticipated at future locations of the interface. Numerical tests performed with this level-set-based method show that numerical conservation of cells is a good indicator of simulation accuracy, particularly when cusps develop in the tissue's interface. We apply this new model to several situations of curvature-controlled tissue evolution that include fragmentation and fusion. Comment: 15 pages, 10 figures, 3 supplementary figures
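    The joint evolution described here, a level-set function co-evolved with an Eulerian density field for the interface cells, can be sketched in a few lines. The toy below is a hedged illustration, not the paper's scheme: the growth law v = v0·ρ·max(κ, 0), the initial circular tissue, central differences, and the absence of reinitialisation are all assumptions.

```python
# Toy 2D curvature-controlled growth: a level-set function phi (negative
# inside the tissue) plus an Eulerian field rho standing in for the surface
# density of tissue-synthesising cells, extended off the interface.
import numpy as np

n = 128
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
dt = 1e-5
X, Y = np.meshgrid(x, x, indexing="ij")

phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.25  # signed distance to a circle
rho = np.ones_like(phi)                                 # cell surface density

def grad(f):
    """Central differences with periodic wrap (interface stays off the edges)."""
    fx = (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
    fy = (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)
    return fx, fy

v0 = 1.0
for step in range(2000):
    fx, fy = grad(phi)
    norm = np.sqrt(fx**2 + fy**2) + 1e-12
    kx, _ = grad(fx / norm)                      # kappa = div(grad(phi)/|grad(phi)|)
    _, ky = grad(fy / norm)
    kappa = np.clip(kx + ky, -1.0 / h, 1.0 / h)  # cap curvature at grid resolution
    v = v0 * rho * np.maximum(kappa, 0.0)        # assumed curvature-controlled growth law
    phi -= dt * v * norm                         # advect the interface along its normal
    rho -= dt * rho * v * kappa                  # dilution as growth stretches the interface
```

    Monitoring the total cell count over such a run provides the conservation check that the abstract uses as an accuracy indicator.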

    The PSCz Galaxy Power Spectrum Compared to N-Body Simulations

    By comparing the PSCz galaxy power spectrum with haloes from nested and phased N-body simulations, we try to understand how IRAS infrared-selected galaxies populate dark-matter haloes. We pay special attention to the way we identify haloes in the simulations. Comment: 2 pages, 1 figure, to appear in "The IGM/Galaxy Connection: The Distribution of Baryons at z=0," eds. J.L. Rosenberg and M.E. Putman
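    How the haloes are identified is exactly the kind of choice that can move such a comparison. As a hedged illustration (not necessarily the group finder used in this work), the classic baseline is a friends-of-friends grouping; the linking length b = 0.2 and the random positions below are placeholders.

```python
# Minimal friends-of-friends (FoF) halo finder: link particles closer than
# b times the mean interparticle separation, then take connected components.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.csgraph import connected_components

def fof_haloes(pos, boxsize, b=0.2):
    """Return a halo label per particle in a periodic box."""
    n = len(pos)
    link = b * boxsize / n ** (1.0 / 3.0)            # linking length
    tree = cKDTree(pos, boxsize=boxsize)             # periodic boundaries
    graph = tree.sparse_distance_matrix(tree, link, output_type="coo_matrix")
    _, labels = connected_components(graph, directed=False)
    return labels

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(5000, 3))        # placeholder particle positions
labels = fof_haloes(pos, boxsize=100.0)
```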

    A Search for Gravitational Milli-Lenses

    We have searched for gravitational milli-lens systems by examining VLBI maps of ~300 flat-spectrum radio sources. So far we have followed up 7 candidates, with separations in the range 2–20 mas. None have been confirmed as lenses, but several of them cannot yet be definitively ruled out. If there are no milli-lenses in this sample, then uniformly distributed black holes of 10^6 to 10^8 M_⊙ cannot contribute more than ~1% of the closure density.
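    For scale, a point-mass lens splits images by roughly twice its Einstein radius, θ_E = sqrt(4GM D_ls / (c² D_l D_s)). The rough numbers below, with assumed angular-diameter distances of 1 and 2 Gpc, show why lenses of 10^6 to 10^8 M_⊙ land in the milli-arcsecond regime.

```python
# Back-of-envelope Einstein radii for 1e6-1e8 solar-mass point lenses at
# cosmological distances; the distances are illustrative round numbers.
import numpy as np

G    = 6.674e-11          # m^3 kg^-1 s^-2
c    = 2.998e8            # m s^-1
Msun = 1.989e30           # kg
Gpc  = 3.086e25           # m

D_l, D_s = 1.0 * Gpc, 2.0 * Gpc   # assumed lens/source angular-diameter distances
D_ls = D_s - D_l                   # adequate for this rough estimate

for M in (1e6, 1e7, 1e8):
    theta_E = np.sqrt(4 * G * M * Msun / c**2 * D_ls / (D_l * D_s))  # radians
    sep_mas = 2 * theta_E * 206265e3                                 # image separation in mas
    print(f"M = {M:.0e} Msun -> separation ~ {sep_mas:.0f} mas")
```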

    Effect of the Berendsen thermostat on dynamical properties of water

    The effect of the Berendsen thermostat on the dynamical properties of bulk SPC/E water is tested by generating power spectra associated with fluctuations in various observables. The Berendsen thermostat is found to be very effective in preserving temporal correlations in fluctuations of tagged-particle quantities over a very wide range of frequencies. Even correlations in fluctuations of global properties, such as the total potential energy, are well preserved for time periods shorter than the thermostat time constant. Deviations in dynamical behaviour from the microcanonical limit do not, however, always decrease smoothly with increasing values of the thermostat time constant, but may be somewhat larger for intermediate values of $\tau_B$ comparable to the time scales of slow relaxation processes in bulk water, especially in the supercooled regime. Comment: 21 pages, 5 figures, to be published in Mol. Phys.
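    For reference, the thermostat under test rescales velocities each step by λ = [1 + (Δt/τ_B)(T0/T − 1)]^{1/2}, so the instantaneous temperature relaxes toward the target with time constant τ_B. A minimal sketch, with placeholder arrays and SI units assumed:

```python
# Berendsen weak-coupling step: rescale velocities so the instantaneous
# temperature T relaxes toward the target T0 with time constant tau_B.
import numpy as np

kB = 1.380649e-23  # J/K

def berendsen_rescale(vel, mass, T0, dt, tau_B):
    """Return velocities (N x 3) rescaled by the Berendsen factor lambda."""
    ndof = vel.size                                  # 3N, ignoring constraints
    kinetic = 0.5 * np.sum(mass[:, None] * vel**2)
    T = 2.0 * kinetic / (ndof * kB)                  # instantaneous temperature
    lam = np.sqrt(1.0 + (dt / tau_B) * (T0 / T - 1.0))
    return lam * vel
```

    In the limit τ_B → ∞ the factor tends to 1 and the microcanonical dynamics mentioned above is recovered, which is why deviations are expected to shrink, smoothly or not, as τ_B grows.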

    Joining Forces of Bayesian and Frequentist Methodology: A Study for Inference in the Presence of Non-Identifiability

    Increasingly complex applications involve large datasets in combination with non-linear and high-dimensional mathematical models. In this context, statistical inference is a challenging issue that calls for pragmatic approaches that take advantage of both Bayesian and frequentist methods. The elegance of Bayesian methodology is founded on the propagation of the information content provided by experimental data and prior assumptions to the posterior probability distribution of model predictions. However, for complex applications, experimental data and prior assumptions may constrain the posterior probability distribution insufficiently. In these situations, Bayesian Markov chain Monte Carlo sampling can be infeasible. From a frequentist point of view, insufficient experimental data and prior assumptions can be interpreted as non-identifiability. The profile likelihood approach offers a way to detect non-identifiability and to resolve it iteratively by experimental design. It thereby allows one to better constrain the posterior probability distribution until Markov chain Monte Carlo sampling can be used reliably. Using an application from cell biology, we compare both methods and show that a successive application of the two facilitates a realistic assessment of uncertainty in model predictions. Comment: Article to appear in Phil. Trans. Roy. Soc.
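    A minimal sketch of the profile likelihood computation referred to here: fix one parameter on a grid, re-optimise all remaining parameters at each grid point, and read identifiability off the flatness of the resulting profile. The objective `neg_log_lik` and the optimiser settings are assumptions supplied by the user.

```python
# Profile likelihood over parameter i: the profile is flat in some direction
# exactly when that parameter is (practically) non-identifiable.
import numpy as np
from scipy.optimize import minimize

def profile_likelihood(neg_log_lik, theta_hat, i, grid):
    """Profile -log L over parameter i along `grid`, warm-starting each fit."""
    free = [j for j in range(len(theta_hat)) if j != i]
    x0 = np.asarray(theta_hat, float)[free]
    prof = []
    for val in grid:
        def nll_fixed(x):
            theta = np.empty(len(theta_hat))
            theta[i] = val
            theta[free] = x
            return neg_log_lik(theta)
        res = minimize(nll_fixed, x0, method="Nelder-Mead")
        x0 = res.x                       # warm start for the next grid point
        prof.append(res.fun)
    return np.array(prof)
```

    Grid points where the profile exceeds its minimum by half the appropriate chi-squared quantile delimit likelihood-based confidence intervals; a profile that never crosses that threshold is the non-identifiability signal discussed above.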

    The effect of symmetry breaking on the dynamics near a structurally stable heteroclinic cycle between equilibria and a periodic orbit

    The effect of small forced symmetry breaking on the dynamics near a structurally stable heteroclinic cycle connecting two equilibria and a periodic orbit is investigated. This type of system is known to exhibit complicated, possibly chaotic dynamics, including irregular switching of the sign of various phase-space variables, but details of the mechanisms underlying the complicated dynamics have not previously been investigated. We identify global bifurcations that induce the onset of chaotic dynamics and switching near a heteroclinic cycle of this type and, by constructing and analysing approximate return maps, locate these global bifurcations in parameter space. We find there is a threshold in the size of certain symmetry-breaking terms below which there can be no persistent switching. Our results are illustrated by a numerical example.
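    In numerical practice, return maps of the kind mentioned above are built by recording first returns of trajectories to a cross-section of the flow. A generic sketch with a placeholder vector field, not the system studied here:

```python
# First-return (Poincare) map: integrate until the trajectory crosses the
# section {x1 = 0} upward, and report where it lands.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # hypothetical smooth vector field in R^3 with oscillatory dynamics
    return [x[1], -x[0] - 0.1 * x[1] * (x[0] ** 2 - 1.0), -0.5 * x[2]]

def section(t, x):
    return x[1]                 # event fires when x1 crosses zero

section.terminal = True         # stop at the first crossing
section.direction = 1.0         # upward crossings only

def return_map(x0):
    """First return of x0 to the section, or None if no crossing occurs."""
    sol = solve_ivp(f, (0.0, 200.0), x0, events=section, rtol=1e-9, atol=1e-12)
    return sol.y_events[0][0] if sol.y_events[0].size else None

print(return_map([0.5, -0.1, 0.2]))
```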

    Line adsorption in a mean-field density functional model

    Recent ideas about the analog, for a three-phase contact line, of the Gibbs adsorption equation for interfaces are illustrated in a mean-field density-functional model. With $d\tau$ the infinitesimal change in the line tension $\tau$ that accompanies the infinitesimal changes $d\mu_i$ in the thermodynamic field variables $\mu_i$, and with $\Lambda_i$ the line adsorptions, the sum $d\tau + \sum_i \Lambda_i \, d\mu_i$, unlike its surface analog, is not 0. An equivalent of this sum in the model system is evaluated numerically and analytically. A general line adsorption equation, which the model results illustrate, is derived.
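    Spelled out, the surface analog referred to is the Gibbs adsorption equation, and the contrast reads:

```latex
\begin{align*}
  \text{interface (Gibbs adsorption equation):}\quad & d\sigma + \sum_i \Gamma_i \, d\mu_i = 0, \\
  \text{three-phase contact line:}\quad & d\tau + \sum_i \Lambda_i \, d\mu_i \neq 0 \ \text{in general},
\end{align*}
```

    with $\sigma$ the surface tension and $\Gamma_i$ the surface adsorptions.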

    A Quantile Variant of the EM Algorithm and Its Applications to Parameter Estimation with Interval Data

    The expectation-maximization (EM) algorithm is a powerful computational technique for finding the maximum likelihood estimates of parametric models when the data are not fully observed. The EM is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with the implementation of the EM algorithm is that each E-step requires the integration of the log-likelihood function in closed form. The explicit integration can be avoided by using what is known as the Monte Carlo EM (MCEM) algorithm. The MCEM uses a random sample to estimate the integral at each E-step. However, the MCEM often converges to the integral quite slowly, and its convergence behavior can be unstable, which causes a computational burden. In this paper, we propose what we refer to as the quantile variant of the EM (QEM) algorithm. We prove that the proposed QEM method has an accuracy of $O(1/K^2)$, while the MCEM method has an accuracy of $O_p(1/\sqrt{K})$. Thus, the proposed QEM method possesses faster and more stable convergence properties than the MCEM algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
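    The difference between the two E-step approximations can be seen on a single interval-censored observation: MCEM averages K random draws from the conditional (truncated) distribution, whereas the quantile variant evaluates the same distribution at K fixed quantile levels. A toy sketch under a normal model, not the paper's full algorithm:

```python
# E-step approximations for one observation censored to [a, b] under
# Normal(mu, sigma): both estimate E[Z | a < Z < b].
import numpy as np
from scipy.stats import truncnorm

def e_step_mcem(a, b, mu, sigma, K, rng):
    lo, hi = (a - mu) / sigma, (b - mu) / sigma
    draws = truncnorm.rvs(lo, hi, loc=mu, scale=sigma, size=K, random_state=rng)
    return draws.mean()                       # O_p(1/sqrt(K)) Monte Carlo error

def e_step_qem(a, b, mu, sigma, K):
    lo, hi = (a - mu) / sigma, (b - mu) / sigma
    p = (np.arange(1, K + 1) - 0.5) / K       # midpoint quantile levels
    return truncnorm.ppf(p, lo, hi, loc=mu, scale=sigma).mean()  # O(1/K^2) error

rng = np.random.default_rng(1)
print(e_step_mcem(0.0, 1.0, 0.2, 1.0, K=100, rng=rng))
print(e_step_qem(0.0, 1.0, 0.2, 1.0, K=100))
```

    Averaging the quantile function at midpoint levels is a deterministic midpoint rule for $\int_0^1 F^{-1}(p)\,dp$, which is where the $O(1/K^2)$ rate and the absence of sampling noise come from.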

    Kinetic Analysis of Discrete Path Sampling Stationary Point Databases

    Analysing stationary point databases to extract phenomenological rate constants can become time-consuming for systems with large potential energy barriers. In the present contribution we analyse several different approaches to this problem. First, we show how the original rate constant prescription within the discrete path sampling approach can be rewritten in terms of committor probabilities. Two alternative formulations are then derived in which the steady-state assumption for intervening minima is removed, providing both a more accurate kinetic analysis and a measure of whether a two-state description is appropriate. The first approach involves running additional short kinetic Monte Carlo (KMC) trajectories, which are used to calculate waiting times. Here we introduce 'leapfrog' moves to second-neighbour minima, which prevent the KMC trajectory from oscillating between structures separated by low barriers. In the second approach we successively remove minima from the intervening set, renormalising the branching probabilities and waiting times to preserve the mean first-passage times of interest. Regrouping the local minima appropriately is also shown to speed up the kinetic analysis dramatically at low temperatures. Applications are described where rates are extracted for databases containing tens of thousands of stationary points, with effective barriers that are several hundred times kT. Comment: 28 pages, 1 figure, 4 tables
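    The renormalisation step of the second approach has a compact form: eliminating an intervening minimum x updates the branching probabilities P and waiting times τ of its neighbours so that mean first-passage times between the remaining minima are unchanged. A sketch of this standard graph-transformation update on a toy chain (the matrices are placeholders):

```python
# Remove state x from a Markov chain of branching probabilities P and
# waiting times tau while preserving mean first-passage times:
#   P'_ij  = P_ij  + P_ix * P_xj / (1 - P_xx)
#   tau'_i = tau_i + P_ix * tau_x / (1 - P_xx)
import numpy as np

def remove_minimum(P, tau, x):
    """Return (P', tau') with state x eliminated, MFPTs preserved."""
    keep = [i for i in range(len(tau)) if i != x]
    Pxx = P[x, x]
    Pn = P[np.ix_(keep, keep)] + np.outer(P[keep, x], P[x, keep]) / (1.0 - Pxx)
    taun = tau[keep] + P[keep, x] * tau[x] / (1.0 - Pxx)
    return Pn, taun

# Toy 3-state chain: eliminate the intermediate minimum 1.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
tau = np.array([1.0, 0.1, 1.0])
Pn, taun = remove_minimum(P, tau, 1)
```

    Successively eliminating all intervening minima in this way leaves a reduced network whose waiting times yield the phenomenological rates directly.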