
    Calculating potentials of mean force and diffusion coefficients from nonequilibrium processes without Jarzynski's equality

    In general, the direct application of the Jarzynski equality (JE) to reconstruct potentials of mean force (PMFs) from a small number of nonequilibrium unidirectional steered molecular dynamics (SMD) paths is hindered by the lack of sampling of extremely rare paths with negative dissipative work. Such trajectories, which transiently violate the second law, are crucial for the validity of the JE. As a solution to this daunting problem, we propose a simple and efficient method, referred to as the FR method, for simultaneously calculating both the PMF U(z) and the corresponding diffusion coefficient D(z) along a reaction coordinate z for a classical many-particle system by employing a small number of fast SMD pullings in both the forward (F) and time-reversed (R) directions, without invoking the JE. By employing Crooks' transient fluctuation theorem (which is more general than the JE) and the stiff-spring approximation, we show that: (i) the mean dissipative works W_d in the F and R pullings are equal, (ii) both U(z) and W_d can be expressed in terms of the easily calculable mean work of the F and R processes, and (iii) D(z) can be expressed in terms of the slope of W_d. To test its viability, the FR method is applied to determine U(z) and D(z) of single-file water molecules in single-walled carbon nanotubes (SWNTs). The obtained U(z) is found to be in very good agreement with the results from other PMF calculation methods, e.g., umbrella sampling. Finally, U(z) and D(z) are used as input in a stochastic model, based on the Fokker-Planck equation, for describing water transport through SWNTs on a mesoscopic time scale that is in general inaccessible to MD simulations. Comment: REVTeX4, 13 pages, 6 EPS figures, submitted to the Journal of Chemical Physics
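The relations (i)-(iii) can be sketched numerically. The snippet below is a minimal illustration (not the authors' code) of how U(z), W_d(z) and D(z) would follow from mean forward and reverse work profiles, assuming overdamped dynamics so that dW_d/dz = kT v / D(z); the function name, the mapping of the reverse work onto the forward grid, and the units are assumptions made for the example.

```python
import numpy as np

def fr_method(z, W_F, W_R, v, kT):
    """Hedged sketch of an FR-style reconstruction.

    z   : reaction-coordinate grid (forward direction)
    W_F : mean external work of the forward pullings, accumulated along z
    W_R : mean external work of the reverse pullings, mapped onto the same grid
    v   : pulling speed, kT : thermal energy (consistent units assumed)
    """
    # (i)/(ii): mean dissipative work and PMF from the F/R mean works
    W_d = 0.5 * (W_F + W_R)   # equal in the F and R pullings
    U   = 0.5 * (W_F - W_R)   # PMF up to an additive constant
    # (iii): D(z) from the slope of W_d, assuming overdamped dynamics
    # in which dissipation per unit distance is (kT / D) * v
    D = kT * v / np.gradient(W_d, z)
    return U, W_d, D
```

For a flat PMF with constant friction (W_F = W_R proportional to z), the sketch returns U = 0 and a constant D, as expected.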

    Kinetic Monte Carlo and Cellular Particle Dynamics Simulations of Multicellular Systems

    Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Computer simulations based on Metropolis Monte Carlo (MMC) algorithms were successful in explaining and predicting the resulting stationary structures (corresponding to the lowest adhesion energy state). Here we present two alternatives to the MMC approach for modeling cellular motion and self-assembly: (1) a kinetic Monte Carlo (KMC) method, and (2) a cellular particle dynamics (CPD) method. Unlike MMC, both the KMC and CPD methods are capable of simulating the dynamics of the cellular system in real time. In the KMC approach a transition rate is associated with possible rearrangements of the cellular system, and the corresponding time evolution is expressed in terms of these rates. In the CPD approach cells are modeled as interacting cellular particles (CPs) and the time evolution of the multicellular system is determined by integrating the equations of motion of all CPs. The KMC and CPD methods are tested and compared by simulating two experimentally well-known phenomena: (1) cell sorting within an aggregate formed by two types of cells with different adhesivities, and (2) fusion of two spherical aggregates of living cells. Comment: 11 pages, 7 figures; submitted to Phys Rev
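The KMC time evolution described above (an event chosen with probability proportional to its rate, followed by an exponentially distributed time increment) can be sketched as a single Gillespie-type step. This is a generic sketch of the method, not the authors' implementation:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One kinetic Monte Carlo (Gillespie) step.

    rates : list of transition rates, one per possible rearrangement
    Returns (chosen_event_index, time_increment).
    """
    R = sum(rates)                  # total rate of leaving the current state
    # pick event i with probability rates[i] / R
    r = rng.random() * R
    acc = 0.0
    event = len(rates) - 1          # fallback for rounding at the top end
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            event = i
            break
    # physical time advances by an exponential increment with mean 1/R
    dt = -math.log(1.0 - rng.random()) / R
    return event, dt
```

Repeating this step and applying the chosen rearrangement after each one yields a trajectory of the cellular system in real (physical) time, which is what distinguishes KMC from MMC.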

    A probabilistic approach to Zhang's sandpile model

    The current literature on sandpile models mainly deals with the abelian sandpile model (ASM) and its variants. We treat a less known, but equally interesting, model, namely Zhang's sandpile. This model differs from the ASM in two aspects. First, additions are not discrete, but random amounts drawn uniformly from an interval [a,b]. Second, if a site topples, which happens if the amount at that site is larger than a threshold value E_c (a model parameter), it divides its entire content in equal amounts among its neighbors. Zhang conjectured that in the infinite volume limit, this model tends to behave like the ASM in the sense that the stationary measure for the system in large volumes tends to be peaked narrowly around a finite set. This belief is supported by simulations, but so far not by analytical investigations. We study the stationary distribution of this model in one dimension, for several values of a and b. When there is only one site, exact computations are possible. Our main result concerns the limit as the number of sites tends to infinity, in the one-dimensional case. We find that the stationary distribution, in the case a ≥ E_c/2, indeed tends to that of the ASM (up to a scaling factor), in agreement with Zhang's conjecture. For the case a = 0, b = 1 we provide strong evidence that the stationary expectation tends to √(1/2). Comment: 47 pages, 3 figures

    Algorithm Selection Framework for Cyber Attack Detection

    The number of cyber threats against both wired and wireless computer systems and other components of the Internet of Things continues to increase annually. In this work, an algorithm selection framework is employed on the NSL-KDD data set and a novel paradigm of machine learning taxonomy is presented. The framework uses a combination of user input and meta-features to select the best algorithm to detect cyber attacks on a network. Performance is compared between a rule-of-thumb strategy and a meta-learning strategy. The framework removes the guesswork of the common trial-and-error approach to algorithm selection. The framework recommends five algorithms from the taxonomy. Both strategies recommend a high-performing algorithm, though not the best-performing one. The work demonstrates the close connectedness between algorithm selection and the taxonomy on which it is premised. Comment: 6 pages, 7 figures, 1 table, accepted to WiseML '2
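The meta-learning strategy can be illustrated schematically: characterize a data set by a few meta-features, then recommend the algorithm that performed best on the most similar previously seen data set. All names, meta-features and the distance measure below are invented for the illustration; they are not the paper's actual framework:

```python
def select_algorithm(meta_features, knowledge_base):
    """Toy meta-learning algorithm selection (illustrative only).

    meta_features  : dict of data-set statistics, e.g. {"n_rows": ..., "n_attrs": ...}
    knowledge_base : list of (meta_features, best_algorithm) pairs from past runs
    Returns the algorithm associated with the nearest past data set
    in meta-feature space (Euclidean distance).
    """
    def dist(m1, m2):
        return sum((m1[k] - m2[k]) ** 2 for k in m1) ** 0.5

    _, best = min(knowledge_base, key=lambda entry: dist(meta_features, entry[0]))
    return best
```

A rule-of-thumb strategy would instead map user input (e.g. data size or attack type) directly to a fixed recommendation, which is the comparison the paper draws.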

    Statistics of extremal intensities for Gaussian interfaces

    The extremal Fourier intensities are studied for stationary Edwards-Wilkinson-type, Gaussian, interfaces with power-law dispersion. We calculate the probability distribution of the maximal intensity and find that, generically, it does not coincide with the distribution of the integrated power spectrum (i.e. roughness of the surface), nor does it obey any of the known extreme statistics limit distributions. The Fisher-Tippett-Gumbel limit distribution is, however, recovered in three cases: (i) in the non-dispersive (white noise) limit, (ii) for high dimensions, and (iii) when only short-wavelength modes are kept. In the last two cases the limit distribution emerges in novel scenarios. Comment: 15 pages, including 7 PS figures
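Case (i) is easy to probe numerically: in the white-noise limit the Gaussian mode amplitudes are independent with equal variance, so the intensities are i.i.d. exponentials and their maximum approaches the Fisher-Tippett-Gumbel law. The sketch below samples the maximal intensity for a power-law mode-variance profile; the normalisation conventions and function name are assumptions for the illustration, not taken from the paper:

```python
import numpy as np

def max_intensity_sample(n_modes, n_samples, alpha=0.0, seed=0):
    """Sample the maximal Fourier intensity of a Gaussian interface
    whose mode variances fall off as k**(-alpha).
    alpha = 0 is the non-dispersive (white-noise) limit.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(1, n_modes + 1)
    var = k.astype(float) ** (-alpha)          # per-mode variance
    # independent Gaussian real/imaginary parts -> exponential intensities
    re = rng.normal(0.0, np.sqrt(var / 2.0), (n_samples, n_modes))
    im = rng.normal(0.0, np.sqrt(var / 2.0), (n_samples, n_modes))
    I = re ** 2 + im ** 2
    return I.max(axis=1)                       # maximal intensity per sample
```

For alpha = 0 each intensity is Exp(1), so the sampled maxima should cluster around the Gumbel location ln(n_modes); for alpha > 0 the distribution generically departs from this, which is the paper's point.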

    Simulation study of the inhomogeneous Olami-Feder-Christensen model of earthquakes

    Statistical properties of the inhomogeneous version of the Olami-Feder-Christensen (OFC) model of earthquakes are investigated by numerical simulations. The spatial inhomogeneity is assumed to be dynamical. Critical features of the original homogeneous OFC model, e.g., the Gutenberg-Richter law and the Omori law, are often weakened or suppressed in the presence of inhomogeneity, whereas its characteristic features, e.g., the near-periodic recurrence of large events and the asperity-like phenomena, persist. Comment: Shortened from the first version. To appear in European Physical Journal
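For reference, the homogeneous OFC model against which the inhomogeneous version is compared can be sketched in a few lines: drive all sites uniformly until one reaches the threshold, then relax by toppling, each toppled site passing a fraction alpha of its force to each neighbor. The lattice size, dissipation parameter and driving scheme below are generic choices for illustration, not the paper's:

```python
import numpy as np

def ofc_avalanche_sizes(L, alpha, n_events, seed=0):
    """Minimal homogeneous OFC model on an L x L lattice with open
    boundaries (force passed off-lattice is lost). alpha < 0.25 makes
    the dynamics non-conservative. Returns the size of each avalanche.
    """
    rng = np.random.default_rng(seed)
    F = rng.random((L, L))              # random initial forces, threshold 1
    sizes = []
    for _ in range(n_events):
        # uniform drive until the most loaded site reaches the threshold
        F += 1.0 - F.max()
        F[np.unravel_index(np.argmax(F), F.shape)] = 1.0  # rounding guard
        size = 0
        while True:
            unstable = np.argwhere(F >= 1.0)
            if len(unstable) == 0:
                break
            for i, j in unstable:
                f = F[i, j]
                F[i, j] = 0.0           # toppled site resets
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        F[ni, nj] += alpha * f
                size += 1
        sizes.append(size)
    return sizes
```

Histogramming the returned avalanche sizes is the standard way to check for Gutenberg-Richter-like power-law statistics in this class of models; the inhomogeneous version modifies the couplings or thresholds across the lattice.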

    Prevention of bronchial hyperreactivity in a rat model of precapillary pulmonary hypertension

    Background: The development of bronchial hyperreactivity (BHR) subsequent to precapillary pulmonary hypertension (PHT) was prevented by acting on the major signalling pathways (endothelin, nitric oxide, vasoactive intestinal peptide (VIP) and prostacyclin) involved in the control of the pulmonary vascular and bronchial tones. Methods: Five groups of rats underwent surgery to prepare an aorta-caval shunt (ACS) to induce sustained precapillary PHT for 4 weeks. During this period, no treatment was applied in one group (ACS controls), while the other groups were pretreated with VIP, iloprost or tezosentan via an intraperitoneally implanted osmotic pump, or with orally administered sildenafil. An additional group underwent sham surgery. Four weeks later, the lung responsiveness to increasing doses of an intravenous infusion of methacholine (2, 4, 8, 12 and 24 μg/kg/min) was determined by using the forced oscillation technique to assess the airway resistance (Raw). Results: BHR developed in the untreated rats, as reflected by a significant decrease in ED50, the equivalent dose of methacholine required to cause a 50% increase in Raw. All drugs tested prevented the development of BHR, iloprost being the most effective in reducing both the systolic pulmonary arterial pressure (Ppa; 28%, p = 0.035) and BHR (ED50 = 9.9 ± 1.7 vs. 43 ± 11 μg/kg in ACS control and iloprost-treated rats, respectively, p = 0.008). Significant correlations were found between the levels of Ppa and ED50 (R = -0.59, p = 0.016), indicating that mechanical interdependence is primarily responsible for the development of BHR. Conclusions: The efficiency of such treatments demonstrates that re-establishing the balance of constrictor/dilator mediators via the various signalling pathways involved in PHT is of potential benefit for the avoidance of the development of BHR.
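The ED50 used above (the methacholine dose at which Raw rises 50% over its baseline value) can be estimated from a measured dose-response curve by interpolating between the two doses that bracket the target resistance. The sketch below is a generic illustration of that definition, not the authors' analysis code:

```python
def ed50(doses, raw_values):
    """Estimate ED50 by linear interpolation.

    doses      : increasing methacholine doses, first entry = baseline
    raw_values : airway resistance (Raw) measured at each dose
    Returns the dose at which Raw reaches 1.5 x baseline, or None
    if a 50% increase was never reached.
    """
    baseline = raw_values[0]
    target = 1.5 * baseline             # Raw 50% above baseline
    for k in range(1, len(doses)):
        r0, r1 = raw_values[k - 1], raw_values[k]
        if r0 < target <= r1:
            d0, d1 = doses[k - 1], doses[k]
            return d0 + (d1 - d0) * (target - r0) / (r1 - r0)
    return None
```

A lower ED50 means the airway reacts at a smaller dose, which is why a significant decrease in ED50 indicates hyperreactivity.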

    Effect of Trends on Detrended Fluctuation Analysis

    Detrended fluctuation analysis (DFA) is a scaling analysis method used to estimate long-range power-law correlation exponents in noisy signals. Many noisy signals in real systems display trends, so that the scaling results obtained from the DFA method become difficult to analyze. We systematically study the effects of three types of trends -- linear, periodic, and power-law -- and offer examples where these trends are likely to occur in real data. We compare the difference between the scaling results for artificially generated correlated noise and correlated noise with a trend, and study how trends lead to the appearance of crossovers in the scaling behavior. We find that crossovers result from the competition between the scaling of the noise and the "apparent" scaling of the trend. We study how the characteristics of these crossovers depend on (i) the slope of the linear trend; (ii) the amplitude and period of the periodic trend; (iii) the amplitude and power of the power-law trend; and (iv) the length as well as the correlation properties of the noise. Surprisingly, we find that the crossovers in the scaling of noisy signals with trends also follow scaling laws -- i.e. long-range power-law dependence of the position of the crossover on the parameters of the trends. We show that the DFA result of noise with a trend can be exactly determined by the superposition of the separate results of the DFA on the noise and on the trend, assuming that the noise and the trend are not correlated. If this superposition rule is not followed, this is an indication that the noise and the superimposed trend are not independent, so that removing the trend could lead to changes in the correlation properties of the noise. Comment: 20 pages, 16 figures
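The DFA procedure itself (integrate the signal, split the profile into windows, subtract a local polynomial trend in each window, and measure the RMS fluctuation as a function of window size) can be sketched compactly. This is a generic DFA-1 implementation for illustration, not the authors' code:

```python
import numpy as np

def dfa(signal, scales, order=1):
    """Detrended fluctuation analysis.

    Returns the fluctuation function F(n) for each window size n in
    `scales`; the scaling exponent is the slope of log F(n) vs log n
    (order=1 gives the standard DFA-1 with linear detrending).
    """
    y = np.cumsum(signal - np.mean(signal))      # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n                      # non-overlapping windows
        segments = y[: n_win * n].reshape(n_win, n)
        x = np.arange(n)
        ms_residuals = []
        for seg in segments:
            coef = np.polyfit(x, seg, order)     # local polynomial trend
            ms_residuals.append(np.mean((seg - np.polyval(coef, x)) ** 2))
        F.append(np.sqrt(np.mean(ms_residuals)))
    return np.array(F)
```

For uncorrelated noise the slope of log F(n) vs log n is close to 0.5; the crossovers discussed above appear when a trend is superimposed and its "apparent" scaling competes with that of the noise.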