
    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures in these models requires that the simulation frequently observe system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and they are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
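
    The variance-reduction mechanism underlying these techniques can be illustrated in a few lines. The sketch below is a minimal example of likelihood-ratio reweighting for a rare tail probability, not one of the Markov failure-biasing schemes the paper reviews; the threshold and sampling distribution are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = 4.0  # rare-event threshold: P(Z > t) ~ 3.17e-05

# Naive Monte Carlo: only ~3 of 100,000 samples land in the rare region,
# so the estimate is extremely noisy.
z = rng.standard_normal(n)
p_naive = np.mean(z > t)

# Importance sampling: draw from N(t, 1) so the rare region is hit about
# half the time, then reweight each sample by the likelihood ratio
# f(x)/g(x) = exp(-t*x + t^2/2) to keep the estimator unbiased.
x = rng.normal(loc=t, scale=1.0, size=n)
w = np.exp(-t * x + 0.5 * t**2)
p_is = np.mean((x > t) * w)

print(f"naive Monte Carlo  : {p_naive:.2e}")
print(f"importance sampling: {p_is:.2e}  (exact ~ 3.17e-05)")
```

    The failure-biasing schemes surveyed above rely on the same mechanism: the dynamics are biased so that failures occur often, and likelihood ratios undo the bias in the estimator.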

    Estimating population means in covariance stationary processes

    In simple random sampling, the basic assumption when estimating the standard error of the sample mean and constructing the corresponding confidence interval for the population mean is that the observations in the sample are independent. In a number of cases, however, the validity of this assumption is questionable; examples include the dependent quantities generated in jackknife estimation, or the evolution through time of a social quantitative indicator in longitudinal studies. For the case of covariance stationary processes, in this paper we explore the consequences of nevertheless estimating the standard error of the sample mean in the classical way, based on the independence assumption. As criteria we use the degree of bias in estimating the standard error, and the actual confidence level attained by the confidence interval, that is, the actual probability that the interval contains the true mean. These two criteria are computed analytically under different sample sizes in the stationary ARMA(1,1) process, which can generate different forms of autocorrelation structure between observations at different lags.
    Keywords: Jackknife estimation; ARMA; Longitudinal data; Actual confidence level
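
    The effect is easy to reproduce numerically. The sketch below is a Monte Carlo illustration with arbitrary parameter values (the paper computes the two criteria analytically): it simulates a positively autocorrelated ARMA(1,1) and measures how often the nominal 95% interval, built from the classical independence-based standard error, actually covers the true mean:

```python
import numpy as np

rng = np.random.default_rng(1)

def arma11(n, phi=0.7, theta=0.3, burn=200):
    """Simulate a stationary ARMA(1,1): x[t] = phi*x[t-1] + e[t] + theta*e[t-1]."""
    e = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
    return x[burn:]  # discard burn-in so the sample is approximately stationary

n, reps, z95 = 50, 2000, 1.96
covered = 0
for _ in range(reps):
    x = arma11(n)
    se_classical = x.std(ddof=1) / np.sqrt(n)        # valid only under independence
    covered += abs(x.mean()) <= z95 * se_classical   # true mean is 0

print(f"nominal 95% interval, actual coverage: {covered / reps:.1%}")
# Positive autocorrelation biases the classical standard error downward,
# so the actual confidence level falls well below the nominal 95%.
```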

    Simultaneous computation of dynamical and equilibrium information using a weighted ensemble of trajectories

    Equilibrium can formally be represented as an ensemble of uncoupled systems undergoing unbiased dynamics in which detailed balance is maintained. Many non-equilibrium processes can be described by suitable subsets of the equilibrium ensemble. Here, we employ the "weighted ensemble" (WE) simulation protocol [Huber and Kim, Biophys. J., 1996] to generate equilibrium trajectory ensembles and extract non-equilibrium subsets for computing kinetic quantities. States do not need to be chosen in advance. The procedure formally allows estimation of kinetic rates between arbitrary states chosen after the simulation, along with their equilibrium populations. We also describe a related history-dependent matrix procedure for estimating equilibrium and non-equilibrium observables when phase space has been divided into arbitrary non-Markovian regions, whether in WE or ordinary simulation. In this proof-of-principle study, these methods are successfully applied and validated on two molecular systems: explicitly solvated methane association and the implicitly solvated Ala4 peptide. We comment on challenges remaining in WE calculations.
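
    The core WE bookkeeping, trajectories carrying statistical weights that are split and merged within bins so that total probability is conserved, fits in a short sketch. The toy below uses a 1-D double-well potential and a simple proportional resampling rule as stand-ins for the molecular systems and the exact split/merge rules of Huber and Kim:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy weighted ensemble on a 1-D double well, U(x) = (x^2 - 1)^2, with
# overdamped Langevin dynamics between resampling steps.
n_bins, per_bin, dt, beta = 20, 4, 1e-3, 10.0
edges = np.linspace(-1.5, 1.5, n_bins + 1)

def force(x):
    return -4.0 * x * (x**2 - 1.0)  # -dU/dx

# All walkers start in the left well, with equal weights summing to 1.
xs = np.full(n_bins * per_bin, -1.0)
ws = np.full(xs.size, 1.0 / xs.size)

for cycle in range(200):
    # 1) Short burst of unbiased dynamics for every walker.
    for _ in range(50):
        xs += force(xs) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal(xs.size)

    # 2) Resample within bins so each occupied bin again holds per_bin
    #    walkers; the weight in each bin (and in total) is conserved.
    idx = np.clip(np.digitize(xs, edges) - 1, 0, n_bins - 1)
    new_xs, new_ws = [], []
    for b in range(n_bins):
        sel = np.where(idx == b)[0]
        if sel.size == 0:
            continue
        w_bin = ws[sel].sum()
        pick = rng.choice(sel, size=per_bin, p=ws[sel] / w_bin)
        new_xs.extend(xs[pick])
        new_ws.extend([w_bin / per_bin] * per_bin)
    xs, ws = np.array(new_xs), np.array(new_ws)

# Weight that has leaked into the right well: WE resolves this small
# probability far sooner than brute-force trajectories would.
print(f"right-well weight: {ws[xs > 0].sum():.3e}")
```

    Because weights, not raw trajectory counts, carry the statistics, rare barrier crossings are represented by low-weight walkers rather than waited for.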

    Stochastic Analysis of the LMS Algorithm for System Identification with Subspace Inputs

    This paper studies the behavior of the low-rank LMS adaptive algorithm for the general case in which the input transformation may not capture the exact input subspace. It is shown that the Independence Theory and the independent additive noise model are not applicable to this case. A new theoretical model for the weight mean and fluctuation behaviors is developed which incorporates the correlation between successive data vectors (as opposed to the Independence Theory model). The new theory is applied to a network echo cancellation scheme which uses partial-Haar input vector transformations. Comparison of the new model predictions with Monte Carlo simulations shows good-to-excellent agreement, certainly much better than that of the Independence Theory-based model available in the literature.
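
    For concreteness, the sketch below sets up low-rank LMS system identification with a correlated input, the regime where successive data vectors overlap and the Independence Theory breaks down. The DCT basis, AR(1) input, and all parameter values are illustrative stand-ins for the partial-Haar setup analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Low-rank LMS: a fixed transformation T projects each length-N input
# vector onto r basis vectors, and LMS adapts only r weights.
N, r, n_iter, mu = 32, 8, 20_000, 0.005
h = rng.standard_normal(N) * np.exp(-0.3 * np.arange(N))  # unknown "echo path"

# Orthonormal rank-r transform (leading DCT rows). The true h is NOT
# confined to this subspace, so the transformation does not capture the
# exact input subspace -- the paper's general case.
k = np.arange(N)
T = np.array([np.cos(np.pi * (k + 0.5) * m / N) for m in range(r)])
T /= np.linalg.norm(T, axis=1, keepdims=True)

# AR(1) input: successive data vectors share N-1 samples and are strongly
# correlated, violating the Independence Theory's core assumption.
u = np.zeros(n_iter + N)
for t in range(1, u.size):
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()

w, mse = np.zeros(r), 0.0
for t in range(n_iter):
    x = u[t : t + N][::-1]                     # tap-delay-line input vector
    d = h @ x + 0.01 * rng.standard_normal()   # desired signal (echo + noise)
    z = T @ x                                  # low-rank transformed input
    e = d - w @ z                              # a-priori error
    w += mu * e * z                            # LMS weight update
    mse = 0.99 * mse + 0.01 * e**2             # smoothed error power

print(f"steady-state error power (smoothed): {mse:.3f}")
```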

    Coarse Brownian Dynamics for Nematic Liquid Crystals: Bifurcation Diagrams via Stochastic Simulation

    We demonstrate how time-integration of stochastic differential equations (i.e., Brownian dynamics simulations) can be combined with continuum numerical bifurcation analysis techniques to analyze the dynamics of liquid crystalline polymers (LCPs). Sidestepping the necessity of obtaining explicit closures, the approach analyzes the coarse macroscopic equations (which are unavailable in closed form), estimating the necessary quantities through appropriately initialized short bursts of Brownian dynamics simulation. Through this approach, both stable and unstable branches of the equilibrium bifurcation diagram are obtained for the Doi model of LCPs, and their coarse stability is estimated. Additional macroscopic computational tasks enabled through this approach, such as coarse projective integration and coarse stabilizing controller design, are also demonstrated.
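
    The essential trick, wrapping short bursts of stochastic simulation inside a Newton solver so that unstable coarse equilibria become reachable, can be shown on a toy SDE. Everything below (the cubic drift, the ensemble mean as coarse variable, the parameter values) is an illustrative stand-in for the Doi-model computations in the paper:

```python
import numpy as np

# Coarse timestepper sketch: the macroscopic map Phi_T for the ensemble
# mean of the toy SDE  dx = (x - x^3) dt + sigma dW  is never written
# down. It is estimated by lifting a coarse state m to an ensemble,
# running a short burst of simulation, and restricting back to the mean.
# Newton iteration on g(m) = Phi_T(m) - m then finds coarse equilibria,
# including the unstable one at m = 0 that forward integration alone
# would never converge to.
n_part, dt, T, sigma = 50_000, 1e-3, 0.05, 0.3

def coarse_map(m, seed=0):
    rng = np.random.default_rng(seed)  # fixed seed (common random numbers)
    # makes the map smooth in m, so finite-difference Newton behaves well
    x = m + 0.1 * rng.standard_normal(n_part)  # lift: ensemble consistent with m
    for _ in range(int(T / dt)):               # short burst of SDE steps
        x += (x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_part)
    return x.mean()                            # restrict: back to the coarse variable

def g(m):
    return coarse_map(m) - m

for m0 in (0.2, 1.3):  # converges to the unstable (0) and a stable (~1) branch
    m, fd = m0, 1e-3
    for _ in range(10):
        dg = (g(m + fd) - g(m - fd)) / (2 * fd)
        m -= g(m) / dg  # Newton step on the estimated coarse map
    print(f"start {m0:+.1f} -> coarse equilibrium m* = {m:+.4f}")
```

    Coarse stability falls out of the same data: the sign of dg at the root (i.e., whether the slope of Phi_T exceeds 1) distinguishes the stable branch from the unstable one.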

    The formation of permanent soft binaries in dispersing clusters

    Wide, fragile binary stellar systems are found in the galactic field, and have recently been noted in the outskirts of expanding star clusters in numerical simulations. Energetically soft, with semi-major axes exceeding the initial size of their birth cluster, it is puzzling how these binaries are created and preserved. We provide an interpretation of the formation of these binaries that explains the total number formed and their distribution of energies. A population of weakly bound binaries can always be found in the cluster, in accordance with statistical detailed balance, limited at the soft end only by the current size of the cluster and whatever observational criteria are imposed. At any given time, the observed soft-binary distribution is predominantly a snapshot of a transient population. However, there is a constantly growing population of long-lived soft binaries that are removed from the detailed balance cycle by the changing density and velocity dispersion of an expanding cluster. The total number of wide binaries that form, and their energy distribution, are insensitive to the cluster population; the number is approximately one per cluster. This suggests that a population composed of many dissolved small-N clusters will populate the field with wide binaries more efficiently than one composed of dissolved large-N clusters. Locally, such binaries are present at approximately the 2% level; the production rate is thus consistent with the field being populated by clusters with a median of a few hundred stars rather than a few thousand.
    Comment: 10 pages, accepted to MNRAS
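
    The closing consistency argument can be read as simple scaling arithmetic. The snippet below is only a back-of-envelope illustration of that reading (one wide binary per dissolved cluster of N stars gives a field wide-binary star fraction of roughly 2/N), not the paper's actual calculation:

```python
# If each dissolved cluster of N stars contributes ~1 wide binary
# (2 member stars) to the field, the fraction of field stars residing
# in wide binaries scales as ~2/N.
for n_stars in (100, 300, 1000, 3000):
    print(f"N = {n_stars:4d} stars/cluster -> wide-binary fraction ~ {2 / n_stars:.1%}")
# A percent-level local fraction points to typical clusters of a few
# hundred stars; a few thousand would predict an order of magnitude fewer.
```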