
    Hitting Times in Markov Chains with Restart and their Application to Network Centrality

    Motivated by applications in telecommunications, computer science and physics, we consider a discrete-time Markov process with restart. At each step the process either, with a positive probability, restarts from a given distribution, or, with the complementary probability, continues according to a Markov transition kernel. The main contribution of the present work is that we obtain an explicit expression for the expectation of the hitting time (to a given target set) of the process with restart. The formula is convenient when considering the problem of optimization of the expected hitting time with respect to the restart probability. We illustrate our results with two examples in uncountable and countable state spaces and with an application to network centrality.
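    A minimal illustrative sketch (not the paper's closed-form expression): the Python snippet below estimates the expected hitting time of a target set for a simple random walk with restart by Monte Carlo, and sweeps the restart probability, which is the optimization problem the abstract refers to. The chain, restart distribution, target set and parameter values are all hypothetical.

import numpy as np

def hitting_time_with_restart(P, restart_dist, target, r, rng, max_steps=100_000):
    """Simulate one hitting time of `target` for a Markov chain with restart.

    At each step the chain restarts from `restart_dist` with probability r,
    otherwise it moves according to the transition matrix P.
    """
    n = P.shape[0]
    state = rng.choice(n, p=restart_dist)      # start from the restart distribution
    for t in range(1, max_steps + 1):
        if rng.random() < r:
            state = rng.choice(n, p=restart_dist)
        else:
            state = rng.choice(n, p=P[state])
        if state in target:
            return t
    return max_steps                            # truncation guard for very long runs

# Hypothetical example: random walk on a cycle of 20 nodes, target node {10},
# restart distribution concentrated on node 0.
n = 20
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.5
restart_dist = np.zeros(n)
restart_dist[0] = 1.0
target = {10}
rng = np.random.default_rng(0)

for r in (0.01, 0.05, 0.1, 0.2):
    samples = [hitting_time_with_restart(P, restart_dist, target, r, rng) for _ in range(500)]
    print(f"r = {r:.2f}: estimated E[hitting time] ~ {np.mean(samples):.1f}")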

    Analysis of RSVP-TE graceful restart

    GMPLS is viewed as an attractive intelligent control plane for different network technologies, and graceful restart is a key technique in ensuring this control plane is resilient and able to recover adequately from faults. This paper analyses the graceful restart mechanism proposed for a key GMPLS protocol, RSVP-TE. A novel analytical model, which may be readily adapted to study other protocols, is developed. This model allows the efficacy of graceful restart to be evaluated in a number of scenarios. It is found that, unsurprisingly, increasing control message loss and increasing the number of data plane connections both increase the time to complete recovery. It is also found that a threshold exists beyond which a relatively small change in the control message loss probability causes a disproportionately large increase in the time to complete recovery. These findings suggest that the performance of graceful restart is worthy of further investigation, with emphasis placed on exploring procedures to optimise it.
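    The following toy Python sketch is not the paper's analytical model; it only illustrates, under simplified and entirely hypothetical assumptions (independent per-connection recovery, fixed retry timeout), how the time to complete recovery grows with control-message loss probability when many data-plane connections must be resynchronised.

import random

def connection_recovery_time(loss_prob, rtt=0.1, retry_timeout=1.0, rng=None):
    """Time to resynchronise one connection's state after a restart.

    The recovery message is lost with probability `loss_prob` and retried
    after `retry_timeout`; a successful exchange takes one round-trip `rtt`.
    All timing constants are illustrative, not taken from the paper's model.
    """
    rng = rng or random
    t = 0.0
    while rng.random() < loss_prob:     # message lost: wait for the timeout, retry
        t += retry_timeout
    return t + rtt

def time_to_complete_recovery(n_connections, loss_prob, rng):
    """Recovery completes only when the slowest connection has resynchronised."""
    return max(connection_recovery_time(loss_prob, rng=rng) for _ in range(n_connections))

rng = random.Random(42)
for q in (0.01, 0.1, 0.3, 0.5):
    mean = sum(time_to_complete_recovery(1000, q, rng) for _ in range(200)) / 200
    print(f"loss probability {q:.2f}: mean time to complete recovery ~ {mean:.2f} s")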

    CSL model checking of Deterministic and Stochastic Petri Nets

    Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under certain restrictions, corresponds to a class of Markov Regenerative Stochastic Processes (MRGP). In this paper, we investigate the use of CSL (Continuous Stochastic Logic) to express probabilistic properties, such as time-bounded until and time-bounded next, at the DSPN level. The verification of such properties requires the solution of the steady-state and transient probabilities of the underlying MRGP. We also address a number of semantic issues regarding the application of CSL to MRGPs and provide numerical model checking algorithms for this logic. A prototype model checker, based on SPNica, is also described.
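    As a hedged illustration of the main computational building block, the sketch below computes transient state probabilities of a plain CTMC by uniformization, which is what time-bounded CSL operators reduce to in the purely Markovian case (after making goal and unsafe states absorbing); the paper's MRGP setting additionally has to handle deterministic events. The generator matrix and time bound are illustrative.

import numpy as np
from math import exp

def transient_probabilities(Q, p0, t, eps=1e-10):
    """Transient distribution p(t) of a CTMC with generator Q via uniformization."""
    rate = max(-Q.diagonal())            # uniformization rate Lambda
    P = np.eye(Q.shape[0]) + Q / rate    # uniformized DTMC
    term = p0.astype(float).copy()       # p0 * P^k, updated in the loop
    result = np.zeros_like(term)
    poisson = exp(-rate * t)             # Poisson(rate*t) weight for k = 0
    weight_sum, k = 0.0, 0
    while weight_sum < 1.0 - eps:        # stop once the Poisson tail is below eps
        result += poisson * term
        weight_sum += poisson
        k += 1
        poisson *= rate * t / k
        term = term @ P
    return result

# Toy 3-state CTMC; generator entries and the time bound are illustrative.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(transient_probabilities(Q, p0, t=0.5))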

    Schwinger-Dyson equations in large-N quantum field theories and nonlinear random processes

    We propose a stochastic method for solving Schwinger-Dyson equations in large-N quantum field theories. Expectation values of single-trace operators are sampled by stationary probability distributions of the so-called nonlinear random processes. The set of all histories of such processes corresponds to the set of all planar diagrams in the perturbative expansions of the expectation values of singlet operators. We illustrate the method on the examples of the matrix-valued scalar field theory and the Weingarten model of random planar surfaces on the lattice. For theories with compact field variables, such as sigma-models or non-Abelian lattice gauge theories, the method does not converge in the physically most interesting weak-coupling limit. In this case one can absorb the divergences into a self-consistent redefinition of expansion parameters. Stochastic solution of the self-consistency conditions can be implemented as a "memory" of the random process, so that some parameters of the process are estimated from its previous history. We illustrate this idea on the example of the two-dimensional O(N) sigma-model. Extension to non-Abelian lattice gauge theories is discussed. Comment: 16 pages RevTeX, 14 figures; v2: algorithm for the Weingarten model corrected; v3: published version.
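    A loose toy analogue of the idea (not the paper's algorithm): the expectation of a branching, multiplicative random variable can solve a nonlinear self-consistency equation, here the quadratic x = a + b*x^2, whose power-series solution in b counts planar (Catalan) diagrams. The couplings and the termination probability below are hypothetical.

import random

def sample(a, b, p, rng):
    """One sample of a random variable Y whose expectation solves x = a + b*x^2.

    With probability p the recursion terminates (contributing a/p); otherwise it
    branches into two independent copies (contributing b/(1-p) * Y1 * Y2).
    Taking expectations gives E[Y] = a + b*E[Y]^2, the toy nonlinear equation.
    Choosing p > 1/2 makes the branching subcritical, so the recursion ends a.s.
    """
    if rng.random() < p:
        return a / p
    return (b / (1 - p)) * sample(a, b, p, rng) * sample(a, b, p, rng)

a, b, p = 1.0, 0.2, 0.7                           # illustrative couplings
rng = random.Random(0)
n = 200_000
estimate = sum(sample(a, b, p, rng) for _ in range(n)) / n
exact = (1 - (1 - 4 * a * b) ** 0.5) / (2 * b)    # power-series (Catalan) root of x = a + b*x^2
print(f"stochastic estimate {estimate:.4f}   exact root {exact:.4f}")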

    Asymptotic shape for the contact process in random environment

    The aim of this article is to prove asymptotic shape theorems for the contact process in stationary random environment. These theorems generalize known results for the classical contact process. In particular, if H_t denotes the set of already occupied sites at time t, we show that for almost every environment, when the contact process survives, the set H_t/t almost surely converges to a compact set that only depends on the law of the environment. To this aim, we prove a new almost subadditive ergodic theorem. Comment: Published at http://dx.doi.org/10.1214/11-AAP796 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
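    For illustration only, the sketch below simulates the basic one-dimensional contact process with homogeneous rates (the paper's setting has rates drawn from a stationary random environment) and reports the occupied set H_t, the object whose rescaled shape the theorem describes. Lattice size, infection rate and time horizon are arbitrary.

import random

def contact_process(L=200, lam=2.0, t_max=30.0, rng=random.Random(1)):
    """Basic contact process on {-L,...,L}: particles die at rate 1 and give
    birth onto each empty neighbour at rate `lam` (homogeneous rates here)."""
    occupied = {0}                      # start from a single particle at the origin
    t = 0.0
    while occupied and t < t_max:
        events = []
        for x in occupied:
            events.append(("death", x, 1.0))
            for y in (x - 1, x + 1):    # each occupied neighbour infects at rate lam
                if abs(y) <= L and y not in occupied:
                    events.append(("birth", y, lam))
        total_rate = sum(r for _, _, r in events)
        t += rng.expovariate(total_rate)            # Gillespie: exponential waiting time
        u = rng.random() * total_rate
        for kind, site, r in events:                # pick an event proportionally to its rate
            u -= r
            if u <= 0:
                if kind == "death":
                    occupied.discard(site)
                else:
                    occupied.add(site)
                break
    return t, occupied

t, occ = contact_process()
if occ:
    print(f"survived to t={t:.1f}; H_t spans [{min(occ)}, {max(occ)}]; |H_t| = {len(occ)}")
else:
    print(f"died out at t={t:.1f}")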

    Scaling symmetry, renormalization, and time series modeling

    We present and discuss a stochastic model of financial assets dynamics based on the idea of an inverse renormalization group strategy. With this strategy we construct the multivariate distributions of elementary returns based on the scaling with time of the probability density of their aggregates. In its simplest version the model is the product of an endogenous auto-regressive component and a random rescaling factor designed to embody also exogenous influences. Mathematical properties like increments' stationarity and ergodicity can be proven. Thanks to the relatively low number of parameters, model calibration can be conveniently based on a method of moments, as exemplified in the case of historical data of the S&P500 index. The calibrated model accounts very well for many stylized facts, like volatility clustering, power law decay of the volatility autocorrelation function, and multiscaling with time of the aggregated return distribution. In agreement with empirical evidence in finance, the dynamics is not invariant under time reversal and, with suitable generalizations, skewness of the return distribution and leverage effects can be included. The analytical tractability of the model opens interesting perspectives for applications, for instance in terms of obtaining closed formulas for derivative pricing. Further important features are: the possibility of making contact, in certain limits, with auto-regressive models widely used in finance; the possibility of partially resolving the long-memory and short-memory components of the volatility, with consistent results when applied to historical series. Comment: Main text (17 pages, 13 figures) plus Supplementary Material (16 pages, 5 figures).
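    The toy sketch below is not the paper's model or calibration; it only illustrates the general "auto-regressive component times random rescaling factor" idea with a standard log-volatility AR(1) driving i.i.d. Gaussian factors, and checks the volatility-clustering signature (slowly decaying autocorrelation of absolute returns). All parameters are hypothetical.

import numpy as np

def simulate_returns(n=20_000, phi=0.97, eta=0.25, seed=0):
    """Toy multiplicative model: r_t = exp(a_t) * s_t, with a_t an AR(1)
    log-volatility ('endogenous auto-regressive component') and s_t i.i.d.
    standard normal ('random rescaling factor'). Hypothetical parameters."""
    rng = np.random.default_rng(seed)
    log_vol = np.zeros(n)
    for t in range(1, n):
        log_vol[t] = phi * log_vol[t - 1] + eta * rng.standard_normal()
    return np.exp(log_vol) * rng.standard_normal(n)

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

r = simulate_returns()
print("ACF of returns at lag 1:            %.3f" % autocorr(r, 1))           # ~ 0: no linear memory
print("ACF of |returns| at lags 1, 10, 100:",
      [round(autocorr(np.abs(r), k), 3) for k in (1, 10, 100)])              # slow decay: clustering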

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
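    A minimal sketch of the failure-biasing flavour of importance sampling on a toy repairable system (all rates and the biasing constant are illustrative, and this is not any specific scheme surveyed in the paper): crude simulation essentially never observes the rare full-failure event, while the likelihood-ratio-weighted biased simulation estimates its probability from the same number of runs.

import random

# Toy reliability model: c identical components, failure rate lam, repair rate mu.
# Estimate gamma = P(all c components fail before the system returns to the
# fully-working state), starting just after the first failure.
c, lam, mu = 4, 1e-3, 1.0

def failure_prob(i):
    """Probability that the next event in state i (i components failed) is a failure."""
    return (c - i) * lam / ((c - i) * lam + i * mu)

def one_cycle(bias, rng):
    """Simulate one regeneration cycle from state 1.

    Returns the likelihood-ratio-weighted indicator of total failure; with
    bias=None the true dynamics are used and the weight stays 1.
    """
    i, weight = 1, 1.0
    while 0 < i < c:
        p = failure_prob(i)
        p_sim = bias if bias is not None else p   # biased probability used for sampling
        if rng.random() < p_sim:
            weight *= p / p_sim                   # likelihood ratio for a failure step
            i += 1
        else:
            weight *= (1 - p) / (1 - p_sim)       # likelihood ratio for a repair step
            i -= 1
    return weight if i == c else 0.0

rng = random.Random(0)
n = 100_000
crude = sum(one_cycle(None, rng) for _ in range(n)) / n
biased = sum(one_cycle(0.5, rng) for _ in range(n)) / n
print(f"crude Monte Carlo estimate:    {crude:.3e}")
print(f"importance-sampling estimate:  {biased:.3e}")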