Towards automatic Markov reliability modeling of computer architectures
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes, such as standby redundancy and repair, or renewal processes, such as transient or intermittent faults. The task of generating these models is tedious and prone to human error because of the large number of states and transitions involved in any reasonable system. Model formulation is therefore a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the need to automate model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model, formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
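As a minimal sketch of the kind of model ARM would emit, consider a two-unit standby system with repair, solved numerically. The rates, states, and the Euler integration are illustrative assumptions, not taken from the paper.

```python
# Toy continuous-time Markov reliability model in the spirit of ARM's output:
# a 2-unit standby system with repair. State = number of good units.
# Rates lam (failure) and mu (repair) are illustrative, not from the paper.

def reliability(lam=1e-3, mu=1e-1, t_end=1000.0, dt=0.01):
    """Integrate the Kolmogorov forward equations dp/dt = p Q by Euler steps."""
    # Generator matrix Q over states [2 good, 1 good, 0 good (absorbing)].
    Q = [
        [-lam,         lam,  0.0],  # active unit fails, standby takes over
        [  mu, -(lam + mu),  lam],  # repair back to 2, or second failure
        [ 0.0,         0.0,  0.0],  # system failure is absorbing
    ]
    p = [1.0, 0.0, 0.0]             # start with both units good
    for _ in range(int(t_end / dt)):
        p = [p[i] + dt * sum(p[j] * Q[j][i] for j in range(3)) for i in range(3)]
    return 1.0 - p[2]               # reliability = P(not yet absorbed)

print(reliability())
```

Even this three-state model shows why hand specification does not scale: adding one more component type multiplies the state count, which is exactly the bookkeeping ARM automates.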
Entropy: The Markov Ordering Approach
The focus of this article is on entropy and Markov processes. We study the
properties of functionals which are invariant with respect to monotonic
transformations and analyze two invariant "additivity" properties: (i)
existence of a monotonic transformation which makes the functional additive
with respect to the joining of independent systems and (ii) existence of a
monotonic transformation which makes the functional additive with respect to
the partitioning of the space of states. All Lyapunov functionals for Markov
chains which have properties (i) and (ii) are derived. We describe the most
general ordering of the distribution space, with respect to which all
continuous-time Markov processes are monotonic (the {\em Markov order}). The
solution differs significantly from the ordering given by the inequality of
entropy growth. For inference, this approach results in a convex compact set of
conditionally "most random" distributions.
Comment: 50 pages, 4 figures, postprint version. A more detailed discussion of
the various entropy additivity properties and of the separation of variables for
independent subsystems in the MaxEnt problem is added in Section 4.2.
The bibliography is extended.
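For orientation, the Lyapunov functionals discussed here belong to the standard Csiszár–Morimoto family; the following sketch uses our own notation ($f$, $p^*$), which is not spelled out in the abstract. For a Markov chain with equilibrium distribution $p^*$,
\[
H_f(p \,\|\, p^*) \;=\; \sum_i p_i^* \, f\!\left(\frac{p_i}{p_i^*}\right),
\qquad f \text{ convex},
\qquad \frac{\mathrm{d}}{\mathrm{d}t}\, H_f\bigl(p(t) \,\|\, p^*\bigr) \;\le\; 0 ,
\]
with $f(x) = x \ln x$ recovering the relative (Kullback–Leibler) entropy. The additivity requirements (i) and (ii) then single out particular one-parameter subfamilies of this class, up to monotonic transformation.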
Model-based dependability analysis: state-of-the-art, challenges and future outlook
Abstract: Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques and to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models compositionally from component failure models using the topology of the system; the other uses design models, typically state automata, to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms and provides insight into their working mechanisms, applicability, strengths, and challenges, as well as recent developments within these fields. We also discuss emerging trends toward integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
Event-Driven Monte Carlo: exact dynamics at all time-scales for discrete-variable models
We present an algorithm for the simulation of the exact real-time dynamics of
classical many-body systems with discrete energy levels. In the same spirit of
kinetic Monte Carlo methods, a stochastic solution of the master equation is
found, with no need to define any other phase-space construction. However,
unlike existing methods, the present algorithm does not assume any particular
statistical distribution to perform moves or to advance the time, and thus is a
unique tool for the numerical exploration of fast and ultra-fast dynamical
regimes. By decomposing the problem in a set of two-level subsystems, we find a
natural variable step size, that is well defined from the normalization
condition of the transition probabilities between the levels. We successfully
test the algorithm with known exact solutions for non-equilibrium dynamics and
equilibrium thermodynamical properties of Ising-spin models in one and two
dimensions, and compare to standard implementations of kinetic Monte Carlo
methods. The present algorithm is directly applicable to the study of the
real-time dynamics of a large class of classical Markov chains, and particularly
to short-time situations where the exact evolution is relevant.
Automatic specification of reliability models for fault-tolerant computers
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
Asymptotology of Chemical Reaction Networks
The concept of the limiting step is extended to the asymptotology of
multiscale reaction networks. Complete theory for linear networks with well
separated reaction rate constants is developed. We present algorithms for
explicit approximations of eigenvalues and eigenvectors of kinetic matrix.
Accuracy of estimates is proven. Performance of the algorithms is demonstrated
on simple examples. Application of algorithms to nonlinear systems is
discussed.Comment: 23 pages, 8 figures, 84 refs, Corrected Journal Versio
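A small hedged illustration of the separated-constants regime (not the paper's general algorithm): for the reversible chain A ⇄ B → C with constants k1 ≫ km ≫ k2, the exact nonzero eigenvalues of the kinetic matrix can be checked against the one-constant asymptotic estimates. The network and rate values are made up for illustration.

```python
import math

def nonzero_eigenvalues(k1, km, k2):
    """Exact nonzero eigenvalues of the kinetic matrix of A <-> B -> C.
    The 2x2 block over (A, B) has trace -(k1+km+k2) and determinant k1*k2."""
    tr = -(k1 + km + k2)
    det = k1 * k2
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr - disc) / 2.0, (tr + disc) / 2.0   # (fast mode, slow mode)

k1, km, k2 = 100.0, 1.0, 0.01        # well-separated rate constants
fast, slow = nonzero_eigenvalues(k1, km, k2)
# Asymptotic estimates: the fast mode is set by the largest constant,
# the slow mode by the limiting step.
print(fast, -k1)
print(slow, -k2)
```

With this separation the relative error of both one-constant approximations is on the order of km/k1, i.e. about one percent, which is the kind of explicit eigenvalue estimate the paper develops systematically.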
High-level Counterexamples for Probabilistic Automata
Providing compact and understandable counterexamples for violated system
properties is an essential task in model checking. Existing works on
counterexamples for probabilistic systems so far computed either a large set of
system runs or a subset of the system's states, both of which are of limited
use in manual debugging. Many probabilistic systems are described in a guarded
command language like the one used by the popular model checker PRISM. In this
paper we describe how a smallest possible subset of the commands can be
identified which together make the system erroneous. We additionally show how
the selected commands can be further simplified to obtain a well-understandable
counterexample.
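The search idea can be sketched on a toy model. This is not PRISM's guarded-command semantics or the paper's actual method, only an illustration of finding a smallest command subset whose induced sub-model still violates a reachability threshold; the commands, states, and probabilities are invented.

```python
from itertools import combinations

# Each "command" contributes probabilistic transitions (src, prob, dst).
# State 3 plays the role of the bad state; all numbers are made up.
COMMANDS = {
    "init":  [(0, 0.5, 1), (0, 0.5, 2)],
    "risky": [(1, 0.9, 3)],
    "safe":  [(1, 0.1, 2)],
    "retry": [(2, 1.0, 0)],
}

def reach_bad(active, bad=3, n=4, iters=200):
    """Value iteration for P(reach bad); disabled probability mass self-loops."""
    p = [1.0 if s == bad else 0.0 for s in range(n)]
    for _ in range(iters):
        new = []
        for s in range(n):
            if s == bad:
                new.append(1.0)
                continue
            total, acc = 0.0, 0.0
            for name in active:
                for (src, pr, dst) in COMMANDS[name]:
                    if src == s:
                        total += pr
                        acc += pr * p[dst]
            new.append(acc + (1.0 - total) * p[s])  # leftover mass stays put
        p = new
    return p[0]

def smallest_violating_subset(threshold=0.4):
    """Brute force from small to large; return the first violating subset."""
    names = list(COMMANDS)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            if reach_bad(set(subset)) > threshold:
                return set(subset)
    return None
```

The paper's contribution is to make this selection scale (and to simplify the chosen commands further), whereas the brute-force enumeration above is exponential in the number of commands.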
Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
The ability to integrate information in the brain is considered to be an
essential property for cognition and consciousness. Integrated Information
Theory (IIT) hypothesizes that the amount of integrated information (Φ) in
the brain is related to the level of consciousness. IIT proposes that to
quantify information integration in a system as a whole, integrated information
should be measured across the partition of the system at which information loss
caused by partitioning is minimized, called the Minimum Information Partition
(MIP). The computational cost of exhaustively searching for the MIP grows
exponentially with system size, making it difficult to apply IIT to real neural
data. It has previously been shown that if a measure of Φ satisfies a
mathematical property, submodularity, the MIP can be found in polynomial
order by an optimization algorithm. However, although the first version of Φ
is submodular, the later versions are not. In this study, we empirically
explore to what extent the algorithm can be applied to the non-submodular
measures of Φ by evaluating the accuracy of the algorithm on simulated
data and real neural data. We find that the algorithm identifies the MIP in a
nearly perfect manner even for the non-submodular measures. Our results show
that the algorithm allows us to measure Φ in large systems within a
practical amount of time.
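To make the exponential cost of the exhaustive MIP search concrete, here is a toy bipartition search. The "integrated information" below is a stand-in proxy (total coupling weight severed by the cut), not any of the actual Φ measures from IIT; the coupling matrix is invented.

```python
from itertools import combinations

# Symmetric coupling strengths between 4 units; purely illustrative numbers.
WEIGHTS = {
    (0, 1): 0.9, (0, 2): 0.1, (0, 3): 0.2,
    (1, 2): 0.2, (1, 3): 0.1, (2, 3): 0.8,
}

def phi(part_a):
    """Proxy for information lost by cutting the system at this bipartition:
    the total coupling weight crossing the cut."""
    return sum(w for (i, j), w in WEIGHTS.items()
               if (i in part_a) != (j in part_a))

def exhaustive_mip(n=4):
    """Check all 2^(n-1) - 1 bipartitions; this count explodes with n,
    which is why polynomial-time (submodular) search matters."""
    units = set(range(n))
    best, best_phi = None, float("inf")
    for k in range(1, n // 2 + 1):
        for a in combinations(range(n), k):
            a = set(a)
            if 0 not in a and k == n - k:   # skip mirror-image duplicates
                continue
            value = phi(a)
            if value < best_phi:
                best, best_phi = (a, units - a), value
    return best, best_phi
```

On this toy system the MIP separates the strongly coupled pairs {0,1} and {2,3}, cutting only the weak links between them; for n units the exhaustive search visits 2^(n-1) - 1 bipartitions, which is the cost the submodularity-based algorithm avoids.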