
    Fast Parallel Deterministic and Randomized Algorithms for Model Checking

    Model checking is a powerful technique for the verification of concurrent systems. One of the potential problems with this technique is state space explosion. There are two ways to cope with state explosion: reducing the search space and searching less space. Most existing algorithms are based on the first approach. One successful approach to reducing the search space uses Binary Decision Diagrams (BDDs) to represent the system. Systems with a large number of states (of the order of 5 x 10 ) have thus been verified. But this heuristic approach has limitations: even systems of reasonable complexity have many more states, and the BDD approach may fail even on some simple systems. In this paper we propose the use of parallelism to extend the applicability of BDDs in model checking. In particular, we present very fast algorithms for model checking that employ BDDs; the algorithms presented are much faster than the best previously known algorithms. We also describe searching less space as an attractive approach to model checking and demonstrate the power of this approach, and we suggest the use of randomization in the design of model checking algorithms.
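    The core loop that BDD-based model checkers run is a reachability fixed-point iteration. The following is a minimal sketch of that loop, not the paper's algorithm: plain Python sets stand in for BDDs, and the toy counter system is invented for illustration; a real checker would encode the reached set and the transition relation symbolically as BDDs.

```python
def reachable(initial, successors):
    """Compute all states reachable from `initial` by fixed-point iteration.

    initial    -- iterable of initial states
    successors -- function mapping a state to the set of its successor states
    """
    reached = set(initial)
    frontier = set(initial)
    while frontier:                      # iterate until no new states appear
        image = set()
        for s in frontier:               # image computation step
            image.update(successors(s))
        frontier = image - reached       # keep only genuinely new states
        reached |= frontier
    return reached

# Toy 3-bit counter: state i steps to (i + 1) mod 8.
succ = lambda i: {(i + 1) % 8}
print(sorted(reachable({0}, succ)))      # all 8 states are reachable from 0
```

In a symbolic checker the same loop scales to astronomically many states because `reached` and `frontier` are BDDs rather than explicit sets.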

    Counterexample Generation in Probabilistic Model Checking

    Providing evidence for the refutation of a property is an essential, if not the most important, feature of model checking. This paper considers algorithms for counterexample generation for probabilistic CTL formulae in discrete-time Markov chains. Finding the strongest evidence (i.e., the most probable path) violating a (bounded) until-formula is shown to be reducible to a single-source (hop-constrained) shortest path problem. Counterexamples of smallest size that deviate most from the required probability bound can be obtained by applying (small amendments to) k-shortest (hop-constrained) paths algorithms. These results can be extended to Markov chains with rewards and to LTL model checking, and are useful for Markov decision processes. Experimental results show that the size of a counterexample is typically excessive. To obtain much more compact representations, we present a simple algorithm to generate (minimal) regular expressions that can act as counterexamples. The feasibility of our approach is illustrated by means of two communication protocols: leader election in an anonymous ring network and the Crowds protocol.
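    The reduction described above can be sketched concretely: replacing each transition probability p with the weight -log(p) turns "most probable path" into "shortest path", so a Dijkstra-style search finds the strongest evidence. This is an illustrative sketch, not the paper's implementation; the function name and the toy chain below are invented.

```python
import heapq
import math

def strongest_evidence(chain, source, targets):
    """Return (probability, path) of the most probable path from `source`
    to any state in `targets`. `chain[s]` maps successor -> probability."""
    dist = {source: 0.0}                 # accumulated -log probability
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, s = heapq.heappop(heap)
        if s in targets:                 # first target popped is optimal
            path = [s]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return math.exp(-d), path[::-1]
        if d > dist.get(s, math.inf):
            continue                     # stale heap entry
        for t, p in chain.get(s, {}).items():
            nd = d - math.log(p)         # multiplying probs = adding weights
            if nd < dist.get(t, math.inf):
                dist[t], prev[t] = nd, s
                heapq.heappush(heap, (nd, t))
    return 0.0, []

# Toy chain: two routes from 'a' into the property-violating state 'bad'.
chain = {'a': {'b': 0.6, 'c': 0.4}, 'b': {'bad': 0.5}, 'c': {'bad': 0.9}}
prob, path = strongest_evidence(chain, 'a', {'bad'})
print(round(prob, 2), path)             # 0.36 ['a', 'c', 'bad']
```

The hop-constrained and k-shortest-path variants the paper uses extend this same weighted-graph view.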

    Algorithms for testing equivalence of finite automata, with a grading tool for JFLAP

    A wide variety of algorithms can be used to determine the equivalence of two Deterministic Finite Automata (DFAs) and/or Nondeterministic Finite Automata (NFAs). This project focuses on three key areas: 1. A detailed discussion of several algorithms that can be used to prove the equivalence of two DFAs (and/or NFAs, since every NFA has an equivalent DFA), with an analysis of the time complexity involved in each case. 2. Modifications to a few of these algorithms to produce a 'witness' string if the two automata are not equivalent. This string is accepted by one of the automata but not by the other, so it serves as a clear demonstration of why the two automata are inequivalent. 3. A Java implementation of a couple of efficient algorithms to prove equivalence. The code is designed specifically to work with JFLAP, the Java Formal Language and Automata Package. JFLAP is a popular program from Duke University which can be used to demonstrate and manipulate models such as finite automata. JFLAP allows students to enter finite automata via an easy-to-use GUI, and this project adds functionality so that instructors can grade homework assignments and/or students can receive detailed feedback in the form of a witness string.
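    One standard equivalence check of the kind discussed above explores the product automaton breadth-first and stops at the first state pair where exactly one DFA accepts; the word spelling out that path is the witness. The sketch below is illustrative (in Python rather than the project's Java, with an invented DFA encoding), not the project's JFLAP code.

```python
from collections import deque

def equivalent(dfa1, dfa2):
    """Each DFA is (start, accepting_set, delta) with total delta[(state, sym)].
    Returns (True, None) if equivalent, else (False, witness_string)."""
    alphabet = {sym for (_, sym) in list(dfa1[2]) + list(dfa2[2])}
    start = (dfa1[0], dfa2[0])
    seen, queue = {start}, deque([(start, "")])
    while queue:
        (p, q), word = queue.popleft()
        if (p in dfa1[1]) != (q in dfa2[1]):   # one accepts, one rejects
            return False, word                  # `word` is the witness
        for sym in alphabet:
            nxt = (dfa1[2][(p, sym)], dfa2[2][(q, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + sym))
    return True, None

# Toy example: A accepts strings with an even number of 'a's,
# B accepts strings with an odd number of 'a's.
A = (0, {0}, {(0, 'a'): 1, (1, 'a'): 0})
B = (0, {1}, {(0, 'a'): 1, (1, 'a'): 0})
print(equivalent(A, B))   # (False, '') -- the empty string distinguishes them
```

Because BFS explores short words first, the witness returned is a shortest distinguishing string.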

    An Algorithm to Compute the Character Access Count Distribution for Pattern Matching Algorithms

    We propose a framework for the exact probabilistic analysis of window-based pattern matching algorithms, such as Boyer-Moore, Horspool, Backward DAWG Matching, Backward Oracle Matching, and more. In particular, we develop an algorithm that efficiently computes the distribution of a pattern matching algorithm's running time cost (such as the number of text character accesses) for any given pattern in a random text model. Text models range from simple uniform models to higher-order Markov models or hidden Markov models (HMMs). Furthermore, we provide an algorithm to compute the exact distribution of differences in running time cost between two pattern matching algorithms. Methodologically, we use extensions of finite automata which we call deterministic arithmetic automata (DAAs) and probabilistic arithmetic automata (PAAs) [Marschall2008]. Given an algorithm, a pattern, and a text model, a PAA is constructed from which the sought distributions can be derived using dynamic programming. To our knowledge, this is the first time that substring- or suffix-based pattern matching algorithms are analyzed exactly by computing the whole distribution of running time cost. Experimentally, we compare Horspool's algorithm, Backward DAWG Matching, and Backward Oracle Matching on prototypical patterns of short length, and we provide statistics on the size of the minimal DAAs for these computations.
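    For tiny alphabets and text lengths, the exact access-count distribution the paper computes efficiently via PAAs can also be obtained by brute force: run the matcher on every possible text and tally the costs. The sketch below does this for Horspool under a uniform i.i.d. text model; it is a naive baseline for intuition, not the paper's DAA/PAA method, and the access-counting convention (the shift character is not re-counted) is one of several reasonable choices.

```python
import itertools
from collections import Counter
from fractions import Fraction

def horspool_accesses(text, pattern):
    """Run Horspool matching on `text` and count text-character accesses.
    The shift uses the last window character, already read during the
    comparison, so it is not counted a second time."""
    m, n = len(pattern), len(text)
    shift = {c: m for c in set(text) | set(pattern)}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    accesses, pos = 0, 0
    while pos + m <= n:
        j = m - 1
        while j >= 0:
            accesses += 1                      # one text-character access
            if text[pos + j] != pattern[j]:
                break
            j -= 1
        pos += shift[text[pos + m - 1]]        # Horspool shift rule
    return accesses

def access_distribution(pattern, alphabet, n):
    """Exact access-count distribution over all |alphabet|^n equally
    likely texts of length n (uniform i.i.d. text model)."""
    counts = Counter(
        horspool_accesses(''.join(t), pattern)
        for t in itertools.product(alphabet, repeat=n))
    total = len(alphabet) ** n
    return {k: Fraction(v, total) for k, v in sorted(counts.items())}

print(access_distribution('ab', 'ab', 3))
# {2: Fraction(3, 4), 3: Fraction(1, 4)}
```

The PAA construction collapses this exponential enumeration into dynamic programming over automaton states, which is what makes exact analysis feasible for realistic text lengths.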

    Optimization of the Wire Electric Discharge Machining Process of Nitinol-60 Shape Memory Alloy Using Taguchi-Pareto Design of Experiments, Grey-Wolf Analysis, and Desirability Function Analysis

    The nitinol-60 shape memory alloy has been rated as one of the most widely utilized materials in real-life industrial applications, including biomedical appliances, coupling and sealing elements, and actuators, among others. However, less is known about its optimization characteristics when choosing the best parameters in a surface integrity analysis using the wire EDM process. In this research, the authors propose a robust Taguchi-Pareto (TP)-grey wolf optimization (GWO)-desirability function analysis (DFA) scheme that hybridizes the TP method, the GWO approach, and the DFA method. The coupling point of the TP method to the GWO is the introduction of the discriminated signal-to-noise ratios contained in the selected 80-20 Pareto rule of the TP method into the objective function of the GWO, which was converted from multiple responses to a single response accommodated by the GWO. The comparative results of five outputs of the wire EDM process before and after optimization reveal the following: for the CR, a gain of 398% was observed, whereas for the outputs Rz, Rt, SCD, and RLT, losses of 0.0996, 0.0875, 0.0821, and 0.0332 were recorded. This discrimination of signal-to-noise ratios based on the 80-20 rule distinguishes the research from previous studies, restricting the data fed into the GWO scheme to the most essential for accomplishing the proposed TP-GWO-DFA scheme. The TP-GWO-DFA method is efficient given the limited volume of data required to optimize the wire EDM process parameters of nitinol.
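    The building blocks the scheme above combines are standard: Taguchi signal-to-noise (S/N) ratios score each response, and desirability function analysis folds several responses into one objective a metaheuristic like GWO can minimize. The formulas below are the textbook definitions, sketched for intuition; the numbers and function names are illustrative, not taken from the paper.

```python
import math

def sn_larger_better(ys):
    """Taguchi S/N ratio for a larger-the-better response (e.g. cutting rate):
    SN = -10 * log10(mean(1 / y_i^2))."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def sn_smaller_better(ys):
    """Taguchi S/N ratio for a smaller-the-better response (e.g. roughness):
    SN = -10 * log10(mean(y_i^2))."""
    return -10 * math.log10(sum(y**2 for y in ys) / len(ys))

def desirability_larger(y, low, target):
    """One-sided linear desirability: 0 at `low` or below, 1 at `target`."""
    return min(max((y - low) / (target - low), 0.0), 1.0)

def composite_desirability(ds):
    """Geometric mean of individual desirabilities -- the single scalar a
    multi-response optimizer such as GWO can work with."""
    return math.prod(ds) ** (1 / len(ds))

# Illustrative: two replicate cutting-rate measurements and a composite score.
print(round(sn_larger_better([2.0, 2.5]), 3))
print(composite_desirability([0.25, 1.0]))   # 0.5
```

In the paper's hybrid, the 80-20 Pareto rule then keeps only the most influential S/N terms before they enter the GWO objective.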

    Self-interaction corrected SCAN functional for molecules and solids in the numeric atom-centered orbital framework

    The state-of-the-art "Strongly Constrained and Appropriately Normed" (SCAN) functional belongs to the family of meta-generalized-gradient-approximation (meta-GGA) exchange-correlation functionals. Nonetheless, SCAN suffers from some well-documented deficiencies: SCAN calculations often exhibit numerical instabilities, requiring very many iterations to reach self-consistency, and SCAN inherits the self-interaction error known from GGA methods. In the first part of this thesis, I revisited the known numerical instability problems of the SCAN functional in the context of the numerical, real-space integration framework used in the FHI-aims code. This analysis revealed that applying standard density-mixing algorithms to the kinetic energy density attenuates and largely cures these numerical issues. By this means, SCAN calculations converge towards the self-consistent solution as fast and as efficiently as lower-order GGA calculations. In the second part of the thesis, I investigated strategies to alleviate the self-interaction error in SCAN calculations using the self-interaction correction algorithm proposed by Perdew and Zunger (PZ-SIC), and I developed optimizations for the PZ-SIC method itself. Inspired by the original arguments of PZ-SIC and other localized methods, I introduced a mathematical constraint, the orbital density constraint, which forces the orbitals to retain their localization throughout the self-consistency cycle. In turn, this markedly alleviates the dependence on the starting point and the multiple-solutions problem, and it facilitates convergence towards the correct, lowest-energy solution for both complex and real SIC orbitals. The developments and investigations performed in this thesis pave the road towards a more widespread use of SIC-SCAN calculations, allowing significantly more accurate ab initio predictions at only a moderate increase in computational cost.
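    The simplest member of the density-mixing family the thesis applies to the kinetic energy density is linear mixing: instead of feeding the raw output density back into the next self-consistency iteration, one feeds in a damped blend of input and output. The toy sketch below illustrates only that idea; real codes such as FHI-aims use more elaborate schemes (e.g. Pulay mixing), and the fixed-point map here is invented for demonstration.

```python
def scf_with_mixing(update, rho0, alpha=0.3, tol=1e-10, max_iter=1000):
    """Iterate rho -> (1 - alpha) * rho + alpha * update(rho) until
    self-consistency. Small alpha damps oscillations at the cost of speed."""
    rho = rho0
    for i in range(max_iter):
        rho_out = update(rho)
        if abs(rho_out - rho) < tol:       # self-consistent: input == output
            return rho, i
        rho = (1 - alpha) * rho + alpha * rho_out   # linear mixing step
    raise RuntimeError("SCF did not converge")

# Toy map with fixed point rho = 2/3 and slope -2 there: undamped iteration
# (alpha = 1) oscillates and diverges, while mixed iteration converges.
update = lambda r: 2 - 2 * r
rho, iters = scf_with_mixing(update, 0.0, alpha=0.3)
print(round(rho, 6))                        # 0.666667
```

The same damping logic, applied to the kinetic energy density entering the meta-GGA functional, is what stabilizes SCAN self-consistency loops in practice.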