
    Descriptional complexity of cellular automata and decidability questions

    We study the descriptional complexity of cellular automata (CA), a parallel model of computation. We show that between one of the simplest cellular models, the realtime-OCA, and "classical" models like deterministic finite automata (DFA) or pushdown automata (PDA), there are savings in the size of description that are not bounded by any recursive function, a so-called nonrecursive trade-off. Furthermore, nonrecursive trade-offs are shown between some restricted classes of cellular automata. The set of valid computations of a Turing machine can be recognized by a realtime-OCA. This implies that many decidability questions are not even semidecidable for cellular automata. There is no pumping lemma and no minimization algorithm for cellular automata.
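
To make the model concrete, the following is a minimal Python sketch of a realtime one-way cellular automaton under one standard formalization; the transition function, the acceptance convention, and the example language are illustrative assumptions, not taken from the paper. Each cell updates from its own state and its right neighbour's state, and the leftmost cell decides after n - 1 parallel steps.

    # Minimal sketch of a realtime one-way cellular automaton (OCA).
    # delta: (own_state, right_state) -> new_state; word: the input symbols,
    # one per cell; border: the state seen beyond the rightmost cell.
    def run_realtime_oca(word, delta, accepting, border="#"):
        cells = list(word)                          # initial configuration
        for _ in range(len(word) - 1):              # realtime: n - 1 steps
            cells = [delta(cells[i], cells[i + 1] if i + 1 < len(cells) else border)
                     for i in range(len(cells))]
        return cells[0] in accepting                # leftmost cell decides

    # Toy example: accept words over {a, b} whose last symbol is 'a'; the
    # rightmost symbol travels one cell to the left per step.
    delta = lambda own, right: own if right == "#" else right
    print(run_realtime_oca("bba", delta, {"a"}))    # True
    print(run_realtime_oca("bab", delta, {"a"}))    # False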

    Sublinearly space bounded iterative arrays

    Iterative arrays (IAs) are a parallel computational model with sequential processing of the input. They are one-dimensional arrays of interacting identical deterministic finite automata. In this note, realtime-IAs with sublinear space bounds are used to accept formal languages. The existence of a proper hierarchy of space complexity classes between logarithmic and linear space bounds is proved. Furthermore, an optimal space lower bound for non-regular language recognition is shown. Key words: iterative arrays, cellular automata, space bounded computations, decidability questions, formal languages, theory of computation.
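
For comparison with the OCA sketch above, a hedged sketch of a realtime iterative array; the exact formalization (border handling, the signatures of the transition functions) is an assumption for illustration. All cells start in the quiescent state, only cell 0 consumes the input, one symbol per parallel step, and cell 0 decides after the last symbol is read.

    # Minimal sketch of a realtime iterative array (IA).
    # delta0(symbol, own, right): the communication cell reading the input;
    # delta(left, own, right): every other cell; q0: the quiescent state.
    def run_realtime_ia(word, delta0, delta, q0, accepting):
        n = len(word)
        cells = [q0] * (n + 1)      # at most n + 1 cells can wake up in n steps
        for symbol in word:         # one parallel step per input symbol
            nxt = [delta0(symbol, cells[0], cells[1])]
            nxt += [delta(cells[i - 1], cells[i],
                          cells[i + 1] if i + 1 <= n else q0)
                    for i in range(1, n + 1)]
            cells = nxt
        return cells[0] in accepting

    # Degenerate example that only exercises cell 0 (real IA constructions,
    # e.g. space-bounded counting, use the whole array): remember the last symbol.
    print(run_realtime_ia("ab", lambda s, o, r: s, lambda l, o, r: o, "-", {"b"}))  # True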

    On one-way cellular automata with a fixed number of cells

    We investigate a restricted one-way cellular automaton (OCA) model where the number of cells is bounded by a constant number k, so-called kC-OCAs. In contrast to the general model, the generative capacity of the restricted model is reduced to the set of regular languages. A kC-OCA can be algorithmically converted to a deterministic finite automaton (DFA). The blow-up in the number of states is bounded by a polynomial of degree k. We exhibit a family of unary languages which shows that this upper bound is tight in order of magnitude. We then study upper and lower bounds for the trade-off when converting DFAs to kC-OCAs. We show that there are regular languages where the use of kC-OCAs cannot reduce the number of states when compared to DFAs. We then investigate trade-offs between kC-OCAs with different numbers of cells and finally treat the problem of minimizing a given kC-OCA.
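
The degree-k bound can be made concrete with a hedged construction sketch; the input mode assumed here (symbols fed sequentially to the rightmost cell, the leftmost cell deciding) is one common formalization and not necessarily the paper's exact definition. A DFA simply tracks the k-tuple of cell states, so with |S| cell states it needs at most |S|**k states, a polynomial of degree k in |S|.

    from itertools import product

    # Hedged sketch: turn a k-cell OCA into a DFA over k-tuples of cell states.
    # delta(own, right): interior cells; delta_in(own, symbol): rightmost cell.
    def kc_oca_to_dfa(states, alphabet, delta, delta_in, q0, accepting, k):
        dfa_states = list(product(states, repeat=k))      # <= |states|**k states
        start = (q0,) * k
        def step(config, symbol):
            new = [delta(config[i], config[i + 1]) for i in range(k - 1)]
            new.append(delta_in(config[k - 1], symbol))   # input enters on the right
            return tuple(new)
        trans = {(c, a): step(c, a) for c in dfa_states for a in alphabet}
        final = {c for c in dfa_states if c[0] in accepting}  # leftmost cell decides
        return start, trans, final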

    Minimizing finite automata is computationally hard

    It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted to an equivalent DFA M' which has a minimal number of states, and the minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they only allow a small amount of nondeterminism to be used. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in a computationally intractable minimization problem.
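
For contrast with these hardness results, the deterministic case really is easy; below is a short sketch of Moore-style partition refinement for DFA minimization, a simple O(n^2 * |alphabet|) variant of the efficient algorithms mentioned above (Hopcroft's algorithm achieves O(n log n)).

    # DFA states are 0..n-1; delta[q][a] is the successor of q on symbol a.
    def minimize_dfa(n, alphabet, delta, accepting):
        # Start from the accepting / non-accepting split and refine until stable.
        block = [1 if q in accepting else 0 for q in range(n)]
        while True:
            # Two states stay together iff they agree on their own block and
            # on the blocks of all their successors.
            sig = {q: (block[q], tuple(block[delta[q][a]] for a in alphabet))
                   for q in range(n)}
            ids, new_block = {}, []
            for q in range(n):
                if sig[q] not in ids:
                    ids[sig[q]] = len(ids)
                new_block.append(ids[sig[q]])
            if new_block == block:
                return block      # block[q] = index of q's equivalence class
            block = new_block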

    Biconditional Binary Decision Diagrams: A Novel Canonical Logic Representation Form

    In this paper, we present biconditional binary decision diagrams (BBDDs), a novel canonical representation form for Boolean functions. BBDDs are binary decision diagrams where the branching condition, and its associated logic expansion, is biconditional on two variables. Empowered by reduction and ordering rules, BBDDs are remarkably compact and unique for a Boolean function. The interest of such a representation form in modern electronic design automation (EDA) is twofold. On the one hand, BBDDs improve the efficiency of traditional EDA tasks based on decision diagrams, especially for arithmetic-intensive designs. On the other hand, BBDDs represent the natural and native design abstraction for emerging technologies where the circuit primitive is a comparator, rather than a simple switch. We provide, in this paper, a solid ground for BBDDs by studying their underlying theory and manipulation properties. Thanks to an efficient BBDD software package implementation, we validate 1) a speed-up in traditional decision diagram applications, with up to a 4.4x gain with respect to other DDs, and 2) improved synthesis of circuits in emerging technologies, with about 32% shorter critical path than state-of-the-art synthesis techniques.
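
The branching rule can be illustrated with a small sketch (illustrative code, not the authors' BBDD package): a BBDD node tests whether two variables are equal, and its two cofactors are the function with w := v (equal edge) and w := NOT v (unequal edge).

    from itertools import product

    # Biconditional cofactors of f with respect to variables v and w.
    # f: callable on n Boolean arguments; the cofactors are returned as truth
    # tables over the n - 1 remaining variables (variable w is eliminated).
    def bic_cofactors(f, n, v, w):
        f_eq, f_neq = {}, {}
        for bits in product((0, 1), repeat=n - 1):
            rest = list(bits)
            val_v = rest[v if v < w else v - 1]       # value driving position w
            f_eq[bits] = f(*(rest[:w] + [val_v] + rest[w:]))
            f_neq[bits] = f(*(rest[:w] + [1 - val_v] + rest[w:]))
        return f_eq, f_neq

    # For f = XOR(x0, x1) the equal edge is constantly 0 and the unequal edge
    # constantly 1, so a single node suffices -- one reason BBDDs are compact
    # on XOR-rich arithmetic circuits.
    print(bic_cofactors(lambda a, b: a ^ b, 2, 0, 1))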

    Reasoning about LTL Synthesis over finite and infinite games

    In the last few years, research on formal methods for the analysis and verification of system properties has increased greatly. A meaningful contribution in this area has been given by algorithmic methods developed in the context of synthesis. The basic idea is simple and appealing: instead of developing a system and verifying that it satisfies its specification, we look for an automated procedure that, given the specification, returns a system that is correct by construction. Synthesis of reactive systems is one of the most popular variants of this problem, in which we want to synthesize a system characterized by an ongoing interaction with the environment. In this setting, a large effort has been devoted to analyzing specifications given as formulas of linear temporal logic, i.e., LTL synthesis. Traditional approaches to LTL synthesis rely on transforming the LTL specification into deterministic parity automata, and then into parity games, for which a so-called winning region is computed. Computing such an automaton is, in the worst case, double-exponential in the size of the LTL formula, and this becomes a computational bottleneck when using synthesis in practice. The first part of this thesis is devoted to improving the solution of parity games as they are used in LTL synthesis, aiming at techniques that are efficient in terms of both running time and space consumption. We start with the study and implementation of an automata-theoretic technique to solve parity games. More precisely, we consider an algorithm introduced by Kupferman and Vardi that solves a parity game by solving the emptiness problem of a corresponding alternating parity automaton. Our empirical evaluation demonstrates that this algorithm outperforms other algorithms when the game has a small number of priorities relative to its size. In many concrete applications we do indeed end up with parity games where the number of priorities is relatively small, which makes the new algorithm quite useful in practice. We then provide a broad investigation of the symbolic approach for solving parity games. Specifically, we implement in a fresh tool, called SPGSolver, four symbolic algorithms to solve parity games, and compare their performance to the corresponding explicit versions on different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random benchmarks, as the common practice has been. LTL synthesis has also been largely investigated in artificial intelligence, and specifically in automated planning. Indeed, LTL synthesis corresponds to fully observable nondeterministic planning in which the domain is given compactly and the goal is an LTL formula, which in turn is related to two-player games with LTL goals. Finding a strategy for these games means synthesizing a plan for the planning problem. The last part of this thesis is then dedicated to investigating LTL synthesis under this different view.
In particular, we study a generalized form of planning under partial observability, in which we have multiple, possibly infinitely many, planning domains with the same actions and observations, and goals expressed over observations, which are possibly temporally extended. By building on work on two-player games with imperfect information in the formal methods literature, we devise a general technique, generalizing the belief-state construction, to remove partial observability. This reduces the planning problem to a game of perfect information with a tight correspondence between plans and strategies. We then instantiate the technique and solve some generalized planning problems.
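
The flavor of the construction can be conveyed with a minimal sketch of the classical belief-state update that the thesis generalizes; the names and signatures below are illustrative assumptions. Tracking the set of states the domain could be in turns a partially observable problem into a perfect-information one, and plans over observations then correspond to strategies over belief states.

    # One step of belief tracking: apply the action to every state the system
    # might be in, then keep only the successors consistent with the observation.
    # trans(s, a) -> set of possible successor states; obs(s) -> observation of s.
    def belief_step(belief, action, observation, trans, obs):
        successors = set().union(*(trans(s, action) for s in belief))
        return frozenset(s for s in successors if obs(s) == observation)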

    Automated synthesis and optimization of multilevel logic circuits

    With the increased complexity of Very Large Scale Integrated (VLSI) circuits, multilevel logic synthesis plays an even more important role due to its flexibility and compactness. The history of symbolic logic and some typical techniques for multilevel logic synthesis are reviewed. These methods include the algorithmic approach, the rule-based approach, the Binary Decision Diagram (BDD) approach, the Field Programmable Gate Array (FPGA) approach, and several perturbation applications.

One new kind of don't cares (DCs), called functional DCs, has been proposed for multilevel logic synthesis. The conventional two-level cubes are generalized to multilevel cubes. Functional DCs are then generated based on the properties of containment. The concept of containment is more general than unateness, which leads to the generation of new DCs. A separate C program has been developed to utilize the functional DCs generated as a Boolean function is decomposed, for both single-output and multiple-output functions. The program produces better results than script.rugged of SIS, developed by UC Berkeley, in both area and speed, in less CPU time, for a number of test cases from the MCNC and IWLS'93 benchmarks.

In certain applications AND/XOR (Reed-Muller) logic has shown attractive advantages over the standard Boolean logic based on AND/OR operations. A bidirectional conversion algorithm between these two paradigms is presented, based on the concept of polarity for sum-of-products (SOP) Boolean functions, multiple segment and multiple pointer facilities. Experimental results show that the algorithm is much faster than previously published programs for any fixed polarity. Based on this algorithm, a new technique called redundancy-removal is applied to generalize the idea to very large multiple-output Boolean functions. Results for benchmarks with up to 199 inputs and 99 outputs are presented.

Applying the preceding conversion program, any Boolean function can be expressed in fixed-polarity Reed-Muller form. There are 2^n polarities for an n-variable function, and the number of product terms depends on these polarities. The problem of exact polarity minimization is computationally expensive, and current programs are only suitable when n ≤ 15. Based on a comparison of the concepts of polarity in standard Boolean logic and Reed-Muller logic, a fast algorithm is developed and implemented in C which can find the best polarity for multiple-output functions. Benchmark examples of up to 25 inputs and 29 outputs, run on a personal computer, are given.

After the best polarity for a Boolean function is calculated, the function can be further simplified using mixed-polarity methods by combining adjacent product terms. Hence, an efficient program is developed, based on a decomposition strategy, to implement mixed-polarity minimization for both single-output and very large multiple-output Boolean functions. Experimental results show that the numbers of product terms are much smaller than the results produced by ESPRESSO for some categories of functions.
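
The positive-polarity case of the AND/OR-to-Reed-Muller conversion can be sketched with the standard GF(2) "butterfly" (Möbius) transform; this is a textbook algorithm shown for illustration, not the thesis's C program, and the other fixed polarities are obtained analogously by complementing inputs.

    # Positive-polarity Reed-Muller (AND/XOR) coefficients from a truth table,
    # in O(n * 2^n) time. truth[m]: value of f where bit i of m is variable i.
    def reed_muller_coeffs(truth, n):
        c = list(truth)
        for i in range(n):                 # one butterfly stage per variable
            step = 1 << i
            for j in range(1 << n):
                if j & step:
                    c[j] ^= c[j ^ step]    # XOR accumulation over GF(2)
        return c                           # c[m] = coefficient of monomial m

    # Example: OR(x0, x1) has truth table [0, 1, 1, 1] and Reed-Muller form
    # x0 XOR x1 XOR x0*x1, i.e. coefficients [0, 1, 1, 1].
    print(reed_muller_coeffs([0, 1, 1, 1], 2))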

    New Data Structures and Algorithms for Logic Synthesis and Verification

    The strong interaction between Electronic Design Automation (EDA) tools and Complementary Metal-Oxide Semiconductor (CMOS) technology contributed substantially to the advancement of modern digital electronics. The continuous downscaling of CMOS Field Effect Transistor (FET) dimensions enabled the semiconductor industry to fabricate digital systems with higher circuit density at reduced costs. To keep pace with technology, EDA tools are challenged to handle both digital designs with growing functionality and device models of increasing complexity. Nevertheless, whereas the downscaling of CMOS technology requires ever more complex physical design models, the logic abstraction of a transistor as a switch has not changed even with the introduction of 3D FinFET technology. As a consequence, modern EDA tools are fine-tuned for CMOS technology, and the underlying design methodologies are based on CMOS logic primitives, i.e., negative unate logic functions. While it is clear that CMOS logic primitives will remain the building blocks for digital systems in the next ten years, no evidence is provided that CMOS logic primitives are also the optimal basis for EDA software. In EDA, the efficiency of methods and tools is measured by different metrics, such as (i) the quality of the result, for example the performance of a digital circuit, (ii) the runtime, and (iii) the memory footprint on the host computer. With the aim of optimizing these metrics, accordance with a specific logic model is no longer important. Indeed, the key to the success of an EDA technique is the expressive power of the logic primitives used to handle and solve the problem, which determines the capability to reach better metrics.

In this thesis, we investigate new logic primitives for electronic design automation tools. We improve the efficiency of logic representation, manipulation and optimization tasks by taking advantage of majority and biconditional logic primitives. We develop synthesis tools exploiting majority and biconditional expressiveness. Our tools show strong results compared to state-of-the-art academic and commercial synthesis tools; indeed, we produce the best results for several public benchmarks. On top of the enhanced synthesis power, our methods are the natural and native logic abstraction for circuit design in emerging nanotechnologies, where majority and biconditional logic are the primitive gates for physical implementation.

We accelerate formal methods by (i) studying properties of logic circuits and (ii) developing new frameworks for logic reasoning engines. We prove non-trivial dualities for the property checking problem in logic circuits. Our findings enable sensible speed-ups in solving circuit satisfiability. We develop an alternative Boolean satisfiability framework based on majority functions. We prove that the general problem is still intractable, but we show practical restrictions that can be solved efficiently. Finally, we focus on reversible logic, where we propose a new equivalence checking approach. We exploit the invertibility of computation and the functionality of reversible gates in the formulation of the problem. This enables a speed-up of one order of magnitude compared to the state-of-the-art solution. We argue that new approaches to solving EDA problems are necessary, as we have reached a point of technology where keeping pace with design goals is tougher than ever.
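
The reversible equivalence-checking idea admits a compact brute-force illustration (the thesis's approach exploits the same invertibility within a solver-based formulation; the encoding below is an assumed toy version): two reversible circuits are equivalent iff the first composed with the inverse of the second is the identity permutation, and since Toffoli gates are self-inverse, the inverse circuit is just the gate list reversed.

    # A reversible circuit is a list of Toffoli gates (controls, target) acting
    # on an n-line register encoded as an integer.
    def apply_toffoli(state, controls, target):
        if all(state >> c & 1 for c in controls):   # all control bits set?
            state ^= 1 << target                    # ... then flip the target bit
        return state

    def is_identity(gates, n):
        for x in range(1 << n):                     # brute force over all inputs
            y = x
            for controls, target in gates:
                y = apply_toffoli(y, controls, target)
            if y != x:
                return False
        return True

    def equivalent(c1, c2, n):
        # Toffoli gates are involutions, so reversing c2 yields its inverse.
        return is_identity(c1 + list(reversed(c2)), n)

    # Example on 2 lines: NOT on line 0 versus an equivalent 4-gate circuit.
    print(equivalent([((), 0)], [((1,), 0), ((), 1), ((1,), 0), ((), 1)], 2))  # True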