2,197 research outputs found

    One-way permutations, computational asymmetry and distortion

    Computational asymmetry, i.e., the discrepancy between the complexity of transformations and the complexity of their inverses, is at the core of one-way transformations. We introduce a computational asymmetry function that measures the amount of one-wayness of permutations. We also introduce the word-length asymmetry function for groups, which is an algebraic analogue of computational asymmetry. We relate boolean circuits to words in a Thompson monoid, over a fixed generating set, in such a way that circuit size is equal to word-length. Moreover, boolean circuits have a representation in terms of elements of a Thompson group, in such a way that circuit size is polynomially equivalent to word-length. We show that circuits built with gates that are not constrained to have fixed-length inputs and outputs are at most quadratically more compact than circuits built from traditional gates (with fixed-length inputs and outputs). Finally, we show that the computational asymmetry function is closely related to certain distortion functions: the computational asymmetry function is polynomially equivalent to the distortion of the path length in Schreier graphs of certain Thompson groups, compared to the path length in Cayley graphs of certain Thompson monoids. We also show that the results of Razborov and others on monotone circuit complexity lead to exponential lower bounds on certain distortions. (Comment: 33 pages.)
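
    The distortion functions referred to here follow the standard pattern from geometric group theory (word length in a subobject compared with word length in the ambient object), and a computational asymmetry function can be read as the analogous comparison of inverse and forward circuit size. The formulas below only sketch that pattern; they are reference shapes, not the paper's exact definitions, and the normalisation of a_f is a hypothetical one.

        % Distortion of a subgroup H <= G, with word lengths taken over fixed
        % finite generating sets S_H of H and S_G of G (standard definition):
        \[
          \Delta_H^G(n) \;=\; \max\{\, |h|_{S_H} \;:\; h \in H,\ |h|_{S_G} \le n \,\}.
        \]
        % Illustrative shape of a computational asymmetry function for a family
        % of permutations f = (f_n), comparing inverse and forward circuit size
        % C(.) (the paper's exact normalisation may differ):
        \[
          a_f(n) \;=\; \frac{C\bigl(f_n^{-1}\bigr)}{C\bigl(f_n\bigr)}.
        \]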

    Circuit Size Lower Bounds and #SAT Upper Bounds Through a General Framework

    Most of the known lower bounds for binary Boolean circuits with unrestricted depth are proved by the gate elimination method. The most efficient known algorithms for the #SAT problem on binary Boolean circuits use case analyses similar to those used in gate elimination. Chen and Kabanets recently showed that the known case analyses can also be used to prove average case circuit lower bounds, that is, lower bounds on the size of approximations of an explicit function. In this paper, we provide a general framework for proving worst/average case lower bounds for circuits and upper bounds for #SAT that is built on the ideas of Chen and Kabanets. A proof in this framework proceeds as follows. One starts by fixing three parameters: a class of circuits, a circuit complexity measure, and a set of allowed substitutions. The main ingredient of a proof is a case analysis showing that for any circuit from the given class one can find an allowed substitution such that the given measure of the circuit decreases by a sufficient amount. This case analysis immediately implies an upper bound for #SAT. To obtain worst/average case circuit complexity lower bounds, one additionally needs an explicit construction of a function that is a disperser/extractor for the class of sources defined by the set of substitutions under consideration. We show that many known proofs (of circuit size lower bounds and of upper bounds for #SAT) fall into this framework. Using this framework, we prove the following new bounds: average case lower bounds of 3.24n and 2.59n for circuits over U_2 and B_2, respectively (though the lower bound for the basis B_2 relies on a quadratic disperser whose explicit construction is not currently known), and faster-than-2^n #SAT algorithms for circuits over U_2 and B_2 of size at most 3.24n and 2.99n, respectively. Here, by B_2 we mean the set of all bivariate Boolean functions, and by U_2 the set of all bivariate Boolean functions except for parity and its complement.
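
    As an illustration of the shape of such an algorithm (not the authors' actual framework or case analysis), the following Python sketch counts satisfying assignments of a small gate-level circuit by repeatedly applying the simplest kind of allowed substitution, fixing one input variable to 0 or 1; in the real framework, the branching variable is picked by a case analysis that provably decreases a circuit complexity measure, which is what yields the running-time bound. The circuit encoding and the first-free-variable choice below are purely hypothetical.

        # Toy circuit format: the n inputs are wires 0..n-1; each gate is a triple
        # (op, i, j) over earlier wires, and gate k's output becomes wire n + k.
        # The circuit output is the last gate's wire.

        def eval_circuit(n, gates, assignment):
            """Evaluate the circuit on a complete 0/1 assignment to its n inputs."""
            wires = list(assignment)
            for op, i, j in gates:
                a, b = wires[i], wires[j]
                if op == "and":
                    wires.append(a & b)
                elif op == "or":
                    wires.append(a | b)
                elif op == "xor":
                    wires.append(a ^ b)
                else:
                    raise ValueError("unknown gate: " + op)
            return wires[-1]

        def count_sat(n, gates, fixed=None):
            """Count satisfying assignments by branching on one free input at a time.

            In the framework sketched above, the branching variable would be chosen
            by a case analysis that provably shrinks a circuit complexity measure;
            here we simply take the first free variable (a hypothetical stand-in)."""
            fixed = dict(fixed or {})
            free = [v for v in range(n) if v not in fixed]
            if not free:
                return int(eval_circuit(n, gates, [fixed[v] for v in range(n)]))
            v = free[0]                    # stand-in for the measure-driven choice
            total = 0
            for bit in (0, 1):             # the two allowed substitutions x_v := 0, 1
                total += count_sat(n, gates, {**fixed, v: bit})
            return total

        # Usage: #SAT for (x0 AND x1) XOR x2 over three inputs -- 4 satisfying inputs.
        gates = [("and", 0, 1), ("xor", 3, 2)]   # wire 3 is the AND gate's output
        print(count_sat(3, gates))               # prints 4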

    Constraint solving over multi-valued logics - application to digital circuits

    Due to usage conditions, hazardous environments or intentional causes, physical and virtual systems are subject to faults in their components, which may affect their overall behaviour. In a ‘black-box’ agent modelled by a set of propositional logic rules, in which only a subset of components is externally visible, such faults may only be recognised by examining some output function of the agent. A (fault-free) model of the agent’s system provides the expected output for a given input; if the real output differs from that predicted output, then the system is faulty. However, some faults may only become apparent in the system output when appropriate inputs are given. A number of problems regarding both testing and diagnosis thus arise, such as testing a fault, testing the whole system, finding possible faults and differentiating between them to locate the correct one. The corresponding optimisation problems of finding solutions that require minimum resources are also very relevant in industry, as is minimal diagnosis. In this dissertation we use a well-established set of benchmark circuits to address such diagnosis-related problems, and we propose and develop models with different logics that we formalise and generalise as much as possible. We also prove that all techniques generalise to agents and to multiple faults. The developed multi-valued logics extend the usual Boolean logic (suitable for fault-free models) by encoding values with some dependency (usually on faults); such logics thus allow modelling an arbitrary number of diagnostic theories. Each problem is subsequently solved with CLP solvers that we implement and discuss, together with a new efficient search technique that we present. We compare our results with other approaches such as SAT (which require substantial duplication of circuits), showing the effectiveness of constraints over multi-valued logics, as well as the adequacy of a general set constraint solver (with special inferences over set functions such as cardinality) on other problems. In addition, for an optimisation problem, we integrate local search with a constructive approach (branch-and-bound), using a variety of logics, to improve an existing efficient tool based on SAT and ILP.
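
    As a toy illustration of the fault-modelling idea (not the dissertation's CLP or set-constraint models), the Python sketch below enumerates single stuck-at fault hypotheses for a two-gate circuit and keeps those consistent with an observed input/output pair; each gate carries a three-valued health mode, which is the kind of dependency-on-faults encoding that a multi-valued logic captures symbolically. The circuit, gate names and fault model here are invented for the example.

        from itertools import product

        # Tiny combinational circuit out = (a AND b) OR c with two named gates g1, g2.
        # Fault model: each gate output is either healthy, stuck-at-0 or stuck-at-1,
        # i.e. a three-valued health mode per gate (invented for this example).

        GATES = ["g1", "g2"]
        MODES = ["ok", "sa0", "sa1"]

        def apply_mode(value, mode):
            """Value actually seen on a gate output under the given health mode."""
            return {"ok": value, "sa0": 0, "sa1": 1}[mode]

        def simulate(a, b, c, modes):
            g1 = apply_mode(a & b, modes["g1"])   # AND gate, possibly faulty
            g2 = apply_mode(g1 | c, modes["g2"])  # OR gate, possibly faulty
            return g2

        def diagnoses(observations, max_faults=1):
            """All health-mode assignments with at most max_faults faulty gates
            that are consistent with every observed ((a, b, c), output) pair."""
            consistent = []
            for assignment in product(MODES, repeat=len(GATES)):
                modes = dict(zip(GATES, assignment))
                if sum(m != "ok" for m in modes.values()) > max_faults:
                    continue
                if all(simulate(*ins, modes) == out for ins, out in observations):
                    consistent.append(modes)
            return consistent

        # Usage: with inputs (1, 1, 0) the healthy circuit outputs 1, but 0 was
        # observed, so either gate stuck at 0 explains the behaviour.
        for d in diagnoses([((1, 1, 0), 0)]):
            print(d)   # {'g1': 'sa0', 'g2': 'ok'} and {'g1': 'ok', 'g2': 'sa0'}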

    Quality and Quantity in Robustness-Checking Using Formal Techniques

    Tolerating transient faults is one of the main challenges for future technology scaling. Various techniques at the design level are available to catch and handle transient faults, e.g., Triple Modular Redundancy. An important but missing step is to verify the implementation of those techniques, since the implementation might be buggy itself. This thesis focuses on formally verifying digital circuits with respect to fault-tolerance aspects. It considers transient faults and checks whether these faults can influence the output behavior of sequential circuits in any scenario. As a result, the designer is pinpointed directly to critical parts of the design and obtains a proof of the absence of faulty behavior for non-critical parts. The focus of the verification is completeness of the analysis. Three issues need to be adequately addressed: 1) covering all input stimuli, 2) covering all possible transient faults, and 3) covering all propagation paths, which can be exponentially long with respect to the number of state bits. All three issues are addressed by different engines. A tool called RobuCheck has been implemented and evaluated on academic benchmarks from ITC'99 and industrial benchmarks from IBM.
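
    To make the three coverage dimensions concrete, here is a brute-force Python sketch that, for a toy sequential circuit and a bounded number of cycles, injects a single bit flip into every state bit at every cycle under every input sequence and reports the scenarios in which the flip becomes visible at the output. It is an explicit-state toy, not RobuCheck's symbolic engines, and the counter circuit is invented for the example.

        from itertools import product

        # Toy sequential circuit: a 2-bit counter whose observable output is
        # "counter is zero". The check is explicit-state and bounded.

        STATE_BITS = 2

        def step(state, inp):
            """Next-state function: add the input bit modulo 2**STATE_BITS."""
            return (state + inp) % (1 << STATE_BITS)

        def output(state):
            """Observable output: 1 iff the counter equals zero."""
            return int(state == 0)

        def flip(state, bit):
            """Model a transient fault: invert one state bit for a single cycle."""
            return state ^ (1 << bit)

        def visible_fault_scenarios(horizon):
            """Enumerate every input sequence, fault cycle and fault bit up to the
            given horizon and collect the scenarios where a single transient flip
            changes the observable output trace within that horizon."""
            scenarios = []
            for inputs in product((0, 1), repeat=horizon):
                for fault_cycle in range(horizon):
                    for bit in range(STATE_BITS):
                        good = bad = 0                  # both runs start at reset
                        for t, inp in enumerate(inputs):
                            if t == fault_cycle:
                                bad = flip(bad, bit)    # inject the transient fault
                            if output(good) != output(bad):
                                scenarios.append((inputs, fault_cycle, bit, t))
                                break
                            good, bad = step(good, inp), step(bad, inp)
            return scenarios

        # Usage: a bare counter is not fault tolerant, so visible scenarios exist;
        # a TMR-protected variant of step/output would be expected to yield none.
        print(len(visible_fault_scenarios(3)), "output-visible fault scenarios")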

    Computer Architectures Using Nanotechnology
