1,984 research outputs found

    NP-hardness of circuit minimization for multi-output functions

    Get PDF
    Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive. In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators. Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
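
    The reduction itself is beyond the scope of an abstract, but the search problem being proved hard is easy to state concretely. The following is a purely illustrative brute-force sketch of my own (not the paper's construction): it computes the minimum number of two-input gates needed to realize a multi-output truth table, and its exponential blow-up is precisely what motivates the hardness question. The function name, gate basis, and example are assumptions.

    def min_circuit_size(n, outputs, max_gates=6):
        """Minimum number of 2-input gates computing every truth table in `outputs`.
        Truth tables are integer bitmasks over the 2^n input rows. Exponential;
        only feasible for very small n."""
        rows = 1 << n
        full = (1 << rows) - 1
        # Truth tables of the inputs x_0..x_{n-1}, plus the constants 0 and 1.
        base = []
        for i in range(n):
            tt = 0
            for r in range(rows):
                if (r >> i) & 1:
                    tt |= 1 << r
            base.append(tt)
        base += [0, full]

        gates = [
            lambda a, b: a & b,            # AND
            lambda a, b: a | b,            # OR
            lambda a, b: a ^ b,            # XOR
            lambda a, b: full & ~(a & b),  # NAND (NAND(a, a) gives negation)
        ]
        targets = set(outputs)

        def reachable(wires, budget):
            if targets <= set(wires):
                return True
            if budget == 0:
                return False
            for i in range(len(wires)):
                for j in range(i, len(wires)):
                    for g in gates:
                        w = g(wires[i], wires[j])
                        if w not in wires and reachable(wires + [w], budget - 1):
                            return True
            return False

        for size in range(max_gates + 1):
            if reachable(base, size):
                return size
        return None  # needs more than max_gates gates

    # Example: f(x0, x1) = (x0 AND x1, x0 XOR x1) has minimum circuit size 2.
    print(min_circuit_size(2, [0b1000, 0b0110]))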

    On the size of PLA's required to realize binary and multiple-valued functions

    Get PDF
    This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. As such, it is in the public domain and, under the provisions of Title 17, United States Code, Section 105, may not be copyrighted. IEEE Transactions on Computers, vol. C-38, Jan. 1989, pp. 82-98. While the use of programmable logic arrays in modern logic design is common, little is known about what PLA size provides reasonable coverage in typical applications. We address this question by showing upper and lower bounds on the average number of product terms required in the minimal realization of binary and multiple-valued functions, as a function of the number of nonzero output values. When the number of such values is small, the bounds are nearly the same, and accurate values for the average are obtained. In addition, an upper bound is derived for the variance of the distribution of the number of product terms required in minimal realizations of binary functions. When the number of nonzero values is small, we find that the variance is small, and it follows that most functions require nearly the average number of product terms. The variance, together with the upper and lower bounds, allows conclusions to be drawn about how PLA size determines the set of realizable functions. Although the bounds are most accurate when there are few nonzero values, they are adequate for analyzing commercially available PLAs, which we do in this paper. Most such PLAs are small enough that our results can be applied. For example, when the number of nonzero values exceeds some threshold u_T, determined by the PLA size, only a small fraction of the functions can be realized. Our analysis shows that for all but one commercially available PLA, the number of nonzero values is a statistically meaningful criterion for determining whether or not a given function is likely to be realized.
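
    As a rough illustration of the quantity being bounded (a Monte Carlo sketch of my own, not the paper's analysis, and restricted to single-output binary functions), one can estimate the average number of product terms in minimal sum-of-products realizations of random functions with a fixed number of nonzero values using sympy's Quine-McCluskey-based SOPform; the helper name and parameter values below are assumptions.

    import random
    from sympy import symbols, Or
    from sympy.logic.boolalg import SOPform

    def avg_product_terms(n, t, trials=200, seed=0):
        """Average product-term count of minimal SOPs over random n-variable
        binary functions that are 1 on exactly t of the 2^n input rows."""
        rng = random.Random(seed)
        variables = list(symbols(f'x0:{n}'))
        total = 0
        for _ in range(trials):
            ones = rng.sample(range(1 << n), t)
            minterms = [[(m >> i) & 1 for i in range(n)] for m in ones]
            expr = SOPform(variables, minterms)   # Quine-McCluskey minimization
            total += len(expr.args) if isinstance(expr, Or) else 1
        return total / trials

    # With few nonzero values, merging rarely helps: the average stays close to t.
    print(avg_product_terms(n=4, t=3))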

    Control Augmented Structural Synthesis

    Get PDF
    A methodology for control augmented structural synthesis is proposed for a class of structures which can be modeled as an assemblage of frame and/or truss elements. It is assumed that both the plant (structure) and the active control system dynamics can be adequately represented with a linear model. The structural sizing variables, active control system feedback gains and nonstructural lumped masses are treated simultaneously as independent design variables. Design constraints are imposed on static and dynamic displacements, static stresses, actuator forces and natural frequencies to ensure acceptable system behavior. Multiple static and dynamic loading conditions are considered. Side constraints imposed on the design variables protect against the generation of unrealizable designs. While the proposed approach is fundamentally more general, here the methodology is developed and demonstrated for the case where: (1) the dynamic loading is harmonic and thus the steady state response is of primary interest; (2) direct output feedback is used for the control system model; and (3) the actuators and sensors are collocated.
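
    As a heavily simplified, hedged illustration of treating structural sizing and control gains as simultaneous design variables (a single-degree-of-freedom toy of my own, not the paper's frame/truss formulation), the sketch below sizes a spring stiffness k and a collocated velocity-feedback gain g under a harmonic load, subject to a steady-state displacement constraint and side constraints on the design variables; all numerical values are assumptions.

    from scipy.optimize import minimize

    m, c0 = 1.0, 0.05        # mass [kg] and inherent damping [N*s/m]
    F, omega = 1.0, 2.0      # harmonic load amplitude [N] and frequency [rad/s]
    x_max = 0.2              # allowable steady-state displacement [m]

    def steady_state_amp(k, g):
        # Collocated velocity feedback u = -g * xdot simply adds damping, so the
        # steady-state amplitude is F / |k - m*omega^2 + i*omega*(c0 + g)|.
        return F / abs(k - m * omega**2 + 1j * omega * (c0 + g))

    def cost(v):
        k, g = v
        return k + 10.0 * g  # stand-in for structural mass plus actuator effort

    constraints = [{'type': 'ineq',
                    'fun': lambda v: x_max - steady_state_amp(v[0], v[1])}]
    bounds = [(0.5, 50.0), (0.0, 5.0)]   # side constraints on the design variables

    result = minimize(cost, x0=[10.0, 0.1], bounds=bounds, constraints=constraints)
    print(result.x, steady_state_amp(*result.x))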

    Heuristics for the refinement of assumptions in generalized reactivity formulae

    Get PDF
    Reactive synthesis is concerned with automatically generating implementations from formal specifications. These specifications are typically written in the language of generalized reactivity (GR(1)), a subset of linear temporal logic capable of expressing the most common industrial specification patterns, and describe the requirements on the behavior of a system under assumptions about the environment in which the system is to be deployed. Oftentimes no implementation exists that guarantees the required behavior under all possible environments, typically because of missing assumptions (this is usually referred to as unrealizability). To address this issue, new assumptions need to be added to complete the specification, a problem known as assumptions refinement. Since the space of candidate assumptions is intractably large, searching for the best solutions is inherently hard. In particular, new methods are needed to (i) increase the effectiveness of the search procedures, measured as the ratio between the number of solutions found and the number of refinements explored; and (ii) improve the quality of the results, defined as the weakness of the solutions. In this thesis we propose a set of heuristics to meet these goals, and a methodology to assess and compare assumptions refinement methods based on quantitative metrics. The heuristics take the form of algorithms to generate candidate refinements during the search, and quantitative measures to assess the quality of the candidates. We first discuss a heuristic method to generate assumptions that target the cause of unrealizability. This is done by selecting candidate refinement formulas based on Craig's interpolation. We provide a formal underpinning of the technique and evaluate it in terms of our new metric of effectiveness, as defined above, whose value is improved with respect to the state of the art. We demonstrate this on a set of popular benchmarks from embedded software. We then provide a formal, quantitative characterization of the permissiveness of environment assumptions in the form of a weakness measure. We prove that the partial order induced by this measure is consistent with the one induced by implication. The key advantage of this measure is that it allows for prioritizing candidate solutions, as we show experimentally. Lastly, we propose a notion of minimal refinements with respect to the observed counterstrategies. We demonstrate that exploring minimal refinements produces weaker solutions and reduces the amount of computation needed to explore each refinement. However, this may come at the cost of reducing the effectiveness of the search. To counteract this effect, we propose a hybrid search approach in which both minimal and non-minimal refinements are explored.

    Quantum adiabatic machine learning

    Full text link
    We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. In the training phase we identify an optimal set of weak classifiers to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. We apply and illustrate this approach in detail to the problem of software verification and validation. Comment: 21 pages, 9 figures.
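
    For intuition about the training phase, the sketch below uses an assumed QBoost-style objective (choose a binary subset of weak classifiers whose scaled vote best fits the labels, plus a sparsity penalty) and solves it classically by exhaustive search; an adiabatic optimizer would instead minimize the equivalent QUBO. The helper name, toy data, and penalty value are my own assumptions, not taken from the paper.

    import itertools
    import numpy as np

    def select_weak_classifiers(H, y, lam=0.1):
        """H: (samples, classifiers) matrix of weak-classifier outputs in {-1,+1};
        y: labels in {-1,+1}. Returns the best 0/1 inclusion vector. Since
        w_i**2 = w_i for binary w, the objective expands into a QUBO."""
        _, n = H.shape
        best_w, best_cost = None, np.inf
        for bits in itertools.product([0, 1], repeat=n):   # exhaustive stand-in
            w = np.array(bits)
            strong = H @ w / n                              # scaled committee vote
            cost = np.mean((strong - y) ** 2) + lam * w.sum()   # fit + sparsity
            if cost < best_cost:
                best_w, best_cost = w, cost
        return best_w

    # Toy usage: the first weak classifier already matches the labels exactly.
    H = np.array([[1, 1, -1], [1, -1, -1], [-1, 1, 1], [-1, -1, 1]])
    y = np.array([1, 1, -1, -1])
    print(select_weak_classifiers(H, y))    # -> [1 0 0]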

    Comparison of the Worst and Best Sum-of-Products Expressions for Multiple-Valued Functions

    Get PDF
    Because most practical logic design algorithms produce irredundant sum-of-products (ISOP) expressions, the understanding of ISOPs is crucial. We show a class of functions for which Morreale-Minato's ISOP generation algorithm produces worst ISOPs (WSOP), ISOPs with the most product terms. We show this class has the property that the ratio of the number of products in the WSOP to the number in the minimum ISOP (MSOP) is arbitrarily large when the number of variables is unbounded. The ramifications of this are significant; care must be exercised in designing algorithms that produce ISOPs. We also show that 2^(n-1) is a firm upper bound on the number of product terms in any ISOP for switching functions on n variables, answering a question that has been open for 30 years. We show experimental data and extend our results to functions of multiple-valued variables.
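
    As a small self-contained illustration that the 2^(n-1) bound is attained (my own example, not taken from the paper), the n-variable parity function has only minterm-sized prime implicants, so its minimal sum-of-products expression, and indeed any SOP for it, needs exactly 2^(n-1) product terms:

    n = 4
    # Minterms of the odd-parity function on n variables.
    minterms = [m for m in range(1 << n) if bin(m).count('1') % 2 == 1]
    print(len(minterms), 2 ** (n - 1))   # 8 products needed for n = 4

    # No two true minterms differ in exactly one bit, so no pair of minterms
    # merges into a larger cube: every implicant is a single minterm.
    assert all(bin(a ^ b).count('1') != 1 for a in minterms for b in minterms)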

    Modeling and Recognition of Smart Grid Faults by a Combined Approach of Dissimilarity Learning and One-Class Classification

    Full text link
    Detecting faults in electrical power grids is of paramount importance from both the electricity operator's and the consumer's viewpoint. Modern electric power grids (smart grids) are equipped with smart sensors that make it possible to gather real-time information on the physical status of all the components of the infrastructure (e.g., cables and their insulation, transformers, breakers, and so on). In real-world smart grid systems, additional information related to the operational status of the grid itself, such as meteorological data, is usually collected as well. Designing a suitable recognition (discrimination) model of faults in a real-world smart grid system is hence a challenging task, owing to the heterogeneity of the information that determines a typical fault condition. A second difficulty is that, in practice, only the conditions of observed faults are usually meaningful; a suitable recognition model should therefore be synthesized using the observed fault conditions only. In this paper, we deal with the problem of modeling and recognizing faults in a real-world smart grid system, which supplies the entire city of Rome, Italy. Fault recognition is addressed through a combined approach of customized multiple dissimilarity measures and one-class classification techniques. We provide an in-depth study of the available data and of the models synthesized by the proposed one-class classifier. We also offer a comprehensive analysis of the fault recognition results by exploiting a fuzzy-set-based reliability decision rule.
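
    As a hedged sketch of the overall idea (a strong simplification of my own, not the paper's model), the snippet below trains a nearest-neighbour-style one-class recognizer on observed fault records only and combines per-feature dissimilarities with fixed weights; the class name, feature split, weights, and toy records are all assumptions.

    import numpy as np

    def combined_dissimilarity(a, b, num_idx, cat_idx, weights):
        """Weighted mix of mean absolute numeric difference and categorical mismatch rate."""
        d_num = np.abs(a[num_idx] - b[num_idx]).mean() if len(num_idx) else 0.0
        d_cat = np.mean(a[cat_idx] != b[cat_idx]) if len(cat_idx) else 0.0
        return weights[0] * d_num + weights[1] * d_cat

    class OneClassFaultRecognizer:
        def __init__(self, num_idx, cat_idx, weights=(0.5, 0.5), quantile=0.95):
            self.num_idx, self.cat_idx = num_idx, cat_idx
            self.weights, self.quantile = weights, quantile

        def fit(self, faults):
            """Train on observed fault records only (the one-class setting)."""
            self.faults = np.asarray(faults, dtype=float)
            # Threshold: a high quantile of each training record's distance to its
            # nearest other fault, so most known faults fall inside the class.
            d = [min(combined_dissimilarity(f, g, self.num_idx, self.cat_idx, self.weights)
                     for j, g in enumerate(self.faults) if j != i)
                 for i, f in enumerate(self.faults)]
            self.threshold = float(np.quantile(d, self.quantile))
            return self

        def is_fault(self, record):
            record = np.asarray(record, dtype=float)
            d = min(combined_dissimilarity(record, f, self.num_idx, self.cat_idx, self.weights)
                    for f in self.faults)
            return d <= self.threshold

    # Illustrative usage with made-up records: [load_MVA, temperature_C, cable_type_id]
    observed_faults = [[0.9, 35, 1], [0.8, 33, 1], [0.95, 36, 2], [0.85, 34, 1]]
    recognizer = OneClassFaultRecognizer(num_idx=[0, 1], cat_idx=[2]).fit(observed_faults)
    print(recognizer.is_fault([0.88, 34.5, 1]), recognizer.is_fault([0.1, -5.0, 3]))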