
    Fast Hierarchical NPN Classification

    Classifying functions into libraries according to common properties is an important step in many logic synthesis and technology mapping algorithms used in FPGA design flows, and NPN classification is one of the most frequently used classifications. Existing algorithms for NPN classification perform a sequence of steps to derive the resulting NPN class, but discard the intermediate results produced at the end of each step. The hierarchical method introduced in this paper uses the same sequence of steps, but saves the intermediate results at each step and reuses them when classifying other functions. It is, on average, 3.7 times faster than a state-of-the-art non-hierarchical method, at the cost of a modest increase in memory needed to store the class hierarchy. The hierarchical approach enables rapid exact NPN classification for functions of up to 10 inputs: it exactly classifies one million 6-input functions in the same time the heuristic state-of-the-art algorithm needs.
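
    The caching idea can be illustrated with a small sketch. The Python snippet below is a minimal illustration, not the paper's implementation: truth tables of small functions are stored as integers, only a single intermediate level (the output-polarity canonical form) is cached across calls, and the remaining negation and permutation steps are brute-forced, which is only feasible for a few inputs. All names and the single-level cache are illustrative assumptions.

```python
# Minimal sketch of hierarchical NPN classification with a memoised
# intermediate form; truth tables are integers (bit k = value on minterm k).
from itertools import permutations

def npn_transform(tt, n, perm, neg, out_neg):
    """Permute inputs by `perm`, complement inputs in bit mask `neg`,
    and complement the output if `out_neg`."""
    bits = 1 << n
    res = 0
    for m in range(bits):
        src = 0
        for i in range(n):
            if ((m >> i) & 1) ^ ((neg >> i) & 1):
                src |= 1 << perm[i]
        if (tt >> src) & 1:
            res |= 1 << m
    return res ^ ((1 << bits) - 1) if out_neg else res

n_cache = {}  # output-polarity form -> final NPN representative (reused)

def npn_class(tt, n):
    """Smallest truth table in the NPN class of `tt`, reusing cached work."""
    full = (1 << (1 << n)) - 1
    n_form = min(tt, tt ^ full)          # step 1: output negation only
    if n_form in n_cache:                # another function got here already
        return n_cache[n_form]
    rep = min(npn_transform(n_form, n, perm, neg, out_neg)  # remaining steps
              for perm in permutations(range(n))
              for neg in range(1 << n)
              for out_neg in (False, True))
    n_cache[n_form] = rep
    return rep

# x0&x1, its complement (a cache hit after step 1), and x1&~x0 all map to
# the same NPN representative.
print(npn_class(0b1000, 2), npn_class(0b0111, 2), npn_class(0b0100, 2))
```

    In the method described in the abstract, every step of the existing classification flow gets its own level in the hierarchy, so a hit at any level skips all remaining, more expensive steps.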

    Active classification with comparison queries

    We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say, for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which of two restaurants she liked more (a comparison query). We focus on the class of halfspaces and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, Ω(n) queries are required. Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the inference dimension, that captures the query complexity when each additional query is determined by O(1) examples (such as comparison queries, each of which is determined by the two compared examples). Our results for halfspaces follow by bounding the inference dimension in the cases discussed above.
    Comment: 23 pages (not including references), 1 figure. The new version contains a minor fix in the proof of Lemma 4.
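
    As a toy illustration of the query model (not the paper's algorithm), the sketch below simulates an annotator defined by a hidden halfspace w: a label query returns the sign of w·x, and a comparison query reports which of two points the annotator prefers (larger w·x), which is one natural reading of the model. Sorting the sample with comparison queries makes the labels monotone along the sorted order, so a single binary search using O(log n) label queries determines every label; the paper's inference-dimension machinery goes further and bounds the total number of queries, comparisons included, by roughly O(log n). The 2-dimensional setup and all names are illustrative.

```python
from functools import cmp_to_key
import random

random.seed(0)
w = [0.8, -0.5]                                   # hidden halfspace (the annotator)
pts = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(100)]

def score(x):                                     # annotator-internal quantity w.x
    return w[0] * x[0] + w[1] * x[1]

def label_query(i):                               # +1 / -1 label of point i
    return 1 if score(pts[i]) >= 0 else -1

def comparison_query(i, j):                       # which of the two is preferred
    si, sj = score(pts[i]), score(pts[j])
    return (si > sj) - (si < sj)

# Order the sample using comparison queries only.
order = sorted(range(len(pts)), key=cmp_to_key(comparison_query))

# Labels are monotone along `order`; find the first positive point with a
# binary search that spends only O(log n) label queries.
lo, hi = 0, len(order)
while lo < hi:
    mid = (lo + hi) // 2
    if label_query(order[mid]) < 0:
        lo = mid + 1
    else:
        hi = mid

inferred = {idx: (-1 if pos < lo else 1) for pos, idx in enumerate(order)}
# Sanity check only (queries every label directly).
assert all(inferred[i] == label_query(i) for i in range(len(pts)))
```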

    Heuristic NPN classification for large functions using AIGs and LEXSAT

    Two Boolean functions are NPN equivalent if one can be obtained from the other by negating inputs, permuting inputs, or negating the output. NPN equivalence is an equivalence relation, and the number of equivalence classes is significantly smaller than the number of all Boolean functions. This property has been exploited successfully to increase the efficiency of various logic synthesis algorithms. Since computing the NPN representative of a Boolean function is not scalable, heuristics have been proposed that are not guaranteed to find the representative for all functions. So far, these heuristics have been implemented using the function's truth table representation, and therefore do not scale for functions exceeding 16 variables. In this paper, we present a symbolic heuristic NPN classification using And-Inverter Graphs and Boolean satisfiability techniques. This allows us to heuristically compute NPN representatives for functions with a much larger number of variables; our experiments contain benchmarks with up to 194 variables. A key technique of the symbolic implementation is the SAT-based procedure LEXSAT, which finds the lexicographically smallest satisfying assignment. To our knowledge, LEXSAT has never been used before in logic synthesis algorithms.
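
    LEXSAT itself admits a compact sketch. The snippet below is a minimal, generic version, not the paper's implementation, assuming the python-sat (pysat) package: given a CNF formula and a variable ordering with the first variable most significant, it fixes one variable at a time, preferring the value 0 and keeping it whenever the formula remains satisfiable under the assumptions accumulated so far.

```python
# Minimal LEXSAT sketch (not the paper's implementation); needs python-sat.
from pysat.solvers import Glucose3

def lexsat(clauses, variables):
    """Lexicographically smallest satisfying assignment over `variables`
    (first variable most significant, False < True), or None if UNSAT."""
    with Glucose3(bootstrap_with=clauses) as solver:
        if not solver.solve():
            return None
        fixed = []                               # literals decided so far
        for v in variables:
            if solver.solve(assumptions=fixed + [-v]):
                fixed.append(-v)                 # value 0 still satisfiable
            else:
                fixed.append(v)                  # forced to 1
        return fixed

# Toy CNF: (x1 or x2) and (~x1 or x3)  ->  smallest model is x1=0, x2=1, x3=0.
print(lexsat([[1, 2], [-1, 3]], [1, 2, 3]))      # [-1, 2, -3]
```

    In the symbolic setting described in the abstract, the clauses would presumably come from a CNF encoding of the And-Inverter Graph rather than from an explicit truth table, which is what allows functions with hundreds of inputs to be handled.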

    Linear separability of the vertices of an n-dimensional hypercube.

    No abstract available. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b131703