Advances in Functional Decomposition: Theory and Applications
Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing.
This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient.
The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering.
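To make concrete what these algorithms compute, the sketch below brute-forces a simple disjoint-support decomposition f(X) = h(g(A), B) from an explicit truth table, using the classic column-multiplicity test (at most two distinct columns in the decomposition chart). It only illustrates the notion; the dissertation's algorithms operate on BDDs and avoid this exponential enumeration, and the function names and encoding here are hypothetical.

```python
from itertools import combinations, product

def merge(A, a, B, b, n):
    """Assemble a full input vector from partial assignments to A and B."""
    x = [0] * n
    for v, bit in zip(A, a): x[v] = bit
    for v, bit in zip(B, b): x[v] = bit
    return tuple(x)

def decompose(f, n):
    """Brute-force search for a simple disjoint decomposition f(X) = h(g(A), B),
    where A and B partition the support.  f is a callable on n-bit tuples.
    Illustrative only: the dissertation's algorithms work on BDDs, not on
    explicit truth tables."""
    variables = list(range(n))
    for k in range(2, n):                          # size of the bound set A
        for A in combinations(variables, k):
            B = [v for v in variables if v not in A]
            # Column of the decomposition chart for one assignment to A:
            # the residual function of the free variables B.
            def column(a):
                return tuple(f(merge(A, a, B, b, n))
                             for b in product((0, 1), repeat=len(B)))
            cols = {column(a) for a in product((0, 1), repeat=k)}
            # f = h(g(A), B) with a single-output g iff there are at most
            # two distinct columns (one per value of g).
            if len(cols) <= 2:
                return A, B
    return None

# Example: f = (x0 XOR x1) AND x2 decomposes with bound set A = (x0, x1), g = XOR.
f = lambda x: (x[0] ^ x[1]) & x[2]
print(decompose(f, 3))        # ((0, 1), [2])
```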
The third algorithm also finds disjoint-support decompositions, but it is based on a technique which integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm that is more robust than a purely BDD-based one and improves both the quality of the results and the running time.
The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial time algorithm for computing all non-disjoint decompositions of a Boolean function.
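For readers unfamiliar with dominators, the sketch below computes ordinary single-vertex dominators of a combinational circuit DAG with a one-pass set-intersection scheme; the netlist encoding (`fanins`, gate names) is hypothetical, and the dissertation's multiple-vertex generalization is not reproduced here.

```python
from graphlib import TopologicalSorter   # Python 3.9+

def dominators(fanins, out):
    """Single-vertex dominators of a combinational circuit, illustrative only.
    `fanins` maps each gate to the gates/inputs it reads; `out` is the
    primary output.  Gate v dominates gate u if every path from u to `out`
    passes through v."""
    order = list(TopologicalSorter(fanins).static_order())   # inputs first
    order.reverse()                                          # output side first
    fanouts = {}
    for g, ins in fanins.items():
        for i in ins:
            fanouts.setdefault(i, set()).add(g)
    dom = {out: {out}}
    for v in order:
        if v == out or v not in fanouts:
            continue
        # Every path from v to `out` leaves through some fanout of v,
        # so Dom(v) = {v} union the intersection of Dom(w) over fanouts w.
        common = None
        for w in fanouts[v]:
            if w in dom:
                common = dom[w] if common is None else common & dom[w]
        dom[v] = {v} | (common or set())
    return dom

# Toy netlist: y = (a AND b) OR (a AND c)
netlist = {"y": ("n1", "n2"), "n1": ("a", "b"), "n2": ("a", "c"),
           "a": (), "b": (), "c": ()}
print(dominators(netlist, "y"))
```

In this toy netlist, neither n1 nor n2 alone dominates input a, but the pair {n1, n2} does; such multiple-vertex dominators are exactly what the fourth algorithm extracts to obtain non-disjoint decompositions.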
The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, by using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches.
Finally, we present two publications that resulted from the many detours we have taken along the winding path of our research.
Support-reducing decomposition for FPGA mapping
Decomposition is a technology-independent process in which a large, complex function is broken into smaller, less complex functions. The costs of two-level or factored-form representations (cubes and literals) are used in most decomposition methods, as they have a high correlation with the area of cell-based designs. However, this correlation is weaker for field-programmable gate arrays (FPGAs) based on look-up tables. Furthermore, local optimizations have limited power due to the structural bias of the circuit descriptions. This paper tries to reduce the structural bias by remapping the LUT network and decomposing the derived functions using the support as cost function. The proposed method improves the FPGA mapping results of a commercial tool for the 20 largest MCNC benchmarks, with gains of 28% in delay plus 18% in area when targeting delay, and a reduction of 28% in area plus 14% in delay with area as cost function. Results with 23% less area and 6% less delay are obtained after physical synthesis (post place-and-route). Moreover, 12 of the best known results for delay (and 3 for area) of the EPFL benchmarks are improved.
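As a rough illustration of using support as the cost function, the sketch below computes the true support of a function from an explicit truth table (the variables on which it actually depends), which is the quantity a k-input LUT mapping ultimately cares about. Names and encoding are hypothetical; this is not the paper's remapping flow.

```python
from itertools import product

def support(f, n):
    """True support of an n-input Boolean function given as a callable on
    bit tuples: the variables f actually depends on.  A sketch of the kind
    of cost measure the paper advocates for LUT mapping, where the number
    of support variables (not literals or cubes) tracks how many k-input
    LUTs a sub-function needs."""
    sup = []
    for i in range(n):
        for x in product((0, 1), repeat=n):
            if x[i] == 0:
                y = list(x); y[i] = 1
                if f(tuple(x)) != f(tuple(y)):   # the two cofactors differ
                    sup.append(i)
                    break
    return sup

# Example: x1 is redundant, so the function fits a smaller LUT than its
# three-literal expression suggests.
f = lambda x: (x[0] & x[2]) | (x[0] & x[1] & x[2])
print(support(f, 3))   # [0, 2]
```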
Boolean decomposition using two-literal divisors
This paper is an attempt to answer the following question: how much improvement can be obtained in logic decomposition by using Boolean divisors? Traditionally, the existence of too many Boolean divisors has been the main reason why Boolean decomposition has had limited success. This paper explores a new strategy based on the decomposition of Boolean functions by means of two-literal divisors. The strategy is shown to derive superior results while still maintaining an affordable complexity. The results show improvements of 15% on average, and up to 50% in some examples, w.r.t. algebraic decomposition.
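To illustrate what division by a two-literal divisor means, the sketch below performs Boolean division on explicit truth tables: given f and a divisor d such as a + b, it returns a quotient q (with don't-cares where d = 0) and a remainder r such that f = d·q + r. It is a brute-force illustration under assumed encodings, not the paper's procedure.

```python
from itertools import product

def divide_by(f, d, n):
    """Boolean division of f by a (two-literal) divisor d, truth-table style:
    pick q and r with f = d*q + r.  A standard choice is q = f with the
    points where d = 0 treated as don't-cares, and r = f & ~d."""
    q, r = {}, {}
    for x in product((0, 1), repeat=n):
        q[x] = f(x) if d(x) else None    # None marks a quotient don't-care
        r[x] = f(x) & (1 - d(x))
    return q, r

def check(f, d, q, r, n):
    """Verify f = d*q + r (don't-care points of q contribute nothing,
    since d = 0 there)."""
    return all(f(x) == ((d(x) & (q[x] or 0)) | r[x])
               for x in product((0, 1), repeat=n))

# f = a*c + b*c divided by the two-literal divisor d = a + b gives r = 0,
# and the don't-cares (a = b = 0) let q simplify to the single literal c.
f = lambda x: (x[0] & x[2]) | (x[1] & x[2])
d = lambda x: x[0] | x[1]
q, r = divide_by(f, d, 3)
print(check(f, d, q, r, 3))   # True
```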
Implicit complexity for coinductive data: a characterization of corecurrence
We propose a framework for reasoning about programs that manipulate coinductive data as well as inductive data. Our approach is based on using equational programs, which support a seamless combination of computation and reasoning, and using productivity (fairness) as the fundamental assertion, rather than bi-simulation. The latter is expressible in terms of the former. As an application of this framework, we give an implicit characterization of corecurrence: a function is definable using corecurrence iff its productivity is provable using coinduction for formulas in which data-predicates do not occur negatively. This is an analog, albeit in weaker form, of a characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar induction, TCS 318, 2004]. (In Proceedings DICE 2011, arXiv:1201.034)
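As a loose, informal illustration of productivity, the snippet below writes the stream equation nats = 0 :: map(+1, nats) as a Python generator (standing in for the paper's equational programs): the guarded definition yields each element after finitely many steps, while an unguarded variant never produces anything.

```python
from itertools import islice

def nats():
    """Corecursive stream in the style nats = 0 :: map(+1, nats): every
    element is produced after finitely many steps (productivity), even
    though the stream is infinite.  Python generators are only a rough
    stand-in for equational programs over coinductive data."""
    yield 0
    for n in nats():
        yield n + 1

def bad():
    # Non-productive: nothing is emitted before the recursive call, so
    # asking for the first element diverges (in practice, Python raises
    # RecursionError).  Do not call list(bad()).
    for n in bad():
        yield n + 1

print(list(islice(nats(), 5)))   # [0, 1, 2, 3, 4]
```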
Proto-Plasm: parallel language for adaptive and scalable modelling of biosystems
This paper discusses the design goals and the first developments of Proto-Plasm, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the Proto-Plasm platform is still in its infancy. Its computational framework (language, model library, integrated development environment and parallel engine) intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. Proto-Plasm may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a Proto-Plasm program. Here we exemplify the basic functionalities of Proto-Plasm by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.
Information Physics: The New Frontier
At this point in time, two major areas of physics, statistical mechanics and quantum mechanics, rest on the foundations of probability and entropy. The last century saw several significant fundamental advances in our understanding of the process of inference, which make it clear that these are inferential theories. That is, rather than being a description of the behavior of the universe, these theories describe how observers can make optimal predictions about the universe. In such a picture, information plays a critical role. What is more, little clues, such as the fact that black holes have entropy, continue to suggest that information is fundamental to physics in general.
In the last decade, our fundamental understanding of probability theory has led to a Bayesian revolution. In addition, we have come to recognize that the foundations go far deeper and that Cox's approach of generalizing a Boolean algebra to a probability calculus is the first specific example of the more fundamental idea of assigning valuations to partially-ordered sets. By considering this as a natural way to introduce quantification to the more fundamental notion of ordering, one obtains an entirely new way of deriving physical laws. I will introduce this new way of thinking by demonstrating how one can quantify partially-ordered sets and, in the process, derive physical laws. The implication is that physical law does not reflect the order in the universe; instead, it is derived from the order imposed by our description of the universe. Information physics, which is based on understanding the ways in which we both quantify and process information about the world around us, is a fundamentally new approach to science. (17 pages, 6 figures. Knuth K.H. 2010. Information physics: The new frontier. In: J.-F. Bercher, P. Bessière, and A. Mohammad-Djafari (eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2010), Chamonix, France, July 201)
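As a pointer to the kind of derivation the abstract refers to, the following is a minimal sketch (not reproduced from the paper): requiring a valuation on a lattice of statements to respect the join/meet structure forces a symmetric sum rule, which, regraduated and conditioned on a context, reads as the familiar sum rule of probability.

```latex
% Sketch: consistency of a valuation v with the lattice order forces,
% up to regraduation,
%   v(x \vee y) + v(x \wedge y) = v(x) + v(y),
% which, read as a probability conditioned on a context t, is the sum rule:
\[
  p(x \vee y \mid t) = p(x \mid t) + p(y \mid t) - p(x \wedge y \mid t).
\]
```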
An Algebra of Synchronous Scheduling Interfaces
In this paper we propose an algebra of synchronous scheduling interfaces which combines the expressiveness of Boolean algebra for logical and functional behaviour with min-max-plus arithmetic for quantifying the non-functional aspects of synchronous interfaces. The interface theory arises from a realisability interpretation of intuitionistic modal logic (also known as the Curry-Howard isomorphism or propositions-as-types principle). The resulting algebra of interface types aims to provide a general setting for specifying type-directed and compositional analyses of worst-case scheduling bounds. It covers synchronous control flow under concurrent, multi-processing or multi-threading execution and permits precise statements about exactness and coverage of the analyses supporting a variety of abstractions. The paper illustrates the expressiveness of the algebra by way of some examples taken from network flow problems, shortest-path, task scheduling and worst-case reaction times in synchronous programming. (In Proceedings FIT 2010, arXiv:1101.426)
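As a much-simplified illustration of the min-max-plus side of such an algebra (not the paper's interface types), the sketch below composes best-/worst-case reaction-time bounds: sequencing adds delays, a synchronous join takes the maximum, and a choice spans both branches. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bound:
    """A (best-case, worst-case) reaction-time bound for a synchronous block.
    The paper's interface algebra is far richer, pairing such quantities
    with Boolean activation conditions."""
    bcrt: int
    wcrt: int

def seq(a: Bound, b: Bound) -> Bound:
    # sequential composition: delays add (the "plus" of min-max-plus)
    return Bound(a.bcrt + b.bcrt, a.wcrt + b.wcrt)

def par(a: Bound, b: Bound) -> Bound:
    # synchronous parallel join: both branches must finish ("max")
    return Bound(max(a.bcrt, b.bcrt), max(a.wcrt, b.wcrt))

def alt(a: Bound, b: Bound) -> Bound:
    # choice between branches: the bound spans both ("min"/"max")
    return Bound(min(a.bcrt, b.bcrt), max(a.wcrt, b.wcrt))

# a two-task pipeline running alongside a third task
print(seq(Bound(1, 3), par(Bound(2, 5), Bound(1, 8))))   # Bound(bcrt=3, wcrt=11)
```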