Analysis of minimization algorithms for multiple-valued programmable logic arrays
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. As such, it is in the public domain, and under the provisions of Title 17, United States Code, Section 105, may not be copyrighted.

Proceedings of the 18th International Symposium on Multiple-Valued Logic, May 1988, pp. 226-236

We compare the performance of three heuristic algorithms [3,6,13] for the minimization of sum-of-products expressions realized by the newly developed multiple-valued programmable logic arrays [9]. Heuristic methods are important because exact minimization is extremely time consuming. We compare the heuristics to the exact solution, showing that heuristic methods are reasonably close to minimal. We use as a basis of comparison the average number of product terms over a set of randomly generated functions. All three heuristics produce nearly the same average number of product terms. Although the averages are close, there is surprisingly little overlap among the sets of functions for which each heuristic achieves the best realization. Thus, there is a benefit to applying different heuristics and then choosing the best realization.
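The observation that heuristics with similar averages excel on largely different functions suggests a simple portfolio strategy: run several heuristics and keep the smallest cover. A minimal sketch of that idea (the heuristic functions below are hypothetical stand-ins, not the algorithms of [3,6,13]):

```python
def best_realization(function, heuristics):
    """Run every heuristic on the same function and keep the cover
    with the fewest product terms."""
    covers = [h(function) for h in heuristics]
    return min(covers, key=len)

# Hypothetical stand-ins: each returns a cover as a list of product terms.
def h_greedy(f):
    return ["a'b", "ab'", "ab"]   # a three-term cover

def h_merge(f):
    return ["a", "b"]             # a two-term cover of the same minterms

best = best_realization("f = a + b", [h_greedy, h_merge])
# best is the two-term cover produced by h_merge
```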
A heat quench algorithm for the minimization of multiple-valued programmable logic arrays
Computer and Electrical Engineering Journal, Vol. 22, No. 2, 1996, pp. 103-107

Simulated annealing has been used extensively to solve combinatorial problems. Although it does not guarantee optimum results, its results are often optimum or near optimum. Its primary disadvantage is slow speed. It has been suggested [1] that quenching (rapid cooling) yields results that are far from optimum. We challenge this perception by showing a context in which quenching yields good solutions at good computation speeds. In this paper, we present an algorithm in which quenching is combined with rapid heating. We have successfully applied this algorithm to the multiple-valued logic minimization problem. Our results suggest that this algorithm holds promise for problems where moves exist that leave the cost of the current solution unchanged.
Key words: Multiple-valued logic, logic minimization, simulated annealing, heat quench, heuristic
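A rough sketch of the quench-plus-reheat idea described above (the schedule structure and all parameter names are our illustration, not the paper's exact algorithm): cool rapidly, and when the temperature bottoms out, jump straight back to a high temperature and quench again, keeping the best solution seen.

```python
import math
import random

def heat_quench(cost, neighbor, x0, t_hot=10.0, t_cold=0.01,
                quench=0.5, reheats=5, steps=200, seed=0):
    """Quench-with-reheat schedule: cool geometrically (fast), and when
    the temperature bottoms out, reheat to t_hot and quench again.
    The best solution seen across all quenches is returned."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(reheats):          # rapid reheating
        t = t_hot
        while t > t_cold:             # one quench: fast geometric cooling
            for _ in range(steps):
                y = neighbor(x, rng)
                d = cost(y) - cost(x)
                # accept downhill and cost-neutral moves outright,
                # uphill moves with the usual Boltzmann probability
                if d <= 0 or rng.random() < math.exp(-d / t):
                    x = y
                    if cost(x) < cost(best):
                        best = x
            t *= quench
    return best

# Toy demonstration: minimize x**2 over the integers.
def cost(x):
    return x * x

def neighbor(x, rng):
    return x + rng.choice((-1, 1))

best = heat_quench(cost, neighbor, x0=50)
```

Note that cost-neutral moves (d == 0) are always accepted, which matches the class of problems the paper targets: ones where many moves leave the cost unchanged.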
Canonical multi-valued input Reed-Muller trees and forms
There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum of Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the binary case, the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For logic with multiple-valued inputs, the family of flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
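For the binary case only, the ring form of the Shannon expansion, f = x̄·f0 ⊕ x·f1, and its positive Davio variant, f = f0 ⊕ x·(f0 ⊕ f1), lead to the well-known positive-polarity Reed-Muller form. A small sketch computing that spectrum from a truth table via the standard butterfly transform (the paper's multiple-valued expansions are not reproduced here):

```python
def reed_muller(tt):
    """Positive-polarity Reed-Muller coefficients of a Boolean function
    given as a GF(2) truth table of length 2**n.  Applying the positive
    Davio expansion f = f0 XOR x*(f0 XOR f1) to every variable reduces
    to this in-place butterfly transform."""
    c = list(tt)
    step = 1
    while step < len(c):
        for i in range(0, len(c), 2 * step):
            for j in range(i, i + step):
                c[j + step] ^= c[j]   # the f1 slot becomes f0 XOR f1
        step *= 2
    return c

# OR of two variables: x0 + x1 = x0 XOR x1 XOR x0*x1.
# Truth table indexed by (x1, x0): [f(0,0), f(0,1), f(1,0), f(1,1)].
print(reed_muller([0, 1, 1, 1]))   # [0, 1, 1, 1] -> terms 1, x0, x1, x0*x1
```

The coefficient at index i selects the product of the variables whose bits are set in i, so [0, 1, 1, 1] reads as x0 ⊕ x1 ⊕ x0·x1.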
On the size of PLA's required to realize binary and multiple-valued functions
IEEE Transactions on Computers, C-38, Jan. 1989, pp. 82-98

While the use of programmable logic arrays in
modern logic design is common, little is known about what PLA
size provides reasonable coverage in typical applications. We
address this question by showing upper and lower bounds on the
average number of product terms required in the minimal
realization of binary and multiple-valued functions as a function
of the number of nonzero output values. When the number of
such values is small, the bounds are nearly the same, and accurate
values for the average are obtained.
In addition, an upper bound is derived for the variance of the
distribution of the number of product terms required in minimal
realizations of binary functions. When the number of nonzero
values is small, we find that the variance is small, and it follows
that most functions require nearly the average number of product
terms.
The variance, together with the upper and lower bounds, allows
conclusions to be drawn about how PLA size determines the set of
realizable functions. Although the bounds are most accurate
when there are few nonzero values, they are adequate for
analyzing commercially available PLA’s, which we do in this
paper. Most such PLA’s are small enough that our results can be
applied. For example, when the number of nonzero values
exceeds some threshold uT, determined by the PLA size, only a
small fraction of the functions can be realized. Our analysis
shows that for all but one commercially available PLA, the
number of nonzero values is a statistically meaningful criterion for
determining whether or not a given function is likely to be
realized.
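The step from a small variance to "most functions require nearly the average number of product terms" is an instance of Chebyshev's inequality. Writing T for the number of product terms in a minimal realization, μ for its average, and σ² for the bounded variance:

```latex
\Pr\left[\, |T - \mu| \ge k\sigma \,\right] \le \frac{1}{k^2}
```

so when σ is small, all but a 1/k² fraction of functions lie within kσ of the average.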
A neighborhood decoupling algorithm for truncated sum minimization
The article of record as published may be found at http://dx.doi.org/10.1109/ISMVL.1990.122611

Published in: Proceedings of the Twentieth International Symposium on Multiple-Valued Logic

There has been considerable interest in heuristic methods for minimizing multiple-valued logic functions because exact methods are intractable. This
paper describes a new heuristic, called the neighborhood decoupling (ND) algorithm. It first selects
a minterm and then selects an implicant, a two-step
process employed in previous heuristics, e.g., Besslich
[2] and Dueck and Miller [4]. The approach taken
here more closely resembles the Dueck and Miller
heuristic; however, it makes more efficient use of
minterms truncated to the highest logic value. The
ND-algorithm was developed in conjunction with HAMLET [12], a software package created at the Naval
Postgraduate School for the purpose of designing
heuristics for multiple-valued logic minimization. In
this paper, we present the algorithm, discuss the
implementation, show that it performs consistently
better than the others, and explain the reason for its improved performance.
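The select-a-minterm, select-an-implicant structure shared by these heuristics can be sketched as a generic covering loop. The selection rules below are deliberately naive placeholders, not the ND algorithm's rules:

```python
def two_step_minimize(minterms, implicants, pick_minterm, pick_implicant):
    """Generic minterm-then-implicant loop: repeatedly choose an
    uncovered minterm, choose an implicant covering it, and repeat
    until every minterm is covered."""
    uncovered = set(minterms)
    cover = []
    while uncovered:
        m = pick_minterm(uncovered)
        imp = pick_implicant(m, implicants, uncovered)
        cover.append(imp)
        uncovered -= imp
    return cover

# Naive placeholder rules: lowest minterm first, then the implicant
# covering it that removes the most uncovered minterms.
def pick_implicant(m, imps, uncovered):
    return max((i for i in imps if m in i), key=lambda i: len(i & uncovered))

imps = [frozenset({0, 1}), frozenset({2, 3}), frozenset({1, 2})]
cover = two_step_minimize(range(4), imps, min, pick_implicant)
# cover == [frozenset({0, 1}), frozenset({2, 3})]
```

The heuristics differ only in the two pick functions; the covering loop itself is common to all of them.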
MFPA: Mixed-Signal Field Programmable Array for Energy-Aware Compressive Signal Processing
Compressive Sensing (CS) is a signal processing technique which reduces the number of samples taken per frame to decrease energy, storage, and data transmission overheads, as well as reducing the time taken for data acquisition in time-critical applications. The tradeoff in such an approach is increased complexity of signal reconstruction. While several algorithms have been developed for CS signal reconstruction, hardware implementation of these algorithms is still an area of active research. Prior work has sought to utilize the parallelism available in reconstruction algorithms to minimize hardware overheads; however, such approaches are limited by the underlying limitations of CMOS technology. Herein, the MFPA (Mixed-signal Field Programmable Array) approach is presented as a hybrid spin-CMOS reconfigurable fabric specifically designed for implementation of CS data sampling and signal reconstruction. The resulting fabric consists of 1) slice-organized analog blocks providing amplifiers, transistors, capacitors, and Magnetic Tunnel Junctions (MTJs), which are configurable to achieve the square/square-root operations required for calculating vector norms, 2) digital functional blocks featuring 6-input clockless lookup tables for computation of the matrix inverse, and 3) an MRAM-based nonvolatile crossbar array for carrying out low-energy matrix-vector multiplication operations. The various functional blocks are connected via a global interconnect and spin-based analog-to-digital converters. Simulation results demonstrate significant energy and area benefits compared to equivalent CMOS digital implementations for each of the functional blocks used: an 80% reduction in energy and a 97% reduction in transistor count for the nonvolatile crossbar array, an 80% standby power reduction and a 25% smaller area footprint for the clockless lookup tables, and a roughly 97% reduction in transistor count for a multiplier built from components of the analog blocks.
Moreover, the proposed fabric yields a 77% energy reduction compared to CMOS when used to implement CS reconstruction, in addition to latency improvements.
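As a software reference point for what such a fabric accelerates: compressive sampling takes y = Φx with far fewer measurement rows than signal length, and reconstruction recovers a sparse x from y. A minimal NumPy sketch using orthogonal matching pursuit, a standard greedy reconstruction algorithm (not necessarily the one mapped onto the MFPA; the sizes and seed are illustrative):

```python
import numpy as np

def omp(Phi, y, iters):
    """Orthogonal matching pursuit: greedily add the column of Phi most
    correlated with the residual, then re-fit by least squares."""
    residual = y.astype(float)
    support = []
    for _ in range(iters):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 32, 16, 2                             # signal length, samples, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x = np.zeros(n)
x[[3, 17]] = [1.5, -2.0]                        # a 2-sparse signal
y = Phi @ x                                     # m << n compressed samples
x_hat = omp(Phi, y, 2 * k)                      # extra iterations guard a bad early pick
```

The inner products Φᵀr and the least-squares re-fit are exactly the matrix-vector and norm operations the crossbar array and analog blocks are designed to accelerate.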
Decomposition tool targeting FPGA architectures
The growing interest in the field of logic synthesis targeting Field Programmable Gate Arrays (FPGAs) and the active research carried out by a number of research groups in the area of functional decomposition are the prime motivation for this thesis. Logic synthesis has been an area of interest at many universities all over the world. The work involves the study and implementation of techniques and methods in logic synthesis. In this work, a logic synthesis tool has been developed implementing a general and complete decomposition method based on functional decomposition techniques [4]. The tool is aimed at producing results faster and more efficiently than the available software. The C++ Standard Template Library is used to develop this tool. The output of the tool is designed to be compatible with available vendor software. The tool has been tested on MCNC benchmarks as well as benchmarks created with industry requirements in mind.