Word-Level Models for Efficient Computation of Multiple-Valued Functions. PART 2: LWL-Based Model

Abstract

This paper continues our study of Neural-Like Networks (NLNs) for the computation of Multiple-Valued Logic (MVL) functions. An NLN is defined as a feedforward network with no learning. In contrast to classical neural networks built of Threshold Gates (TGs), the proposed NLN is built of so-called Neuron-Like Gates (NLGs). Our previous study showed that an NLG can be modelled by a Linear Arithmetical expression (LAR). In this paper we present an even simpler NLG model: two word-level models, Linear Weighted Logic expressions (LWLs) and a corresponding set of Linear Decision Diagrams (LDDs), and we compare the LWL- and LAR-based NLNs. An experimental study on large MVL circuits shows that the LDDs derived from LWLs contain, on average, four times fewer nodes than those derived from LARs; they are also 2-7 times more compact, requiring less memory to store the terminal values.
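The abstract's word-level models rest on a general idea: several logic outputs are packed into one integer computed by a single linear expression, and individual outputs are recovered by masking. The paper's actual LWL and LAR definitions are given in the text; the following is only a minimal illustrative sketch of that packing idea for two Boolean outputs, where the one linear expression x1 + x2 simultaneously encodes AND (the carry bit) and XOR (the sum bit).

```python
def packed_outputs(x1: int, x2: int) -> tuple[int, int]:
    """Illustrative word-level evaluation (not the paper's LWL/LAR model).

    The single linear arithmetic expression s = x1 + x2 encodes two
    outputs at once: bit 1 of s is AND(x1, x2), bit 0 is XOR(x1, x2).
    """
    s = x1 + x2                    # one word-level linear expression
    return (s >> 1) & 1, s & 1     # mask out the two packed outputs

# Check the packing against the truth tables of AND and XOR.
for x1 in (0, 1):
    for x2 in (0, 1):
        f_and, f_xor = packed_outputs(x1, x2)
        assert f_and == (x1 & x2) and f_xor == (x1 ^ x2)
```

In the same spirit, a word-level model for MVL functions evaluates one linear expression per network level and extracts each gate's multiple-valued output by masking fields of the resulting word, which is what makes a linear (single-path) decision diagram representation possible.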
