    Phased burst error-correcting array codes

    Various aspects of single-phased burst-error-correcting array codes are explored. These codes are composed of two-dimensional arrays with row and column parities and a diagonally cyclic readout order; they are capable of correcting a single burst error along one diagonal. Optimal codeword sizes are found to have dimensions n1×n2 such that n2 is the smallest prime number larger than n1. These codes are capable of reaching the Singleton bound. A new type of error, the approximate error, is defined; in q-ary applications, these errors corrupt data only slightly, so the stored value remains close to the true data level. Phased burst array codes can be tailored to correct these errors at even higher rates than before.
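
    The abstract does not spell out the construction, so as a rough illustration of the array shape and readout it describes, here is a minimal Python sketch under our own assumptions: even row and column parities appended to a binary data block, and a diagonally cyclic readout. The function names are ours, not the paper's.

```python
import numpy as np

def encode_array(data):
    """Append an even-parity row and an even-parity column to a binary block."""
    data = np.asarray(data, dtype=int) % 2
    col_parity = data.sum(axis=0) % 2              # parity bit for each column
    with_row = np.vstack([data, col_parity])
    row_parity = with_row.sum(axis=1) % 2          # parity bit for each row
    return np.hstack([with_row, row_parity[:, None]])

def diagonal_readout(codeword):
    """Read the n1 x n2 codeword along cyclic diagonals: symbol (i, (i + d) mod n2)."""
    n1, n2 = codeword.shape
    return [[codeword[i, (i + d) % n2] for i in range(n1)] for d in range(n2)]

data = np.random.randint(0, 2, size=(4, 6))
cw = encode_array(data)          # 5 x 7 codeword: n2 = 7 is the smallest prime > n1 = 5
print(diagonal_readout(cw)[0])   # one diagonal of the cyclic readout order
```

    A burst confined to one diagonal touches at most one symbol per row and per column, which is what makes the row and column parities useful for locating it.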

    On-Chip ECC for Multi-Level Random Access Memories

    In this talk we investigate a number of on-chip coding techniques for the protection of Random Access Memories which use multi-level as opposed to binary storage cells. The motivation for such RAM cells is of course the storage of several bits per cell as opposed to one bit per cell [1]. Since the typical number of levels which a multi-level RAM can handle is 16 (the cell being based on a standard DRAM cell which has varying amounts of voltage stored on it), four bits are recorded into each cell [2]. The disadvantage of multi-level RAMs is that they are much more prone to errors, so on-chip ECC is essential for reliable operation. There are essentially three reasons for error control coding in multi-level RAMs: to correct soft errors, to correct hard errors, and to correct read errors. The sources of these errors are, respectively, alpha particle radiation, hardware faults, and data level ambiguities. On-chip error correction can be used to increase the mean life before failure for all three types of errors. Coding schemes can be either bitwise or cellwise. Bitwise schemes include simple parity checks and SEC-DED codes, either by themselves or as product codes [3]. Data organization should allow for burst error correction, since alpha particles can wipe out all four bits in a single cell and, for dense memory chips, data in surrounding cells as well. This latter effect becomes more serious as feature sizes are scaled, and a single alpha particle hit affects many adjacent cells. Burst codes such as those in [4] can be used to correct these errors. Bitwise coding schemes are more efficient in correcting read errors, since they can correct single bit errors and allow the remaining error correction power to be used elsewhere. Read errors essentially affect one bit only, since the use of Gray codes for encoding the bits into the memory cells ensures that at most one bit is flipped with each successive change in level. Cellwise schemes include Reed-Solomon codes, hexadecimal codes, and product codes. However, simple encoding and decoding algorithms are necessary, since the excessive space taken by powerful but complex encoding/decoding circuits can instead be spent on more parity cells and simpler codes. These coding techniques are more useful for correcting hard and soft errors which affect the entire cell. They tend to be more complex, and they are not as efficient in correcting read errors as the bitwise codes. In the talk we will investigate the suitability and performance of various multi-level RAM coding schemes, such as row-column codes, burst codes, hexadecimal codes, Reed-Solomon codes, concatenated codes, and some new majority-logic decodable codes. In particular we investigate their tolerance to soft errors and to feature size scaling.
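
    The Gray-code property mentioned above is easy to see concretely: in a binary-reflected Gray code, adjacent levels differ in exactly one bit, so a read error that lands on a neighbouring level corrupts at most one of the four recorded bits. A minimal sketch; the mapping itself is standard, and its use as the 16-level cell mapping follows the abstract.

```python
def level_to_bits(level):
    """4-bit pattern stored at a given level (binary-reflected Gray code)."""
    return level ^ (level >> 1)

def bits_to_level(bits):
    """Inverse mapping: the level that encodes this bit pattern."""
    level = 0
    while bits:
        level ^= bits
        bits >>= 1
    return level

# Adjacent levels differ in exactly one bit, so a one-level read error
# corrupts at most one of the four recorded bits:
for level in range(15):
    flipped = level_to_bits(level) ^ level_to_bits(level + 1)
    assert bin(flipped).count("1") == 1
```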

    A digital neural network architecture using random pulse trains

    A digital neural network architecture that generates and processes random pulse trains is described, along with its unique advantages over existing comparable systems. In addition, test results from the VLSI implementation of its multiplication scheme are presented. These indicate that the implementation performs robustly and accurately.
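
    The abstract does not detail the multiplication scheme, but the classic random-pulse-train multiplier encodes a value p in [0, 1] as the firing rate of a Bernoulli pulse stream and ANDs two independent streams, giving a stream whose rate is the product; whether the chip uses exactly this circuit is an assumption here. A minimal software sketch:

```python
import random

def pulse_train(p, n, rng):
    """n-bit stream whose expected fraction of 1s is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=10_000, seed=0):
    rng = random.Random(seed)
    a, b = pulse_train(p, n, rng), pulse_train(q, n, rng)
    # One AND gate per clock tick; the average pulse rate estimates p * q.
    return sum(x & y for x, y in zip(a, b)) / n

print(stochastic_multiply(0.6, 0.5))   # approx. 0.30, within sampling noise
```

    The appeal in hardware is that the "multiplier" is a single AND gate, at the cost of accuracy that improves only with longer pulse trains.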

    Analog VLSI implementation for stereo correspondence between 2-D images

    Many robotics and navigation systems utilizing stereopsis to determine depth have rigid size and power constraints and require direct physical implementation of the stereo algorithm. The main challenges lie in managing the communication between image sensor and image processor arrays, and in parallelizing the computation to determine stereo correspondence between image pixels in real time. This paper describes the first comprehensive system-level demonstration of a dedicated low-power analog VLSI (very large scale integration) architecture for stereo correspondence suitable for real-time implementation. The inputs to the implemented chip are the ordered pixels from a stereo image pair, and the output is a two-dimensional disparity map. The approach combines biologically inspired silicon modeling with the necessary interfacing options for a complete practical solution that can be built with currently available technology in a compact package. Furthermore, the strategy employed considers multiple factors that may degrade performance, including the spatial correlations in images and the inherent accuracy limitations of analog hardware, and augments the design with countermeasures.
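
    As a software reference for what the chip's output means (not a model of the analog circuit itself), a disparity map assigns each pixel the horizontal shift that best aligns the two views. A minimal block-matching sketch, with window size and disparity range chosen arbitrarily:

```python
import numpy as np

def disparity_map(left, right, max_disp=8, win=2):
    """For each left-image pixel, find the horizontal shift minimizing SAD."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

left = np.random.default_rng(0).random((32, 48))
right = np.roll(left, -3, axis=1)   # synthetic pair: true disparity is 3
print(disparity_map(left, right)[16, 20])   # expect 3
```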

    A learning algorithm for multi-layer perceptrons with hard-limiting threshold units

    We propose a novel learning algorithm to train networks with multiple layers of linear-threshold (hard-limiting) units. The learning scheme is based on standard backpropagation but performs "pseudo-gradient" descent: it uses the gradient of a sigmoid function as a heuristic hint in place of that of the hard-limiting function. A justification that the pseudo-gradient always points in the correct downhill direction on the error surface for networks with one hidden layer is provided. The advantages of such networks are that their internal representations in the hidden layers are clearly interpretable and well-defined classification rules can easily be obtained, that calculations for classification after training are very simple, and that they are easily implementable in hardware. Comparative experimental results on several benchmark problems, using both conventional backpropagation networks and our learning scheme for multilayer perceptrons, are presented and analyzed.
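
    A minimal sketch of the pseudo-gradient idea for a single hidden layer: the forward pass uses hard-limiting units, while the backward pass substitutes the sigmoid's derivative for the step function's derivative (which is zero almost everywhere). The network shape and learning rate below are illustrative, not the paper's settings.

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, W1, W2, lr=0.1):
    # Forward pass with hard-limiting units.
    z1 = W1 @ x
    h = step(z1)
    z2 = W2 @ h
    y = step(z2)
    err = y - t
    # Backward pass: the sigmoid's derivative stands in for the step's.
    d2 = err * sigmoid(z2) * (1 - sigmoid(z2))
    d1 = (W2.T @ d2) * sigmoid(z1) * (1 - sigmoid(z1))
    W2 -= lr * np.outer(d2, h)
    W1 -= lr * np.outer(d1, x)
    return W1, W2

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
W1, W2 = train_step(np.array([1.0, 0.0]), np.array([1.0]), W1, W2)
```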

    Decision tree design from a communication theory standpoint

    A communication theory approach to decision tree design based on a top-down mutual information algorithm is presented. It is shown that this algorithm is equivalent to a form of Shannon-Fano prefix coding, and several fundamental bounds relating decision-tree parameters are derived. The bounds are used in conjunction with a rate-distortion interpretation of tree design to explain several phenomena previously observed in practical decision-tree design. A termination rule for the algorithm, called the delta-entropy rule, is proposed that improves its robustness in the presence of noise. Simulation results are presented, showing that the tree classifiers derived by the algorithm compare favourably to the single nearest neighbour classifier.
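
    The top-down mutual information criterion picks, at each node, the test whose outcome shares the most information with the class label, i.e. maximizes I(feature; class) = H(class) − H(class | feature). A minimal sketch over discrete features (names ours, not the paper's):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def mutual_information(feature_values, labels):
    """I(feature; class) = H(class) - H(class | feature), from empirical counts."""
    n = len(labels)
    h_cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        h_cond += len(subset) / n * entropy(subset)
    return entropy(labels) - h_cond

def best_split(features, labels):
    """Pick the feature (column) index maximizing mutual information with the label."""
    return max(range(len(features)), key=lambda i: mutual_information(features[i], labels))

features = [["a", "a", "b", "b"], ["x", "y", "x", "y"]]
labels = [0, 0, 1, 1]
print(best_split(features, labels))   # 0: the first feature fully determines the label
```

    The delta-entropy termination rule mentioned above would stop splitting once the entropy reduction at a node falls below a threshold, rather than growing the tree until nodes are pure.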

    An information theoretic approach to rule induction from databases

    The knowledge acquisition bottleneck in obtaining rules directly from an expert is well known. Hence, the problem of automated rule acquisition from data is a well-motivated one, particularly for domains where a database of sample data exists. In this paper we introduce a novel algorithm for the induction of rules from examples. The algorithm is novel in the sense that it not only learns rules for a given concept (classification) but simultaneously learns rules relating multiple concepts. This type of learning, known as generalized rule induction, is considerably more general than that of existing algorithms, which tend to be classification oriented. Initially we focus on the problem of determining a quantitative, well-defined rule preference measure. In particular, we propose a quantity called the J-measure as an information theoretic alternative to existing approaches. The J-measure quantifies the information content of a rule or a hypothesis. We outline the information theoretic origins of this measure and examine its plausibility as a hypothesis preference measure. We then define the ITRULE algorithm, which uses the newly proposed measure to learn a set of optimal rules from a set of data samples, and we conclude the paper with an analysis of experimental results on real-world data.
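
    Writing a rule as "if y then x" (the notation used in the companion abstract below), the J-measure factors a rule's value into how often the antecedent fires, p(y), and how far the consequent's distribution shifts when it does. A sketch of the form commonly given for the J-measure; verify the details against the paper.

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """J(X; Y=y) = p(y) * [ p(x|y) log2(p(x|y)/p(x))
                          + (1-p(x|y)) log2((1-p(x|y))/(1-p(x))) ]"""
    def term(a, b):
        # Convention: 0 * log(0/b) = 0; assumes 0 < p_x < 1.
        return 0.0 if a == 0 else a * math.log2(a / b)
    return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

# A rule that fires on 30% of samples and raises p(x) from 0.5 to 0.9:
print(j_measure(p_y=0.3, p_x=0.5, p_x_given_y=0.9))
```

    The bracketed term is a cross-entropy between the consequent's distribution with and without the antecedent, so rules that fire rarely or change little about x score low.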

    Incremental Rule-based Learning

    In a system which learns to predict the value of an output variable given one or more input variables by looking at a set of examples, a rule-based knowledge representation provides not only a natural method of constructing a classifier, but also a human-readable explanation of what has been learned. Consider a rule of the form "if y then x", where y is a conjunction of values of input variables and x is a value of the output variable. The number of input variables in y is called the order of the rule. In previous work, a measure of the information content or "value" of such a rule has been developed (the J-measure). It has been shown in [3] that a classifier built from the rules obtained by a constrained search of all possible rules performs comparably with other classifiers.
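
    A minimal sketch of turning a ranked rule list into a classifier (not necessarily the exact procedure of [3]): rules are tried in order of value and the first whose antecedent matches supplies the prediction. The rule representation and default class here are illustrative.

```python
def classify(sample, ranked_rules, default):
    """sample: dict of input variables; ranked_rules: [(antecedent_dict, prediction)]."""
    for antecedent, prediction in ranked_rules:
        # The antecedent is a conjunction; its length is the rule's order.
        if all(sample.get(var) == val for var, val in antecedent.items()):
            return prediction
    return default

rules = [({"colour": "red", "size": "big"}, "class_a"),   # order-2 rule
         ({"size": "small"}, "class_b")]                  # order-1 rule
print(classify({"colour": "red", "size": "big"}, rules, default="class_b"))
```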