
    Combining Error-Correcting Codes and Decision Diagrams for the Design of Fault-Tolerant Logic

    In modern logic circuits, fault tolerance is increasingly important, since even atomic-scale imperfections can cause circuit failures as component sizes shrink. Therefore, in addition to existing techniques for providing fault tolerance in logic circuits, it is important to develop new techniques for detecting and correcting errors caused by faults in the circuitry. Error-correcting codes are typically used in data transmission for error detection and correction. Their theory is well developed, and linear codes in particular have many useful properties and fast decoding algorithms. Existing fault-tolerance techniques based on error-correcting codes require less redundancy than other error detection and correction schemes, and are usually implemented with special decoding circuits. Decision diagrams are an efficient graphical representation of logic functions which, depending on the technology, directly determines the complexity and layout of the circuit, making them easy to implement. In this thesis, error-correcting codes are combined with decision diagrams to obtain a new method for providing fault tolerance in logic circuits. The resulting method of designing fault-tolerant logic, error-correcting decision diagrams, introduces redundancy already at the level of the representation of the logic function; as a consequence, no additional checker circuits are needed in the circuit layouts obtained with the new method. The purpose of the thesis is to introduce this original concept and to provide a fault-tolerance analysis of the obtained decision diagrams. The analysis carried out in this thesis shows that the robust diagrams have a significantly reduced probability of incorrect output compared with non-redundant diagrams. These properties come at a cost, however: adding redundancy also adds complexity, so better error-correcting properties result in a more complex circuit layout.
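    The core idea above can be illustrated with a minimal sketch. The thesis embeds error-correcting codes into the diagram structure itself; this toy version instead uses the simplest error-correcting code, the 3-repetition code with majority vote, over independent evaluations of a truth-table-backed function (a stand-in for a decision diagram). All names here are illustrative, not from the thesis.

    ```python
    import random

    def eval_diagram(f_table, inputs):
        """Evaluate a truth-table-backed function (stand-in for a decision diagram)."""
        idx = 0
        for bit in inputs:
            idx = (idx << 1) | bit
        return f_table[idx]

    def faulty_eval(f_table, inputs, flip_prob):
        """Evaluation whose output bit may be flipped by a hardware fault."""
        out = eval_diagram(f_table, inputs)
        if random.random() < flip_prob:
            out ^= 1
        return out

    def robust_eval(f_table, inputs, flip_prob):
        """3-repetition-coded evaluation: three independent copies, majority vote.
        A single faulty copy is outvoted, mirroring the reduced error probability
        of the redundant diagrams (at triple the evaluation cost)."""
        votes = [faulty_eval(f_table, inputs, flip_prob) for _ in range(3)]
        return 1 if sum(votes) >= 2 else 0

    # XOR of two inputs as a truth table: f(0,0)=0, f(0,1)=1, f(1,0)=1, f(1,1)=0
    xor_table = [0, 1, 1, 0]
    ```

    With flip_prob = q per copy, a single evaluation fails with probability q, while the majority-voted version fails with probability 3q²(1-q) + q³, which is smaller for q < 1/2: the same redundancy-versus-reliability trade-off the abstract describes.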

    Synthesis and Optimization of Reversible Circuits - A Survey

    Reversible logic circuits have been historically motivated by theoretical research in low-power electronics as well as practical improvement of bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms --- search-based, cycle-based, transformation-based, and BDD-based --- as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in the synthesis of reversible and quantum logic, as well as the most common misconceptions. (34 pages, 15 figures, 2 tables)
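    A small sketch of what "reversible" means in practice: circuits built from Toffoli (CCNOT) gates, a standard universal reversible gate set. Each Toffoli gate is its own inverse, so running a circuit's gates in reverse order undoes it exactly. This is an illustrative example, not any specific synthesis algorithm from the survey.

    ```python
    def toffoli(bits, c1, c2, t):
        """Toffoli (CCNOT) gate: flip target bit t iff both control bits are 1."""
        if bits[c1] and bits[c2]:
            bits[t] ^= 1
        return bits

    def run_circuit(circuit, bits):
        """Apply a list of Toffoli gates (c1, c2, t) to a bit vector, in order."""
        bits = list(bits)
        for c1, c2, t in circuit:
            toffoli(bits, c1, c2, t)
        return bits

    # Reversibility: each gate is self-inverse, so the reversed gate list
    # is the inverse circuit and recovers the original input.
    circuit = [(0, 1, 2), (0, 2, 1)]
    out = run_circuit(circuit, [1, 1, 0])
    back = run_circuit(list(reversed(circuit)), out)
    ```

    No information is ever destroyed: the map from input to output bit vectors is a permutation, which is exactly the property that makes such circuits usable inside quantum algorithms.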

    EXIT charts for system design and analysis

    Near-capacity performance may be achieved with the aid of iterative decoding, where extrinsic soft information is exchanged between the constituent decoders in order to improve the attainable system performance. Extrinsic Information Transfer (EXIT) charts constitute a powerful semi-analytical tool for analysing and designing iteratively decoded systems. In this tutorial, we commence by providing a rudimentary overview of the iterative decoding principle and the concept of soft information exchange. We then elaborate on the concept of EXIT charts using three iteratively decoded prototype systems as design examples. We conclude by illustrating further applications of EXIT charts, including near-capacity designs, the concept of irregular codes and the design of modulation schemes.

    Quantum Approaches to Data Science and Data Analytics

    This thesis explores different research directions related both to the use of classical data analysis techniques for the study of quantum systems and to the employment of quantum computing to speed up hard machine learning tasks.

    Quasi-Perfect Lee Codes of Radius 2 and Arbitrarily Large Dimension

    A construction of two-quasi-perfect Lee codes is given over the space ℤ_p^n for p prime, p ≡ ±5 (mod 12), and n = 2[p/4]. It is known that there are infinitely many such primes. Golomb and Welch conjectured that perfect codes for the Lee metric do not exist for dimension n ≥ 3 and radius r ≥ 2. This conjecture has been proved true for large radii as well as for low dimensions. The codes found are very close to being perfect, which exhibits the hardness of the conjecture. A series of computations shows that the related graphs are Ramanujan, which could provide further connections between coding theory and graph theory.
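    For reference, the Lee metric on ℤ_p^n sums, coordinate by coordinate, the shorter of the two ways around the cycle ℤ_p. A minimal sketch (not part of the paper's construction):

    ```python
    def lee_distance(x, y, p):
        """Lee distance between two words over Z_p: in each coordinate,
        take the shorter way around the cycle Z_p, then sum."""
        return sum(min((a - b) % p, (b - a) % p) for a, b in zip(x, y))
    ```

    For example, over ℤ_7 the coordinates 0 and 6 are at Lee distance 1 (one step around the cycle), not 6. A perfect code of radius r would tile the space with disjoint Lee balls of radius r; "quasi-perfect" means the balls of radius r may miss some words, but balls of radius r + 1 cover everything.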

    Tree-structured multiclass probability estimators

    Nested dichotomies are a method for transforming a multiclass classification problem into a series of binary problems. A binary tree structure is constructed over the label space that recursively splits the set of classes into subsets, and a binary classification model learns to discriminate between the two subsets of classes at each node. Several distinct nested dichotomy structures can be combined in an ensemble for superior performance. In this thesis, we introduce two new methods for constructing more accurate nested dichotomies. Random-pair selection is a subset selection method that aims to group similar classes together in a non-deterministic fashion, making it easy to construct accurate ensembles. Multiple subset evaluation takes this, and other subset selection methods, further by evaluating several different splits and choosing the best-performing one. Finally, we also discuss the calibration of the probability estimates produced by nested dichotomies. We observe that nested dichotomies systematically produce under-confident predictions, even if the binary classifiers are well calibrated, and especially when the number of classes is high. Furthermore, substantial performance gains can be made when probability calibration methods are also applied to the internal models.
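    The tree construction and the way class probabilities are recovered from the binary models can be sketched as follows. This is an illustrative toy, not the thesis's code: `branch_prob` is a hypothetical stand-in for a trained binary classifier's probability of taking the left branch at a node, and the splits here are purely random (not random-pair selection).

    ```python
    import random

    def build_dichotomy(classes, rng):
        """Recursively split a class list into two random non-empty subsets,
        yielding a nested dichotomy: internal nodes are pairs, leaves are classes."""
        if len(classes) == 1:
            return classes[0]
        shuffled = rng.sample(classes, len(classes))
        k = rng.randint(1, len(classes) - 1)
        return (build_dichotomy(shuffled[:k], rng),
                build_dichotomy(shuffled[k:], rng))

    def class_probability(tree, target, branch_prob):
        """P(target) = product of the binary branch probabilities on the path
        from the root to the target's leaf (subtrees not containing the target
        contribute zero)."""
        if not isinstance(tree, tuple):
            return 1.0 if tree == target else 0.0
        p_left = branch_prob(tree)
        left, right = tree
        return (p_left * class_probability(left, target, branch_prob)
                + (1.0 - p_left) * class_probability(right, target, branch_prob))

    # A fixed dichotomy over 4 classes; a fresh random one would come from
    # build_dichotomy([0, 1, 2, 3], random.Random(seed)).
    tree = ((0, 1), (2, 3))
    ```

    With uninformative binary models (probability 0.5 everywhere), each of the four classes gets probability 0.25 and the estimates sum to one; in general, multiplying several well-calibrated binary probabilities along a path is what produces the systematic under-confidence the thesis analyses.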