
    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords and a classification are provided. In some cases the authors' own comments are added; the purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the survey, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
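
    As a generic illustration of what a decision table is (not an example drawn from the surveyed literature), the small Python sketch below maps combinations of condition outcomes to actions:

```python
# Minimal sketch of a decision table: tuples of condition outcomes map to actions.
# The conditions and actions here are illustrative, not taken from the survey.

# Conditions: (is_premium_customer, order_over_limit)
DECISION_TABLE = {
    (True,  True):  "manager_approval",
    (True,  False): "auto_approve",
    (False, True):  "reject",
    (False, False): "auto_approve",
}

def decide(is_premium_customer: bool, order_over_limit: bool) -> str:
    """Look up the action for a given combination of condition outcomes."""
    return DECISION_TABLE[(is_premium_customer, order_over_limit)]

if __name__ == "__main__":
    print(decide(True, True))    # manager_approval
    print(decide(False, False))  # auto_approve
```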

    Synthesis and Optimization of Reversible Circuits - A Survey

    Reversible logic circuits have been historically motivated by theoretical research in low-power electronics as well as practical improvement of bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms --- search-based, cycle-based, transformation-based, and BDD-based --- as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in synthesis of reversible and quantum logic, as well as most common misconceptions. (34 pages, 15 figures, 2 tables.)
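
    As a hedged illustration of the reversibility property these circuits are built on (not an algorithm from the survey itself), the Python sketch below applies a Toffoli (CCNOT) gate to a bit vector and checks that applying it twice restores the original input:

```python
# Sketch: a Toffoli (CCNOT) gate is its own inverse and a standard building block
# of reversible circuits. Wire indices and the test state are illustrative.

def toffoli(bits, c1, c2, target):
    """Flip `target` iff both control wires c1 and c2 are 1; returns a new tuple."""
    bits = list(bits)
    if bits[c1] == 1 and bits[c2] == 1:
        bits[target] ^= 1
    return tuple(bits)

state = (1, 1, 0)
once = toffoli(state, 0, 1, 2)     # (1, 1, 1)
twice = toffoli(once, 0, 1, 2)     # back to (1, 1, 0)
assert twice == state              # reversibility: the gate undoes itself
print(once, twice)
```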

    Near-Capacity Turbo Coded Soft-decision Aided DAPSK/Star-QAM

    Low-complexity non-coherently detected Differential Amplitude and Phase-Shift Keying (DAPSK) schemes constitute ideal candidates for wireless communications. In this paper, we derive the soft-output probability formulas required for the soft-decision-based demodulation of DAPSK, which are then invoked for Turbo Coded (TC) transmissions. Furthermore, the achievable throughput characteristics of the family of M-ary DAPSK schemes are provided. It is shown that the proposed 4-ring based TC-assisted 64-ary DAPSK scheme achieves a coding gain of about 4.2 dB over the identical-throughput TC-assisted 64-ary Differential Phase-Shift Keying (64-DPSK) scheme at a bit error ratio of 10^-5.
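
    The soft-output probability formulas themselves are derived in the paper; as a loose, hedged sketch of the general idea only, the snippet below computes max-log-approximated bit LLRs for plain 4-DPSK from the differential of two received samples (the Gray mapping, single-ring constellation, and noise-variance handling are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

# Hedged sketch: max-log bit LLRs for plain 4-DPSK (single amplitude ring),
# computed from the differential of two received samples. The Gray mapping and
# normalisation below are illustrative assumptions, not the paper's derivation.

GRAY_MAP = {(0, 0): 0.0, (0, 1): np.pi / 2, (1, 1): np.pi, (1, 0): 3 * np.pi / 2}

def dpsk_soft_bits(r_prev, r_curr, noise_var=0.1):
    """Return max-log LLRs (bit 0, bit 1) for one 4-DPSK symbol; positive favours 0."""
    diff = r_curr * np.conj(r_prev)                 # non-coherent differential detection
    llrs = []
    for bit_pos in range(2):
        best = {0: -np.inf, 1: -np.inf}
        for bits, phase in GRAY_MAP.items():
            candidate = abs(diff) * np.exp(1j * phase)
            metric = -abs(diff - candidate) ** 2 / noise_var
            best[bits[bit_pos]] = max(best[bits[bit_pos]], metric)
        llrs.append(best[0] - best[1])
    return llrs

r_prev = np.exp(1j * 0.1)                           # previous received sample
r_curr = r_prev * np.exp(1j * np.pi / 2)            # ideal +90 degree rotation
print(dpsk_soft_bits(r_prev, r_curr))               # strongly favours bits (0, 1)
```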

    The reliability of single-error protected computer memories

    The lifetimes of computer memories protected with single-error-correcting, double-error-detecting (SEC-DED) codes are studied. The authors assume five possible types of memory chip failure (single-cell, row, column, row-column and whole-chip) and, after making a simplifying Poisson assumption, substantiate it experimentally. A simple closed-form expression is derived for the system reliability function. Using this formula and chip reliability data taken from published tables, the mean time to failure of realistic memory systems can be computed.
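
    The closed-form expression itself is not reproduced in the abstract; as a hedged numerical sketch of the general approach, the snippet below estimates the mean time to failure of a SEC-DED-protected memory under a toy Poisson model in which only single-cell failures occur and a word fails once two of its bits have failed (the failure rate, word width and word count are made-up parameters, not the paper's chip data):

```python
import numpy as np

# Hedged sketch: mean time to failure of a SEC-DED protected memory under a toy
# Poisson model. Only single-cell failures are modelled, and the rate and sizes
# are made-up parameters, not the chip data or closed-form result of the paper.

LAMBDA = 1e-9          # per-bit failure rate in failures per hour (assumed)
BITS_PER_WORD = 72     # e.g. 64 data bits + 8 check bits per SEC-DED word
NUM_WORDS = 1_000_000

def system_reliability(t):
    """P(every word still has at most one failed bit at time t), so SEC corrects all."""
    p_ok = np.exp(-LAMBDA * t)                       # a given bit still works at time t
    r_word = (p_ok ** BITS_PER_WORD
              + BITS_PER_WORD * p_ok ** (BITS_PER_WORD - 1) * (1 - p_ok))
    return r_word ** NUM_WORDS

# MTTF is the integral of the reliability function over time (simple Riemann sum).
t = np.linspace(0.0, 2e5, 20_001)                    # hours; step of 10 hours
mttf_hours = np.sum(system_reliability(t)) * (t[1] - t[0])
print(f"estimated MTTF: {mttf_hours:.3e} hours")
```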

    Combining Error-Correcting Codes and Decision Diagrams for the Design of Fault-Tolerant Logic

    In modern logic circuits, fault-tolerance is increasingly important, since even atomic-scale imperfections can result in circuit failures as the size of the components shrinks. Therefore, in addition to existing techniques for providing fault-tolerance to logic circuits, it is important to develop new techniques for detecting and correcting possible errors resulting from faults in the circuitry. Error-correcting codes are typically used in data transmission for error detection and correction. Their theory is well developed, and linear codes, in particular, have many useful properties and fast decoding algorithms. The existing fault-tolerance techniques utilizing error-correcting codes require less redundancy than other error detection and correction schemes, and such techniques are usually implemented using special decoding circuits. Decision diagrams are an efficient graphical representation for logic functions, which, depending on the technology, directly determine the complexity and layout of the circuit. Therefore, they are easy to implement. In this thesis, error-correcting codes are combined with decision diagrams to obtain a new method for providing fault-tolerance in logic circuits. The resulting method of designing fault-tolerant logic, namely error-correcting decision diagrams, introduces redundancy already at the level of the representations of logic functions, and as a consequence no additional checker circuits are needed in the circuit layouts obtained with the new method. The purpose of the thesis is to introduce this original concept and provide fault-tolerance analysis for the obtained decision diagrams. The fault-tolerance analysis of error-correcting decision diagrams carried out in this thesis shows that the obtained robust diagrams have a significantly reduced probability of an incorrect output in comparison with non-redundant diagrams. However, such useful properties are not obtained without a cost, since adding redundancy also adds complexity, and consequently better error-correcting properties result in increased complexity in the circuit layout.
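
    As a loose, hedged illustration of the general idea of building redundancy into the function representation itself (not the error-correcting decision diagram construction from the thesis), the toy sketch below triplicates each input variable and majority-votes during evaluation, so a single flipped input bit is masked:

```python
# Toy sketch of redundancy inside the representation: each input variable is
# triplicated and majority-voted while evaluating the function. This is an
# illustrative stand-in, not the thesis's error-correcting decision diagrams.

def majority(a, b, c):
    return 1 if a + b + c >= 2 else 0

def evaluate_robust(truth_table, coded_inputs):
    """Evaluate a k-variable function given 3k repetition-coded input bits.

    `truth_table` maps tuples of k bits to an output bit; `coded_inputs`
    holds three copies of each variable, so a single flipped bit is masked.
    """
    k = len(coded_inputs) // 3
    decoded = tuple(majority(*coded_inputs[3 * i: 3 * i + 3]) for i in range(k))
    return truth_table[decoded]

# Example: 2-input AND. One copy of x0 is flipped by a fault; the output is still correct.
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
faulty_inputs = (1, 0, 1,    # x0 = 1, with one flipped copy
                 1, 1, 1)    # x1 = 1
print(evaluate_robust(AND, faulty_inputs))   # 1
```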

    Non-Parametric Calibration of Probabilistic Regression

    The task of calibration is to retrospectively adjust the outputs from a machine learning model to provide better probability estimates on the target variable. While calibration has been investigated thoroughly in classification, it has not yet been well established for regression tasks. This paper considers the problem of calibrating a probabilistic regression model to improve the estimated probability densities over the real-valued targets. We propose to calibrate a regression model through the cumulative probability density, which can be derived from calibrating a multi-class classifier. We provide three non-parametric approaches to solve the problem, two of which provide empirical estimates while the third provides smooth density estimates. The proposed approaches are experimentally evaluated to show their ability to improve the performance of regression models in terms of predictive likelihood.
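
    The paper's specific estimators are not described in the abstract; as a hedged sketch of one common non-parametric route in the same spirit, the snippet below recalibrates a deliberately overconfident Gaussian regressor through its cumulative distribution function by fitting isotonic regression to predicted CDF levels against their empirical frequencies (the model, the synthetic data and the isotonic choice are illustrative assumptions, not the paper's method):

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

# Hedged sketch: recalibrate a probabilistic regressor through its CDF.
# The Gaussian model, synthetic data and isotonic map are illustrative
# assumptions; the paper's own non-parametric estimators may differ.

rng = np.random.default_rng(0)

# Synthetic calibration set: the model is overconfident (understates the noise).
y_true = rng.normal(loc=0.0, scale=2.0, size=2000)
pred_mean, pred_std = 0.0, 1.0                      # overconfident model

# Probability integral transform: predicted CDF evaluated at the observed targets.
pit = norm.cdf(y_true, loc=pred_mean, scale=pred_std)

# Fit a monotone map from predicted CDF levels to observed empirical frequencies.
levels = np.sort(pit)
empirical = np.arange(1, len(levels) + 1) / len(levels)
recalibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
recalibrator.fit(levels, empirical)

# Calibrated CDF for a new prediction: compose the model CDF with the learned map.
def calibrated_cdf(y, mean=pred_mean, std=pred_std):
    return recalibrator.predict(norm.cdf(np.atleast_1d(y), loc=mean, scale=std))

print(calibrated_cdf([-2.0, 0.0, 2.0]))   # closer to (0.16, 0.5, 0.84) than the raw model
```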

    The potential of neuroeconomics

    The goal of neuroeconomics is a mathematical theory of how the brain implements decisions that is tied to behaviour. This theory is likely to show some decisions for which rational-choice theory is a good approximation (particularly for evolutionarily sculpted or highly learned choices), to provide a deeper level of distinction among competing behavioural alternatives, and to provide empirical inspiration for economics to incorporate more nuanced ideas about endogeneity of preferences, individual differences, emotions, endogenous regulation of states, and so forth. I also address some concerns about rhetoric and practical epistemology. Neuroscience articles are necessarily speculative, and the science has proceeded rapidly because of that rhetorical convention. Single-study papers are encouraged and are necessarily limited in what can be inferred, so the sturdiest cumulation of results, and the best guide forward, comes in review journals which compile results and suggest themes. The potential of neuroeconomics is in combining the clearest experimental paradigms and statistical methods in economics with the unprecedented capacity to measure a range of neural and cognitive activity that economists like Edgeworth, Fisher and Ramsey daydreamed about but did not have.