
    New Views for Stochastic Computing: From Time-Encoding to Deterministic Processing

    University of Minnesota Ph.D. dissertation, July 2018. Major: Electrical/Computer Engineering. Advisor: David Lilja. 1 computer file (PDF); xi, 149 pages.

    Stochastic computing (SC), a paradigm first introduced in the 1960s, has received considerable attention in recent years as a potential paradigm for emerging technologies and "post-CMOS" computing. Logical computation is performed on random bit-streams where the signal value is encoded by the probability of obtaining a one versus a zero. This unconventional representation of data offers some intriguing advantages over conventional weighted binary. Implementing complex functions with simple hardware (e.g., multiplication using a single AND gate), tolerating soft errors (i.e., bit flips), and progressive precision are the primary advantages of SC. The obvious disadvantage, however, is latency: a stochastic representation is exponentially longer than conventional binary radix, and long latencies translate into high energy consumption, often higher than that of a binary counterpart. Generating bit-streams is also costly; factoring in the cost of the bit-stream generators, the overall hardware cost of an SC implementation is often comparable to a conventional binary implementation. This dissertation begins by proposing a highly unorthodox idea: performing computation with digital constructs on time-encoded analog signals. We introduce a new, energy-efficient, high-performance, and much less costly approach to SC using time-encoded pulse signals. We explore the design and implementation of arithmetic operations on time-encoded data and discuss the advantages, challenges, and potential applications. Experimental results on image processing applications show up to 99% performance speedup, 98% savings in energy dissipation, and 40% area reduction compared to prior stochastic implementations. We further introduce a low-cost approach for synthesizing sorting network circuits based on deterministic unary bit-streams; synthesis results show more than 90% area and power savings compared to the cost of a conventional binary implementation. Time-based encoding of data is then exploited for fast and energy-efficient processing with the developed sorting circuits. Poor progressive precision is the main challenge with the recently developed deterministic methods of SC. We propose a high-quality down-sampling method which significantly improves the processing time and energy consumption of these deterministic methods by pseudo-randomizing bit-streams. We also propose two novel deterministic methods of processing bit-streams using low-discrepancy sequences. We further introduce a new advantage of the SC paradigm: the skew tolerance of SC circuits. We exploit this advantage in developing polysynchronous clocking, a design strategy for optimizing the clock distribution network of SC systems. Finally, in what is, to the best of our knowledge, the first study of its kind, we rethink the memory system design for SC. We propose a seamless stochastic system, StochMem, which features analog memory to trade the energy and area overhead of data conversion for computation accuracy.
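    To make the bit-stream encoding concrete, here is a minimal sketch (in Python, not taken from the dissertation) of the stochastic multiplication mentioned above: two values in [0, 1] are encoded as random bit-streams whose probability of a 1 equals the value, and a bitwise AND of two independent streams yields a stream whose density approximates the product. The stream length, seeds, and helper names are illustrative assumptions.

```python
import random

def encode(value, length, rng):
    """Encode a value in [0, 1] as a random bit-stream:
    each bit is 1 with probability equal to the value."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

def stochastic_multiply(a, b, length=4096, seed=0):
    """Multiply two values in [0, 1] using one AND per bit pair.
    Independent streams are required, hence two differently seeded RNGs."""
    rng_a, rng_b = random.Random(seed), random.Random(seed + 1)
    sa, sb = encode(a, length, rng_a), encode(b, length, rng_b)
    product_stream = [x & y for x, y in zip(sa, sb)]
    return decode(product_stream)

print(stochastic_multiply(0.5, 0.8))  # ~0.40; accuracy improves with stream length
```

    The sketch also illustrates the latency drawback and the progressive-precision property: a prefix of the stream already approximates the product, and precision grows only as the stream gets longer.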

    A 28-nm CMOS 1 V 3.5 GS/s 6-bit DAC with signal-independent delta-I noise DfT scheme

    This paper presents a 3.5 GS/s 6-bit current-steering DAC with auxiliary circuitry to assist testing, implemented in a 1 V digital 28 nm CMOS process. The DAC uses only thin-oxide transistors and occupies 0.035 mm², making it suitable for embedding in VLSI systems, e.g., FPGAs. To cope with IC process variability, a unit-element approach is employed: the 3 MSBs are implemented as 7 unary D/A cells and the 3 LSBs as 3 binary D/A cells using an appropriately reduced number of unit elements. Furthermore, all digital gates are built from only two basic unit blocks: a buffer and a multiplexer. For testing, a 5 kbit memory block is placed on-chip, loaded externally in a serial way but read internally in an 8x time-interleaved way. The memory is organized around 48 clocked 104-bit shift registers. It keeps the resulting switching disturbances signal-independent and hence avoids inducing output non-linearity errors, even when a common power supply is shared with the DAC. This novelty allows reliable testing of the DAC core while avoiding the performance-limitation risks of handling high-speed off-chip data streams. The DAC SFDR > 40 dB bandwidth is 0.8 GHz, while the IM
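    As an illustration of the 3+3 segmentation described above, the sketch below (a hypothetical Python model, not from the paper) decodes a 6-bit code into 7 thermometer-coded unary cells for the MSBs and 3 binary-weighted cells for the LSBs, then sums the contribution in unit-element currents; the function names and the unit-count convention are assumptions.

```python
def decode_6bit_code(code):
    """Split a 6-bit code into MSB thermometer bits and LSB binary bits.

    MSBs: values 0..7 drive 7 unary (thermometer) cells, each worth 8 units.
    LSBs: values 0..7 drive 3 binary cells worth 4, 2, and 1 unit(s).
    """
    assert 0 <= code < 64
    msb, lsb = code >> 3, code & 0b111
    thermometer = [1 if i < msb else 0 for i in range(7)]   # 7 unary cells
    binary = [(lsb >> b) & 1 for b in (2, 1, 0)]            # 3 binary cells
    return thermometer, binary

def dac_output_units(code):
    """Ideal output expressed in unit-element currents (0..63 units full scale)."""
    thermometer, binary = decode_6bit_code(code)
    units = sum(thermometer) * 8
    units += binary[0] * 4 + binary[1] * 2 + binary[2] * 1
    return units

assert dac_output_units(0b101_011) == 43  # 5 unary cells * 8 units + 3 LSB units
```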

    Pseudo-Boolean Constraint Encodings for Conjunctive Normal Form and their Applications

    In contrast to a single clause, a pseudo-Boolean (PB) constraint is much more expressive, and hence it is easier to define problems with the help of PB constraints. But while PB constraints provide a high-level problem description, it has been shown that solving them can often be done faster with the help of a SAT solver. To apply such a solver to a PB constraint, we have to encode it with clauses into conjunctive normal form (CNF). While a basic encoding into CNF that is equivalent to a given PB constraint is easy to find, the solving time of a SAT solver depends significantly on the properties of the encoding, e.g., the number of clauses, or whether generalized arc consistency (GAC) is maintained during the search for a solution. There are various PB encodings that try to optimize or balance these properties. This thesis is about such encodings. For a better understanding of the research field, an overview of the state-of-the-art encodings is given. The focus of the overview is a simple but complete description of each encoding, such that any reader can use, implement, and extend them in their own work. In addition, two novel encodings are presented: the Sequential Weight Counter (SWC) encoding and the Binary Merger encoding. While the SWC encoding has a very simple structure – it can be listed in four lines – empirical evaluation showed its practical usefulness in various applications. The Binary Merger encoding reduces the number of clauses a PB encoding needs while maintaining the important GAC property. To the best of our knowledge, no other encoding currently has a lower upper bound on the number of clauses produced by a GAC-preserving PB encoding. This is an important improvement of the state of the art, since both GAC and a low number of clauses are vital for an improved solving time of the SAT solver. The thesis also contributes to the development of new applications for PB constraint encodings. The programming library PBLib provides researchers with an open-source implementation of almost all PB encodings, including encodings for the special cases of at-most-one and cardinality constraints. PBLib is also the foundation of the presented weighted MaxSAT solver optimax, the PBO solver pbsolver, and the WBO, PBO, and weighted MaxSAT solver npSolver.
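    To illustrate what encoding a constraint into CNF means in practice, here is a minimal sketch (Python, not taken from the thesis or from PBLib) of the naive pairwise encoding of an at-most-one constraint, the simplest special case mentioned above; the DIMACS-style variable numbering and the helper name are assumptions.

```python
from itertools import combinations

def at_most_one_pairwise(variables):
    """Encode 'at most one of the given variables is true' into CNF clauses.

    Variables are positive integers (DIMACS style); a negative literal -v
    means 'not v'. For every pair (a, b) the clause (-a OR -b) forbids both
    being true, giving n*(n-1)/2 binary clauses and no auxiliary variables.
    """
    return [[-a, -b] for a, b in combinations(variables, 2)]

# At most one of x1, x2, x3 may be true:
print(at_most_one_pairwise([1, 2, 3]))
# [[-1, -2], [-1, -3], [-2, -3]]
```

    More sophisticated encodings, such as counter-based ones, introduce auxiliary variables to lower the clause count and to handle cardinality bounds and weights, which is exactly the trade-off space the encodings discussed in this thesis explore.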

    Rethinking Lipschitz Neural Networks and Certified Robustness: A Boolean Function Perspective

    Designing neural networks with bounded Lipschitz constant is a promising way to obtain certifiably robust classifiers against adversarial examples. However, the relevant progress for the important ℓ∞ perturbation setting is rather limited, and a principled understanding of how to design expressive ℓ∞ Lipschitz networks is still lacking. In this paper, we bridge the gap by studying certified ℓ∞ robustness from a novel perspective of representing Boolean functions. We derive two fundamental impossibility results that hold for any standard Lipschitz network: one for robust classification on finite datasets, and the other for Lipschitz function approximation. These results identify that networks built upon norm-bounded affine layers and Lipschitz activations intrinsically lose expressive power even in the two-dimensional case, and shed light on how recently proposed Lipschitz networks (e.g., GroupSort and ℓ∞-distance nets) bypass these impossibilities by leveraging order statistic functions. Finally, based on these insights, we develop a unified Lipschitz network that generalizes prior works, and design a practical version that can be efficiently trained (making certified robust training free). Extensive experiments show that our approach is scalable, efficient, and consistently yields better certified robustness across multiple datasets and perturbation radii than prior Lipschitz networks. Our code is available at https://github.com/zbh2047/SortNet. Comment: 37 pages; to appear in NeurIPS 2022 (Oral).
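    As a small illustration of the order-statistic building blocks mentioned above, the sketch below (plain Python/NumPy, not the authors' implementation) shows a GroupSort-style activation and an ℓ∞-distance unit, both of which are 1-Lipschitz with respect to the ℓ∞ norm; the group size, shapes, and function names are assumptions.

```python
import numpy as np

def group_sort(x, group_size=2):
    """GroupSort-style activation: sort values within fixed-size groups.
    Sorting only reorders coordinates and is 1-Lipschitz w.r.t. the l-inf norm."""
    x = np.asarray(x, dtype=float)
    return np.sort(x.reshape(-1, group_size), axis=1).reshape(x.shape)

def linf_distance_unit(x, w, b=0.0):
    """An l-infinity-distance unit: y = ||x - w||_inf + b.
    By the triangle inequality, x -> ||x - w||_inf is 1-Lipschitz in the l-inf norm."""
    return np.max(np.abs(np.asarray(x, dtype=float) - w)) + b

x = np.array([0.3, -1.2, 0.8, 0.1])
print(group_sort(x))                          # [-1.2  0.3  0.1  0.8]
print(linf_distance_unit(x, w=np.zeros(4)))   # 1.2
```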