
    Design and implementation of high-radix arithmetic systems based on the SDNR/RNS data representation

    This project involved the design and implementation of high-radix arithmetic systems based on the hybrid SDNR/RNS data representation. Some real-time applications require a real-time arithmetic system, and an SDNR/RNS arithmetic system provides parallel, real-time processing. The advantages and disadvantages of high-radix SDNR/RNS arithmetic, and the feasibility of implementing SDNR/RNS arithmetic systems in CMOS VLSI technology, were investigated in this project. A common methodological model, which included the stages of analysis, design, implementation, testing, and simulation, was followed. The combination of the SDNR and the RNS transforms potentially complex logic networks into simpler logic blocks. It was found that when constructing an SDNR/RNS adder, factors such as the radix, digit set, and moduli must be taken into account. There are many avenues still to explore. For example, implementing other arithmetic systems in the same CMOS VLSI technology used in this project and comparing them to equivalent SDNR/RNS systems would provide a set of benchmarks. These benchmarks would be useful in addressing issues relating to relative performance.
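
    As a rough illustration of the residue-number-system side of this scheme (a software sketch only, with hypothetical moduli, not the CMOS design described above), addition in an RNS proceeds independently in each modulus channel, so no carry propagates between channels; in the hybrid scheme each channel would additionally use a high-radix signed-digit (SDNR) encoding:

    from math import prod

    MODULI = (7, 11, 13, 15)  # hypothetical pairwise-coprime moduli

    def to_rns(x, moduli=MODULI):
        # encode an integer as its residues, one per channel
        return tuple(x % m for m in moduli)

    def rns_add(a, b, moduli=MODULI):
        # channel-wise modular addition: no carry crosses channel boundaries
        return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

    def from_rns(r, moduli=MODULI):
        # Chinese Remainder Theorem reconstruction of the integer value
        M = prod(moduli)
        return sum(ri * (M // m) * pow(M // m, -1, m) for ri, m in zip(r, moduli)) % M

    x, y = 123, 456
    assert from_rns(rns_add(to_rns(x), to_rns(y))) == (x + y) % prod(MODULI)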

    Low Power Reversible Parallel Binary Adder/Subtractor

    In recent years, reversible logic has become an increasingly prominent technology, with applications in low-power CMOS, quantum computing, nanotechnology, and optical computing. Reversibility plays an important role when energy-efficient computation is considered. In this paper, a reversible eight-bit parallel binary adder/subtractor is proposed in three variants: Design I, Design II, and Design III. In all three design approaches, the full adder and subtractor are realized in a single unit, as compared to only the full subtractor in the existing design. The performance analysis is verified using the number of reversible gates, garbage inputs/outputs, and quantum cost. It is observed that the reversible eight-bit parallel binary adder/subtractor with Design III is more efficient than Design I, Design II, and the existing design.
    Comment: 12 pages, VLSICS Journal
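
    The kind of building block behind such designs can be sketched in software: the following is a classical simulation of one reversible full-adder cell built from Toffoli (CCNOT) and CNOT gates. It is a textbook construction shown only for illustration, not the exact Design I/II/III circuits of the paper; the ancilla line and the transformed input lines are what the garbage-output and quantum-cost metrics count.

    def toffoli(a, b, c):
        # target c flips only when both controls a and b are 1
        return a, b, c ^ (a & b)

    def cnot(a, b):
        # target b flips when control a is 1
        return a, a ^ b

    def reversible_full_adder(a, b, cin):
        g = 0                            # ancilla line initialised to 0
        a, b, g = toffoli(a, b, g)       # g = a AND b
        a, b = cnot(a, b)                # b = a XOR b
        b, cin, g = toffoli(b, cin, g)   # g ^= (a XOR b) AND cin  -> carry out
        b, cin = cnot(b, cin)            # cin = a XOR b XOR cin   -> sum
        return cin, g                    # b now holds a XOR b (a garbage output)

    # exhaustive check against an ordinary (irreversible) full adder
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, cout = reversible_full_adder(a, b, c)
                assert s == a ^ b ^ c and cout == (a & b) | (b & c) | (a & c)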

    Logic Simulation: Statistics and Machine Design

    The high costs associated with logic simulation of large VLSI-based systems have led to the need for new computer architectures tailored to the simulation task. Such architectures have the potential for significant speed-ups over standard software-based logic simulators. Several commercial simulation engines have been produced to satisfy needs in this area. To properly explore the space of alternative simulation architectures, data is required on the simulation process itself. This paper presents a framework for such data-gathering activity and uses the data to estimate the maximum speed-up attainable with a particular type of special-purpose parallel/pipelined simulation machine. First, possible sources of speed-up in the logic simulation task are examined. Then, the sort of data needed in the design of simulation engines is discussed. Next, the data is presented and the implications for machine design are discussed. This data includes information on subtask times found in standard discrete-event simulation algorithms, event intensities, queue-length distributions, and simultaneous-event distributions. Finally, a simple performance model of one type of simulation machine is developed, and the maximum speed-up attainable with this type of machine is predicted.
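
    The discrete-event algorithm whose subtask times, event intensities, and queue lengths are measured here can be sketched in a few lines; the netlist, delays, and stimuli below are made up purely for illustration:

    import heapq

    GATES = {
        # gate: (function, input nets, propagation delay) -- hypothetical netlist
        "n1": (lambda a, b: 1 - (a & b), ("A", "B"), 2),   # NAND
        "n2": (lambda a, b: a ^ b,       ("n1", "C"), 1),  # XOR
    }
    FANOUT = {}
    for g, (_, ins, _) in GATES.items():
        for i in ins:
            FANOUT.setdefault(i, []).append(g)

    def simulate(stimuli, t_end=20):
        value = {net: 0 for net in ("A", "B", "C", "n1", "n2")}
        events = list(stimuli)              # (time, net, new value) for primary inputs
        heapq.heapify(events)
        while events and events[0][0] <= t_end:
            t, net, v = heapq.heappop(events)
            if v is None:                   # gate re-evaluation event
                fn, ins, _ = GATES[net]
                v = fn(*(value[i] for i in ins))
            if value[net] == v:
                continue                    # no value change -> no further activity
            value[net] = v
            for g in FANOUT.get(net, []):   # schedule fanout gates after their delays
                heapq.heappush(events, (t + GATES[g][2], g, None))
        return value

    print(simulate([(0, "A", 1), (0, "B", 1), (5, "C", 1)]))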

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial, clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems
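
    As a minimal software caricature of the simplified neuron models such platforms implement in silicon (a sketch with arbitrary parameters, not any specific chip's circuit), a discrete-time leaky integrate-and-fire neuron integrates its input, fires when a threshold is crossed, and resets:

    import numpy as np

    def lif(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
        v, spikes = 0.0, []
        for i in input_current:
            v += dt / tau * (-v + i)   # leaky integration of the input current
            if v >= v_th:              # threshold crossing -> emit a spike
                spikes.append(1)
                v = v_reset            # reset the membrane potential
            else:
                spikes.append(0)
        return np.array(spikes)

    spike_train = lif(np.full(200, 1.5))   # 200 ms of constant supra-threshold drive
    print(int(spike_train.sum()), "spikes in 200 ms")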

    Parallel VLSI architecture emulation and the organization of APSA/MPP

    The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a Vax. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms
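
    The idea of embedding a binary tree into a square processor array can be illustrated with a plain (unskewed) H-tree placement; this is only a sketch of the layout principle, not the skewed variant or the MPP mapping used in the project:

    def h_tree(depth):
        """Place a complete binary tree of the given depth on a 2-D grid."""
        pos = {}

        def place(node, x, y, off, axis, level):
            pos[node] = (x, y)
            if level == 0:
                return
            dx, dy = (off, 0) if axis == "h" else (0, off)
            nxt = off // 2 if axis == "v" else off   # halve after each full "H"
            for child, sign in ((2 * node, -1), (2 * node + 1, +1)):
                place(child, x + sign * dx, y + sign * dy, nxt,
                      "v" if axis == "h" else "h", level - 1)

        place(1, 0, 0, 2 ** ((depth + 1) // 2), "h", depth)
        return pos

    layout = h_tree(6)
    # every tree node lands on a distinct grid cell
    assert len(set(layout.values())) == len(layout)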