
    Custom Integrated Circuits

    Contains table of contents for Part III, table of contents for Section 1, and reports on eleven research projects. Sponsors: IBM Corporation; MIT School of Engineering; National Science Foundation Grant MIP 94-23221; Defense Advanced Research Projects Agency/U.S. Army Intelligence Center Contract DABT63-94-C-0053; Mitsubishi Corporation; National Science Foundation Young Investigator Award Fellowship MIP 92-58376; Joint Industry Program on Offshore Structure Analysis; Analog Devices; Defense Advanced Research Projects Agency; Cadence Design Systems; MAFET Consortium; Consortium for Superconducting Electronics; National Defense Science and Engineering Graduate Fellowship; Digital Equipment Corporation; MIT Lincoln Laboratory; Semiconductor Research Corporation; Multiuniversity Research Initiative; National Science Foundation.

    Advanced Algorithms for VLSI: Statistical Circuit Optimization and Cyclic Circuit Analysis

    This work focuses on two emerging fields in VLSI. The first is the use of statistical formulations to tackle one of the classical problems in VLSI design and analysis, namely gate sizing. The second is the analysis of nontraditional digital systems in the form of cyclic combinational circuits. In the first part, a new approach for enhancing the process-variation tolerance of digital circuits is described. We extend recent advances in statistical timing analysis into an optimization framework. Our objective is to reduce the performance variance of a technology-mapped circuit in which element delays are represented by random variables that capture manufacturing variations. We introduce the notion of statistical critical paths, which account for both the means and the variances of performance variation. An optimization engine is used to size gates with the goal of reducing the timing variance along the statistical critical paths. Circuit optimization is carried out using a gain-based gate sizing algorithm that terminates when constraints are satisfied or no further improvement can be made. Optimization results demonstrate an average 72% reduction in performance variation at the expense of an average 20% increase in design area. In the second part, we tackle the problem of analyzing cyclic circuits. Compiling high-level hardware languages can produce circuits containing combinational cycles that can never be sensitized. Such circuits have well-defined functional behavior but wreak havoc with most tools, which assume acyclic combinational logic; as a result, some sort of cycle-removal step is usually necessary. We present an algorithm that quickly and exactly characterizes all combinational behavior of a cyclic circuit. It uses a combination of explicit and implicit methods to compute input patterns that make the circuit behave combinationally. The result can be used to restructure the circuit into an acyclic equivalent, to report errors, or as an optimization aid. Experiments show our algorithm runs several orders of magnitude faster than existing ones on real-life cyclic circuits, making it useful in practice.
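
    The statistical-critical-path idea can be illustrated with a minimal sketch (not the paper's algorithm or data): assuming independent Gaussian gate delays, a path's mean and variance are sums of per-gate means and variances, and paths are ranked by a mean + k*sigma score so that high-variance paths surface as statistically critical. The gate library and paths below are hypothetical.

# Minimal sketch: rank "statistical critical paths" by a mean + k*sigma metric,
# assuming independent Gaussian gate delays so path mean/variance are simple sums.
import math

# Hypothetical per-gate delay statistics: (mean_ps, std_ps)
GATE_DELAYS = {
    "inv_x1": (12.0, 2.5),
    "nand2_x1": (18.0, 3.0),
    "nor2_x2": (15.0, 2.0),
}

def path_stats(gates):
    """Sum means and variances along a path of independent gate delays."""
    mean = sum(GATE_DELAYS[g][0] for g in gates)
    var = sum(GATE_DELAYS[g][1] ** 2 for g in gates)
    return mean, var

def statistical_criticality(gates, k=3.0):
    """Score a path by mean + k*sigma, so high-variance paths rank higher
    than a purely nominal (mean-only) timing analysis would suggest."""
    mean, var = path_stats(gates)
    return mean + k * math.sqrt(var)

paths = {
    "p1": ["inv_x1", "nand2_x1", "nand2_x1"],
    "p2": ["nor2_x2", "nor2_x2", "inv_x1", "inv_x1"],
}
for name, gates in sorted(paths.items(),
                          key=lambda kv: -statistical_criticality(kv[1])):
    m, v = path_stats(gates)
    print(f"{name}: mean={m:.1f} ps, sigma={math.sqrt(v):.1f} ps, "
          f"score={statistical_criticality(gates):.1f} ps")

    A sizing engine would then spend area on the gates of the highest-scoring paths first, which is the intuition behind targeting variance rather than only nominal delay.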

    EDEN: A high-performance, general-purpose, NeuroML-based neural simulator

    Modern neuroscience employs in silico experimentation on ever-larger and more detailed neural networks. The high modelling detail goes hand in hand with the need for high model reproducibility, reusability and transparency. In addition, the size of the models and the long timescales under study mandate the use of a simulation system with high computational performance, so as to provide an acceptable time to result. In this work, we present EDEN (Extensible Dynamics Engine for Networks), a new general-purpose, NeuroML-based neural simulator that achieves both high model flexibility and high computational performance through an innovative model-analysis and code-generation technique. The simulator runs NeuroML v2 models directly, eliminating the need for users to learn yet another simulator-specific model-specification language. EDEN's functional correctness and computational performance were assessed through NeuroML models available on the NeuroML-DB and Open Source Brain model repositories. In qualitative experiments, the results produced by EDEN were verified against the established NEURON simulator for a wide range of models. At the same time, computational-performance benchmarks reveal that EDEN runs up to two orders of magnitude faster than NEURON on a typical desktop computer, and does so without additional effort from the user. Finally, and without added user effort, EDEN has been built from scratch to scale seamlessly over multiple CPUs and across computer clusters, when available.
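
    The cross-simulator verification described above can be illustrated with a small, hedged sketch: given membrane-potential traces exported from two simulators, resample one onto the other's time grid and compare RMS voltage error and spike counts. The file names, CSV layout, and spike threshold here are assumptions for illustration, not part of EDEN's or NEURON's actual tooling.

# Minimal sketch: compare two simulators' voltage traces for the same model,
# assuming each was dumped to CSV with columns [time_ms, v_mV].
import numpy as np

ref = np.loadtxt("neuron_v.csv", delimiter=",")   # reference simulator trace
test = np.loadtxt("eden_v.csv", delimiter=",")    # trace under test

# Resample the test trace onto the reference time grid before comparing.
v_test = np.interp(ref[:, 0], test[:, 0], test[:, 1])
rms_mV = np.sqrt(np.mean((v_test - ref[:, 1]) ** 2))

def spike_times(t, v, thresh_mV=0.0):
    """Upward threshold crossings as a crude spike detector."""
    above = v >= thresh_mV
    return t[1:][above[1:] & ~above[:-1]]

spike_delta = spike_times(ref[:, 0], ref[:, 1]).size - \
              spike_times(ref[:, 0], v_test).size

print(f"RMS voltage difference: {rms_mV:.3f} mV, spike-count delta: {spike_delta}")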

    VLSI signal processing through bit-serial architectures and silicon compilation


    Bridging the gap : an optimization-based framework for fast, simultaneous circuit & system design space exploration

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 107-110).

    Design of modern mixed-signal integrated circuits is becoming increasingly difficult. Continued MOSFET scaling is approaching global power-dissipation limits while increasing transistor variability, requiring careful allocation of power and area resources to achieve ever more aggressive performance specifications. In this tightly constrained environment the traditional iterative system-to-circuit redesign loop is becoming inefficient. With complex system architectures and circuit specifications approaching the technological limits of the process employed, designers have less margin to absorb the overhead of strict system and circuit design interdependencies. Severely constrained modern mixed-signal IC designs can take many iterations to converge in such a flow, an expensive and time-consuming process. The situation is particularly acute in high-speed links. As an important building block of many systems (high-speed I/O, on-chip communication, ...), their power efficiency and area footprint are of utmost importance, and their design is challenging in both the system and circuit domains: system architectures are becoming increasingly complex to deliver the necessary performance, while circuit implementation of these increasingly complicated systems is difficult under tight power and area budgets.

    To bridge this gap between system and circuit design, we formulate an optimization-driven, circuit-to-system framework. It is an equation-based description, guided by a human designer. Given an equation-based model, we use fast optimization tools to quickly scout the available design space. The presence of a designer in the flow is an invaluable resource, enabling significant savings by simplifying the models to capture only the relevant information and constraining the search to regions where meaningful solutions can be expected. The computational overhead that plagues simulation-based design-space exploration and optimization is thereby greatly reduced. The flow is powered by a signomial optimization engine. The key challenge is to bring, from the modeling point of view, very different problems such as circuit design and system design into the realm of an optimization engine that can solve them jointly, breaking the redesign loop or at least shortening it. Relying on signomial programming is necessary in order to accurately model the phenomena that arise in electrical circuits and at the system level: for example, the regions of operation of transistors under given bias conditions cannot be modeled accurately with simpler classes of equations, and computing the effect of filtering on a signal likewise requires the ability to handle signomial equations. Signomial programming is thus necessary yet not fully explored, and finding a suitable formulation takes some experimentation, as this thesis shows. Signomial programming, as a general non-convex optimization problem, is still an active research area: most solutions proposed so far involve local convexification combined with branch-and-bound search, most non-convex problems are solved for one particular system of equations, and no general methodology that is both reliable and efficient is known.

    Thus, a large part of the work presented in this thesis details how to construct a system formulation that the optimization engine can solve efficiently and reliably. We tested different formulations and measured their performance in terms of parsing and solving speed and accuracy. From these tests we motivate and explain how a series of transformations we introduce improves our formulation, arriving at a well-behaved and reliable form. We show how to apply the design flow to high-speed link design. By restructuring the traditional design flow we derive system and circuit abstractions; these sub-problems are interfaced through a set of well-defined interface variables, enabling code-level separation of the problem descriptions and thus a modular, readable, and maintainable system and circuit model. Finally, we develop a set of scripts to automate the formulation of a parametrized system-level description and explain how our transformations influence the speed of this process as well as the size of the model produced.

    by Ranko Sredojević. S.M.
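
    The equation-based sizing idea behind this flow can be made concrete with a small, hedged sketch: a three-stage inverter chain sized under an area budget using a logical-effort-style delay model, posed as an ordinary geometric program and solved with cvxpy's GP mode. The numbers and delay model are illustrative stand-ins for the thesis's richer signomial formulation, not its actual models.

# Minimal sketch: equation-based gate sizing as a geometric program.
import cvxpy as cp

C_load = 50.0   # normalized load capacitance driven by the last stage
A_max = 12.0    # area budget for the two free stages (input stage fixed at size 1)

x = cp.Variable(2, pos=True)          # sizes of the two free stages
# Logical-effort-style stage delays: each stage's delay ~ the fanout it drives.
delay = x[0] / 1.0 + x[1] / x[0] + C_load / x[1]
constraints = [cp.sum(x) <= A_max, x >= 1.0]

prob = cp.Problem(cp.Minimize(delay), constraints)
prob.solve(gp=True)                   # geometric-programming (DGP) mode
print("stage sizes:", x.value, "minimum delay:", prob.value)

    Because the objective and constraints are posynomials, the solver returns a globally optimal sizing in milliseconds; the thesis's signomial models trade this convexity guarantee for the ability to capture effects such as transistor operating regions and filtering.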

    NASA SERC 1990 Symposium on VLSI Design

    This document contains papers presented at the first annual NASA Symposium on VLSI Design. NASA's involvement in this event demonstrates a need for research and development in high-performance computing, which addresses problems faced by the scientific and industrial communities. High-performance computing is needed in: (1) real-time manipulation of large data sets; (2) advanced systems control of spacecraft; (3) digital data transmission, error correction, and image compression; and (4) expert system control of spacecraft. Clearly, a valuable technology in meeting these needs is Very Large Scale Integration (VLSI). This conference addresses the following issues in VLSI design: (1) system architectures; (2) electronics; (3) algorithms; and (4) CAD tools.

    NASA Tech Briefs, August 2006

    Topics covered include: Measurement and Controls Data Acquisition System; IMU/GPS System Provides Position and Attitude Data; Using Artificial Intelligence to Inform Pilots of Weather; Fast Lossless Compression of Multispectral-Image Data; Developing Signal-Pattern-Recognition Programs; Implementing Access to Data Distributed on Many Processors; Compact, Efficient Drive Circuit for a Piezoelectric Pump; Dual Common Planes for Time Multiplexing of Dual-Color QWIPs; MMIC Power Amplifier Puts Out 40 mW From 75 to 110 GHz; 2D/3D Visual Tracker for Rover Mast; Adding Hierarchical Objects to Relational Database; General-Purpose XML-Based Information Management; Vaporizable Scaffolds for Fabricating Thermoelectric Modules; Producing Quantum Dots by Spray Pyrolysis; Mobile Robot for Exploring Cold Liquid/Solid Environments; System Would Acquire Core and Powder Samples of Rocks; Improved Fabrication of Lithium Films Having Micron Features; Manufacture of Regularly Shaped Sol-Gel Pellets; Regulating Glucose and pH, and Monitoring Oxygen in a Bioreactor; Satellite Multiangle Spectropolarimetric Imaging of Aerosols; Interferometric System for Measuring Thickness of Sea Ice; Microscale Regenerative Heat Exchanger; Protocols for Handling Messages Between Simulation Computers; Statistical Detection of Atypical Aircraft Flights; NASA's Aviation Safety and Modeling Project; Multimode-Guided-Wave Ultrasonic Scanning of Materials; Algorithms for Maneuvering Spacecraft Around Small Bodies; Improved Solar-Radiation-Pressure Models for GPS Satellites; Measuring Attitude of a Large, Flexible, Orbiting Structure.

    Learning Approaches to Analog and Mixed Signal Verification and Analysis

    The increased integration and interaction of analog and digital components within a system has amplified the need for a fast, automated, combined analog and digital verification methodology. Many automated characterization, test, and verification methods are used in practice for digital circuits, but analog and mixed-signal circuits suffer from long simulation times brought on by transistor-level analysis. Because of the substantial number of simulations required to properly characterize and verify an analog circuit, many undetected issues manifest themselves in the manufactured chips. Creating behavioral models, circuit abstractions of analog components, reduces simulation time and allows faster exploration of the design space. Traditionally, creating behavioral models for nonlinear circuits is a manual process that relies heavily on design knowledge for proper parameter extraction and circuit abstraction. Manual modeling requires a high level of circuit knowledge and often fails to capture critical effects stemming from block interactions and second-order device effects. For this reason, it is of interest to extract the models directly from the SPICE-level descriptions so that these effects and interactions can be properly captured. As devices are scaled, process variations have a more profound effect on circuit behavior and performance, and creating behavioral models from SPICE-level descriptions that include input parameters and a large process-variation space is a non-trivial task.

    In this dissertation, we address various problems related to the design automation of analog and mixed-signal circuits. Analog circuits are typically highly specialized and fine-tuned to the desired specifications of a given system, reducing the reusability of circuits from design to design and hindering the automation of analog design, test, and layout. At the core of most automation techniques, simulations or data collection are required; unfortunately, for some complex analog circuits a single simulation may take many days, prohibiting any kind of behavior characterization or verification. This leads to the first fundamental problem in the automation of analog devices: how can we reduce the simulation cost while maintaining the robustness of transistor-level simulation? Since analog circuits vary vastly from one design to the next and are hardly ever composed of standard, library-based building blocks, the second fundamental question is how to create automated processes general enough to apply to all or most circuit types. Finally, what circuit characteristics can we exploit to enhance the automation procedures? The objective of this dissertation is to explore these questions and provide evidence that they can be answered.

    We begin by exploring machine-learning techniques to model the design space with minimal simulation effort. Circuit partitioning is employed to reduce the complexity of the machine-learning algorithms. Using the same partitioning algorithm, we further explore the behavior characterization of analog circuits under process variation; the partitioning is general enough to be applied to any CMOS-based analog circuit. The insights gained from behavioral modeling during behavior characterization are used to improve simulation through event propagation, input-space search, and complexity and information measurements. The reduction of the input space and the behavioral modeling of low-complexity, low-information primitive elements reduce the simulation time of large analog and mixed-signal circuits by 50-75%. The method is then extended to assist in analyzing analog circuit layout.

    All of the proposed methods are implemented on analog circuits ranging from small benchmark circuits to large, highly complex and specialized designs. The proposed dependency-based partitioning of large analog circuits in the time domain allows fast identification of highly sensitive transistors and provides a natural division of circuit components. Modeling analog circuits in the time domain with this partitioning technique and SVM learning algorithms allows very fast transient-behavior prediction, three orders of magnitude faster than traditional simulators, while maintaining 95% accuracy. Analog verification is explored through a reduction of simulation time using the partitions, the information and complexity measures, and input-space reduction. Behavioral models are created using supervised-learning techniques for the detected primitive elements; we show the effectiveness of the method on four analog circuits where the simulation time is decreased by 55-75%. Using the reduced-simulation method, critical nodes can be found quickly and efficiently; the nodes found match those identified by an experienced layout engineer, but are detected automatically given the design and input specifications. The technique is further extended to find the tolerance of transistors to both process variation and power-supply fluctuation, information that guides corrections in layout overdesign or the placement of noise-reducing components such as guard rings or decoupling capacitors. The proposed approaches significantly reduce the simulation time required to perform these tasks, maintain high accuracy, and can be automated.
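
    To make the SVM-based behavioral-modeling step concrete, here is a minimal, hedged sketch (the CSV file, its columns, and the hyperparameters are illustrative assumptions, not the dissertation's setup): an RBF-kernel support-vector regressor is trained on transient samples of one partitioned block to map present and previous signal values to the block's output, and its held-out relative error is reported.

# Minimal sketch: learn a behavioral model of one partitioned analog block from
# transient samples, assuming a CSV with columns [vin, vin_prev, vout_prev, vout].
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

data = np.loadtxt("block_transient.csv", delimiter=",", skiprows=1)
X, y = data[:, :3], data[:, 3]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVR as the block's learned input->output behavioral model.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rel_err = np.mean(np.abs(pred - y_te) / (np.abs(y_te) + 1e-9))
print(f"held-out mean relative error: {100 * rel_err:.2f}%")

    Once trained, such a block model can stand in for its transistor-level netlist during long transient runs, which is where the reported simulation-time savings come from.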