
    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches that have been introduced for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
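    To make the class of approach surveyed above concrete, here is a minimal, self-contained sketch (not a method from the paper) of a learning-based flow: a regression model trained on synthetic data to predict a path delay from simple layout features. The feature names, units, and data are hypothetical and serve only to illustrate how an automated learning algorithm can stand in for manual analysis of design data.

        # Toy sketch only: synthetic features and targets, hypothetical names.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_paths = 2000

        # Made-up per-path "design" features.
        fanout     = rng.integers(1, 16, n_paths)        # driver fan-out
        net_length = rng.uniform(1.0, 500.0, n_paths)    # routed length (um)
        congestion = rng.uniform(0.0, 1.0, n_paths)      # local routing congestion
        X = np.column_stack([fanout, net_length, congestion])

        # Synthetic "sign-off" delay in ps, standing in for real timing reports.
        y = 0.02 * net_length + 5.0 * fanout + 40.0 * congestion + rng.normal(0, 5, n_paths)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("mean absolute error (ps):", mean_absolute_error(y_test, model.predict(X_test)))

    In a real flow, the synthetic features would be replaced by data extracted from placement, routing, or sign-off tools at the relevant abstraction level.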

    VLSI Design

    This book presents some recent advances in the design of nanometer VLSI chips. The selected topics present open problems and challenges in important areas ranging from design tools, new post-silicon devices, and GPU-based parallel computing to emerging 3D integration and antenna design. The book consists of two parts, with chapters such as: VLSI design for multi-sensor smart systems on a chip, Three-dimensional integrated circuits design for thousand-core processors, Parallel symbolic analysis of large analog circuits on GPU platforms, Algorithms for CAD tools VLSI design, A multilevel memetic algorithm for large SAT-encoded problems, etc.

    Layout optimization in ultra deep submicron VLSI design

    As fabrication technology keeps advancing, many deep submicron (DSM) effects have become increasingly evident and can no longer be ignored in Very Large Scale Integration (VLSI) design. In this dissertation, we study several deep submicron problems (e.g., coupling capacitance, antenna effect, and delay variation) and propose optimization techniques to mitigate these DSM effects in the place-and-route stage of VLSI physical design. The place-and-route stage of physical design can be further divided into several steps: (1) Placement, (2) Global routing, (3) Layer assignment, (4) Track assignment, and (5) Detailed routing. Among them, layer/track assignment assigns the major trunks of wire segments to specific layers/tracks in order to guide the underlying detailed router. In this dissertation, we propose techniques to handle coupling capacitance at the layer/track assignment stage, the antenna effect at the layer assignment stage, and delay variation at the ECO (Engineering Change Order) placement stage, respectively. More specifically, at layer assignment, we propose an improved probabilistic model to quickly estimate the amount of coupling capacitance for timing optimization. Antenna effects are also handled at layer assignment through a linear-time tree partitioning algorithm. At the track assignment stage, timing is further optimized using a graph-based technique. In addition, we propose a novel gate-splitting methodology to reduce delay variation in ECO placement considering spatial correlations. Experimental results on benchmark circuits show the effectiveness of our approaches.
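    As a rough illustration of coupling-aware layer assignment (a simplified stand-in, not the improved probabilistic model proposed in the dissertation), the toy sketch below greedily places each wire trunk on the layer where it adds the least parallel overlap, using overlap length as a crude proxy for coupling capacitance. The trunk coordinates and layer count are hypothetical.

        # Toy sketch: greedy, coupling-aware layer assignment for 1D wire trunks.
        def overlap(a, b):
            """Parallel run length shared by two trunks, each given as (start, end)."""
            return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

        def assign_layers(trunks, num_layers):
            """Assign each trunk to the layer where it adds the least total overlap,
            i.e., the smallest crude estimate of added coupling capacitance."""
            layers = [[] for _ in range(num_layers)]
            result = [None] * len(trunks)
            # Longer trunks first: they dominate coupling, so place them while layers are empty.
            order = sorted(range(len(trunks)),
                           key=lambda i: trunks[i][1] - trunks[i][0], reverse=True)
            for i in order:
                costs = [sum(overlap(trunks[i], t) for t in layer) for layer in layers]
                best = min(range(num_layers), key=lambda k: costs[k])
                layers[best].append(trunks[i])
                result[i] = best
            return result

        # Example: four horizontal trunks assigned to two routing layers.
        print(assign_layers([(0, 10), (2, 8), (5, 15), (12, 20)], num_layers=2))  # [0, 1, 1, 0]

    A real assigner would additionally respect layer capacity, preferred routing directions, and timing criticality.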

    Regular cell design approach considering lithography-induced process variations

    The deployment delays of EUVL force IC design to continue using 193nm-wavelength lithography with innovative and costly techniques in order to faithfully print sub-wavelength features and combat lithography-induced process variations. The effect of the lithography gap in current and upcoming technologies is to cause severe distortions, due to optical diffraction, in the printed patterns, thus degrading manufacturing yield. Therefore, a paradigm shift in layout design towards more regular, litho-friendly cell designs is mandatory in order to improve line-pattern resolution. However, it is still unclear how much layout regularity should be introduced and how to measure the benefits and weaknesses of regular layouts. This dissertation focuses on finding the degree of layout regularity necessary to combat lithography variability and improve the layout quality of a design. The main contributions addressed to accomplish this objective are: (1) the definition of several layout design guidelines to mitigate lithography variability; (2) the proposal of a parametric yield estimation model to evaluate the lithography impact on layout design; (3) the development of a global Layout Quality Metric (LQM), including a Regularity Metric (RM), to capture the degree of layout regularity of a layout implementation; and (4) the creation of different layout architectures exploiting the benefits of layout regularity to improve line-pattern resolution, referred to as Adaptive Lithography Aware Regular Cell Designs (ALARCs). The first part of this thesis provides several regular layout design guidelines, derived from lithography simulations, that minimize several important lithography-related variation sources. Moreover, a design-level methodology, referred to as gate biasing, is proposed to overcome systematic layout-dependent variations, across-field variations, and the non-rectilinear gate (NRG) effect in regular fabrics by properly configuring the drawn transistor channel length. The second part of this dissertation proposes a lithography yield estimation model to predict the amount of lithography distortion expected in a printed layout due to lithography hotspots, using a reduced set of lithography simulations. An efficient lithography hotspot framework is presented that identifies the different layout pattern configurations, simplifies them to ease pattern analysis, and classifies them according to the lithography degradation predicted by lithography simulations. The yield model is calibrated with delay measurements of a reduced set of identical test circuits implemented in a 40nm CMOS technology, so actual silicon data is used to obtain a more realistic yield estimation. The third part of this thesis presents a configurable Layout Quality Metric (LQM) that considers several layout aspects and provides a global evaluation of a layout design with a single score. The LQM can be configured by assigning different weights to each evaluation metric or by modifying the parameters under analysis; here it is configured with two different sets of partial metrics. The LQM includes a regularity metric (RM) to capture the degree of layout regularity applied in a layout design. Lastly, this thesis presents different ALARC designs for a 40nm technology using different degrees of layout regularity and different area overheads.
The quality of the gridded regular templates is demonstrated by automatically creating a library of 266 cells, including combinational and sequential cells, and synthesizing several ITC'99 benchmark circuits. The regular cell libraries present only a 9% area penalty compared with the 2D standard cell designs used for comparison, thus providing area-competitive designs. The layout evaluation of the benchmark circuits using the LQM shows that regular layouts can outperform other 2D standard cell designs, depending on the layout implementation.
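    As a minimal sketch of how a configurable, weighted layout-quality score of the kind described above can be assembled (the partial metrics, weights, and values below are hypothetical, not the dissertation's calibration):

        # Toy sketch of a weighted layout-quality score; all numbers are hypothetical.
        def layout_quality(metrics, weights):
            """Combine per-aspect scores (each normalized to [0, 1]) into a single value."""
            assert set(metrics) == set(weights), "every partial metric needs a weight"
            total_weight = sum(weights.values())
            return sum(weights[name] * metrics[name] for name in metrics) / total_weight

        example_metrics = {
            "regularity":      0.85,  # RM: degree of layout regularity
            "litho_yield":     0.78,  # predicted parametric yield under lithography
            "area_efficiency": 0.91,  # inverse of area overhead vs. a 2D reference
        }
        example_weights = {"regularity": 2.0, "litho_yield": 2.0, "area_efficiency": 1.0}
        print(round(layout_quality(example_metrics, example_weights), 3))

    Reweighting the partial metrics, or swapping in a different set of them, changes how the single score trades regularity against yield and area.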

    Flash Memory Devices

    Flash memory devices have represented a breakthrough in storage since their inception in the mid-1980s, and innovation is still ongoing. A peculiarity of this technology is its inherent flexibility in terms of performance and integration density, depending on the architecture devised for integration. NOR Flash technology is still the workhorse of many code storage applications in the embedded world, ranging from microcontrollers for the automotive environment to IoT smart devices. Its usage is also forecast to be fundamental in emerging edge-AI scenarios. When massive data storage is required, on the other hand, NAND Flash memories are necessary. NAND Flash can be found in USB sticks and memory cards, but most of all in Solid-State Drives (SSDs). Since SSDs are extremely demanding in terms of storage capacity, they have fueled a new wave of innovation, namely the 3D architecture. Today “3D” means that multiple layers of memory cells are manufactured within the same piece of silicon, easily reaching a terabit capacity. So far, Flash architectures have always been based on the "floating gate," where the information is stored by injecting electrons into a piece of polysilicon surrounded by oxide. In contrast, emerging concepts are based on "charge trap" cells. In summary, flash memory devices represent the largest landscape of storage devices, and we expect more advancements in the coming years. These will require a lot of innovation in process technology, materials, circuit design, flash management algorithms, Error Correction Codes and, finally, system co-design for new applications such as AI and security enforcement.
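    As a minimal illustration of the error-correction idea mentioned above (production flash controllers typically use much stronger BCH or LDPC codes), the hypothetical sketch below encodes four data bits into a Hamming(7,4) codeword and corrects any single bit flip on read.

        # Toy sketch: Hamming(7,4) single-error correction, not a flash controller's ECC.
        def hamming74_encode(d):
            """d: four data bits. Returns the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_correct(c):
            """c: 7-bit codeword with at most one flipped bit. Returns the corrected data bits."""
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
            if syndrome:
                c = list(c)
                c[syndrome - 1] ^= 1
            return [c[2], c[4], c[5], c[6]]

        word = hamming74_encode([1, 0, 1, 1])
        word[4] ^= 1                          # simulate a single-bit error in storage
        print(hamming74_correct(word))        # [1, 0, 1, 1]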

    The 1992 4th NASA SERC Symposium on VLSI Design

    Get PDF
    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next-generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next-generation advances that will serve as a basis for future VLSI design.

    Dependable Embedded Systems

    Get PDF
    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It presents the most prominent reliability concerns from today’s point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book deals with the different reliability challenges across different levels, starting from the physical level all the way to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. The book provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can improve reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.