11 research outputs found

    Linear constraint graph for floorplan optimization with soft blocks

    Abstract—In this paper, we propose the Linear Constraint Graph (LCG) as an efficient general floorplan representation. For n blocks, an LCG has at most 2n+3 vertices and at most 6n+2 edges. Operations with direct geometric meanings are developed to perturb LCGs. We apply LCGs to floorplan optimization with soft blocks to leverage their small graph sizes, which improves the efficiency of solving the complex mathematical program in the inner loop of the optimization that decides the block shapes without introducing overlaps into the non-slicing floorplans. Experimental results confirm that LCGs are effective and efficient.
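
    The graph sizes quoted above (at most 2n+3 vertices and 6n+2 edges for n blocks) are what make the representation cheap to evaluate. As a rough illustration of how a constraint graph encodes a floorplan, the sketch below builds a generic horizontal constraint graph and reads block x-coordinates off a longest-path computation; this is not the paper's LCG structure or its perturbation operations, and the block names, widths, and edges are made up.

```python
# Minimal sketch of evaluating a horizontal constraint graph for a floorplan.
# This is a generic constraint-graph evaluator, not the LCG of the paper;
# the block names, widths, and edges in the example are hypothetical.
from collections import defaultdict

class ConstraintGraph:
    def __init__(self):
        self.edges = defaultdict(list)   # u -> [(v, weight)]
        self.nodes = set()

    def add_edge(self, u, v, w):
        """Constraint: x[v] >= x[u] + w (e.g., w = width of block u)."""
        self.edges[u].append((v, w))
        self.nodes.update((u, v))

    def longest_paths(self, source):
        """Longest path from source in the DAG gives each block's x-coordinate."""
        order = self._topo_order()
        dist = {n: float("-inf") for n in self.nodes}
        dist[source] = 0
        for u in order:
            if dist[u] == float("-inf"):
                continue
            for v, w in self.edges[u]:
                dist[v] = max(dist[v], dist[u] + w)
        return dist

    def _topo_order(self):
        indeg = {n: 0 for n in self.nodes}
        for u in list(self.edges):
            for v, _ in self.edges[u]:
                indeg[v] += 1
        stack = [n for n in self.nodes if indeg[n] == 0]
        order = []
        while stack:
            u = stack.pop()
            order.append(u)
            for v, _ in self.edges[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
        return order

# Hypothetical 3-block example: source 's', blocks a/b/c with widths 4, 2, 3.
g = ConstraintGraph()
g.add_edge("s", "a", 0)
g.add_edge("s", "b", 0)
g.add_edge("a", "c", 4)   # c is to the right of a
g.add_edge("b", "c", 2)   # c is to the right of b
print(g.longest_paths("s"))   # x-coordinates: s=0, a=0, b=0, c=4 (dict order may vary)
```

    A vertical constraint graph would be evaluated the same way to obtain y-coordinates.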

    Floorplan-guided placement for large-scale mixed-size designs

    In the nanometer scale era, placement has become an extremely challenging stage in modern Very-Large-Scale Integration (VLSI) design. Millions of objects need to be placed legally within a chip region, while both the interconnections and the object distribution have to be optimized simultaneously. Due to the extensive use of Intellectual Property (IP) and embedded memory blocks, a design usually contains tens or even hundreds of big macros. A design with big movable macros and numerous standard cells is known as a mixed-size design. Due to the large size difference between macros and standard cells, the placement of mixed-size designs is much more difficult than standard-cell placement. This work presents an efficient and high-quality placement tool to handle modern large-scale mixed-size designs. The tool is developed based on a new placement algorithm flow. The main idea is to use a fixed-outline floorplanning algorithm to guide a state-of-the-art analytical placer. The new flow consists of four steps: 1) the objects in the original netlist are clustered into blocks; 2) floorplanning is performed on the blocks; 3) the blocks are shifted within the chip region to further optimize the wirelength; 4) with the big macro locations fixed, incremental placement is applied to place the remaining objects. Several key techniques are proposed for the first two steps, focused on two aspects: 1) a hypergraph clustering algorithm that cuts down the original problem size without loss of placement Quality of Results (QoR); 2) a fixed-outline floorplanning algorithm that provides good guidance to the analytical placer at the global level. The effectiveness of each key technique is demonstrated by promising experimental results compared with state-of-the-art algorithms. Moreover, on industrial mixed-size designs, the new placement tool shows better performance than other existing approaches.
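
    Steps 2) and 3) of the flow above optimize wirelength; as a small, self-contained illustration of the usual objective, the sketch below computes half-perimeter wirelength (HPWL), a standard placement metric. The nets and coordinates are invented, and this is not code from the described tool.

```python
# Half-perimeter wirelength (HPWL), the usual wirelength metric optimized
# during floorplanning and placement. Nets and coordinates are made up.

def hpwl(nets, positions):
    """nets: {net_name: [cell names]}, positions: {cell: (x, y)}."""
    total = 0.0
    for cells in nets.values():
        xs = [positions[c][0] for c in cells]
        ys = [positions[c][1] for c in cells]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

positions = {"a": (0, 0), "b": (4, 1), "c": (2, 5)}
nets = {"n1": ["a", "b"], "n2": ["a", "b", "c"]}
print(hpwl(nets, positions))  # 4+1 for n1, 4+5 for n2 -> 14.0
```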

    Parametric Yield of VLSI Systems under Variability: Analysis and Design Solutions

    Variability has become one of the vital challenges that designers of integrated circuits encounter. Imperfect manufacturing processes manifest themselves as variations in the design parameters. These variations, together with those in the operating environment of VLSI circuits, result in unexpected changes in the timing, power, and reliability of the circuits. With shrinking transistor dimensions, process and environmental variations become increasingly important in modern VLSI design: a smaller feature size means that the physical characteristics of a device are more prone to these unaccounted-for changes. To achieve a robust design, the random and systematic fluctuations in the manufacturing process and the variations in the environmental parameters should be analyzed and their impact on the parametric yield addressed. This thesis studies the challenges and comprises solutions for designing robust VLSI systems in the presence of variations. Initially, to gain insight into system design under variability, the parametric yield is examined for a small circuit. Understanding the impact of variations on the yield at the circuit level is vital to accurately estimate and optimize the yield at the system granularity. Motivated by the observations and results found at the circuit level, statistical analyses are performed, and solutions are proposed, at the system level of abstraction to reduce the impact of the variations and increase the parametric yield. At the circuit level, the impact of the supply and threshold voltage variations on the parametric yield is discussed. Here, a design centering methodology is proposed to maximize the parametric yield and optimize the power-performance trade-off under variations. In addition, the scaling trend in the yield loss is studied, and some considerations for design centering in current and future CMOS technologies are explored. The investigation at the circuit level suggests that the operating temperature significantly affects the parametric yield and that the yield is very sensitive to the magnitude of the variations in supply and threshold voltage. Therefore, the spatial nature of process and environmental variations makes it necessary to analyze the yield at a finer granularity. Here, temperature and voltage variations are mapped across the chip to accurately estimate the yield loss at the system level. At the system level, the impact of process-induced temperature variations on power grid design is first analyzed, and an efficient verification method is provided that ensures the robustness of the power grid in the presence of variations. Then, a statistical analysis of the timing yield is conducted, taking into account both the process and environmental variations. By considering the statistical profile of the temperature and supply voltage, the process variations are mapped to delay variations across a die, which ensures an accurate estimation of the timing yield. In addition, a method is proposed to accurately estimate the power yield considering process-induced temperature and supply voltage variations. This helps check the robustness of the circuits early in the design process. Lastly, design solutions are presented to reduce the power consumption and increase the timing yield under the variations. In the first solution, a guideline for floorplanning optimization in the presence of temperature variations is offered.
    Non-uniformity in the thermal profile of an integrated circuit impacts the parametric yield and threatens chip reliability. Therefore, the correlation between the total power consumption and the temperature variations across a chip is examined, and floorplanning guidelines are proposed that use this correlation to efficiently optimize the chip's total power while taking thermal uniformity into account. The second design solution provides an optimization methodology for assigning the power supply pads across the chip to maximize the timing yield. A mixed-integer nonlinear programming (MINLP) optimization problem, subject to voltage drop and current constraints, is efficiently solved to find the optimum number and locations of the pads.
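
    As a toy illustration of what estimating parametric yield under supply and threshold voltage variations means operationally, the Monte Carlo sketch below reports the fraction of samples that meet a timing constraint. The alpha-power-style delay model, the distributions, and all numbers are assumptions made for the sketch, not the models or values used in the thesis.

```python
# Monte Carlo sketch of timing-yield estimation under Vdd/Vth variation.
# The alpha-power-style delay model and all numbers below are illustrative
# assumptions, not the models or values used in the thesis.
import random

def gate_delay(vdd, vth, k=1.0, alpha=1.3):
    """Delay proportional to Vdd / (Vdd - Vth)^alpha (alpha-power-law style)."""
    return k * vdd / (vdd - vth) ** alpha

def timing_yield(n_samples=100_000, t_max=1.7):
    random.seed(0)
    passing = 0
    for _ in range(n_samples):
        vdd = random.gauss(1.0, 0.03)    # supply voltage variation
        vth = random.gauss(0.30, 0.02)   # threshold voltage variation
        if gate_delay(vdd, vth) <= t_max:
            passing += 1
    return passing / n_samples           # fraction of samples meeting timing

print(f"estimated timing yield: {timing_yield():.3f}")
```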

    Timing-Driven Macro Placement

    Placement is an important step in the process of finding physical layouts for electronic computer chips. The basic task during placement is to arrange the building blocks of the chip, the circuits, disjointly within a given chip area. Furthermore, the chosen positions should result in short circuit interconnections which can be routed easily and which ensure that all signals arrive in time. This dissertation mostly focuses on macros, the largest circuits on a chip. In order to optimize timing characteristics during macro placement, we propose a new optimistic timing model based on geometric distance constraints. This model can be computed and evaluated efficiently in order to predict timing traits accurately in practice. Packing rectangles disjointly remains strongly NP-hard under slack maximization in our timing model. Despite this, we develop an exact, linear-time algorithm for special cases. The proposed timing model is incorporated into BonnMacro, the macro placement component of the BonnTools physical design optimization suite developed at the Research Institute for Discrete Mathematics. Using efficient formulations as mixed-integer programs, we can legalize macros locally while optimizing timing. This results in the first timing-aware macro placement tool. In addition, we provide multiple enhancements for the partitioning-based standard circuit placement algorithm BonnPlace. We model partitioning as a minimum-cost flow problem that is provably as small as possible, which allows us to avoid running-time-intensive instances. Moreover, we propose the new global placement flow Self-Stabilizing BonnPlace. This approach combines BonnPlace with a force-directed placement framework and provides the flexibility to optimize the two involved objectives, routability and timing, directly during placement. The performance of our placement tools is confirmed on a large variety of academic benchmarks as well as real-world designs provided by our industrial partner IBM. We reduce the running time of partitioning significantly and demonstrate that Self-Stabilizing BonnPlace finds easily routable placements for challenging designs, even when simultaneously optimizing timing objectives. BonnMacro and Self-Stabilizing BonnPlace can be combined into the first timing-driven mixed-size placement flow. This combination often finds placements with competitive timing traits and even outperforms solutions that have been determined manually by experienced designers.
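
    To give a feel for a distance-based timing view like the one described above, the sketch below computes slacks for point-to-point connections assuming delay grows linearly with the Manhattan distance between pins. The delay constant and the pin data are hypothetical, and this is not the BonnMacro timing model.

```python
# Slack under a simple linear, distance-based delay model: the delay of a
# connection is proportional to the Manhattan distance between its pins.
# DELAY_PER_UNIT and the example data are hypothetical; this only sketches
# the idea of distance-driven timing, not the model used by BonnMacro.
DELAY_PER_UNIT = 0.05   # delay per unit of Manhattan distance

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def slack(src_pos, dst_pos, arrival_at_src, required_at_dst):
    delay = DELAY_PER_UNIT * manhattan(src_pos, dst_pos)
    return required_at_dst - (arrival_at_src + delay)

# Two hypothetical macro pin placements: moving the sink closer increases slack.
print(slack((0, 0), (100, 40), arrival_at_src=1.0, required_at_dst=9.0))  # 1.0
print(slack((0, 0), (60, 40), arrival_at_src=1.0, required_at_dst=9.0))   # 3.0
```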

    Algorithms for the scaling toward nanometer VLSI physical synthesis

    Along the history of Very Large Scale Integration (VLSI), we have successfully scaled down the size of transistors and scaled up the speed of integrated circuits (IC) and the number of transistors on a chip; these are just a few examples of our achievements in VLSI scaling. As designs enter the nanometer era, this work covers two parts. Part 1: timing estimation and buffer planning for global routing and other early stages such as floorplanning. A novel path-based buffer insertion scheme is also included, which can overcome the weakness of net-based approaches. Part 2: circuit clustering techniques with application to Field-Programmable Gate Array (FPGA) technology mapping. The problem of timing-driven n-way circuit partitioning with application to FPGA technology mapping is studied, and a hierarchical clustering approach is presented for the latest multi-level FPGA architectures. Moreover, a more general delay model is included in order to accurately characterize the delay behavior of the clusters and circuit elements.
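
    To make the cluster delay model of Part 2 concrete, the sketch below computes the critical-path delay of a small DAG in which an edge is cheap when both endpoints lie in the same cluster and expensive when it crosses a cluster boundary. The circuit, the clustering, and the delay values are made up, not the dissertation's model.

```python
# Sketch of a cluster-aware delay model: an edge between two circuit elements
# is cheap when both are in the same cluster and expensive when it crosses a
# cluster boundary. The circuit, clustering, and delays below are made up.
INTRA_DELAY = 1.0    # delay of an edge inside a cluster
INTER_DELAY = 4.0    # delay of an edge crossing clusters

def critical_path_delay(edges, cluster_of):
    """edges: list of (u, v) pairs given in topological order of a DAG."""
    arrival = {}
    for u, v in edges:
        d = INTRA_DELAY if cluster_of[u] == cluster_of[v] else INTER_DELAY
        arrival[v] = max(arrival.get(v, 0.0), arrival.get(u, 0.0) + d)
    return max(arrival.values())

edges = [("i1", "a"), ("i2", "a"), ("a", "b"), ("b", "o1")]
clustering = {"i1": 0, "i2": 0, "a": 0, "b": 1, "o1": 1}
print(critical_path_delay(edges, clustering))   # 1 + 4 + 1 = 6.0
```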

    Design Methods and Tools for Application-Specific Predictable Networks-on-Chip

    As the complexity of applications grows with each new generation, so does the demand for computation power. To satisfy the computation demands at manageable power levels, we see a shift in the design paradigm from single processor systems to Multiprocessor Systems-on-Chip (MPSoCs). MPSoCs leverage the parallelism in applications to increase the performance at the same power levels. To further improve the computation to power consumption ratio, MPSoCs for embedded applications are heterogeneous and integrate cores that are specialized to perform the different functionalities of the application. With technology scaling, wire power consumption is increasing compared to logic, making communication as expensive as computation. Therefore customizing the interconnect is necessary to achieve energy efficiency. Designing an optimal application specific Network-on-Chip (NoC), that meets application demands, requires the exploration of a large design space. Automatic design and optimization of the NoC is required in order to achieve fast design closure, especially for heterogeneous MPSoCs. To continue to meet the computation requirements of future applications new technologies are emerging. Three dimensional integration promises to increase the number of transistors by stacking multiple silicon layers. This will lead to an increase in the number of cores of the MPSoCs resulting in increased communication demands. To compensate for the increase in the wire delay in new technology nodes as well as to reduce the power consumption further, multi-synchronous design is becoming popular. With multiple clock signals, different parts of the MPSoC can be clocked at different frequencies according to the current demands of the application and can even be shutdown when they are not used at all. This further complicates the design of the NoC.Many applications require different levels of guarantee from the NoC in order to perform their functionality correctly. As communication traffic patterns become more complex, the performance of the NoC can no longer be predicted statically. Therefore designing the interconnect network requires that such guarantees are provided during the dynamic operation of the system which includes the interaction with major subsystems (i.e., main memory) and not just the interconnect itself. In this thesis, I present novel methods to design application-specific NoCs that meet performance demands, under the constraints of new technologies. To provide different levels of Quality of Service, I integrate methods to estimate the NoC performance during the design phase of the interconnect topology. I present methods and architectures for NoCs to efficiently access memory systems, in order to achieve predictable operation of the systems from the point of view of the communication as well as the bottleneck target devices. Therefore the main contribution of the thesis is twofold: scientific as I propose new algorithms to perform topology synthesis and engineering by presenting extensive experiments and architectures for NoC design
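
    As a small illustration of the kind of performance check a topology-synthesis loop runs while exploring the design space, the sketch below routes a hypothetical core-to-core traffic matrix over a candidate topology on shortest hop-count paths and compares the resulting link loads against a capacity. The topology, capacities, and flows are invented, and this is not the thesis's synthesis algorithm.

```python
# Sketch of a feasibility check used when evaluating a candidate NoC topology:
# route each flow on a shortest hop-count path, accumulate link load, and
# compare against link capacity. Topology, capacities, and flows are invented.
from collections import deque, defaultdict

def shortest_path(adj, src, dst):
    """BFS shortest path (by hop count) in an undirected topology graph."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def link_loads(adj, flows):
    """flows: {(src, dst): bandwidth}. Returns accumulated load per link."""
    load = defaultdict(float)
    for (src, dst), bw in flows.items():
        path = shortest_path(adj, src, dst)
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += bw
    return load

# Hypothetical 4-node ring topology and traffic demands (MB/s).
adj = {"cpu": ["dsp", "mem"], "dsp": ["cpu", "io"],
       "io": ["dsp", "mem"], "mem": ["io", "cpu"]}
flows = {("cpu", "mem"): 400.0, ("dsp", "mem"): 200.0, ("cpu", "io"): 100.0}
LINK_CAPACITY = 800.0
for link, load in link_loads(adj, flows).items():
    print(sorted(link), load, "OK" if load <= LINK_CAPACITY else "overloaded")
```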

    Experimental, Numerical and Field Approaches to Scour Research

    This book presents fourteen state-of-the-art research papers prepared by research scientists and engineers around the world. They explore the subject of scour related to bridge piers, monopiles, propellers, turbines, weirs, dams, grade-control structures, and pipelines. Their works are based on three different research methodologies, namely experimental, numerical, and field approaches

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    A Revisit to Floorplan Optimization by Lagrangian Relaxation

    With the advent of the deep sub-micron (DSM) era, floorplanning has become increasingly important in the physical design process. In this paper we clarify a misunderstanding in using Lagrangian relaxation for the minimum-area floorplanning problem. We show that the problem is not convex and that its optimal solution cannot be obtained by solving its Lagrangian dual problem. We then propose a modified convex formulation and solve it using a min-cost flow technique and a trust-region method. Experimental results under the module aspect ratio bound [0.5,2.0] show that the running time of our floorplanner scales well with the problem size on the MCNC benchmarks. Compared with the floorplanner in [27], our floorplanner is 9.5X faster for the largest case “ami49”. It also generates a floorplan with smaller deadspace for almost all test cases. In addition, since the generated floorplan has an aspect ratio closer to 1, it is friendlier to packaging. Our floorplanner is also amenable to including interconnect cost and other physical design metrics.
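
    To make the soft-block constraints concrete, the snippet below derives the feasible width interval of a soft block with fixed area under the aspect ratio bound [0.5,2.0] used in the experiments. The block areas are made up, and this shows only the constraint set, not the paper's min-cost flow and trust-region solver.

```python
# Feasible width range of a soft block: with area A fixed and height h = A / w,
# the aspect ratio bound 0.5 <= h / w <= 2.0 translates into
#   sqrt(A / 2) <= w <= sqrt(2 * A).
# Block areas below are made up; this illustrates only the constraint set,
# not the min-cost flow / trust-region solver of the paper.
import math

def width_bounds(area, r_min=0.5, r_max=2.0):
    """Return (w_min, w_max) such that r_min <= (area / w) / w <= r_max."""
    return math.sqrt(area / r_max), math.sqrt(area / r_min)

for area in (4.0, 9.0, 12.5):
    w_min, w_max = width_bounds(area)
    print(f"area={area}: width in [{w_min:.3f}, {w_max:.3f}]")
```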