Throughput-driven floorplanning with wire pipelining
The size of future high-performance SoCs is such that the time-of-flight of wires connecting distant pins in the layout can be much greater than the clock period. To keep the frequency as high as possible, the wires may be pipelined. However, the insertion of flip-flops may alter the throughput of the system due to the presence of loops in the logic netlist. In this paper, we address the problem of floorplanning a large design in which long interconnects are pipelined, by incorporating throughput into the cost function of a tool based on simulated annealing. The results obtained on a series of benchmarks are then validated using a simple router that breaks long interconnects by suitably placing flip-flops along the wires.
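The cost-function idea can be sketched abstractly: a simulated-annealing loop whose cost combines wirelength with a throughput penalty proportional to the pipeline depth a long wire would need. Everything below (the 1-D toy layout, the clock-distance constant, the penalty weight) is a hypothetical illustration, not the paper's tool.

```python
import math
import random

def anneal(init_state, cost, neighbor, t0=1.0, cooling=0.95, iters=2000):
    """Generic simulated-annealing loop; `cost` may mix area, wirelength,
    and a throughput penalty, as the paper's formulation suggests."""
    state, best = init_state, init_state
    t = t0
    for _ in range(iters):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        # Accept improvements always; worsenings with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling
    return best

# Toy 1-D "floorplan": two blocks on a line. The cost adds a hypothetical
# throughput penalty that grows with the number of pipeline flip-flops a
# long wire would require (wire pipelining depth).
CLOCK_DIST = 10.0  # distance a signal covers in one clock period (assumed)

def cost(xs):
    wirelength = abs(xs[0] - xs[1])
    flops_needed = int(wirelength // CLOCK_DIST)  # pipeline stages on the wire
    throughput_penalty = 5.0 * flops_needed       # weight is an assumption
    return wirelength + throughput_penalty

def neighbor(xs):
    out = list(xs)
    out[random.randrange(len(out))] += random.uniform(-2, 2)
    return out

random.seed(0)
best = anneal([0.0, 50.0], cost, neighbor)
```

In a real floorplanner the state would be a block placement and the throughput term would come from analysing loops in the pipelined netlist; the skeleton above only shows where that term enters the annealing objective.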
Analytical Layer Planning for Nanometer VLSI Designs
In this thesis, we propose an intermediate sub-process between the placement and routing stages of physical design. The algorithm generates layer guidance for post-placement optimization techniques, especially buffer insertion. This issue has become critical in modern VLSI chip design because of timing, congestion, and the increasingly non-uniform parasitics across different metal layers. In addition, as a step before routing, the layer-planning algorithm accounts for routability by minimizing the overlap area between different nets. Layer directive information, which is a crucial concern in industrial design, is also considered in the algorithm.
The core problem is formulated as a nonlinear program composed of an objective function and constraints, and is solved with the conjugate gradient method. The algorithm is implemented in C++ under Linux and tested on the ISPD 2008 Global Routing Contest benchmarks. The experimental results, presented at the end of this thesis, confirm the effectiveness of our approach, especially with respect to routability.
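As a heavily simplified illustration of the solver class named above, here is a nonlinear conjugate-gradient loop (Polak-Ribiere+ variant with a backtracking line search) minimising a toy convex quadratic that stands in for the layer-assignment program. The objective, dimensions, and line-search rule are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, iters=100):
    """Nonlinear conjugate gradient (Polak-Ribiere+) with a simple
    backtracking line search -- a sketch, not the thesis code."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        alpha, fx = 1.0, f(x)
        # Backtrack until the step actually decreases the objective.
        while f(x + alpha * d) >= fx and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        # PR+ beta; clipping at zero restarts with steepest descent.
        beta = max(g_new @ (g_new - g) / max(g @ g, 1e-12), 0.0)
        d = -g_new + beta * d
        g = g_new
    return x

# Toy convex quadratic standing in for the layer-planning objective:
# f(x) = 1/2 x^T A x - b^T x, minimised at A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_opt = conjugate_gradient(f, grad, [0.0, 0.0])
```

The real objective would encode per-layer wire delay, congestion, and overlap terms with layer-directive constraints; the loop structure is what carries over.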
A branch-and-bound algorithm for stable scheduling in single-machine production systems.
Robust scheduling aims at the construction of a schedule that is protected against uncertain events. A stable schedule is a robust schedule that will change little when variations in the input parameters arise. This paper proposes a branch-and-bound algorithm for optimally solving a single-machine scheduling problem with a stability objective, when a single job is anticipated to be disrupted.
Keywords: Branch-and-bound; Construction; Event; Job; Robust scheduling; Robustness; Scheduling; Single-machine scheduling; Stability; Systems; Uncertainty
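The branch-and-bound pattern referred to above can be sketched on a toy single-machine instance. The code below minimises total completion time rather than the paper's stability objective (which requires the disruption model); the instance, the SPT-based lower bound, and the branching order are illustrative assumptions.

```python
# Toy single-machine instance: each job is its processing time.
jobs = [4, 2, 7, 1]

best_cost = float("inf")
best_seq = None

def lower_bound(partial_cost, t, remaining):
    """Complete the schedule in SPT (shortest processing time) order,
    which is optimal for total completion time on the remaining jobs,
    hence a valid lower bound for every completion of this node."""
    lb, tt = partial_cost, t
    for p in sorted(remaining):
        tt += p
        lb += tt
    return lb

def branch(seq, t, cost, remaining):
    """Branch on which job runs next; prune nodes whose bound cannot
    beat the incumbent."""
    global best_cost, best_seq
    if not remaining:
        if cost < best_cost:
            best_cost, best_seq = cost, seq
        return
    if lower_bound(cost, t, remaining) >= best_cost:
        return  # prune this subtree
    for j in list(remaining):
        branch(seq + [j], t + j, cost + t + j, remaining - {j})

branch([], 0, 0, set(jobs))
```

A stability version would replace both the incumbent cost and the bound with a measure of deviation from the baseline schedule under the anticipated disruption; the enumeration and pruning skeleton is unchanged.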
DNA computation
This is the first ever doctoral thesis in the field of DNA computation. The field has its roots in the late 1950s, when the Nobel laureate Richard Feynman first introduced the concept of computing at a molecular level. Feynman's visionary idea was only realised in 1994, when Leonard Adleman performed the first ever truly molecular-level computation using DNA combined with the tools and techniques of molecular biology. Since Adleman reported the results of his seminal experiment, there has been a flurry of interest in the idea of using DNA to perform computations. The potential benefits of using this particular molecule are enormous: by harnessing the massive inherent parallelism of performing concurrent operations on trillions of strands, we may one day be able to compress the power of today's supercomputer into a single test tube. However, if we compare the development of DNA-based computers to that of their silicon counterparts, it is clear that molecular computers are still in their infancy. Current work in this area is concerned mainly with abstract models of computation and simple proof-of-principle experiments.
The goal of this thesis is to present our contribution to the field, placing it in the context of the existing body of work. Our new results concern a general model of DNA computation, an error-resistant implementation of the model, experimental investigation of the implementation, and an assessment of the complexity and viability of DNA computations.
We begin by recounting the historical background to the search for the structure of DNA. By providing a detailed description of this molecule and the operations we may perform on it, we lay down the foundations for subsequent chapters. We then describe the basic models of DNA computation that have been proposed to date. In particular, we describe our parallel filtering model, which is the first to provide a general framework for the elegant expression of algorithms for NP-complete problems.
The implementation of such abstract models is crucial to their success. Previous experiments that have been carried out suffer from their reliance on various error-prone laboratory techniques. We show for the first time how one particular operation, hybridisation extraction, may be replaced by an error-resistant enzymatic separation technique. We also describe a novel solution read-out procedure that utilises cloning and is sufficiently general to allow it to be used in any experimental implementation. The results of preliminary tests of these techniques are then reported. Several important conclusions are to be drawn from these investigations, and we report these in the hope that they will provide useful experimental guidance in the future.
The final contribution of this thesis is a rigorous consideration of the complexity and viability of DNA computations. We argue that existing analyses of models of DNA computation are flawed and unrealistic. In order to obtain more realistic measures of the time and space complexity of DNA computations, we describe a new strong model, and reassess previously described algorithms within it. We review the search for "killer applications": applications of DNA computing that will establish the superiority of this paradigm within a certain domain. We conclude the thesis with a description of several open problems in the field of DNA computation.
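The flavour of the parallel filtering model mentioned above can be conveyed in software: begin with a tube containing every candidate solution, then repeatedly remove the strands that violate a constraint, so that only satisfying strands survive. The toy three-variable satisfiability instance below, with a Python set standing in for the tube of DNA strands, is purely illustrative.

```python
from itertools import product

# A clause is a set of literals (variable index, required truth value);
# a strand satisfies the clause if at least one literal holds.
clauses = [((0, True), (1, False)),   # x0 OR not x1
           ((1, True), (2, True)),    # x1 OR x2
           ((0, False), (2, False))]  # not x0 OR not x2

# Initial tube: one "strand" per assignment of (x0, x1, x2).
tube = set(product([False, True], repeat=3))

for clause in clauses:
    # remove(): discard every strand failing this clause -- in the lab
    # this is one parallel separation step acting on all strands at once.
    tube = {a for a in tube if any(a[i] == v for i, v in clause)}
```

The point of the model is that each filtering pass is a single laboratory operation over the whole population, so the number of steps grows with the number of constraints, not with the (exponential) number of candidates.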
Elastic circuits
Elasticity in circuits and systems provides tolerance to variations in computation and communication delays. This paper presents a comprehensive overview of elastic circuits for those designers who are mainly familiar with synchronous design. Elasticity can be implemented both synchronously and asynchronously, although it was traditionally more often associated with asynchronous circuits. This paper shows that synchronous and asynchronous elastic circuits can be designed, analyzed, and optimized using similar techniques. Thus, choices between synchronous and asynchronous implementations are localized and deferred until late in the design process.
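The elasticity idea, tolerance to variable latencies via handshaking, can be illustrated behaviourally. Below is a hypothetical sketch (not from the paper): a small elastic buffer with valid/stall signalling, placed between a producer and a consumer that accepts data only on alternate cycles. Order is preserved despite the stalls.

```python
from collections import deque

class ElasticBuffer:
    """Behavioural model of an elastic channel stage: `stall` is the
    back-pressure signal toward the producer, `valid` the data-ready
    signal toward the consumer."""
    def __init__(self, capacity=2):
        self.slots = deque()
        self.capacity = capacity

    @property
    def stall(self):
        return len(self.slots) >= self.capacity

    def push(self, token):        # producer side: only when not stalled
        assert not self.stall
        self.slots.append(token)

    @property
    def valid(self):
        return bool(self.slots)

    def pop(self):                # consumer side: only when valid
        assert self.valid
        return self.slots.popleft()

# Producer emits 0..4; the consumer stalls on odd cycles (variable
# latency). The handshake loses and duplicates nothing.
buf, src, out, cycle = ElasticBuffer(), iter(range(5)), [], 0
pending = next(src, None)
while pending is not None or buf.valid:
    if pending is not None and not buf.stall:
        buf.push(pending)
        pending = next(src, None)
    if buf.valid and cycle % 2 == 0:
        out.append(buf.pop())
    cycle += 1
```

In hardware the same protocol appears as valid/stop (or req/ack) wires plus a two-slot latch pair; the model above only shows why neither side needs to know the other's latency.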
High-performance Global Routing for Trillion-gate Systems-on-Chips.
Due to aggressive transistor scaling, modern-day CMOS circuits have continually increased in both complexity and productivity. Modern semiconductor designs have narrower and more resistive wires, thereby shifting the performance bottleneck to interconnect delay. These trends considerably impact timing closure and call for improvements in high-performance physical design tools to keep pace with the current state of IC innovation.
As leading-edge designs may incorporate tens of millions of gates, algorithm and software scalability are crucial to achieving reasonable turnaround time. Moreover, with decreasing device sizes, optimizing traditional objectives is no longer sufficient.
Our research focuses on (i) expanding the capabilities of standalone global routing, (ii) extending global routing for use in different design applications, and (iii) integrating routing within broader physical design optimizations and flows, e.g., congestion-driven
placement. Our first global router relies on integer-linear programming (ILP) and can solve fairly large problem instances to optimality. Our second, iterative global router relies on Lagrangian relaxation, where we relax the routing-violation constraints to allow routing overflow at a penalty. In both approaches, our aim is to give the router the maximum degree of freedom within a specified context. Empirically, both routers produce competitive results within a reasonable amount of runtime. To improve routability, we explore the incorporation of routing with placement, where the router estimates congestion and feeds this information to the placer. In turn, the emphasis on runtime is heightened, as the router will be invoked multiple times. Empirically, our placement-and-route framework significantly improves the final solution's routability compared with performing the steps sequentially. To further enhance routability-driven placement, we (i) leverage incrementality to generate fast and accurate congestion maps, and (ii) develop several techniques to relieve cell-based and layout-based congestion. To broaden the scope of routing, we integrate a global router in a chip-design flow that addresses the buffer-explosion problem.
PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/98025/1/jinhu_1.pd
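The Lagrangian-relaxation idea in the second router can be sketched: edge capacity constraints become per-edge penalty multipliers that grow wherever usage exceeds capacity, so later routing iterations are steered away from congested edges. The grid size, capacities, nets, and subgradient-style multiplier update below are illustrative assumptions, not the thesis implementation.

```python
import heapq

W, H, CAP = 4, 4, 1  # tiny routing grid; one wire per edge (assumed)

def neighbors(u):
    x, y = u
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        v = (x + dx, y + dy)
        if 0 <= v[0] < W and 0 <= v[1] < H:
            yield v

def edge(u, v):
    return (u, v) if u <= v else (v, u)

multipliers = {}  # edge -> Lagrange multiplier (congestion penalty)

def route(src, dst):
    """Dijkstra with relaxed cost = wirelength + multiplier penalty."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist[u]:
            continue
        for v in neighbors(u):
            c = d + 1.0 + multipliers.get(edge(u, v), 0.0)
            if c < dist.get(v, float("inf")):
                dist[v], prev[v] = c, u
                heapq.heappush(pq, (c, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

nets = [((0, 0), (3, 0)), ((0, 1), (3, 1)), ((0, 0), (3, 1))]

for _ in range(5):  # outer Lagrangian iterations
    usage = {}
    paths = [route(s, t) for s, t in nets]
    for p in paths:
        for u, v in zip(p, p[1:]):
            usage[edge(u, v)] = usage.get(edge(u, v), 0) + 1
    # Subgradient-style update: raise the price of overloaded edges.
    for e, use in usage.items():
        if use > CAP:
            multipliers[e] = multipliers.get(e, 0.0) + (use - CAP)

overflow = sum(max(0, u - CAP) for u in usage.values())
```

The thesis routers operate on multi-pin nets over 3-D layer graphs with far more careful multiplier updates; the sketch only shows how relaxing capacity into the edge cost lets an ordinary shortest-path engine negotiate congestion.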