19 research outputs found
A Metaheuristic Method for Fast Multi-Deck Legalization
Department of Electrical Engineering
In the field of circuit design, shrinking the transistor size further is getting harder and harder, so improving circuit performance is also becoming difficult. To obtain better circuit performance, various technologies are being tried, and multi-deck standard cell technology is one of them. The standard cell methodology is a fundamental structure of EDA (Electronic Design Automation): using a standard cell library, EDA tools can easily design and optimize the physical design of chips.
In contrast to a conventional standard cell, a multi-deck standard cell occupies multiple rows on the chip. This multi-row occupation increases the complexity of the circuit's physical design for EDA tools, so the legalization problem has become more challenging for multi-deck standard cells. Recently, various multi-deck legalization methods have been proposed because the conventional single-deck legalization method is not effective for multi-deck legalization. A state-of-the-art legalization method is based on quadratic programming with the linear complementarity problem (LCP). However, these previous approaches can only cover the double-deck case because of their runtime burden.
In this thesis, we propose a fast, enhanced multi-deck standard cell legalization algorithm that can handle cases beyond double-deck standard cells. The proposed legalization method achieves the fastest runtime on the dominant number of benchmarks of the ICCAD Contest 2017 [1] compared with the top-three results.
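The core constraint behind multi-deck legalization can be sketched minimally (an illustration of the constraint only, not the thesis's metaheuristic; all names are hypothetical): a k-deck cell must find the same free horizontal span of sites in k consecutive rows, while staying as close as possible to its placed position.

```python
# A minimal sketch (not the thesis algorithm) of the multi-deck
# legalization constraint: a cell of height `decks` rows must occupy
# the same contiguous site span in `decks` consecutive rows at once.

def legalize_cell(rows, row_idx, target_site, width, decks):
    """Find the free position closest to target_site where a cell of
    `width` sites and `decks` rows fits starting at row `row_idx`.
    `rows` is a list of boolean lists: rows[r][s] is True if site s
    of row r is already occupied. Returns the chosen site or None."""
    num_sites = len(rows[0])
    best = None
    for start in range(num_sites - width + 1):
        # The span [start, start+width) must be free in all `decks` rows.
        if all(not rows[r][s]
               for r in range(row_idx, row_idx + decks)
               for s in range(start, start + width)):
            if best is None or abs(start - target_site) < abs(best - target_site):
                best = start
    if best is not None:  # commit: mark the chosen sites as occupied
        for r in range(row_idx, row_idx + decks):
            for s in range(best, best + width):
                rows[r][s] = True
    return best

rows = [[False] * 10 for _ in range(4)]
rows[1][3] = True                      # a blockage in row 1
pos = legalize_cell(rows, 0, 2, 2, 2)  # double-deck cell, width 2
```

The linear scan makes the multi-row feasibility check explicit; the whole difficulty the abstract points to is doing this, plus displacement minimization over all cells, fast enough at scale.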
Legalization heuristics for physical design
Note: supplementary material is available in a separate file
How Fast Can We Play Tetris Greedily With Rectangular Pieces?
Consider a variant of Tetris played on a board of fixed width and infinite
height, where the pieces are axis-aligned rectangles of arbitrary integer
dimensions, the pieces can only be moved before letting them drop, and a row
does not disappear once it is full. Suppose we want to follow a greedy
strategy: let each rectangle fall where it will end up the lowest given the
current state of the board. To do so, we want a data structure which can always
suggest a greedy move. In other words, we want a data structure which maintains
a set of rectangles, supports queries which return where to drop the
rectangle, and updates which insert a rectangle dropped at a certain position
and return the height of the highest point in the updated set of rectangles. We
show via a reduction to the Multiphase problem [Pătrașcu, 2010] that on
such a board, if the OMv conjecture [Henzinger et al., 2015]
is true, then both operations cannot be supported in time
simultaneously. The reduction also implies polynomial bounds from the 3-SUM
conjecture and the APSP conjecture. On the other hand, we show that there is a
data structure supporting both operations in time on
such boards, matching the lower bound up to a factor.
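The two operations the abstract specifies can be illustrated with a naive skyline data structure (a linear-time-per-operation sketch for exposition; the paper's structure achieves much better bounds):

```python
# A naive sketch of the data structure described above, keeping an
# explicit skyline of column heights. query(a) returns the leftmost
# drop position where an a-wide piece lands as low as possible;
# insert(x, a, b) drops an a-wide, b-tall piece at x and returns the
# height of the highest point on the updated board.

class GreedyTetris:
    def __init__(self, w):
        self.heights = [0] * w  # current top height of each column

    def query(self, a):
        w = len(self.heights)
        best_x, best_top = 0, max(self.heights[0:a])
        for x in range(1, w - a + 1):
            top = max(self.heights[x:x + a])
            if top < best_top:  # piece would land lower here
                best_x, best_top = x, top
        return best_x

    def insert(self, x, a, b):
        landing = max(self.heights[x:x + a])  # piece rests on this level
        for c in range(x, x + a):
            self.heights[c] = landing + b
        return max(self.heights)

board = GreedyTetris(6)
board.insert(0, 3, 2)   # a 3-wide, 2-tall block on the left
x = board.query(2)      # greedy spot for a 2-wide piece
top = board.insert(x, 2, 1)
```

Both operations here cost time linear in the board width per call; the conditional lower bounds in the abstract say how far below this any data structure can hope to go.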
Nanometer VLSI placement and optimization for multi-objective design closure
In a VLSI physical synthesis flow, placement directly defines the interconnection,
which affects many other design objectives, such as timing, power consumption,
congestion, and thermal issues. With the scaling of technology, the relative interconnect
delay increases dramatically. As a result, placement has become a bottleneck
in deep sub-micron physical synthesis. In this dissertation, I propose several
optimization algorithms, ranging from global placement, placement migration, and timing-driven
placement to incremental power optimization, for multi-objective VLSI design
closure. The first work is DPlace, a new global placement algorithm that scales
well to modern large-scale circuit placement problems. DPlace simulates the
natural diffusion process to spread cells smoothly over the placement region, and
uses both analytical and discrete techniques to improve the wire length. However,
global placement alone is never sufficient for multi-objective design closure; a variety of
design objectives have to be improved incrementally, such as timing, routing congestion,
signal integrity, and heat distribution. Placement migration is a critical step
to address the cell overlaps appearing during incremental optimizations. To achieve
high placement stability, I propose a computational geometry based placement migration
flow to cope with placement changes, and a new stability metric to measure
the “similarity” between two placements accurately. Our placement migration algorithm
has a clear advantage over conventional legalization algorithms in that the
neighborhood characteristics of the original placement are preserved. For timing
closure in high performance designs, I present a linear programming based incremental
timing driven placement to improve the timing on critical paths directly.
I further present an efficient timing driven placement algorithm (Pyramids). Two
formulations of Pyramids are proposed, which are suitable for different optimization
stages in a physical synthesis flow. Both approaches find the timing-optimal location
of a cell in constant time, through computational-geometry-based techniques.
For fast convergence of design closure, placement should be integrated
with other optimization techniques. I propose to combine placement, gate sizing
and Vt swapping techniques to reduce the total power consumption, especially the
leakage power, which is becoming increasingly critical for nanometer VLSI design
closure.
Electrical and Computer Engineering
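The diffusion-based spreading that DPlace builds on can be illustrated with a minimal sketch (a toy illustration of the general idea, not the dissertation's implementation): bin utilization is treated as a concentration that obeys the discrete heat equation, so density peaks, i.e. regions of overlapping cells, flatten out over time.

```python
# A toy illustration of diffusion-based cell spreading: iterate the
# discrete heat equation on a grid of placement-bin utilizations.
# The 4-neighbour Laplacian with periodic boundaries conserves total
# cell area while smoothing out density peaks.

def diffuse(density, dt=0.2, steps=50):
    n, m = len(density), len(density[0])
    d = [row[:] for row in density]
    for _ in range(steps):
        nxt = [row[:] for row in d]
        for i in range(n):
            for j in range(m):
                lap = (d[(i - 1) % n][j] + d[(i + 1) % n][j] +
                       d[i][(j - 1) % m] + d[i][(j + 1) % m] - 4 * d[i][j])
                nxt[i][j] = d[i][j] + dt * lap  # explicit Euler step
        d = nxt
    return d

density = [[0.0] * 8 for _ in range(8)]
density[4][4] = 16.0             # one heavily overfull placement bin
smoothed = diffuse(density)
total = sum(map(sum, smoothed))  # diffusion conserves total utilization
```

In an actual diffusion-based placer, cells are then moved along the resulting density-gradient field rather than the density values alone; this sketch only shows why diffusion yields a smooth, overlap-free target distribution.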
High-Performance Placement and Routing for the Nanometer Scale.
Modern semiconductor manufacturing facilitates single-chip electronic systems that only five years ago required ten to twenty chips. Naturally, design complexity has grown within this period. In contrast to this growth, it is becoming common in the industry to limit design team size which places a heavier burden on design automation tools.
Our work identifies new objectives, constraints and concerns in the physical design of systems-on-chip, and develops new computational techniques to address them. In addition to faster and more relevant design optimizations, we demonstrate that traditional design flows based on "separation of concerns" produce unnecessarily suboptimal layouts. We develop new integrated optimizations that streamline traditional chains of loosely-linked design tools. In particular, we bridge the gap between mixed-size placement and routing by updating the objective of global and detail placement to a more accurate estimate of routed wirelength. To this we add sophisticated whitespace allocation, and the combination provides increased routability, faster routing,
shorter routed wirelength, and the best via counts of published techniques. To further improve post-routing design metrics, we present new global routing techniques based on Discrete Lagrange Multipliers (DLM) which produce the best routed wirelength results on recent benchmarks. Our work culminates in the integration of our routing techniques within an incremental placement flow to
improve detailed routing solutions, shrink die sizes and reduce total chip cost.
Not only do our techniques improve the quality and cost of designs, but they also simplify design automation software implementation in many cases. Ultimately, we reduce the time needed for design closure through improved tool fidelity and the use of our incremental techniques for placement and routing.
Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64639/1/royj_1.pd
On the Use of Directed Moves for Placement in VLSI CAD
Search-based placement methods have long been used for placing integrated circuits targeting the field programmable gate array (FPGA) and standard cell design styles. Such methods offer the potential for high-quality solutions but often come at the cost of long run-times compared to alternative methods.
This dissertation examines strategies for enhancing local search heuristics---and in particular, simulated annealing---through the application of directed moves. These moves help to guide a search-based optimizer by focusing efforts on states which are most likely to yield productive improvement, effectively pruning the size of the search space.
The engineering theory and implementation details of directed moves are discussed in the context of both field programmable gate array and standard cell designs. This work explores the ways in which such moves can be used to improve the quality of FPGA placements, improve the robustness of floorplan repair and legalization methods for mixed-size standard cell designs, and enhance the quality of detailed placement for standard cell circuits. The analysis presented herein confirms the validity and efficacy of directed moves, and supports the use of such heuristics within various optimization frameworks.
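The idea of a directed move can be sketched as follows (a hedged toy example, not the dissertation's formulation: the cost model, move rule, and all names here are illustrative assumptions). Instead of perturbing a random cell to a random location, the proposal moves it toward the centroid of the cells it connects to, biasing the annealer toward productive states:

```python
import math
import random

# A toy simulated-annealing placer with a "directed move": each
# proposal relocates a cell to the centroid of its connected cells.
# Cost is total squared two-pin net length, purely for illustration.

def cost(pos, nets):
    return sum((pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2
               for a, b in nets)

def directed_move(pos, nets, cell):
    """Propose the centroid of `cell`'s connected cells as its new spot."""
    nbrs = [b for a, b in nets if a == cell] + [a for a, b in nets if b == cell]
    if not nbrs:
        return pos[cell]
    return (sum(pos[n][0] for n in nbrs) / len(nbrs),
            sum(pos[n][1] for n in nbrs) / len(nbrs))

def anneal(pos, nets, temp=10.0, cooling=0.95, steps=200):
    pos = dict(pos)
    cur = cost(pos, nets)
    for _ in range(steps):
        cell = random.choice(sorted(pos))
        old = pos[cell]
        pos[cell] = directed_move(pos, nets, cell)
        new = cost(pos, nets)
        # standard Metropolis acceptance; the directed proposal merely
        # biases the search toward states likely to improve the cost
        if new > cur and random.random() >= math.exp(-(new - cur) / temp):
            pos[cell] = old  # reject the uphill move
        else:
            cur = new
        temp *= cooling
    return pos, cur

start = {"a": (0.0, 0.0), "b": (4.0, 0.0), "c": (2.0, 6.0)}
nets = [("a", "b"), ("b", "c"), ("a", "c")]
final_pos, final_cost = anneal(start, nets)
```

A purely random move set would wander the full search space; the directed proposal prunes it to moves that are likely to help, which is the trade-off between solution quality and run-time the abstract discusses.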
Biohacking and code convergence : a transductive ethnography
This dissertation unfolds in a space of discourses and activist practices at the intersection of contemporary Euro-American amateur computing and biotech cultures. The problem taking shape at this cultural crossroads examines metaphors and analogies caught in heavy traffic, along major communication routes linking information technologies and biotechnologies as media of expression. The examination traces the lines of force, the expressive mediations at these sites, through their manifestations as codes (at once computational and genetic) and recognizes the analogical expressivity of codes as a process of convergence.
Slowly emerging from the 1940s and 1950s onward, converging visions of code eased the entry of personal computers into markets as well as into hackers' garages, where computer hobbyists claimed them as a space of freedom of information, and above all of innovation. More than fifty years later, the analogy between computer code and genetic code drives claims to freedom once again, this time informing new consumer biotech applications as well as the activity of biohackers, the garage tinkerers of synthetic biology. Biohacking practices are thus understood as individuations: ongoing attempts to resolve frictions, tensions working through the claims of amateur computing and biotech cultures.
One way these tensions are modulated is embodied in a process known as forking, understood here as the experience of a bifurcation. In other words, forking is defined here as a passage toward a critical threshold, declining technology and biology in several modes. Forking informs (that is, simultaneously affords and constrains) differing collective visions of informational openness. Forking also operates on the semio-materialities and agencies invested in biotechnical and computational practices. Taken as a process of co-constitution and differentiation of collective action, these bifurcation movements invite the following three questions: 1) How does forking catalyze the resolution of the tensions at work in the claims of biohacking practices? 2) In this resolution process, in what ways do claims change phase, bifurcate and transform, sometimes to the point of radically altering these practices? 3) What new problems emerge from these resolutions?
The research effort found these questions, and the corresponding planes of semio-material and collective action, embodied in three ethnographic experiences spread over three years (2012-2015): the first in a community biotechnology laboratory in New York, the second in the emergence of an amateur biotechnology group in Montreal, and the third in Cork, Ireland, within the world's first synthetic biology startup accelerator. The logic of the inquiry is neither strictly inductive nor deductive, but transductive. It borrows from Gilbert Simondon's philosophy of communication and information and discovers epistemology as an act of creation operating in relational milieus. Transductive heuristics offer unusual encounters between the metaphors and analogies of codes. These surprising encounters staged the experience of code convergence as writing games, and they reappeared in the ethnographic research itself as transductive processes.
This dissertation examines creative practices and discourses intersecting computer and biotech cultures. It queries influential metaphors and analogies on both sides of the intersection, and their positioning of biotech and information technologies as expression media. It follows mediations across their incarnations as codes, both computational and biological, and situates their analogical expressivity and programmability as a process of code convergence. Converging visions of technological freedom facilitated the entrance of computers into 1960s Western hobbyist hacker circles, as well as into consumer markets. Almost fifty years later, the analogy drives claims to freedom of information, and freedom of innovation, from biohacker hobbyist groups to new biotech consumer markets.
Such biohacking practices are understood as individuations: as ongoing attempts to resolve frictions, tensions working through claims to freedom and openness animating software and biotech cultures.
Tensions get modulated in many ways. One of them, otherwise known as "forking," refers here to a critical bifurcation allowing for differing iterations of biotechnical and computational configurations. Forking informs (that is, simultaneously affords and constrains) differing collective visions of openness. Forking also operates on the materiality and agency invested in biotechnical and computational practices. Taken as a significant process of co-constitution and differentiation in collective action, bifurcation invites the following three questions: 1) How does forking solve tensions working through claims to biotech freedom? 2) In this solving process, how can claims bifurcate and transform to the point of radically altering biotech practices? 3) What new problems do these solutions call into existence?
This research found these questions, and both scales of material action and agency, incarnated in three extensive ethnographic journeys spanning three years (2012-2015): the first in a Brooklyn-based biotech community laboratory, the second in the early days of a biotech community group in Montreal, and the third in the world's first synthetic biology startup accelerator in Cork, Ireland. The inquiry's guiding empirical logic is neither solely deductive nor inductive, but transductive. It borrows from Gilbert Simondon's philosophy of communication and information to experience epistemology as an act of analogical creation involving the radical, irreversible transformation of knower and known. Transductive heuristics offer unconventional encounters with practices, metaphors and analogies of code. In the end, transductive methods acknowledge code convergence as metastable writing games, and ethnographic research itself as a transductive process.
E-learning for lifelong learning in Latvia
This White Paper on e-Learning for Lifelong Learning in Latvia is one among a number of white papers dealing with e-Learning and lifelong learning in specific countries
in Asia and Europe. The production of these white papers is an Asian-European
initiative, with offspring in the e-ASEM network ― the research network on the
Development of ICT skills, e-Learning and the culture of e-Learning in Lifelong
Learning ― under the ASEM Education and Research Hub for Lifelong Learning.
The aim of this White Paper is to explore the concepts of e-learning and lifelong learning in the context of Latvia, taking into account the relevant government policy, regulations and financing issues.