12 research outputs found

    Placement and Routing in 3D Integrated Circuits


    Analysis and Test of the Effects of Single Event Upsets Affecting the Configuration Memory of SRAM-based FPGAs

    SRAM-based FPGAs are increasingly relevant in a growing number of safety-critical application fields, ranging from automotive to aerospace. These fields are characterized by harsh radiation environments that can cause Single Event Upsets (SEUs) in digital devices. Such faults are particularly harmful to SRAM-based FPGA systems: not only can they temporarily affect the behaviour of the system by changing the contents of flip-flops or memories, but they can also permanently change the functionality implemented by the system itself by altering the content of the configuration memory. Designing safety-critical applications requires accurate methodologies to evaluate the system's sensitivity to SEUs as early as possible during the design process. Moreover, it is necessary to detect the occurrence of SEUs during the system lifetime. To this purpose, test patterns should be generated during the design process and then applied to the inputs of the system during its operation. In this thesis we propose a set of software tools that can be used by designers of SRAM-based FPGA safety-critical applications to assess the sensitivity of the system to SEUs and to generate test patterns for in-service testing. The main feature of these tools is that they implement a model of SEUs affecting the configuration bits controlling the logic and routing resources of an FPGA device, a model that has been shown to be much more accurate than the classical stuck-at and open/short models commonly used in the analysis of faults in digital devices. By taking this accurate fault model into account, the proposed tools are more accurate than the similar academic and commercial fault-analysis tools available today, which do not consider the specific features of FPGA technology. In particular, three tools have been designed and developed: (i) ASSESS (Accurate Simulator of SEUs affecting the configuration memory of SRAM-based FPGAs), a simulator of SEUs affecting the configuration memory of an SRAM-based FPGA system, for the early assessment of the sensitivity to SEUs; (ii) UA2TPG (Untestability Analyzer and Automatic Test Pattern Generator for SEUs affecting the configuration memory of SRAM-based FPGAs), a static analysis tool for identifying untestable SEUs and for automatically generating test patterns covering 100% of the testable SEUs for in-service testing; and (iii) GABES (Genetic Algorithm Based Environment for SEU testing in SRAM-FPGAs), a Genetic Algorithm-based environment for generating an optimized set of test patterns for in-service testing of SEUs. The proposed tools have been applied to circuits from the ITC'99 benchmark suite. The results of these experiments have been compared with those of similar experiments in which we considered the stuck-at fault model instead of the more accurate SEU model. This comparison verified that the proposed software tools are indeed more accurate than similar tools available today. In particular, comparing the results obtained with ASSESS against fault injection shows that the proposed fault simulator has an average error of 0.1% and a maximum error of 0.5%, whereas a stuck-at fault simulator yields an average error of 15.1% and a maximum error of 56.2% with respect to the fault injection experiments. Similarly, the comparison between the results obtained with UA2TPG under the accurate SEU model and the results obtained for stuck-at faults shows an average difference in untestability of 7.9%, with a maximum of 37.4%. Finally, the comparison between the fault coverage achieved by test patterns generated for the accurate SEU model and that achieved by test patterns designed for stuck-at faults shows that the former detect 100% of the testable faults, while the latter reach an average fault coverage of 78.9%, with a minimum of 54% and a maximum of 93.16%.
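    To make the difference between the two fault models concrete, the following minimal Python sketch (an illustration of the general idea only, not code from ASSESS; all names are hypothetical) shows how an SEU in a LUT's configuration memory rewrites the implemented logic function, whereas a stuck-at fault merely pins a signal to a constant:

        from itertools import product

        def lut_output(config_bits, inputs):
            """Evaluate a k-input LUT: the input vector selects one configuration bit."""
            index = sum(bit << i for i, bit in enumerate(inputs))
            return config_bits[index]

        def inject_seu(config_bits, upset_index):
            """Model an SEU as a single flipped configuration bit."""
            flipped = list(config_bits)
            flipped[upset_index] ^= 1
            return flipped

        # Example: a 2-input AND gate stored as the LUT configuration [0, 0, 0, 1].
        golden = [0, 0, 0, 1]
        faulty = inject_seu(golden, 3)  # this SEU turns the AND into a constant-0 function

        # Only the input pattern (1, 1) exposes this particular SEU, so it is the
        # test pattern an ATPG tool would have to generate for in-service testing.
        for pattern in product([0, 1], repeat=2):
            print(pattern, lut_output(golden, pattern), lut_output(faulty, pattern))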

    Fixed-outline bus-driven floorplanning.

    Jiang, Yan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 87-92). Abstracts in English and Chinese.
    Abstract --- p.i
    Acknowledgement --- p.iv
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Physical Design --- p.2
    Chapter 1.2 --- Floorplanning --- p.6
    Chapter 1.2.1 --- Floorplanning Objectives --- p.7
    Chapter 1.2.2 --- Common Approaches --- p.8
    Chapter 1.3 --- Motivations and Contributions --- p.14
    Chapter 1.4 --- Organization of the Thesis --- p.15
    Chapter 2 --- Literature Review on BDF --- p.17
    Chapter 2.1 --- Zero-Bend BDF --- p.17
    Chapter 2.1.1 --- BDF Using the Sequence-Pair Representation --- p.17
    Chapter 2.1.2 --- Using B*-Tree and Fast SA --- p.20
    Chapter 2.2 --- Two-Bend BDF --- p.22
    Chapter 2.3 --- TCG-Based Multi-Bend BDF --- p.25
    Chapter 2.3.1 --- Placement Constraints for Bus --- p.26
    Chapter 2.3.2 --- Bus Ordering --- p.28
    Chapter 2.4 --- Bus-Pin-Aware BDF --- p.30
    Chapter 2.5 --- Summary --- p.33
    Chapter 3 --- Fixed-Outline BDF --- p.35
    Chapter 3.1 --- Introduction --- p.35
    Chapter 3.2 --- Problem Formulation --- p.36
    Chapter 3.3 --- The Overview of Our Approach --- p.36
    Chapter 3.4 --- Partitioning --- p.37
    Chapter 3.4.1 --- The Overview of Partitioning --- p.38
    Chapter 3.4.2 --- Building a Hypergraph G --- p.39
    Chapter 3.5 --- Floorplanning with Bus Routing --- p.43
    Chapter 3.5.1 --- Find Bus Routes --- p.43
    Chapter 3.5.2 --- Realization of Bus Routes --- p.48
    Chapter 3.5.3 --- Details of the Annealing Process --- p.50
    Chapter 3.6 --- Handle Fixed-Outline Constraints --- p.52
    Chapter 3.7 --- Bus Layout --- p.52
    Chapter 3.8 --- Experimental Results --- p.56
    Chapter 3.9 --- Summary --- p.61
    Chapter 4 --- Fixed-Outline BDF with L-shape bus --- p.63
    Chapter 4.1 --- Introduction --- p.63
    Chapter 4.2 --- Problem Formulation --- p.64
    Chapter 4.3 --- Our Approach --- p.65
    Chapter 4.3.1 --- Bus Routability Checking --- p.67
    Chapter 4.3.2 --- Details of the Annealing Process --- p.79
    Chapter 4.4 --- Experimental Results --- p.79
    Chapter 4.5 --- Summary --- p.82
    Chapter 5 --- Conclusion --- p.85
    Bibliography --- p.9
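    The "Details of the Annealing Process" sections above refer to the standard simulated-annealing scheme that most floorplanners, including this one, are built on. The sketch below is a generic Python rendition of that scheme under stated assumptions, not the thesis' implementation: the move set, cost terms, and bus-routability checks are specific to the thesis, and every name here is illustrative. A fixed-outline constraint typically enters the cost function as a penalty for exceeding the outline.

        import math
        import random

        def anneal(initial, perturb, cost, t0=1.0, t_min=1e-4, alpha=0.95, moves=200):
            """Generic simulated annealing: accept uphill moves with probability exp(-delta/T)."""
            current, best = initial, initial
            t = t0
            while t > t_min:
                for _ in range(moves):
                    candidate = perturb(current)   # e.g. swap/rotate blocks, reroute a bus
                    delta = cost(candidate) - cost(current)
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        current = candidate
                        if cost(current) < cost(best):
                            best = current
                t *= alpha                         # geometric cooling schedule
            return best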

    Doctor of Philosophy

    Recent breakthroughs in silicon photonics technology are enabling the integration of optical devices into silicon-based semiconductor processes. Photonics technology enables high-speed, high-bandwidth, and high-fidelity communications at the chip scale, an important development in an increasingly communications-oriented semiconductor world. Significant developments in silicon photonic manufacturing and integration are also enabling investigations into applications beyond that of traditional telecom: sensing, filtering, signal processing, quantum technology, and even optical computing. In effect, we are now seeing a convergence of communications and computation, where the traditional roles of optics and microelectronics are becoming blurred. As the applications for opto-electronic integrated circuits (OEICs) are developed, and manufacturing capabilities expand, design support is necessary to fully exploit the potential of this optics technology. Such design support for moving beyond custom design to automated synthesis and optimization is not well developed. Scalability requires abstractions, which in turn enable and require the use of optimization algorithms and design methodology flows. Design automation represents an opportunity to take OEIC design to a larger scale, facilitating design-space exploration, and laying the foundation for current and future optical applications, thus fully realizing the potential of this technology. This dissertation proposes design automation for integrated optic system design. Using a building-block model for optical devices, we provide an EDA-inspired design flow and methodologies for optical design automation. Underlying these flows and methodologies are new supporting techniques in behavioral and physical synthesis, as well as device-resynthesis techniques for thermal-aware system integration. We also provide modeling for optical devices and determine optimization and constraint parameters that guide the automation techniques. Our techniques and methodologies are then applied to the design and optimization of optical circuits and devices. Experimental results are analyzed to evaluate their efficacy. We conclude with discussions on the contributions and limitations of the approaches in the context of optical design automation, and describe the tremendous opportunities for future research in design automation for integrated optics.

    Bus-driven floorplanning.

    Law Hoi Ying. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 101-106). Abstracts in English and Chinese.
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- VLSI Design Cycle --- p.2
    Chapter 1.2 --- Physical Design Cycle --- p.6
    Chapter 1.3 --- Floorplanning --- p.10
    Chapter 1.3.1 --- Floorplanning Objectives --- p.11
    Chapter 1.3.2 --- Common Approaches --- p.12
    Chapter 1.3.3 --- Interconnect-Driven Floorplanning --- p.14
    Chapter 1.4 --- Motivations and Contributions --- p.15
    Chapter 1.5 --- Organization of the Thesis --- p.17
    Chapter 2 --- Literature Review on 2D Floorplan Representations --- p.18
    Chapter 2.1 --- Types of Floorplans --- p.18
    Chapter 2.2 --- Floorplan Representations --- p.20
    Chapter 2.2.1 --- Slicing Floorplan --- p.21
    Chapter 2.2.2 --- Non-slicing Floorplan --- p.22
    Chapter 2.2.3 --- Mosaic Floorplan --- p.30
    Chapter 2.3 --- Summary --- p.35
    Chapter 3 --- Literature Review on 3D Floorplan Representations --- p.37
    Chapter 3.1 --- Introduction --- p.37
    Chapter 3.2 --- Problem Formulation --- p.38
    Chapter 3.3 --- Previous Work --- p.38
    Chapter 3.4 --- Summary --- p.42
    Chapter 4 --- Literature Review on Bus-Driven Floorplanning --- p.44
    Chapter 4.1 --- Problem Formulation --- p.44
    Chapter 4.2 --- Previous Work --- p.45
    Chapter 4.2.1 --- Abutment Constraint --- p.45
    Chapter 4.2.2 --- Alignment Constraint --- p.49
    Chapter 4.2.3 --- Bus-Driven Floorplanning --- p.52
    Chapter 4.3 --- Summary --- p.53
    Chapter 5 --- Multi-Bend Bus-Driven Floorplanning --- p.55
    Chapter 5.1 --- Introduction --- p.55
    Chapter 5.2 --- Problem Formulation --- p.56
    Chapter 5.3 --- Methodology --- p.57
    Chapter 5.3.1 --- Shape Validation --- p.58
    Chapter 5.3.2 --- Bus Ordering --- p.65
    Chapter 5.3.3 --- Floorplan Realization --- p.72
    Chapter 5.3.4 --- Simulated Annealing --- p.73
    Chapter 5.3.5 --- Soft Block Adjustment --- p.75
    Chapter 5.4 --- Experimental Results --- p.75
    Chapter 5.5 --- Summary --- p.77
    Chapter 6 --- Bus-Driven Floorplanning for 3D Chips --- p.80
    Chapter 6.1 --- Introduction --- p.80
    Chapter 6.2 --- Problem Formulation --- p.81
    Chapter 6.3 --- The Representation --- p.82
    Chapter 6.3.1 --- Overview --- p.82
    Chapter 6.3.2 --- Review of TCG --- p.83
    Chapter 6.3.3 --- Layered Transitive Closure Graph (LTCG) --- p.84
    Chapter 6.3.4 --- Aligning Blocks --- p.85
    Chapter 6.3.5 --- Solution Perturbation --- p.87
    Chapter 6.4 --- Simulated Annealing --- p.92
    Chapter 6.5 --- Soft Block Adjustment --- p.92
    Chapter 6.6 --- Experimental Results --- p.93
    Chapter 6.7 --- Summary --- p.94
    Chapter 6.8 --- Acknowledgement --- p.95
    Chapter 7 --- Conclusion --- p.99
    Bibliography --- p.10

    Intelligent approaches to VLSI routing

    Very Large Scale Integrated-circuit (VLSI) routing involves many large and complex problems, most of which have been shown to be NP-hard or NP-complete. As a result, conventional approaches, which have been successfully used to handle relatively small routing problems, are not suitable for tackling large routing problems because they lead to 'combinatorial explosion' in the search space. Hence, there is a need to explore more efficient routing approaches for incorporation into today's VLSI routing systems. This thesis strives to use intelligent approaches, including symbolic intelligence and computational intelligence, to solve three VLSI routing problems: Three-Dimensional (3-D) Shortest Path Connection, Switchbox Routing, and Constrained Via Minimization. The 3-D shortest path connection is a fundamental problem in VLSI routing. It aims to connect two terminals of a net that are distributed in a 3-D routing space, subject to technological constraints and performance requirements. Aiming at increasing computation speed and decreasing storage requirements, we present a new A* algorithm for the 3-D shortest path connection problem in this thesis. This new A* algorithm uses an economical representation and adopts a novel backtrace technique. It is shown that this algorithm is guaranteed to find a path if one exists, and the path found is the shortest one. In addition, its computation speed is fast, especially when the routed nets are sparse. The computational complexities of this A* algorithm in the best case and the worst case are O(ℓ) and O(ℓ³), respectively, where ℓ is the shortest path length between the two terminals. Most importantly, this A* algorithm is superior to other shortest path connection algorithms in that it is economical in terms of storage requirements, i.e., 1 bit/grid. The switchbox routing problem aims to connect terminals at regular intervals on the four sides of a rectangular routing region. From a computational point of view, the problem is NP-hard. Furthermore, it is extremely complicated, and as a consequence no existing algorithm can guarantee to find a solution even if one exists, no matter how high the complexity of the algorithm. Previous approaches to the switchbox routing problem can be divided into algorithmic approaches and knowledge-based approaches. The algorithmic approaches are efficient in computation time, but they are unsuccessful at achieving high routing completion rates, especially for dense and complicated switchbox routing problems. On the other hand, the knowledge-based approaches can achieve high routing completion rates, but they are not efficient in computation speed. In this thesis we present a hybrid approach to the switchbox routing problem. This hybrid approach is based on a new knowledge-based routing technique, namely synchronized routing, and combines several efficient algorithmic routing techniques. Experimental results show that it can achieve the high routing completion rate of the knowledge-based approaches and the high efficiency of the algorithmic approaches. The constrained via minimization is an important optimization problem in VLSI routing. Its objective is to minimize the number of vias introduced during VLSI routing. From a computational perspective, the constrained via minimization is NP-complete. Although elegant theoretical results have been obtained for the special case in which no more than three wire segments split at a via candidate, few approaches have been proposed for the general case in which more than three wire segments may split at a via candidate, and those approaches are only suitable for particular routing styles and are difficult or impossible to adjust to meet practical requirements. In this thesis we propose a new graph-theoretic model, namely the switching graph model, for the constrained via minimization problem. The switching graph model can represent both grid-based and gridless routing problems, and allows an arbitrary number of wire segments to split at a via candidate. On the basis of this model, we then present the first genetic algorithm for the constrained via minimization problem. This genetic algorithm can tackle various routing styles and be configured to meet practical constraints. Experimental results show that the genetic algorithm can find optimal solutions for most cases in reasonable time.
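    For orientation, here is a minimal Python sketch of A* search on a 3-D routing grid, reflecting the search-plus-backtrace structure described above. It is not the proposed algorithm: the thesis' economical 1-bit/grid encoding is the whole point there, while this sketch stores predecessors explicitly for clarity.

        import heapq

        def route_3d(grid, start, goal):
            """Shortest connection between two terminals; grid[z][y][x] == 0 means free."""
            def h(p):  # Manhattan distance: an admissible heuristic on a unit grid
                return sum(abs(a - b) for a, b in zip(p, goal))

            open_set = [(h(start), 0, start)]
            came_from, g_cost = {}, {start: 0}
            while open_set:
                _, g, node = heapq.heappop(open_set)
                if node == goal:
                    path = [node]
                    while node in came_from:       # backtrace to the source terminal
                        node = came_from[node]
                        path.append(node)
                    return path[::-1]
                x, y, z = node
                for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    nxt = (x + dx, y + dy, z + dz)
                    nx, ny, nz = nxt
                    if (0 <= nz < len(grid) and 0 <= ny < len(grid[0])
                            and 0 <= nx < len(grid[0][0]) and grid[nz][ny][nx] == 0
                            and g + 1 < g_cost.get(nxt, float("inf"))):
                        g_cost[nxt] = g + 1
                        came_from[nxt] = node
                        heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
            return None  # no connection exists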

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured in order to improve as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools; existing fault injection tools only partially cover the individual stages of the design flow, being usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing as much as possible the robustness evaluation effort. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883

    Design automation and analysis of three-dimensional integrated circuits

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 165-176). This dissertation concerns the design of circuits and systems for an emerging technology known as three-dimensional integration. By stacking individual components, dice, or whole wafers using a high-density electromechanical interconnect, three-dimensional integration can achieve scalability and performance exceeding that of conventional fabrication technologies. There are two main contributions of this thesis. The first is a computer-aided design flow for the digital components of a three-dimensional integrated circuit (3-D IC). This flow primarily consists of two software tools: PR3D, a placement and routing tool for custom 3-D ICs based on standard cells, and 3-D Magic, a tool for designing, editing, and testing physical layout characteristics of 3-D ICs. The second contribution of this thesis is a performance analysis of the digital components of 3-D ICs. We use the above tools to determine the extent to which 3-D integration can improve timing, energy, and thermal performance. In doing so, we verify the estimates of stochastic computational models for 3-D IC interconnects and find that the models predict the optimal 3-D wire length to within 20% accuracy. We expand upon this analysis by examining how 3-D technology factors affect the optimal wire length that can be obtained. Our ultimate analysis extends this work by directly considering timing and energy in 3-D ICs. In all cases we find that significant performance improvements are possible. In contrast, thermal performance is expected to worsen with the use of 3-D integration. We examine precisely how thermal behavior scales in 3-D integration and determine quantitatively how the temperature may be controlled during the circuit placement process. We also show how advanced packaging technologies may be leveraged to maintain acceptable die temperatures in 3-D ICs. Finally, we explore two issues for the future of 3-D integration. We determine how technology scaling impacts the effect of 3-D integration on circuit performance. We also consider how to improve the performance of digital components in a mixed-signal 3-D integrated circuit. We conclude with a look towards future 3-D IC design tools. by Shamik Das. Ph.D.
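    The thermal scaling argument can be made concrete with a back-of-the-envelope model: in a die stack, the power dissipated by each die must flow through every layer between it and the heat sink, so upper dies accumulate the thermal resistance below them. The Python sketch below is a simplified 1-D resistive model with made-up numbers, offered as an assumption-laden illustration rather than the dissertation's analysis:

        def layer_temperatures(power_per_die, r_layer, t_sink=45.0):
            """1-D resistive stack model: die 0 sits next to the heat sink.
            power_per_die in watts, r_layer in kelvin per watt."""
            temps, t = [], t_sink
            for i in range(len(power_per_die)):
                heat_through = sum(power_per_die[i:])  # power of die i and all dies above it
                t += r_layer * heat_through            # temperature rise across this interface
                temps.append(t)
            return temps

        # Three 10 W dies: the top die ends up 30 K hotter than the heat sink,
        # which is why 3-D placement must treat temperature as a first-class cost.
        print(layer_temperatures([10.0, 10.0, 10.0], r_layer=0.5))  # [60.0, 70.0, 75.0]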

    High level compilation for gate reconfigurable architectures

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 205-215). A continuing exponential increase in the number of programmable elements is turning the management of gate-reconfigurable architectures as "glue logic" into an intractable problem; it is past time to raise this abstraction level. The physical hardware in gate-reconfigurable architectures is all low level (individual wires, bit-level functions, and single-bit registers), hence one should look to the fetch-decode-execute machinery of traditional computers for higher-level abstractions. Ordinary computers have machine-level architectural mechanisms that interpret instructions, which are generated by a high-level compiler. Efficiently moving up to the next abstraction level requires leveraging these mechanisms without introducing the overhead of machine-level interpretation. In this dissertation, I solve this fundamental problem by specializing architectural mechanisms with respect to input programs. This solution is the key to efficient compilation of high-level programs to gate-reconfigurable architectures. My approach to specialization includes several novel techniques. I develop, with others, extensive bitwidth analyses that apply to registers, pointers, and arrays. I use pointer analysis and memory disambiguation to target devices with blocks of embedded memory. My approach to memory parallelization generates a spatial hierarchy that enables easier-to-synthesize logic state machines with smaller circuits and no long wires. My space-time scheduling approach integrates the techniques of high-level synthesis with the static routing concepts developed for single-chip multiprocessors. Using DeepC, a prototype compiler demonstrating my thesis, I compile a new benchmark suite to Xilinx Virtex FPGAs. The resulting performance is comparable to a custom MIPS processor, with smaller area (40 percent on average), higher evaluation speeds (2.4x), and lower energy (18x) and energy-delay (45x). Specialization of advanced mechanisms results in additional speedup, scaling with hardware area, at the expense of power. For comparison, I also target IBM's standard-cell SA-27E process and the RAW microprocessor. Results include a sensitivity analysis to the different mechanisms specialized and a grand comparison between alternate targets. by Jonathan William Babb. Ph.D.
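    As a toy illustration of the bitwidth-analysis idea (not DeepC's actual analysis), value-range propagation bounds each variable to an interval and allocates only the bits that the interval requires:

        def bits_for_unsigned_range(lo, hi):
            """Bits needed to hold any unsigned value in [lo, hi] (assumes 0 <= lo <= hi)."""
            assert 0 <= lo <= hi
            return max(hi.bit_length(), 1)

        # A loop counter known to satisfy 0 <= i < 640 fits in 10 bits, so the
        # synthesized register can shrink from a default 32 bits down to 10.
        print(bits_for_unsigned_range(0, 639))  # 10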