9 research outputs found

    3D Integrated Circuit Design and Packaging Techniques Considering Process Variation

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2014. Advisor: Taewhan Kim.
    As CMOS technology scales down, controlling variation in chip performance (i.e., speed and power) becomes highly important for improving chip yield. Increased performance variation demands additional design effort, such as larger guard-bands or a longer design turnaround time (TAT), which degrades both chip performance and economic profit. Meanwhile, through-silicon via (TSV) based 3D technology has been regarded as a promising solution to the problems of long interconnect wires and huge die sizes. Since a 3D IC is manufactured by stacking multiple dies fabricated on different wafers, integrating dies with far different process characteristics can enlarge the difference in device performance across the dies within a single chip. In this dissertation, we analyze the effect of on-package (within-chip) variation on 3D ICs and present effective methods to mitigate it. First, a parametric yield improvement method is presented to resolve the mismatch of dies with different process characteristics: comprehensive 3D integration algorithms that account for post-silicon tuning are developed for multi-layered 3D ICs. Then, we show that careful clock edge embedding in a 3D clock tree can greatly reduce the impact of on-package variation on 3D clock skew, and we propose a two-step solution to the problem of on-package variation-aware layer embedding in 3D clock tree synthesis. In summary, this dissertation presents an effective 3D integration method and a 3D clock tree synthesis algorithm for process-variation tolerant 3D IC designs.
    Contents: 1 Introduction; 2 Post-silicon Tuning Aware Die/Wafer Matching Algorithms for Enhancing Parametric Yield of 3D IC Design; 3 Edge Layer Embedding Algorithm for Mitigating On-Package Variation in 3D Clock Tree Synthesis; 4 Conclusion; Abstract in Korean.
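    The die-to-die matching idea from Chapter 2 can be illustrated with a toy model. The Python sketch below is hypothetical, not the dissertation's algorithm: each die is reduced to a single measured delay figure, dies destined for two tiers are paired by rank so that similar process corners end up in the same stack, and a single `tune` margin stands in for the post-silicon body-biasing headroom that the dissertation co-optimizes.

```python
# Toy sketch of rank-based die-to-die matching for two-tier 3D stacking.
# Assumptions: one delay figure summarizes each die, and body biasing can
# speed a die up by at most `tune` (in normalized delay units).

def match_dies(tier0_delays, tier1_delays, target_delay, tune=0.05):
    """Pair similarly ranked dies and keep stacks whose slowest tier
    can still be tuned to meet the target delay."""
    matched = []
    for d0, d1 in zip(sorted(tier0_delays), sorted(tier1_delays)):
        stack_delay = max(d0, d1)        # the slower tier limits the stack
        if stack_delay - tune <= target_delay:
            matched.append((d0, d1))
    return matched

if __name__ == "__main__":
    import random
    rng = random.Random(1)
    wafer0 = [rng.gauss(1.0, 0.08) for _ in range(50)]
    wafer1 = [rng.gauss(1.0, 0.08) for _ in range(50)]
    good = match_dies(wafer0, wafer1, target_delay=1.05)
    print(f"parametric yield: {len(good)}/50 stacks meet timing")
```

    Pairing by rank keeps fast dies with fast dies and slow with slow, so fewer stacks are limited by one badly mismatched tier; random pairing under the same tuning budget passes noticeably fewer stacks.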

    Integrating specification and test requirements as constraints in verification strategies for 2D and 3D analog and mixed signal designs

    Get PDF
    Analog and Mixed Signal (AMS) designs are essential components of today's modern Integrated Circuits (ICs), used at the interface between real-world signals and the digital world. They present, however, significant verification challenges. Out-of-specification failures in these systems have steadily increased and have reached record highs in recent years. Increasing design complexity, incomplete or wrong specifications (responsible for 47% of all non-functional ICs), as well as additional challenges faced when testing these systems, are obvious reasons. A particular example is the escalating impact of realistic test conditions with respect to physical (interface between the device under test (DUT) and the test instruments, input-signal conditions, input impedance, etc.), functional (noise, jitter), and environmental (temperature) constraints. Unfortunately, the impact of such constraints can result in a significant loss of performance and in design failure even if the design itself is flawless. Current industrial verification methodologies, each addressing specific verification challenges, have been shown to be useful for detecting and eliminating design failures. Nevertheless, decreasing first-pass silicon success rates illustrate the lack of cohesive, efficient techniques that would allow a predictable verification process leading to the highest possible confidence in the correctness of AMS designs. In this PhD thesis, we propose a constraint-driven verification methodology for monitoring the specifications of AMS designs. The methodology is based on the early insertion of the test(s) associated with each design specification. It exploits specific constraints introduced by these planned tests, as well as by the specifications themselves, as they are extracted and used during the verification process, thus reducing the risk of costly errors caused by incomplete, ambiguous, or missing details in the specification documents. To fully analyze the impact of these constraints on the overall AMS design behavior, we developed a two-phase algorithm that automatically integrates them into the AMS design behavioral model and performs the specification monitoring in a Matlab simulation environment. The effectiveness of this methodology is demonstrated on two-dimensional (2D) and three-dimensional (3D) ICs. Our results show that our approach can predict out-of-specification failures and corner cases that were not covered by previous verification methodologies. On one hand, we show that specifications satisfied without specification- and test-related constraints failed in the presence of these additional constraints. On the other hand, we show that some specifications may degrade, or cannot even be verified, without adding specific specification- and test-related constraints.
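    The constraint-driven monitoring idea can be sketched compactly. The snippet below is a minimal illustration in Python rather than the thesis's Matlab environment, and every spec name, bound, and derating figure is invented: each specification is checked both on its nominal simulated value and after applying a test-related constraint, modeled here as a simple derating, showing how a design that passes nominally can fail under realistic test conditions.

```python
# Minimal sketch of constraint-driven specification monitoring.
# Spec names, bounds, and derating figures are invented for illustration;
# the thesis performs this monitoring on behavioral models in Matlab.

from dataclasses import dataclass

@dataclass
class Spec:
    name: str
    lo: float          # lower bound from the specification document
    hi: float          # upper bound

def apply_test_constraints(nominal, derating):
    """Model test-related constraints (probe loading, noise, temperature)
    as a multiplicative derating of the nominal simulated value."""
    return nominal * derating

def monitor(specs, sim_results, deratings):
    for spec in specs:
        nominal = sim_results[spec.name]
        realistic = apply_test_constraints(nominal, deratings[spec.name])
        ok = spec.lo <= realistic <= spec.hi
        print(f"{spec.name}: nominal={nominal:.2f} "
              f"under-test={realistic:.2f} -> {'PASS' if ok else 'FAIL'}")

if __name__ == "__main__":
    specs = [Spec("gain_db", 38.0, 42.0), Spec("bandwidth_mhz", 95.0, 120.0)]
    sim = {"gain_db": 40.5, "bandwidth_mhz": 101.0}      # flawless nominal design
    derate = {"gain_db": 0.98, "bandwidth_mhz": 0.92}    # test-loading effects
    monitor(specs, sim, derate)   # gain still passes; bandwidth fails under test
```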

    Architecting Data Centers for High Efficiency and Low Latency

    Full text link
    Modern data centers, housing remarkably powerful computational capacity, are built at massive scale and consume a huge amount of energy. The energy consumption of data centers has mushroomed from virtually nothing to about three percent of the global electricity supply in the last decade, and will continue to grow. Unfortunately, a significant fraction of this energy is wasted due to the inefficiency of current data center architectures, and one of the key reasons behind this inefficiency is the stringent response latency requirements of the user-facing services hosted in these data centers, such as web search and social networks. To deliver such low response latency, data center operators often have to overprovision resources to handle peaks in user load and unexpected load spikes, resulting in low efficiency. This dissertation investigates data center architecture designs that reconcile high system efficiency and low response latency. To increase efficiency, we propose techniques that understand both microarchitectural-level resource sharing and system-level resource usage dynamics to enable highly efficient co-location of latency-critical services and low-priority batch workloads. We investigate resource sharing on real-system simultaneous multithreading (SMT) processors to enable SMT co-locations by precisely predicting the performance interference. We then leverage historical resource usage patterns to further optimize the task scheduling algorithm and data placement policy to improve the efficiency of workload co-locations. Moreover, we introduce methodologies to better manage response latency by automatically attributing the sources of tail latency to low-level architectural and system configurations, in both an offline load-testing environment and an online production environment. We design and develop a response latency evaluation framework with microsecond-level precision for data center applications, with which we construct statistical inference procedures to attribute the sources of tail latency. Finally, we present an approach that proactively enacts carefully designed causal-inference micro-experiments to diagnose the root causes of response latency anomalies and automatically correct them to reduce response latency.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144144/1/yunqi_1.pd
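    The tail-latency bookkeeping at the heart of the evaluation framework can be sketched in a few lines. The numbers below are invented (the dissertation measures real requests at microsecond precision); the sketch only shows why even constant interference from a co-located batch workload shifts the 99th percentile, the quantity a latency SLO constrains.

```python
# Toy p99 comparison: a latency-critical service running alone vs.
# co-located with batch work. Latency model and SLO value are invented.

import random

def p99(latencies):
    """99th-percentile latency via the nearest-rank method."""
    s = sorted(latencies)
    return s[min(len(s) - 1, int(0.99 * len(s)))]

def simulate(n, mean_us, interference_us=0.0, seed=0):
    """Exponential service times; co-located batch work is modeled
    as a constant additive slowdown per request."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_us) + interference_us
            for _ in range(n)]

if __name__ == "__main__":
    slo_us = 1000.0                      # invented 1 ms p99 SLO
    alone = simulate(100_000, mean_us=200.0)
    coloc = simulate(100_000, mean_us=200.0, interference_us=150.0)
    for name, lats in (("alone", alone), ("co-located", coloc)):
        verdict = "within" if p99(lats) <= slo_us else "violates"
        print(f"{name}: p99 = {p99(lats):.0f} us ({verdict} SLO)")
```

    A modest constant slowdown shifts the entire distribution, which is why safe co-location decisions hinge on predicted interference rather than on average load.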

    Dependable Embedded Systems

    Get PDF
    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, the focus of this book is to deal with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, processor variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization in order to achieve error resiliency in complex, future many-core systems.

    Quantifying and Coping with Parametric Variations in 3D-Stacked Microarchitectures

    No full text
    Variability in device characteristics, i.e., parametric variation, is an important problem for shrinking process technologies. Parametric variations manifest themselves as variations in performance and power consumption, as reduced reliability in the manufactured chips, and as low yield levels. Their implications for performance and yield are particularly profound in 3D architectures: a defect on even a single layer can render the entire stack useless. In this paper, we show that instead of suffering increased yield losses, we can actually exploit 3D technology to reduce yield losses by intelligently devising the architectures. We take advantage of layer-to-layer variations to reduce yield losses by splitting critical components among multiple layers. Our results indicate that our proposed method achieves a 30.6% lower yield loss rate compared to the same pipeline implemented on a 2D architecture.
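    The yield argument is easy to reproduce qualitatively with a toy Monte Carlo model; the variation magnitudes and the timing cutoff below are invented and are not the paper's experimental setup. Splitting a critical path evenly across two layers averages the independent die-to-die variation components, shrinking the delay spread and hence the yield loss.

```python
# Toy Monte Carlo comparing a 2D critical path with one split evenly
# across two stacked layers. Variation numbers and cutoff are invented.

import random

def yield_rate(trials=100_000, cutoff=1.12, seed=42):
    rng = random.Random(seed)
    ok_2d = ok_3d = 0
    for _ in range(trials):
        # Independent die-to-die variation component per layer.
        d2d_a = rng.gauss(0.0, 0.05)
        d2d_b = rng.gauss(0.0, 0.05)
        # 2D: the whole critical path sits on one die.
        delay_2d = 1.0 + d2d_a
        # 3D: half of the path on each layer, so the die-to-die
        # components average out.
        delay_3d = 1.0 + 0.5 * d2d_a + 0.5 * d2d_b
        ok_2d += delay_2d <= cutoff
        ok_3d += delay_3d <= cutoff
    return ok_2d / trials, ok_3d / trials

if __name__ == "__main__":
    y2d, y3d = yield_rate()
    print(f"2D yield {y2d:.3f} vs split-path 3D yield {y3d:.3f}")
```

    With these invented numbers the split path's delay standard deviation drops by a factor of sqrt(2), which is precisely the averaging effect the paper exploits architecturally.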

    Towards trustworthy computing on untrustworthy hardware

    Get PDF
    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, are expected to push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency-extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
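    The second framework's premise, learning the normal latency profile of an inherently variable element and flagging extensions, can be sketched with a simple statistical detector. The sketch below is an assumption-laden stand-in for the thesis's machine-learning characterization: latencies are treated as roughly Gaussian, and a window of recent accesses is flagged when its mean sits far above the trusted profile.

```python
# Minimal sketch of latency-extension detection for a variable-latency
# element (e.g., a DRAM access path). All numbers are invented; the thesis
# trains machine-learning models on measurements from a real DDR3 DRAM.

import random
import statistics

def characterize(samples):
    """Learn the 'normal' latency profile from trusted measurements."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_suspicious(window, mu, sigma, z=4.0):
    """Flag a window whose mean latency sits far above the profile,
    which may indicate an obscured latency-extension attack."""
    return statistics.mean(window) > mu + z * sigma / (len(window) ** 0.5)

if __name__ == "__main__":
    rng = random.Random(7)
    trusted = [rng.gauss(45.0, 3.0) for _ in range(10_000)]      # ns, benign
    mu, sigma = characterize(trusted)
    benign = [rng.gauss(45.0, 3.0) for _ in range(64)]
    attacked = [rng.gauss(45.0, 3.0) + 3.0 for _ in range(64)]   # stealthy +3 ns
    print("benign window flagged:  ", is_suspicious(benign, mu, sigma))
    print("attacked window flagged:", is_suspicious(attacked, mu, sigma))
```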

    Psr1p interacts with SUN/sad1p and EB1/mal3p to establish the bipolar spindle

    Get PDF
    Regular Abstracts - Sunday Poster Presentations: no. 382.
    During mitosis, interpolar microtubules from two spindle pole bodies (SPBs) interdigitate to create an antiparallel microtubule array that accommodates numerous regulatory proteins. Among these proteins, the kinesin-5 cut7p/Eg5 is the key player responsible for sliding apart antiparallel microtubules and thus helps establish the bipolar spindle. At the onset of mitosis, the two SPBs are adjacent to one another, with most microtubules running nearly parallel toward the nuclear envelope, creating an unfavorable microtubule configuration for the kinesin-5 kinesins. Therefore, how the cell organizes the antiparallel microtubule array in the first place at mitotic onset remains enigmatic. Here, we show that a novel protein, psr1p, localizes to the SPB and plays a key role in organizing the antiparallel microtubule array. The absence of psr1+ leads to a transient monopolar spindle and massive chromosome loss. Further functional characterization demonstrates that psr1p is recruited to the SPB through interaction with the conserved SUN protein sad1p, and that psr1p physically interacts with the conserved microtubule plus-tip protein mal3p/EB1. These results suggest a model in which psr1p serves as a linking protein between sad1p/SUN and mal3p/EB1, allowing microtubule plus ends to be coupled to the SPBs for organization of an antiparallel microtubule array. Thus, we conclude that psr1p is involved in organizing the antiparallel microtubule array at mitotic onset through interaction with SUN/sad1p and EB1/mal3p, thereby establishing the bipolar spindle.

    Removal of antagonistic spindle forces can rescue metaphase spindle length and reduce chromosome segregation defects

    Get PDF
    Regular Abstracts - Tuesday Poster Presentations: no. 1925.
    Metaphase describes the phase of mitosis in which chromosomes are attached and oriented on the bipolar spindle for subsequent segregation at anaphase. In diverse cell types, the metaphase spindle is maintained at a relatively constant length. Metaphase spindle length is proposed to be regulated by a balance of pushing and pulling forces generated by distinct sets of spindle microtubules and their interactions with motors and microtubule-associated proteins (MAPs). Spindle length appears important for chromosome segregation fidelity, as cells with metaphase spindles shorter or longer than normal, generated through deletion or inhibition of individual mitotic motors or MAPs, show chromosome segregation defects. To test the force-balance model of spindle length control and its effect on chromosome segregation, we applied fast microfluidic temperature control with live-cell imaging to monitor the effect of switching off different combinations of antagonistic forces in the fission yeast metaphase spindle. We show that the spindle midzone proteins kinesin-5 cut7p and the microtubule bundler ase1p contribute to outward pushing forces, and that the spindle kinetochore proteins kinesin-8 klp5/6p and dam1p contribute to inward pulling forces. Removing these proteins individually led to aberrant metaphase spindle length and chromosome segregation defects. Removing them in antagonistic combinations rescued the defective spindle length and, in some combinations, also partially rescued the chromosome segregation defects. Our results stress the importance of proper chromosome-to-microtubule attachment over spindle length regulation for proper chromosome segregation.