
    Addressing Manufacturing Challenges in NoC-based ULSI Designs

    Hernández Luz, C. (2012). Addressing Manufacturing Challenges in NoC-based ULSI Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1669

    Damage tolerant design of additively manufactured metallic components subjected to cyclic loading: State of the art and challenges

    Undoubtedly, a better understanding and the further development of approaches for damage tolerant component design of AM parts are among the most significant challenges currently facing the use of these new technologies. This article presents a thorough overview of the workshop discussions. It aims to provide a review of the parameters affecting the damage tolerance of parts produced by additive manufacturing (AM parts, for short), with special emphasis on the process parameters intrinsic to the AM technologies, the resulting defects and the residual stresses. Based on these aspects, basic concepts are reviewed and critically discussed specifically for AM materials:
    - Criteria for damage tolerant component design;
    - Criteria for the determination of fatigue and fracture properties;
    - Strategies for the determination of the fatigue life as a function of different manufacturing conditions;
    - Methods for the quantitative characterization of microstructure and defects;
    - Methods for the determination of residual stresses;
    - Effect of the defects and the residual stresses on the fatigue life and behaviour.
    We see that many of the classic concepts need to be expanded in order to fit the particular microstructure (grain size and shape, crystal texture) and defect distribution (spatial arrangement, size, shape, amount) present in AM parts (in particular those produced by laser powder bed fusion). For instance, 3D characterization of defects becomes essential, since defect shapes in AM are diverse and affect the fatigue life differently than in conventionally produced components. Such new concepts have immediate consequences for how one should tackle the determination of the fatigue life of AM parts; for instance, since a classification of defects and a quantification of the tolerable shapes and sizes are still missing, a new strategy must be defined whereby theoretical calculations (e.g. FEM) determine the maximum tolerable defect size, and non-destructive testing (NDT) techniques are required to detect whether such defects are indeed present in the component. Such examples show how component design, damage and failure criteria, and characterization (and/or NDT) become fully interlinked for AM parts. We conclude that the homogenization of these fields represents the current challenge for the engineer and the materials scientist.
    Zerbst, Uwe; Bruno, Giovanni; Buffiere, Jean-Yves; Wegener, Thomas; Niendorf, Thomas; Wu, Tao; Zhang, Xiang; Kashaev, Nikolai; Meneghetti, Giovanni; Hrabe, Nik; Madia, Mauro; Werner, Tiago; Hilgenberg, Kai; Koukolíková, Martina; Procházka, Radek; Džugan, Jan; Möller, Benjamin; Beretta, Stefano; Evans, Alexander; Wagener, Rainer; Schnabel, Kai
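    As a rough illustration of the kind of calculation the abstract alludes to (analytical or FEM-based estimates of the maximum tolerable defect size), the minimal sketch below rearranges the linear-elastic fracture mechanics relation ΔK = Y·Δσ·√(πa) to give the defect size at which the applied stress range reaches the fatigue threshold. The threshold, stress range, and geometry factor are assumed for illustration and are not values from the paper.

```python
import math

def max_tolerable_defect_size(delta_k_th: float,
                              stress_range: float,
                              geometry_factor: float = 0.65) -> float:
    """Largest defect depth (m) that stays below the fatigue crack growth
    threshold, from delta_K = Y * delta_sigma * sqrt(pi * a) solved for a.
    delta_k_th in MPa*sqrt(m), stress_range in MPa; all inputs illustrative."""
    return (delta_k_th / (geometry_factor * stress_range)) ** 2 / math.pi

# Assumed numbers: threshold 3 MPa*sqrt(m), applied stress range 200 MPa.
a_max = max_tolerable_defect_size(3.0, 200.0)
print(f"maximum tolerable defect size ~ {a_max * 1e6:.0f} micrometres")
```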

    Fault- and Yield-Aware On-Chip Memory Design and Management

    Ever decreasing device size causes more frequent hard faults, which become a serious burden for processor design and yield management. This problem is particularly pronounced in the on-chip memory, which consumes up to 70% of a processor's total chip area. Traditional circuit-level techniques, such as redundancy and error correction codes, become less effective in error-prevalent environments because of their large area overhead. In this work, we suggest an architectural solution to building reliable on-chip memory in the future processor environment. Our approach has two parts: a design framework and architectural techniques for on-chip memory structures. The design framework provides important architectural evaluation metrics such as yield, area, and performance based on low-level defect and process-variation parameters, so processor architects can quickly evaluate their designs in terms of yield, area, and performance. With the framework, we develop architectural yield-enhancement solutions for on-chip memory structures including the L1 cache, L2 cache and directory memory. Our proposed solutions greatly improve yield with negligible area and performance overhead. Furthermore, we develop a decoupled yield model of compute cores and L2 caches in CMPs, which shows that there will be many more L2 caches than compute cores in a chip. We propose efficient utilization techniques for these excess caches. Evaluation results show that excess caches significantly improve the overall performance of CMPs.
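    To illustrate the kind of yield metric such a design framework might report, the minimal sketch below evaluates a memory array protected by spare rows under a simple binomial defect model: the array yields as long as no more rows are defective than there are spares. The array size and per-row defect probability are assumptions for illustration, not parameters from this work.

```python
from math import comb

def memory_yield(num_rows: int, spare_rows: int, p_row_fail: float) -> float:
    """Probability that at most `spare_rows` of the (num_rows + spare_rows)
    physical rows are defective, i.e. the spares can repair the array."""
    total = num_rows + spare_rows
    return sum(comb(total, k) * p_row_fail ** k * (1 - p_row_fail) ** (total - k)
               for k in range(spare_rows + 1))

# Assumed example: 1024-row SRAM array with a 0.1% per-row defect probability.
print(f"no spares: {memory_yield(1024, 0, 1e-3):.3f}")   # roughly 0.36
print(f"8 spares : {memory_yield(1024, 8, 1e-3):.5f}")   # very close to 1.0
```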

    Quantum Computing

    Quantum mechanics---the theory describing the fundamental workings of nature---is famously counterintuitive: it predicts that a particle can be in two places at the same time, and that two remote particles can be inextricably and instantaneously linked. These predictions have been the topic of intense metaphysical debate ever since the theory's inception early last century. However, supreme predictive power combined with direct experimental observation of some of these unusual phenomena leaves little doubt as to its fundamental correctness. In fact, without quantum mechanics we could not explain the workings of a laser, nor indeed how a fridge magnet operates. Over the last several decades quantum information science has emerged to seek answers to the question: can we gain some advantage by storing, transmitting and processing information encoded in systems that exhibit these unique quantum properties? Today it is understood that the answer is yes. Many research groups around the world are working towards one of the most ambitious goals humankind has ever embarked upon: a quantum computer that promises to exponentially improve computational power for particular tasks. A number of physical systems, spanning much of modern physics, are being developed for this task---ranging from single particles of light to superconducting circuits---and it is not yet clear which, if any, will ultimately prove successful. Here we describe the latest developments for each of the leading approaches and explain what the major challenges are for the future.
    Comment: 26 pages, 7 figures, 291 references. Early draft of Nature 464, 45-53 (4 March 2010). The published version is more up-to-date and has several corrections, but is half the length with far fewer references.

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10^-6, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10^-6. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing. The model is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
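    The sketch below is not the NAND multiplexing model derived in this work; it is only a first-order illustration of why redundancy becomes necessary at device failure probabilities around 3 × 10^-6: it compares the yield of a monolithic design against one partitioned into triplicated blocks with ideal majority voters. The device count and block size are assumed for illustration.

```python
def yield_no_redundancy(num_devices: int, p_fail: float) -> float:
    """Probability that every single device works (no redundancy at all)."""
    return (1.0 - p_fail) ** num_devices

def yield_tmr_blocks(num_devices: int, p_fail: float, block_size: int) -> float:
    """First-order yield of a design split into triplicated blocks with ideal
    majority voters: a block only fails if two or more of its copies fail."""
    p_copy = 1.0 - (1.0 - p_fail) ** block_size                 # one copy fails
    p_block = p_copy ** 3 + 3 * p_copy ** 2 * (1.0 - p_copy)    # >= 2 copies fail
    return (1.0 - p_block) ** (num_devices // block_size)

p = 3e-6            # per-device failure probability quoted in the abstract
n = 10_000_000      # assumed device count, for illustration only
print(f"no redundancy       : {yield_no_redundancy(n, p):.2e}")    # essentially zero
print(f"TMR, 1000-dev blocks: {yield_tmr_blocks(n, p, 1000):.2f}")  # ~0.76
```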

    System-on-Chip design for reliability


    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications in the future at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
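    As a small, hypothetical example of ML acting as a surrogate model in this space (not a method taken from the paper), the sketch below fits a regressor that predicts a per-die timing metric from synthetic process-variation features; in practice the features would come from measurement or simulation data at a given abstraction level.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic, made-up data: three per-die process parameters (e.g. threshold
# voltage shift, channel length shift, temperature) -> critical path delay.
n_dies = 2000
X = rng.normal(size=(n_dies, 3))
y = 1.0 + 0.3 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=n_dies)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out dies: {model.score(X_test, y_test):.2f}")
```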

    Evolution of Gold Nanoparticles in Radiation Environments

    Gold nanoparticles are being explored for several applications in radiation environments, including uses in cancer radiotherapy treatments and advanced satellite or detector applications. In these applications, nanoparticle interactions with energetic neutrons, photons, and charged particles can cause structural damage ranging from single atom displacement events to bulk morphological changes. Due to the diminutive length scales and prodigious surface-to-volume ratios of gold nanoparticles, radiation damage effects are typically dominated by sputtering and surface interactions and can vary drastically from bulk behavior and classical models. Here, we report on contemporary experimental and computational modeling efforts that have contributed to the current understanding of how ionizing radiation environments affect the structure and properties of gold nanoparticles. The future potential for elucidating the active mechanisms in gold nanoparticles exposed to ionizing radiation, and the subsequent ability to predictively model the radiation stability and ion-beam modification parameters, will be discussed.
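    As a back-of-the-envelope illustration of two quantities the abstract emphasizes, the sketch below computes the surface-to-volume ratio of a spherical nanoparticle and a modified Kinchin-Pease (NRT) estimate of the atoms displaced by a single recoil. The 40 eV displacement threshold is an assumed placeholder, not a value taken from the review.

```python
def surface_to_volume(radius_nm: float) -> float:
    """Surface-to-volume ratio of a sphere, 3/r, in units of 1/nm."""
    return 3.0 / radius_nm

def nrt_displacements(damage_energy_ev: float, threshold_ev: float = 40.0) -> float:
    """Modified Kinchin-Pease (NRT) estimate of displaced atoms per recoil:
    N_d = 0.8 * T_dam / (2 * E_d). The threshold energy here is an assumption."""
    return 0.8 * damage_energy_ev / (2.0 * threshold_ev)

print(f"S/V of a 5 nm particle      : {surface_to_volume(5.0):.2f} nm^-1")
print(f"displacements, 10 keV recoil: {nrt_displacements(10_000):.0f}")
```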