
    Infrastructures and Algorithms for Testable and Dependable Systems-on-a-Chip

    Every new semiconductor technology node provides further miniaturization and higher performance, increasing the number of advanced functions that electronic products can offer. Silicon area is now so cheap that industries can integrate, in a single chip usually referred to as a System-on-Chip (SoC), all the components and functions that historically were placed on a hardware board. Although adding such advanced functionality can benefit users, the manufacturing process is becoming finer and denser, making chips more susceptible to defects. Today’s very deep-submicron semiconductor technologies (0.13 micron and below) have reached susceptibility levels that put conventional semiconductor manufacturing at an impasse. Being able to rapidly develop, manufacture, test, diagnose and verify such complex new chips and products is crucial for the continued success of our economy at large. This trend is expected to continue for at least the next ten years, making possible the design and production of 100-million-transistor chips. To speed up the research, the National Technology Roadmap for Semiconductors identified in 1997 a number of major hurdles to be overcome, some of which are related to test and dependability. Test is one of the most critical tasks in the semiconductor production process, where Integrated Circuits (ICs) are tested several times, from wafer probing to the end of production test. Test is not only necessary to assure fault-free devices; it also plays a key role in analyzing defects in the manufacturing process. This last point has high relevance, since increasing time-to-market pressure on semiconductor fabrication often forces foundries to start volume production on a given semiconductor technology node before reaching the defect densities, and hence yield levels, traditionally obtained at that stage. The feedback derived from test is the only way to analyze and isolate many of the defects in today’s processes and to increase process yield. With the increasing need for high-quality electronic products, at each new physical assembly level, such as board and system assembly, test is used for debugging, diagnosing and repairing the sub-assemblies in their new environment. Similarly, increasing reliability, availability and serviceability requirements lead users of high-end products to perform periodic tests in the field throughout the full life cycle. To allow advancements in each of the above scaling trends, fundamental changes are expected to emerge in the different disciplines involved in realizing ICs, such as IC design, packaging and silicon process. These changes have a direct impact on test methods, tools and equipment. Conventional test equipment and methodologies will be inadequate to assure high quality levels. On-chip specialized blocks dedicated to test, usually referred to as Infrastructure IP (Intellectual Property), need to be developed and included in the new complex designs to assure that new chips will be adequately tested, diagnosed, measured, debugged and even, sometimes, repaired. In this thesis, some of the scaling trends in designing new complex SoCs will be analyzed one at a time, observing their implications on test and identifying the key hurdles and challenges to be addressed. The goal of the remainder of the thesis is the presentation of possible solutions. It is not sufficient to address just one of the challenges; all must be met at the same time to fulfill the market requirements.

    Yield Analysis on Embedded Memory with Redundancy


    Strategies for Optimising DRAM Repair

    Dynamic Random Access Memories (DRAM) are large, complex devices, prone to defects during manufacture. Yield is improved by the provision of redundant structures used to repair these defects. This redundancy is often implemented by the provision of excess memory capacity and programmable address logic allowing the replacement of faulty cells within the memory array. As the memory capacity of DRAM devices has increased, so has the complexity of their redundant structures, introducing increasingly complex restrictions and interdependencies upon the use of this redundant capacity. Currently, redundancy analysis algorithms that solve the problem of optimally allocating this redundant capacity must be manually customised for each new device. Compromises made to reduce this complexity, together with human error, reduce the efficacy of these algorithms. This thesis develops a methodology for automating the customisation of these redundancy analysis algorithms. Included are: a modelling language describing the redundant structures (including the restrictions and interdependencies placed upon their use); algorithms that manipulate this model to generate redundancy analysis algorithms; and methods for translating those algorithms into executable code. Finally, these concepts are used to develop a prototype software tool capable of generating redundancy analysis algorithms customised for a specified device.
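    To illustrate the allocation problem such algorithms solve, the sketch below (Python; not taken from the thesis, and all names are illustrative) applies a simple greedy "repair-most" heuristic to a single memory array with a fixed number of spare rows and spare columns. It deliberately ignores the device-specific restrictions and interdependencies that the thesis's modelling language is designed to capture, so it should be read only as a minimal baseline under those simplifying assumptions.

    from collections import Counter

    def allocate_spares(faults, spare_rows, spare_cols):
        """Greedy 'repair-most' allocation for a single array.

        faults: iterable of (row, col) coordinates of faulty cells.
        Returns (rows_replaced, cols_replaced), or None if the array cannot be
        repaired with the available spares."""
        faults = set(faults)
        used_rows, used_cols = [], []
        while faults:
            row_counts = Counter(r for r, _ in faults)
            col_counts = Counter(c for _, c in faults)
            best_row, row_hits = row_counts.most_common(1)[0]
            best_col, col_hits = col_counts.most_common(1)[0]
            # Repair whichever line (row or column) covers the most remaining faults,
            # falling back to the other dimension when its spares are exhausted.
            if (row_hits >= col_hits and len(used_rows) < spare_rows) or len(used_cols) == spare_cols:
                if len(used_rows) == spare_rows:
                    return None  # out of spares in both dimensions
                used_rows.append(best_row)
                faults = {(r, c) for r, c in faults if r != best_row}
            else:
                used_cols.append(best_col)
                faults = {(r, c) for r, c in faults if c != best_col}
        return used_rows, used_cols

    # Four faulty cells, one spare row and two spare columns: the spare row
    # replaces the row with two faults, spare columns cover the remainder.
    print(allocate_spares({(0, 1), (0, 5), (3, 1), (7, 2)}, spare_rows=1, spare_cols=2))

    A device-specific analyser must additionally respect segment boundaries, shared spares and similar constraints, which is precisely why the thesis generates the analysis algorithm from a model of the device rather than relying on hand-coded heuristics like the one above.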

    Suffolk Journal, vol. 81, no. 4, 10/4/2017


    Merging Data Sources to Predict Remaining Useful Life – An Automated Method to Identify Prognostic Parameters

    The ultimate goal of most prognostic systems is accurate prediction of the remaining useful life (RUL) of individual systems or components based on their use and performance. This class of prognostic algorithms is termed Degradation-Based, or Type III Prognostics. As equipment degrades, measured parameters of the system tend to change; these sensed measurements, or appropriate transformations thereof, may be used to characterize degradation. Traditionally, individual-based prognostic methods use a measure of degradation to make RUL estimates. Degradation measures may include sensed measurements, such as temperature or vibration level, or inferred measurements, such as model residuals or physics-based model predictions. Often, it is beneficial to combine several measures of degradation into a single parameter. Selection of an appropriate parameter is key for making useful individual-based RUL estimates, but methods to aid in this selection are absent in the literature. This dissertation introduces a set of metrics which characterize the suitability of a prognostic parameter. Parameter features such as trendability, monotonicity, and prognosability can be used to compare candidate prognostic parameters to determine which is most useful for individual-based prognosis. Trendability indicates the degree to which the parameters of a population of systems have the same underlying shape. Monotonicity characterizes the underlying positive or negative trend of the parameter. Finally, prognosability gives a measure of the variance in the critical failure value of a population of systems. By quantifying these features for a given parameter, the metrics can be used with any traditional optimization technique, such as Genetic Algorithms, to identify the optimal parameter for a given system. An appropriate parameter may be used with a General Path Model (GPM) approach to make RUL estimates for specific systems or components. A dynamic Bayesian updating methodology is introduced to incorporate prior information in the GPM methodology. The proposed methods are illustrated with two applications: first, to the simulated turbofan engine data provided in the 2008 Prognostics and Health Management Conference Prognostics Challenge and, second, to data collected in a laboratory milling equipment wear experiment. The automated system was shown to identify appropriate parameters in both situations and facilitate Type III prognostic model development.
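    As a rough illustration of how such suitability metrics can be computed, the sketch below (Python/NumPy) follows definitions commonly used in the prognostics literature for monotonicity, prognosability and trendability; the exact formulations in the dissertation may differ, and the example data are synthetic. Each metric scores a population of degradation histories between 0 and 1, so the three scores can be combined into a single fitness value for an optimisation routine such as a Genetic Algorithm when searching for the best parameter.

    import numpy as np

    def monotonicity(histories):
        """Mean absolute difference between the fractions of positive and negative
        steps in each unit's history; 1 = strictly monotonic, 0 = no net trend."""
        scores = []
        for y in histories:
            d = np.diff(y)
            scores.append(abs(np.sum(d > 0) - np.sum(d < 0)) / (len(y) - 1))
        return float(np.mean(scores))

    def prognosability(histories):
        """Spread of the end-of-life (failure) values relative to the range each
        unit traverses; values near 1 indicate a well-defined critical failure value."""
        finals = np.array([y[-1] for y in histories])
        ranges = np.array([abs(y[-1] - y[0]) for y in histories])
        return float(np.exp(-np.std(finals) / np.mean(ranges)))

    def trendability(histories, n_points=50):
        """Minimum pairwise correlation between units after resampling every history
        to a common length; 1 = all units share the same underlying shape."""
        grid = np.linspace(0.0, 1.0, n_points)
        resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(y)), y) for y in histories]
        corr = np.corrcoef(np.vstack(resampled))
        return float(np.min(np.abs(corr)))

    # Toy population: three noisy exponential degradation paths of different lengths.
    rng = np.random.default_rng(0)
    units = [np.exp(0.05 * np.arange(n)) + rng.normal(0.0, 0.05, n) for n in (80, 100, 120)]
    for name, metric in [("monotonicity", monotonicity),
                         ("trendability", trendability),
                         ("prognosability", prognosability)]:
        print(name, round(metric(units), 3))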

    Study of a Redundant Memory Repair Algorithm


    Integrating IVHM and Asset Design

    Integrated Vehicle Health Management (IVHM) describes a set of capabilities that enable effective and efficient maintenance and operation of the target vehicle. It covers the collection of data, the conduct of analysis, and support for the decision-making process for sustainment and operation. The design of IVHM systems endeavours to account for all causes of failure in a disciplined, systems-engineering manner. With industry striving to reduce through-life cost, IVHM is a powerful tool for giving forewarning of impending failure and hence control over the outcome. Benefits have been realised from this approach across a number of different sectors, but what hinders our ability to realise further benefit from this maturing technology is that IVHM is still treated as an add-on to the design of the asset, rather than as a sub-system in its own right, fully integrated with the asset design. Elevating and integrating IVHM in this way will enable architectures to be chosen that accommodate health-ready sub-systems from the supply chain, and design trade-offs to be made, to name but two major benefits. Barriers to IVHM being integrated with the asset design are examined in this paper. The paper presents progress in overcoming them and suggests potential solutions for those that remain. It addresses IVHM system design from a systems engineering perspective and describes its integration with the asset design within an industrial design process.