
    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured against the IEC 61508 functional safety standard) and tight power and performance constraints.

    Block-level test scheduling under power dissipation constraints

    As device technologies such as VLSI and Multichip Module (MCM) become mature, and larger and denser memory ICs are implemented for high-performance digital systems, power dissipation becomes a critical factor that can no longer be ignored, either in normal operation of the system or under test conditions. One of the major considerations in test scheduling is the fact that the heat dissipated during test application is significantly higher than during normal operation (sometimes 100-200% higher). Test scheduling is strongly related to test concurrency, a design property which strongly impacts testability and power dissipation. To satisfy high fault-coverage goals with reduced test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel to the greatest extent possible. Some theoretical analysis of this problem has been carried out, but only at the IC level. The problem was basically described as compatible test clustering, where the compatibility among tests was given by test resource and power dissipation conflicts at the same time. From an implementation point of view, this problem was identified as NP-complete. In this thesis, an efficient scheme for overlaying the block-tests, called the extended tree growing technique, is proposed together with classical scheduling algorithms to search for power-constrained block-test scheduling (PTS) profiles in polynomial time. Classical algorithms like list-based scheduling and distribution-graph-based scheduling are employed to tackle the PTS problem at a high level. This approach exploits test parallelism under power constraints by overlaying the block-test intervals of compatible subcircuits to test as many of them as possible concurrently, so that the maximum accumulated power dissipation is balanced and does not exceed the given limit. The test scheduling discipline assumed here is partitioned testing with run to completion. A constant additive model is employed for power dissipation analysis and estimation throughout the algorithm.
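    The following is a minimal sketch (in Python, not taken from the thesis) of list-based block-test scheduling under the constant additive power model described above: concurrently running tests contribute the sum of their individual power figures, and a pending test is started only while the accumulated power stays under the given limit. Test-resource compatibility conflicts are ignored for brevity, and all names and figures are illustrative.

```python
import heapq

def schedule_tests(tests, power_limit):
    """tests: list of (name, duration, power) block-tests.

    Greedy list scheduling under a power ceiling: at each event time,
    start every pending test that still fits under the budget. Tests run
    to completion (partitioned testing, no preemption).
    """
    assert all(p <= power_limit for _, _, p in tests)
    pending = sorted(tests, key=lambda t: -t[2])  # try high-power tests first
    running = []                                  # min-heap of (finish, power)
    time, power_now, schedule = 0.0, 0.0, []

    while pending or running:
        # Start whatever fits under the power limit right now.
        leftovers = []
        for name, dur, pwr in pending:
            if power_now + pwr <= power_limit:
                heapq.heappush(running, (time + dur, pwr))
                power_now += pwr
                schedule.append((time, name))
            else:
                leftovers.append((name, dur, pwr))
        pending = leftovers
        # Advance time to the next completion and release its power.
        finish, pwr = heapq.heappop(running)
        time, power_now = finish, power_now - pwr
        while running and running[0][0] == finish:
            power_now -= heapq.heappop(running)[1]

    return time, schedule

# Example: four block-tests scheduled under a ceiling of 50 power units.
tests = [("cpu", 4, 30), ("ram", 3, 25), ("rom", 2, 10), ("dsp", 5, 20)]
print(schedule_tests(tests, power_limit=50))  # makespan 7 vs. 14 if serial
```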

    Reliable Design of Three-Dimensional Integrated Circuits


    Test Strategies for Low Power Devices

    Ultra-low-power devices are being developed for embedded applications in biomedical electronics, wireless sensor networks, environment monitoring and protection, etc. The testing of these low-cost, low-power devices is a daunting task. Depending on the target application, there are stringent guidelines on the number of defective parts per million shipped devices. At the same time, since such devices are cost-sensitive, test cost is a major consideration. Since system-level power-management techniques are employed in these devices, test generation must be power-management-aware to avoid stressing the power distribution infrastructure in the test mode. Structural test techniques such as scan test, with or without compression, can result in excessive heat dissipation during testing and damage the package. False failures may result from the electrical and thermal stressing of the device in the test mode of operation, leading to yield loss. This paper considers different aspects of testing low-power devices and some new techniques to address these problems. Design, Automation and Test in Europe (DATE '08), 10-14 March 2008, Munich, Germany.

    Power constrained test scheduling in system-on-chip design

    With the development of VLSI technologies, especially the advent of deep sub-micron semiconductor process technologies, power dissipation has become a critical factor that cannot be ignored, either in normal operation or in the test mode of digital systems. Test scheduling has to take into consideration both test concurrency and power dissipation constraints. To satisfy high fault-coverage goals with minimum test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel as much as possible. The main objective of this thesis is to address the test-scheduling problem faced by SoC designers at the system level. Through the analysis of several existing scheduling approaches, we broaden the basis on which current approaches minimize test application time and propose an efficient, integrated technique for the power-constrained test scheduling of SoCs. The proposed merging approach is based on a tree growing technique and can be used to overlay the block-test sessions in order to further reduce test application time; a simplified sketch of this merging step follows below. A number of experiments, based on academic benchmarks and industrial designs, have been carried out to demonstrate the usefulness and efficiency of the proposed approaches.
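    A much-reduced, hypothetical stand-in for the tree-growing merge (not the thesis's implementation): two block-test sessions can be overlaid when their summed power fits under the limit, so the merged session takes max(t1, t2) instead of t1 + t2; the greedy loop below repeatedly merges the pair with the largest time saving.

```python
from itertools import combinations

def merge_sessions(sessions, power_limit):
    """sessions: list of (time, power) block-test sessions.

    Repeatedly overlays the pair of sessions with the largest time saving
    while their combined power stays under the limit; returns the merged
    sessions, whose times sum to the total test application time.
    """
    sessions = list(sessions)
    while True:
        best = None  # (saving, i, j)
        for i, j in combinations(range(len(sessions)), 2):
            (t1, p1), (t2, p2) = sessions[i], sessions[j]
            if p1 + p2 <= power_limit:
                saving = min(t1, t2)  # = t1 + t2 - max(t1, t2)
                if best is None or saving > best[0]:
                    best = (saving, i, j)
        if best is None:
            return sessions
        _, i, j = best
        (t1, p1), (t2, p2) = sessions[i], sessions[j]
        sessions.pop(j); sessions.pop(i)          # j > i, so pop j first
        sessions.append((max(t1, t2), p1 + p2))   # overlaid session

# Example: four single-test sessions merged under a ceiling of 50.
print(merge_sessions([(4, 30), (3, 25), (2, 10), (5, 20)], 50))
```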

    Network-on-Chip

    Addresses the challenges associated with System-on-Chip integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency and explores the Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing the integration of a large number of cores on a single System-on-Chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. The text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, and its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Directions of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.

    Optimization of Aggregators Energy Resources considering Local Markets and Electric Vehicle Penetration

    The electrical sector has been evolving: new methodologies are emerging to deal with the high penetration of distributed energy resources (DER), mainly electric vehicles (EVs), and energy resource management has become increasingly prominent thanks to ongoing technological advances, mainly in the context of smart grids. Management matters all the more because of the uncertainty inherent in this type of resource. To solve problems involving such variability, methods based on computational intelligence (CI) are becoming the most suitable because of their easy implementation and low computational effort; more precisely, for the case treated in this thesis, evolutionary computation (EC) algorithms, which try to mimic behavior observed in nature. Unlike deterministic methods, EC is tolerant of uncertainty and is thus suitable for solving problems related to energy systems, which are usually of high dimension, with a growing number of variables and restrictions. Here CI allows obtaining a near-optimal solution in acceptable computational time with low memory requirements. The main objective of this work is to propose a model for the energy resource scheduling of the dedicated resources in the intraday, hour-ahead context, starting from the scheduling done for the day ahead, that is, the 24 hours of the next day. This scheduling is done by each aggregator (five in total) through metaheuristics, with the objective of minimizing costs or maximizing profits. The aggregators operate in a smart city with a 13-bus distribution network with high penetration of DER, mainly renewable energy and EVs (2,000 EVs are considered in the simulations). To model the uncertainty associated with DER and market prices, several scenarios are generated through Monte Carlo simulation using the probability distribution functions of the forecast errors, namely the normal distribution for the day-ahead stage; for intraday uncertainty, multiple scenarios are derived from the highest-probability day-ahead scenario. Local electricity markets are also used as a mechanism to satisfy the energy balance equation, with each aggregator selling its excess energy or buying more to meet its demand. Several state-of-the-art metaheuristics are used for this scheduling, namely Differential Evolution (DE), Hybrid-Adaptive DE with Decay function (HyDE-DF), DE with Estimation of Distribution Algorithm (DEEDA), Cellular Univariate Marginal Distribution Algorithm with Normal-Cauchy Distribution (CUMDANCauchy++), and Hill Climbing to Ring Cellular Encode-Decode UMDA (HC2RCEDUMDA). Results show that the proposed model is effective for the multiple aggregators, with cost variations mostly below 5% with respect to the day-ahead schedule, except for the EV aggregator. A Wilcoxon test is also applied to compare the performance of the CUMDANCauchy++ algorithm with the remaining metaheuristics; CUMDANCauchy++ shows competitive results, beating all algorithms for all aggregators except DEEDA, which presents similar results. Finally, a risk-aversion strategy is implemented for an aggregator in the day-ahead context to obtain a safer and more robust solution; results show an increase of nearly 4% in day-ahead cost but a reduction of up to 14% in worst-scenario cost.
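    As an illustration of the kind of metaheuristic named above, here is a minimal, self-contained sketch of classical Differential Evolution (DE/rand/1/bin). The aggregator cost function is a hypothetical stand-in (an energy price term plus a quadratic penalty for deviating from a target schedule), not the thesis's model; all parameter values are illustrative.

```python
import random

def differential_evolution(cost, dim, bounds, pop=30, F=0.5, CR=0.9, gens=200):
    """Classical DE/rand/1/bin with box constraints on every variable."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fs = [cost(x) for x in xs]
    for _ in range(gens):
        for i in range(pop):
            # Three distinct donors, none equal to the target individual.
            a, b, c = random.sample([k for k in range(pop) if k != i], 3)
            jrand = random.randrange(dim)  # guarantees one mutated coordinate
            trial = [min(hi, max(lo, xs[a][j] + F * (xs[b][j] - xs[c][j])))
                     if (j == jrand or random.random() < CR) else xs[i][j]
                     for j in range(dim)]
            f = cost(trial)
            if f <= fs[i]:                 # greedy one-to-one selection
                xs[i], fs[i] = trial, f
    best = min(range(pop), key=fs.__getitem__)
    return xs[best], fs[best]

# Hypothetical 4-period aggregator objective (illustrative only):
prices = [0.08, 0.12, 0.20, 0.15]   # energy price per period
target = [10.0, 12.0, 8.0, 9.0]     # target schedule per period
cost = lambda x: (sum(p * xi for p, xi in zip(prices, x))
                  + sum((xi - ti) ** 2 for xi, ti in zip(x, target)))
print(differential_evolution(cost, dim=4, bounds=(0.0, 20.0)))
```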

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely on the continued functioning of many electronic devices for our everyday welfare; these usually embed integrated circuits that keep becoming cheaper and smaller while offering improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, where they need to be tested thoroughly to comply with reliability standards, in particular ISO 26262, the functional safety standard for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The approaches proposed in this thesis are the following:
    1. Embedded Memory Diagnosis: Memories are dense and complex circuits which are susceptible to design and manufacturing errors, hence it is important to understand the occurrence of faults in the memory array. In practice, the logical and physical array representations differ because of design optimizations that enhance the device, namely scrambling. This part proposes an accurate memory diagnosis based on a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault-model hypothesis. Several SRAM failing syndromes, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics, were analyzed as case studies. The tool displayed the defects virtually, and the results were confirmed by real photos taken through a microscope.
    2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional: the former usually rely on embedded test modules targeting manufacturing errors and are only effective before shipping the component to the client, whereas the latter can be applied in mission mode with minimal impact on performance but are penalized by high generation times. Functional test patterns can, however, serve different goals in functional mission mode. Part III of this thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, targeting different test purposes:
    a. Functional Stress Patterns: suitable for maximizing functional stress during operational-life tests and burn-in screening, for an optimal characterization of device reliability.
    b. Functional Power-Hungry Patterns: suitable for determining the functional peak power, used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage.
    c. Software-Based Self-Test (SBST) Patterns: combine the potential of structural patterns with functional ones, allowing periodic execution during the mission. In addition, an external hardware module communicating with the devised SBST was proposed; it increases fault coverage by 3% by testing critical hardly functionally testable faults not covered by conventional SBST patterns.
    An automatic functional test pattern generator exploiting an evolutionary algorithm that maximizes metrics related to stress, power, and fault coverage was employed in the above approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics, an 8051-based SoC and a 32-bit Power Architecture SoC; results show that generation time was reduced by up to 75% with respect to older methodologies while significantly increasing the desired metrics.
    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for the fast parallel computation used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not provide the design description code, for reasons of content secrecy; commercial fault injectors requiring the GPGPU model are therefore unfeasible, leaving radiation tests as the only available, and costly, resource. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into memory elements of a real GPGPU. It exploits a software debugger tool combined with the C-CUDA grammar to wisely determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating the redundancy mechanisms they may embed. The effectiveness of the tool was evaluated on two robust applications, redundant parallel matrix multiplication and a floating-point Fast Fourier Transform (a simplified bit-flip sketch follows below).
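    A minimal illustration of software-implemented bit-flip injection (plain Python, far simpler than the debugger-based C-CUDA injector described above): one bit of an IEEE-754 double is flipped in an input matrix, and a recompute-and-compare check stands in for the redundancy mechanisms mentioned in the abstract. All names and values are illustrative.

```python
import random
import struct

def flip_bit(value, bit):
    """Flip one bit of an IEEE-754 double and return the corrupted value."""
    (raw,) = struct.unpack("<Q", struct.pack("<d", value))
    return struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))[0]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Tiny injection campaign: corrupt one random element of A, then check
# whether the duplicated computation exposes a mismatch against the
# fault-free (golden) result.
random.seed(0)                    # reproducible campaign
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
golden = matmul(A, B)             # fault-free reference result
i, j = random.randrange(2), random.randrange(2)
A[i][j] = flip_bit(A[i][j], random.randrange(64))
detected = matmul(A, B) != golden
print(f"bit-flip at A[{i}][{j}] -> detected: {detected}")
```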

    Self-Test Mechanisms for Automotive Multi-Processor System-on-Chips

    The abstract is in the attachment.