
    Detection and Diagnosis of Out-of-Specification Failures in Mixed-Signal Circuits

    Verifying whether a circuit meets its intended specifications, and diagnosing the circuits that do not, is indispensable at every stage of integrated circuit design. Otherwise, a significant portion of fabricated circuits could fail or behave correctly only under certain conditions. Shrinking process technologies and increased integration have further complicated this task. This is especially true of mixed-signal circuits, where a slight parametric shift in an analog component can change the output significantly. We are thus rapidly approaching a proverbial wall, where migrating existing circuits to advanced technology nodes and/or designing the next generation of circuits may not be possible without suitable verification and debug strategies. Traditional approaches target accuracy rather than scalability, which makes them impractical for high-dimensional systems. Relaxing the accuracy requirement mitigates the computational cost, while quantifying the level of inaccuracy preserves the usefulness of the resulting metrics. We exercise this accuracy vs. turn-around-time trade-off to deal with multiple mixed-signal problems across both the pre- and post-silicon domains. We first obtain approximate failure probability estimates along with their confidence bands using limited simulation budgets. We then generate “failure regions” that naturally explain the parametric interactions resulting in predicted failures. These two pre-silicon contributions together enable us to estimate and reduce the failure probability, which we demonstrate on a high-dimensional phase-locked loop test-case. We leverage this pre-silicon knowledge towards test-set selection and post-silicon debug to alleviate the limited controllability and observability in the post-silicon domain. We select a set of test-points that maximizes the probability of observing failures. We then use post-silicon measurements at these test-points to identify systematic deviations from pre-silicon belief. This is demonstrated using the phase-locked loop test-case, where we boost the number of failures to observable levels and use the obtained measurements to root-cause the underlying parametric shifts. The pre-silicon contributions can also be extended to perform equivalence checking and to help diagnose detected model-mismatches. The resultant calibrated model allows us to apply our work at the system level as well. The equivalence checking and model-mismatch diagnosis is successfully demonstrated using a high-level abstraction model for the phase-locked loop test-case.
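
    The failure-probability estimation with confidence bands described above can be illustrated with a minimal Monte Carlo sketch: sample process parameters, simulate, count out-of-specification outcomes, and attach a confidence band to the estimate. This is a generic illustration, not the thesis's actual estimator; the simulate and sample_params callables and the spec_limit threshold are hypothetical placeholders.

        # Minimal sketch (not the thesis method): plain Monte Carlo failure-probability
        # estimation with a normal-approximation confidence band. `simulate`,
        # `sample_params`, and `spec_limit` are hypothetical placeholders.
        import numpy as np

        def estimate_failure_probability(simulate, sample_params, spec_limit,
                                         n_runs=2000, z=1.96, seed=0):
            rng = np.random.default_rng(seed)
            failures = 0
            for _ in range(n_runs):
                params = sample_params(rng)        # draw one set of process parameters
                if simulate(params) > spec_limit:  # out-of-specification outcome
                    failures += 1
            p_hat = failures / n_runs
            half = z * np.sqrt(p_hat * (1.0 - p_hat) / n_runs)
            return p_hat, (max(0.0, p_hat - half), min(1.0, p_hat + half))

    With a limited simulation budget and rare failures this band can be wide; the abstract's point is precisely to quantify that inaccuracy rather than hide it.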

    Growth strategies and poverty reduction: the institutional complementarity hypothesis

    This article starts from the limits of policies that assume a significant disconnect between antipoverty strategies and the logic of the growth regime and that rely mainly upon market mechanisms. By contrast, a branch of the new institutional economics argues that a complete set of coordinating mechanisms is constitutive of really existing economies and that these mechanisms are more complementary than substitutable. The Institutional Complementarity Hypothesis (ICH) may be useful for analyzing simultaneously antipoverty policies and the viability of growth regimes. The different brands of capitalism are the outcome of complementary institutions concerning competition, labor market institutions, welfare and innovation systems. Generally, such configurations cannot be emulated by poor developing countries, but reviewing the preliminary findings of the UNRISD country case studies suggests some features common to all successful experiments. Basically, antipoverty policies are efficient when they create the equivalent of virtuous circles within which growth sustains antipoverty programs and, conversely, these programs sustain the speed and stability of growth. Two methods are proposed in order to detect possible complementarities and design economic policies accordingly: Qualitative Comparative Analysis (QCA) on one side, national growth diagnosis on the other. Special attention is devoted to the timing of policies and the role of policy regimes. A brief conclusion wraps up the major findings and proposes a research agenda.
    Keywords: development theory; antipoverty policy; Washington consensus; new institutional economics; institutional complementarity hypothesis; qualitative comparative analysis; growth diagnosis

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    The relevance of electronics to the safety of everyday devices has kept growing, as an ever larger share of their functionality is assigned to electronic components. This, of course, comes along with a constant need for higher performance to fulfill such functionality requirements while keeping power consumption and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power and price specifications, at the cost of an increased vulnerability to several types of known faults and the appearance of new ones. To provide a solution for the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which in general offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing the dependability properties by enabling the interaction of the hardware, firmware and software levels in the process; however, this potential has not yet been fully realized. Advances at every level in that direction are much needed if flexible, robust, resilient and cost-effective fault tolerance is desired. The work presented here focuses on the hardware level, with the background consideration of a potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) to introduce additional fault models as required for adequate representativeness of physical effects emerging in modern manufacturing technologies, (ii) to provide tools and methods to efficiently inject both the proposed models and classical ones, (iii) to analyze the optimum method for assessing the robustness of systems by using extensive fault injection and later correlation with higher-level layers in an effort to cut development time and cost, (iv) to provide new detection methodologies to cope with the challenges modeled by the proposed fault models, (v) to propose mitigation strategies focused on tackling such new threat scenarios, and (vi) to devise an automated methodology for the deployment of many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods to help the designer of critical systems develop robust, validated, on-time designs tailored to the application.
    Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
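
    As a rough illustration of the fault-injection idea discussed in the abstract, the following sketch runs a simulation-level injection campaign with the classical single-bit-flip fault model. It is a generic example under stated assumptions, not the thesis tooling; the workload function is a hypothetical placeholder.

        # Generic sketch of a fault-injection campaign with a classical single-bit-flip
        # fault model (not the thesis framework). `workload` is a hypothetical function
        # mapping one 32-bit word to an output word.
        import random

        def inject_bit_flip(word, bit, width=32):
            """Return `word` with one bit flipped (a classical transient-fault model)."""
            return (word ^ (1 << bit)) & ((1 << width) - 1)

        def campaign(workload, n_injections=1000, width=32, seed=0):
            """Flip one random bit of a random input and count runs whose output deviates."""
            rng = random.Random(seed)
            propagated = 0
            for _ in range(n_injections):
                value = rng.getrandbits(width)
                golden = workload(value)                       # fault-free reference run
                faulty = workload(inject_bit_flip(value, rng.randrange(width), width))
                if faulty != golden:
                    propagated += 1                            # fault reached the output
            return propagated / n_injections

        # A workload that ignores the upper half of the word masks roughly half the flips.
        print(campaign(lambda x: x & 0xFFFF))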

    An efficient logic fault diagnosis framework based on effect-cause approach

    Fault diagnosis plays an important role in improving the circuit design process and the manufacturing yield. With the increasing number of gates in modern circuits, determining the source of failure in a defective circuit is becoming more and more challenging. In this research, we present an efficient effect-cause diagnosis framework for combinational VLSI circuits. The framework consists of three stages to obtain an accurate and reasonably precise diagnosis. First, an improved critical path tracing algorithm is proposed to identify an initial suspect list by backtracing from faulty primary outputs toward primary inputs. Compared to the traditional critical path tracing approach, our algorithm is faster and exact. Second, a novel probabilistic ranking model is applied to rank the suspects so that the most suspicious one will be ranked at or near the top. Several fast filtering methods are used to prune unrelated suspects. Finally, to refine the diagnosis, fault simulation is performed on the top suspect nets using several common fault models. The difference between the observed faulty behavior and the simulated behavior is used to rank each suspect. Experimental results on ISCAS85 benchmark circuits show that this diagnosis approach is efficient in terms of both memory space and CPU time, and that the diagnosis results are accurate and reasonably precise.
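
    A minimal sketch of the backtracing step is given below: starting from each failing primary output, walk the fan-in cone to collect candidate nets and rank each candidate by how many failing outputs it can structurally reach. This only illustrates the effect-cause idea; the netlist format and the reach-count ranking are assumptions, not the paper's exact critical path tracing algorithm or probabilistic ranking model.

        # Illustrative backtrace from failing outputs through the fan-in cone; the
        # dict-of-fanins netlist format and the reach-count ranking are assumptions.
        from collections import defaultdict, deque

        def backtrace_suspects(fanin, failing_outputs):
            """fanin maps each net to the nets driving it (empty list for primary inputs)."""
            reach_count = defaultdict(int)      # net -> number of failing outputs it reaches
            for po in failing_outputs:
                seen, queue = set(), deque([po])
                while queue:
                    net = queue.popleft()
                    if net in seen:
                        continue
                    seen.add(net)
                    reach_count[net] += 1
                    queue.extend(fanin.get(net, []))
            # Nets that can explain every failing output rank highest.
            return sorted(reach_count, key=reach_count.get, reverse=True)

        # Example: output o1 fails, so its fan-in cone {o1, n1, a, b} forms the suspect list.
        netlist = {"o1": ["n1"], "n1": ["a", "b"], "o2": ["b", "c"]}
        print(backtrace_suspects(netlist, ["o1"]))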

    Machine learning support for logic diagnosis

    Network-on-Chip -based Multi-Processor System-on-Chip: Towards Mixed-Criticality System Certification

    The abstract is in the attachment.

    Research and Technology, 1994

    This report selectively summarizes the NASA Lewis Research Center's research and technology accomplishments for the fiscal year 1994. It comprises approximately 200 short articles submitted by the staff members of the technical directorates. The report is organized into six major sections: Aeronautics, Aerospace Technology, Space Flight Systems, Engineering and Computational Support, Lewis Research Academy, and Technology Transfer. A table of contents and an author index have been developed to assist the reader in finding articles of special interest. This report is not intended to be a comprehensive summary of all research and technology work done over the past fiscal year. Most of the work is reported in Lewis-published technical reports, journal articles, and presentations prepared by Lewis staff members and contractors. In addition, university grants have enabled faculty members and graduate students to engage in sponsored research that is reported at technical meetings or in journal articles. For each article in this report a Lewis contact person has been identified, and where possible, reference documents are listed so that additional information can be easily obtained. The diversity of topics attests to the breadth of research and technology being pursued and to the skill mix of the staff that makes it possible.

    2.5D Chiplet Architecture for Embedded Processing of High Velocity Streaming Data

    This dissertation presents an energy-efficient 2.5D chiplet-based architecture for real-time probabilistic processing of high-velocity sensor data from an autonomous, real-time, ubiquitous surveillance imaging system. The work addresses problems at all levels of description. At the lowest, physical level, new standard cell libraries have been developed for ultra-low-voltage CMOS synthesis, along with custom SRAM memory blocks and mixed-signal physical true random number generators based on the perturbation of Sigma-Delta structures using random telegraph noise (RTN) in single-transistor devices. At the chip-architecture level, an innovative compact buffer-less switched-circuit mesh network-on-chip (NoC) was designed for this chiplet-based solution; it reaches very high throughput (1.6 Tbps), delivers packets with finite delay, never drops packets, and is free from deadlocks and livelocks. Additionally, a second, token-ring-based NoC connecting the processors in the network was implemented, allowing access to external DDR memory. Furthermore, a new clock tree distribution network and a wide-bandwidth DRAM physical interface have been designed to address the data flow requirements within and across chiplets. At the algorithm and representation levels, the Online Change Point Detection (CPD) algorithm has been implemented for on-line learning of background-foreground segmentation. Instead of using the traditional binary representation of numbers, this architecture relies on unconventional processing of signals using a bio-inspired (spike-based) unary representation, where numbers are encoded as stochastic streams of Bernoulli random variables. With this representation, probabilistic algorithms can be executed natively with precision on demand: if more accuracy is required, more computational time and power can be allocated. The SoC chiplet architecture has been extensively simulated and validated using state-of-the-art CAD methodology, and has been submitted for fabrication in a dedicated 55 nm GF CMOS technology wafer run. Experimental results from fabricated test chips in the same technology are also presented.
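
    The spike-based unary representation mentioned above can be sketched in a few lines: a value in [0, 1] becomes a Bernoulli bit-stream, multiplication collapses to a bit-wise AND of two independent streams, and precision scales with stream length (more time and power buy more accuracy). The snippet is a textbook stochastic-computing illustration under those assumptions, not the fabricated hardware.

        # Textbook stochastic-computing sketch of the unary (Bernoulli-stream) number
        # representation; precision on demand means running the stream longer. Not the chip RTL.
        import numpy as np

        def to_stream(value, length, rng):
            """Encode a value in [0, 1] as a Bernoulli bit-stream of the given length."""
            return rng.random(length) < value

        rng = np.random.default_rng(1)
        a, b, n = 0.8, 0.5, 4096
        product = to_stream(a, n, rng) & to_stream(b, n, rng)   # AND of streams multiplies
        print(product.mean())   # approaches a * b = 0.4 as the stream length grows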