
    Observability Driven Path Generation for Delay Test

    This research describes an approach to path generation for delay test using an observability metric. K Longest Paths Per Gate (KLPG) tests are generated for sequential circuits. A transition launched from a scan flip-flop (SFF) is captured into another SFF during at-speed clock cycles, that is, clock cycles at the rated design speed. The generated path is a ‘longest path’ suitable for delay test. The path generation algorithm then uses the observability of the fan-out gates in the consecutive, lower-speed clock cycles, known as coda cycles, to generate paths that end at an SFF and capture the transition from the at-speed cycles. For a given clocking scheme, defined by the number of coda cycles, if the final flip-flop is not scan-enabled, the path generation algorithm attempts to generate a different path that ends at an SFF located in a different branch of the circuit fan-out, indicated by lower observability. The paths generated over multiple cycles are sequentially justified using Boolean satisfiability. The observability metric optimizes path generation in the coda cycles by always attempting to grow the path through the branch with the best observability and never generating a path that ends at a non-scan flip-flop. The algorithm has been developed in C++, and the experiments have been performed on an Intel Core i7 machine with 64 GB RAM. Various ISCAS benchmark circuits have been used with multiple KLPG configurations to evaluate the implementation: the combinations of the K values [1, 2, 3, 4, 5] and the numbers of coda cycles [1, 2, 3] have been used to characterize the implementation. A sublinear rise in run time has been observed with increasing K values. The total number of tested paths rises with K and falls with the number of coda cycles, due to the increasing number of constraints on the path, particularly the fixed inputs.
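
    As a hedged illustration of the coda-cycle decision described above, the C++ sketch below (hypothetical types and names such as Branch and obsCost, not the authors' code) always prefers the fan-out branch with the best observability and refuses branches that cannot end at a scan flip-flop:

        #include <optional>
        #include <vector>

        struct Branch {
            int gateId;          // fan-out gate this branch leads to
            double obsCost;      // observability cost (lower = more observable)
            bool reachesScanFF;  // true if this branch can end at a scan flip-flop
        };

        // Pick the most observable fan-out branch that can still end at a scan
        // flip-flop; an empty result means the algorithm must backtrack and try
        // a different branch of the circuit fan-out.
        std::optional<Branch> selectCodaBranch(const std::vector<Branch>& fanout) {
            std::optional<Branch> best;
            for (const Branch& b : fanout) {
                if (!b.reachesScanFF) continue;           // never end at a non-scan FF
                if (!best || b.obsCost < best->obsCost)   // keep the best observability
                    best = b;
            }
            return best;
        }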

    Scalable manipulation of proteins using magnetic adsorbents


    VLSI signal processing through bit-serial architectures and silicon compilation


    Molecular approaches to understand the effect of acetic acid on Escherichia coli

    Acetic acid has long been known for its antibacterial activity, which can be used to treat infected burn wounds. However, a detailed molecular-level mechanism for the effect of acetic acid on E. coli is lacking. To learn more, we used Transposon Directed Insertion-site Sequencing (TraDIS) to investigate the molecular mechanisms by which acetic acid acts as an antibacterial agent, by identifying non-essential genes whose loss alters the fitness of different strains of E. coli. We grew transposon libraries in three strains of E. coli (uropathogenic E. coli EO499 (serotype 131), uropathogenic UTI89, and the lab strain MG1655) in M9 medium with 0.2% casamino acids and 0.2% glucose, at neutral pH 7 and mildly acidic pH 5.5, with or without acetic acid. Libraries were sequenced pre- and post-growth using a transposon-specific primer to determine the position and frequency of each transposon insertion. RPKMs and insertion indices were generated by a TraDIS pipeline. To determine the impact of acetic acid on gene fitness, the numbers of reads before and after the stress were compared for each gene in each strain. This enables us to identify genes where transposon insertions lead to a decrease or increase in fitness under acetic acid stress, and comparing the results between strains enables the identification of both strain-specific genes and genes shared between strains that have a role in fitness under acetic acid stress. This project consists of two parts. In the first part, we evaluated the roles of candidate genes identified in a previously generated EO499 TraDIS library grown under acetic acid. Eight of these genes were depleted under acetic acid stress and were chosen for further study: nuoM, nuoG, sucA, sthA, pitA, apaH, rssB and ytfP. Because of the difficulty of constructing gene deletions in the uropathogenic strain to validate the TraDIS results, we tested the relative fitness of the corresponding gene deletion mutants from the Keio collection (in the lab strain BW25113) under the growth conditions used for EO499. Interestingly, only a few knockouts showed a reduction in relative fitness in competitions at pH 5.5 with acetic acid. This may be due to the differences between the strains used for TraDIS and for the competition assays. To overcome this issue, we also isolated transposon mutants from the E. coli EO499 transposon library to determine relative fitness. In the second part, we optimized and constructed a UTI89 transposon library. The three E. coli transposon libraries (EO499, MG1655 and UTI89) were subjected to acetic acid stress as described and passaged over a time course of five days; only samples from day one and day five were sequenced. For the analysis, several bioinformatic pipelines were used for E. coli genome annotation and sequencing analysis. TraDIS allowed us to identify essential genes in the three E. coli strains. The results presented here show which genes tend to be enriched or depleted under acetic acid stress.
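
    The read-count comparison described above can be pictured with a minimal C++ sketch (not the actual TraDIS pipeline; the gene names come from the abstract, but the counts and the ±2 threshold are invented for illustration) that computes a pseudocount-stabilized log2 fold change per gene:

        #include <cmath>
        #include <cstdio>
        #include <map>
        #include <string>
        #include <utility>

        // log2((after + 1) / (before + 1)); the +1 pseudocount avoids division
        // by zero for genes with no reads. Negative values mean transposon
        // insertions in that gene were depleted under stress.
        double log2FoldChange(double readsBefore, double readsAfter) {
            return std::log2((readsAfter + 1.0) / (readsBefore + 1.0));
        }

        int main() {
            // Invented read counts per gene (before stress, after stress).
            std::map<std::string, std::pair<double, double>> reads = {
                {"nuoM", {850, 40}}, {"sucA", {620, 55}}, {"ytfP", {300, 290}}};
            for (const auto& [gene, r] : reads) {
                double lfc = log2FoldChange(r.first, r.second);
                const char* call = lfc < -2.0 ? "depleted"
                                 : lfc >  2.0 ? "enriched" : "neutral";
                std::printf("%s: log2FC = %+.2f (%s)\n", gene.c_str(), lfc, call);
            }
            return 0;
        }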

    Low-cost and efficient fault detection and diagnosis schemes for modern cores

    Continuous improvements in transistor scaling, together with microarchitectural advances, have made possible the widespread adoption of high-performance processors across all market segments. However, the growing reliability threats induced by technology scaling and by the complexity of designs are challenging the production of cheap yet robust systems. Soft error trends are alarming, especially for combinational logic, and parity and ECC codes are becoming insufficient as combinational logic turns into the dominant source of soft errors. Furthermore, experts are warning about the need to also address intermittent and permanent faults during processor runtime, as increasing temperatures and device variations will accelerate inherent aging phenomena. These challenges especially threaten the commodity segments, which impose requirements that existing fault tolerance mechanisms cannot meet. Current techniques based on redundant execution were devised in a time when high penalties were accepted for the sake of high reliability levels. Novel lightweight techniques are therefore needed to enable fault protection in the mass-market segments. The complexity of designs is also making post-silicon validation extremely expensive. Validation costs exceed design costs, and the number of discovered bugs is growing, both during validation and once products hit the market. Fault localization and diagnosis are the biggest bottlenecks, magnified by huge detection latencies, limited internal observability, and costly server farms used to generate test outputs. This thesis explores two directions to address some of the critical challenges introduced by unreliable technologies and by the limitations of current validation approaches. We first explore mechanisms for comprehensively detecting multiple sources of failures in modern processors during their lifetime (including transient, intermittent and permanent faults, and also design bugs). Our solutions embrace a paradigm where fault tolerance is built by exploiting high-level microarchitectural invariants that are reusable across designs, rather than relying on re-execution or ad-hoc block-level protection. To do so, we decompose the basic functionalities of processors into high-level tasks and propose three novel runtime verification solutions that, combined, enable global error detection: a computation/register dataflow checker, a memory dataflow checker, and a control flow checker. The techniques use the concept of end-to-end signatures and allow designers to adjust the fault coverage to their needs by trading off area, power and performance. Our fault injection studies reveal that our methods provide high coverage levels while incurring significantly lower performance, power and area costs than existing techniques. This thesis then extends the applicability of the proposed error detection schemes to the validation phases. We present a fault localization and diagnosis solution for the memory dataflow that combines our error detection mechanism, a new low-cost logging mechanism and a diagnosis program. Selected internal activity is continuously traced and kept in a memory-resident log whose capacity can be expanded to suit validation needs. The solution can catch undiscovered bugs, reducing the dependence on simulation farms that compute golden outputs. Upon error detection, the diagnosis algorithm analyzes the log to automatically locate the bug and determine its root cause.
    Our evaluations show that very high localization coverage and diagnosis accuracy can be obtained at very low performance and area costs. The net result is a simplification of current debugging practices, which are extremely manual, time-consuming and cumbersome. Altogether, the integrated solutions proposed in this thesis enable the industry to deliver more reliable and correct processors as technology evolves into more complex designs and more vulnerable transistors.
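
    A minimal sketch of the end-to-end signature idea described above, under assumed semantics (the hash, widths and names below are illustrative choices, not the thesis's actual encoding): a producer attaches a small signature when a value is written, and the consumer recomputes and compares it when the value is read, so corruption anywhere along the dataflow path shows up as a mismatch:

        #include <cstdint>
        #include <cstdio>

        // Toy 8-bit signature folding a 64-bit value with its register id.
        uint8_t signature(uint64_t value, uint8_t regId) {
            uint64_t x = value ^ (0x9E3779B97F4A7C15ull * (regId + 1));
            x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
            return static_cast<uint8_t>(x);
        }

        struct TaggedValue { uint64_t value; uint8_t regId; uint8_t sig; };

        // Producer side (e.g., at writeback): attach a signature to the value.
        TaggedValue produce(uint64_t v, uint8_t regId) {
            return {v, regId, signature(v, regId)};
        }

        // Consumer side (e.g., at operand read): recompute and compare.
        bool consume(const TaggedValue& t) {
            return signature(t.value, t.regId) == t.sig;
        }

        int main() {
            TaggedValue t = produce(42, 3);
            t.value ^= 1ull << 17;  // inject a single-bit flip in transit
            std::printf("check %s\n",
                        consume(t) ? "passed" : "failed: error detected");
            return 0;
        }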

    Pertanika Journal of Science & Technology


    Pertanika Journal of Science & Technology


    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The presentations share next-generation advances that will serve as a basis for future VLSI design.

    Production accompanying testing of the ATLAS Pixel module

    The ATLAS Pixel detector, the innermost sub-detector of the ATLAS experiment at the LHC, CERN, can only be meaningfully tested in its entirety after its installation in 2006. Because of the poor accessibility of the Pixel detector (probably once per year) and the tight schedule, the replacement of damaged modules after integration, as well as during operation, will be a highly demanding task. Therefore, and to ensure that no defective parts are used in subsequent production steps, each production step must be accompanied by testing: the components are tested before assembly, and their operability is verified afterwards. Probably 300 of the roughly 2000 semiconductor hybrid pixel detector modules in total will be built at the Universität Dortmund. A production test setup has therefore been built and validated before the start of serial production. These tests comprise the characterization and inspection of the module components and of the module itself under different environmental conditions and diverse operating parameters. Once a module is assembled, its operability is tested with a radioactive source and its long-term stability is verified by a burn-in. A full electrical characterization is the basis for module selection and sorting for the ATLAS Pixel detector. Additionally, the charge collection behavior of irradiated and non-irradiated modules has been investigated in the H8 beamline with 180 GeV pions.