
    Modeling of design-for-test infrastructure in complex systems-on-chips

    Every integrated circuit contains a piece of design-for-test (DFT) infrastructure in order to guarantee chip quality after manufacture. The DFT resources are employed only once in the fab and are usually not available during regular system operation. In order to assess the hardware integrity of a chip over its complete life cycle, it is promising to reuse the DFT infrastructure as part of system-level test. In this thesis, the provided system, a TriCore processor from Infineon, must be partitioned and modified in order to enable the autonomous structural test of every component of the system in the field without an expensive external tester.

    Low-Capture-Power Test Generation for Scan-Based At-Speed Testing

    Scan-based at-speed testing is a key technology to guarantee timing-related test quality in the deep submicron era. However, its applicability is being severely challenged since significant yield loss may occur from circuit malfunction due to excessive IR drop caused by high power dissipation when a test response is captured. This paper addresses this critical problem with a novel low-capture-power X-filling method of assigning 0's and 1's to unspecified (X) bits in a test cube obtained during ATPG. This method reduces the circuit switching activity in capture mode and can be easily incorporated into any test generation flow to achieve capture power reduction without any area, timing, or fault coverage impact. Test vectors generated with this practical method greatly improve the applicability of scan-based at-speed testing by reducing the risk of test yield loss. IEEE International Conference on Test, 2005, 8 November 2005, Austin, TX, US
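    The core X-filling idea lends itself to a small illustration: for each unspecified bit, pick the fill value that leaves fewer scan cells toggling when the response is captured. The sketch below is a hedged, greedy illustration rather than the paper's exact procedure; the capture_response callback, which stands in for a fault-free logic simulation of the circuit, and the toy example at the end are assumptions made for demonstration.

```python
# Hedged sketch of greedy low-capture-power X-filling (illustrative only, not
# the paper's exact algorithm). capture_response maps a fully specified
# scan-in vector to the values captured back into the scan cells.
from typing import Callable, List, Optional

def toggles(stimulus: List[int], response: List[int]) -> int:
    """Number of scan cells whose value changes when the response is captured."""
    return sum(s != r for s, r in zip(stimulus, response))

def x_fill_low_capture_power(
    cube: List[Optional[int]],                     # 0, 1, or None for an X bit
    capture_response: Callable[[List[int]], List[int]],
) -> List[int]:
    vector = [b if b is not None else 0 for b in cube]   # provisional fill
    for i, bit in enumerate(cube):
        if bit is not None:
            continue                               # specified bits stay as ATPG set them
        best_value, best_cost = 0, None
        for candidate in (0, 1):                   # try both fills for this X bit
            vector[i] = candidate
            cost = toggles(vector, capture_response(vector))
            if best_cost is None or cost < best_cost:
                best_value, best_cost = candidate, cost
        vector[i] = best_value                     # keep the lower-toggling choice
    return vector

# Toy stand-in for a logic simulator: each cell captures the AND of itself and
# its right neighbour (purely illustrative).
toy = lambda v: [v[i] & v[(i + 1) % len(v)] for i in range(len(v))]
print(x_fill_low_capture_power([1, None, None, 0], toy))   # -> [1, 0, 0, 0]
```

    In practice the cost would come from fault-free simulation of the circuit under test, and the order in which X bits are filled can itself be chosen to reduce switching further.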

    REDUCING POWER DURING MANUFACTURING TEST USING DIFFERENT ARCHITECTURES

    Power during manufacturing test can be several times higher than power consumption in functional mode. Excessive power during test can cause IR drop, over-heating, and early aging of the chips. In this dissertation, three different architectures are introduced to reduce test power in general cases as well as in certain scenarios, including field test. In the first architecture, scan chains are divided into several segments. Each segment needs a control bit that enables capture when new faults are detectable on that segment for a given pattern; otherwise, the segment is disabled to reduce capture power. We group the control bits together into one or more control chains. To address the extra pin(s) required to shift data into the control chain(s) and the significant post-processing in the first architecture, we explored a second architecture. The second architecture stitches the control bits into the chains they control as EECBs (embedded enable capture bits) in between the segments. This allows an ATPG software tool to automatically generate the appropriate EECB values for each pattern to maintain the fault coverage. This also works in the presence of an on-chip decompressor. The last architecture focuses primarily on the self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
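    The decision the control bits encode can be sketched compactly: capture is enabled in a segment only when that segment can observe a fault that has not already been detected. The snippet below is an illustrative model, not the dissertation's tool flow; the fault-dictionary format and all names are assumptions.

```python
# Hedged sketch of per-pattern, per-segment capture gating. detects[pattern][segment]
# is the set of faults that pattern can detect on that segment (assumed input format).
from typing import Dict, List, Set

def capture_enables(
    patterns: List[str],
    segments: List[str],
    detects: Dict[str, Dict[str, Set[str]]],
) -> Dict[str, Dict[str, bool]]:
    detected: Set[str] = set()
    enables: Dict[str, Dict[str, bool]] = {}
    for p in patterns:
        enables[p] = {}
        for seg in segments:
            new_faults = detects.get(p, {}).get(seg, set()) - detected
            enables[p][seg] = bool(new_faults)     # control bit: capture only when useful
            detected |= new_faults
    return enables

# Example: only segments that still add coverage are allowed to capture.
d = {"p1": {"s1": {"f1", "f2"}, "s2": set()},
     "p2": {"s1": {"f1"}, "s2": {"f3"}}}
print(capture_enables(["p1", "p2"], ["s1", "s2"], d))
# {'p1': {'s1': True, 's2': False}, 'p2': {'s1': False, 's2': True}}
```

    In the first architecture these enable bits would be shifted in through one or more dedicated control chains; in the second they would sit in the scan chains themselves as EECBs, so an ATPG tool can fill them like any other care bit.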

    A Reconfigurable Broadcast Scan Compression Scheme Using Relaxation Based Test Vector Decomposition

    In this paper, we propose an effective reconfigurable broadcast scan compression scheme that employs partitioning and relaxation-based test vector decomposition. Given a constraint on the number of tester channels, the technique classifies the test set into acceptable and bottleneck vectors. Bottleneck vectors are then decomposed into a set of vectors that meet the given constraint. The acceptable and decomposed test vectors are partitioned into the smallest number of partitions satisfying the tester-channel constraint in order to reduce the decompressor area. Thus, by construction, the technique satisfies a given tester-channel constraint at the expense of an increased test vector count and number of partitions, offering a tradeoff between test compression, test application time, and test decompression circuitry area. Experimental results demonstrate that the proposed technique achieves a better compression ratio than other test compression techniques.
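    The classification step can be sketched under a simplified broadcast-scan model in which two scan chains may share a tester channel whenever their care bits never conflict; this compatibility criterion and the greedy grouping are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: classify vectors as acceptable or bottleneck under a channel budget.
# A vector is modelled as one care-bit slice per scan chain (0, 1, or None for X).
from typing import List, Optional, Tuple

Slice = List[Optional[int]]

def compatible(a: Slice, b: Slice) -> bool:
    return all(x is None or y is None or x == y for x, y in zip(a, b))

def merge(a: Slice, b: Slice) -> Slice:
    return [x if x is not None else y for x, y in zip(a, b)]

def greedy_channel_count(chains: List[Slice]) -> int:
    """Greedy estimate of how many channels one vector needs under broadcast."""
    groups: List[Slice] = []
    for s in chains:
        for i, g in enumerate(groups):
            if compatible(s, g):
                groups[i] = merge(s, g)
                break
        else:
            groups.append(list(s))
    return len(groups)

def classify(vectors: List[List[Slice]], channels: int) -> Tuple[List[int], List[int]]:
    acceptable, bottleneck = [], []
    for idx, chains in enumerate(vectors):
        (acceptable if greedy_channel_count(chains) <= channels else bottleneck).append(idx)
    return acceptable, bottleneck

# Example: two of the three chain slices can share a channel, so 2 channels suffice.
print(greedy_channel_count([[1, None], [1, 0], [0, None]]))   # -> 2
```

    Vectors flagged as bottleneck would then be relaxed and decomposed into several vectors, each fitting the channel budget, before the final partitioning.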

    An Unsolicited Soliloquy on Dependency Parsing

    This thesis presents work on dependency parsing covering two distinct lines of research. The first aims to develop parsers that are efficient enough to process large amounts of data while still maintaining decent accuracy. We investigate two techniques to achieve this: the first is a cognitively inspired method and the second uses model distillation. The first technique proved to be utterly dismal, while the second was somewhat of a success. The second line of research evaluates parsers, also in two ways. We aim to evaluate what causes variation in parsing performance across different algorithms and different treebanks. This evaluation is grounded in dependency displacements (the directed distance between a dependent and its head) and compares the displacement distributions associated with parsing algorithms, those found in treebanks, and those found in the training and test data. This work sheds some light on the variation in performance for both different algorithms and different treebanks. The second part of this line focuses on the utility of part-of-speech tags when used with parsing systems and questions the standard position of assuming that they might help but certainly won’t hurt. This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150) and from the Centro de Investigación de Galicia (CITIC), which is funded by the Xunta de Galicia and the European Union (ERDF, Galicia 2014-2020 Program) under grant ED431G 2019/01.
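    For reference, the displacement measure underpinning the evaluation can be computed directly from a treebank. The sketch below assumes a minimal (token index, head index) representation of each sentence and uses the convention that a positive displacement means the head lies to the right of its dependent; the input format is an assumption, not the thesis's tooling.

```python
# Hedged sketch: dependency-displacement distribution over a treebank.
from collections import Counter
from typing import Iterable, List, Tuple

Sentence = List[Tuple[int, int]]   # (token index, head index), 1-based; head 0 = root

def displacements(sentences: Iterable[Sentence]) -> Counter:
    dist: Counter = Counter()
    for sent in sentences:
        for token_id, head_id in sent:
            if head_id == 0:
                continue                      # skip the artificial root attachment
            dist[head_id - token_id] += 1     # positive: head to the right of the dependent
    return dist

# Example: "the dog barks", with "barks" as root and both other tokens attached rightward.
print(displacements([[(1, 2), (2, 3), (3, 0)]]))   # Counter({1: 2})
```

    Comparing such distributions across parsing algorithms, and across treebank training and test splits, is what grounds the variation analysis described above.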

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these five areas address, in order, image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Radar Technology

    In this book, “Radar Technology”, the chapters are divided into four main topic areas. Topic area 1, “Radar Systems”, consists of chapters that treat whole radar systems, the environment, and the target functional chain. Topic area 2, “Radar Applications”, shows various applications of radar systems, including meteorological radars, ground-penetrating radars, and glaciology. Topic area 3, “Radar Functional Chain and Signal Processing”, describes several aspects of radar signal processing, from parameter extraction and target detection to tracking and classification technologies. Topic area 4, “Radar Subsystems and Components”, covers the design of radar subsystem components, such as antenna design and waveform design.

    A NEW TREATMENT OF LOW PROBABILITY EVENTS WITH PARTICULAR APPLICATION TO NUCLEAR POWER PLANT INCIDENTS

    Technological innovation is inescapable if civilisation is to continue in the face of population growth, rising expectations and resource exhaustion. Unfortunately, major innovations, confidently thought to be safe, occasionally fail catastrophically. The fears so engendered are impeding technical progress generally, and that of nuclear power in particular. Attempts to allay disquiet about these disastrous Low Probability Events (LPEs) by exhaustive studies of nuclear power plant designs have, so far, been less than successful. The New Treatment adopts instead an approach that, after examination of the LPE in its historical and societal settings, combines theoretical design analysis with construction-site and operational realities in pragmatic engineering, the quality of which can be assured by accountable inspection. The LPE is envisaged as a singularity in a stream of largely mundane but untoward incidents, described as 'Event-noise'. Predictions of the likelihood of plant LPEs by frequency-theory probability are illusory because the LPE is unique and not part of a stable distribution. Again, noise analysis seems to lead to intractable mathematical expressions. While theoretical LPE prognostications depend on the identification of fault sequences in the design that can either be designed out or reduced to plausibly negligible probabilities, the reality of LPE prevention lies with the plant in operation. As absolute safety is unattainable, the approach aims at ensuring that the perceived residual nuclear risk is societally tolerable. An adaptation of elementary Catastrophe theory is proposed to model the prospective Event-noise field to be experienced by the plant, whereby potential, credible LPEs could be more readily discerned and avoided. In this milieu of increasing technological sophistication, when management in the traditional administrative mould is proving inadequate, the engineer emerges as the proper central decision-maker. The special intellectual capability needed is acquired during his training and experience, a claim that can draw support from new studies in neuropsychology. The Nuclear Installations Inspectorate is cited as an exemplar of a body practising the kind of engineering inspection needed to apprehend those human fallibilities to which most catastrophic failures of technology are due. Nevertheless, such regulatory systems lack accountability and, as Gödel's theorem suggests, cannot assess their own efficiency. Independent appraisal by Signal Detection Theory is suggested as a remedy.