102 research outputs found

    Automatic test pattern generation for asynchronous circuits

    The testability of integrated circuits worsens as transistor dimensions reach nanometer scales. Testing, the process of ensuring that circuits are fabricated without defects, inevitably becomes part of the design process through techniques collectively called design for test (DFT). Asynchronous circuits have a number of desirable properties that make them suitable for the challenges posed by modern technologies, but their adoption is severely limited by the unavailability of EDA tools for DFT and automatic test pattern generation (ATPG). This thesis develops test generation methodologies for asynchronous circuits. In total, four methods were developed, aimed at two different fault models: stuck-at faults at the basic logic gate level, and transistor-level faults. The methods were evaluated on a set of benchmark circuits and compared favorably to previously published work. First, ABALLAST is a partial-scan DFT method that adapts the well-known BALLAST technique to asynchronous circuits: balanced structures guide the selection of the state-holding elements to be scanned. The test inputs are provided automatically by a novel test pattern generator, which uses time frame unrolling to deal with the remaining, non-scanned sequential C-elements. The second method, AGLOB, uses strongly connected component algorithms from graph theory to find optimal positions at which to break the loops in an asynchronous circuit and insert scan registers. The corresponding ATPG method converts cyclic circuits into acyclic ones for which standard tools can generate test patterns; these patterns are then automatically converted for use in the original cyclic circuits. The third method, ASCP, employs a new cycle enumeration method to find the loops present in a circuit.
Enumerated cycles are then processed by an efficient set covering heuristic to select the scan elements for the circuit under test. Applying these methods to the benchmark circuits shows an improvement in fault coverage compared to previous work, which for some circuits was substantial. As no single method consistently outperforms the others across all benchmarks, they are all valuable as a designer's suite of testing tools. Moreover, since they are all scan-based, they are compatible and can be used simultaneously in different parts of a larger circuit. The final method, ATRANTE, extends the ATPG work to transistor-level test generation. It targets asynchronous circuits designed using a State Transition Graph (STG) as their specification. Transistor-level circuit faults are efficiently mapped onto faults that modify the original STG, and for each potential STG fault the ATPG tool provides a sequence of test vectors that exposes the behavioral difference at the output ports. The fault coverage obtained was 52-72% higher than that obtained using the gate-level tests. Overall, this thesis introduces four DFT methods for ATPG of asynchronous circuits at both the gate and transistor level. A circuit extraction method for representing asynchronous circuits at a higher level of abstraction was also implemented. The new test generation methods developed in this thesis make it possible to test asynchronous designs using the CAD tools available for synchronous designs. The lessons learned and the research questions raised by this work will guide future efforts toward robust CAD tools for testing future asynchronous designs.
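As a rough illustration of the graph-theoretic core shared by AGLOB and ASCP, the sketch below finds strongly connected components and greedily picks state-holding elements to scan until no cycles remain. It is a generic feedback-vertex-set heuristic over a hypothetical netlist graph, not the thesis's actual algorithms:

```python
from collections import defaultdict

def sccs(graph):
    """Kosaraju's algorithm: strongly connected components of a digraph
    given as {node: iterable of successors}."""
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in graph.get(u, ()):
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in list(graph):
        if u not in seen:
            dfs(u)
    rev = defaultdict(list)           # reversed edges for the second pass
    for u, succs in graph.items():
        for v in succs:
            rev[v].append(u)
    comps, done = [], set()
    def rdfs(u, comp):
        done.add(u)
        comp.append(u)
        for v in rev[u]:
            if v not in done:
                rdfs(v, comp)
    for u in reversed(order):
        if u not in done:
            comp = []
            rdfs(u, comp)
            comps.append(comp)
    return comps

def select_scan_elements(graph):
    """Greedily scan (i.e. remove) the highest-fanout node of a cyclic SCC,
    repeating until the circuit graph is acyclic."""
    g = {u: set(vs) for u, vs in graph.items()}
    scanned = set()
    while True:
        cyclic = [c for c in sccs(g) if len(c) > 1 or c[0] in g.get(c[0], ())]
        if not cyclic:
            return scanned
        comp = max(cyclic, key=len)
        victim = max(comp, key=lambda n: len(g.get(n, ())))
        scanned.add(victim)
        g.pop(victim, None)
        for vs in g.values():
            vs.discard(victim)
```

On a toy feedback loop a -> b -> c -> a with an extra fanout c -> d, scanning the single node c already makes the graph acyclic, so standard acyclic ATPG tools could then be applied, as in AGLOB.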

    Design for testability method at register transfer level

    The testing of sequential circuits is more complex than that of combinational circuits because a sequence of vectors is needed to detect a fault, and the test cost increases with the complexity of the sequential circuit under test (CUT). The design for testability (DFT) concept was therefore introduced to reduce testing complexity and to improve testing effectiveness and efficiency. The scan technique is one of the most widely used DFT methods. However, it carries cost overheads: area, due to the multiplexer added to each flip-flop, and test application time, due to the shifting of test patterns. This research introduces a non-scan DFT method at the register transfer level (RTL) in order to reduce test cost. DFT at the RTL is based on functional information about the CUT and the connectivity of its registers; chaining one register to another is more effective in terms of area overhead and test application time. The first contribution of this work is a non-scan DFT method at the RTL that uses controllability and observability information extracted from the RTL description. Simulations show that, for most benchmark circuits, the proposed method achieves higher fault coverage of around 90%, shorter test application time, shorter test generation time, and a 10% reduction in area overhead compared to other methods in the literature. The second contribution is a built-in self-test (BIST) method at the RTL that uses multiple-input signature registers (MISRs) as BIST components instead of concurrent built-in logic block observers (CBILBOs). MISRs are selected as test registers using an extended minimum feedback vertex set algorithm. This new BIST method lowers area overhead by about 32.9% while achieving fault coverage similar to the concurrent BIST method.
Non-scan DFT insertion at the RTL takes place before the logic synthesis process; testability violations can therefore be fixed without repeating logic synthesis.
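To make the BIST component concrete: a MISR compresses a stream of parallel circuit responses into a single signature by combining an LFSR shift with XORed-in inputs. The sketch below is a minimal behavioral model; the 4-bit width, tap mask, and response values are arbitrary illustrations, not values from this work:

```python
def misr_signature(responses, width, taps, seed=0):
    """Multiple-input signature register (MISR) model: each cycle, shift
    left, fold the feedback polynomial (taps) back in when the MSB falls
    off, and XOR the parallel response bits into the register."""
    mask = (1 << width) - 1
    state = seed
    for r in responses:
        feedback = taps if (state >> (width - 1)) & 1 else 0
        state = ((state << 1) & mask) ^ feedback ^ (r & mask)
    return state

good = [0b1011, 0b0010, 0b1110]   # fault-free responses (illustrative)
bad  = [0b1011, 0b0010, 0b1111]   # same stream with one flipped bit
sig_good = misr_signature(good, width=4, taps=0b0011)
sig_bad  = misr_signature(bad,  width=4, taps=0b0011)
```

A faulty circuit is flagged when its signature differs from the fault-free reference. Like any response compactor, a MISR can alias (two different streams mapping to one signature), which is why the feedback polynomial must be chosen carefully.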

    Power Quality

    Electrical power is becoming one of the most dominant factors in our society. Power generation, transmission, distribution and usage are undergoing significant changes that will affect the electrical quality and performance needs of our 21st century industry. One major aspect of electrical power is its quality and stability, the so-called Power Quality. The view on Power Quality has changed over the past few years. Power Quality is becoming a more important term in the academic world dealing with electrical power, and it is becoming more visible in all areas of commerce and industry, because of the ever increasing industrial automation using sensitive electrical equipment on the one hand and the dramatic change of our global electrical infrastructure on the other. For the past century, grid stability was maintained by a limited number of major generators with a large amount of rotational inertia, so the rate of change of phase angle was slow. Unfortunately, this no longer works once renewable energy sources such as wind turbines or PV modules add their share to the grid. Although the basic idea of using renewable energies is great and will be our path into the next century, it comes with a curse for the power grid, as power flow stability will suffer. It is not only the source side that is about to change; we have also seen significant changes on the load side. Industry uses machines and electrical products such as AC drives or PLCs that are sensitive to the slightest change in power quality, and at home we use more and more electrical products with switching power supplies or are starting to plug in our electric cars to charge batteries. In addition, many of us have begun installing our own distributed generation systems on our rooftops using the latest solar panels. So we looked for a way to address this severe impact on our distribution network.
To match supply and demand, we are about to create a new, intelligent and self-healing electric power infrastructure: the Smart Grid. The basic idea is to maintain the necessary balance between generators and loads on a grid; in other words, to make sure we have a good grid balance at all times. But the key question you should ask yourself is: does it also improve Power Quality? Probably not! Furthermore, the way Power Quality is measured is going to change. Traditionally, each country had its own Power Quality standards and defined its own power quality instrument requirements, but more and more international harmonization efforts can be seen, such as IEC 61000-4-30, an excellent standard which ensures that all compliant power quality instruments, regardless of manufacturer, will produce comparable measurement results, and which helps reduce the cost of measurement instruments so that they can also be used in volume applications and even directly embedded into sensitive loads. But work still has to be done. We still use Power Quality standards that were written decades ago and no longer match today's technology, such as flicker standards whose parameters were defined by the behavior of 60-watt incandescent light bulbs, which are becoming extinct. Almost all experts agree that although we will see an improvement in metering and control of the power flow, Power Quality will suffer. This book gives an overview of how power quality might impact our lives today and tomorrow, introduces new ways to monitor power quality, and presents interesting possibilities to mitigate power quality problems. Regardless of any enhancements of the power grid, "Power Quality is just compatibility," as my good old friend and teacher Alex McEachern used to say. Power Quality will always remain an economic compromise between supply and load.
The power available on the grid must be sufficiently clean for the loads to operate correctly, and the loads must be sufficiently robust to tolerate normal disturbances on the grid.

    Assessing the Role of Magnetite in Municipal Wastewater Treatment

    Some municipal wastewater treatment (MWWT) facilities have adopted magnetite in their treatment processes through a technology called BioMag® to meet effluent regulatory requirements for total nitrogen and total phosphorus. However, there is limited information on the mechanisms and efficiency of magnetite in the removal of nitrogen (N) and phosphorus (P) from wastewater. This research therefore estimated its effectiveness in the removal of these nutrients, with a case study of the Marlay-Taylor Water Reclamation Facility in Maryland. The intervention analysis model was used, but a new forecasting approach to the model was proposed to fit the data in this study and other similar data. Results showed a significant improvement in both N and P removal. Graphical analyses showed an improvement in operating parameters such as the mixed liquor suspended solids and sludge volume index. An account of the N and P removal mechanisms by the magnetite was also provided. Some MWWT facilities using magnetite in their treatment process stabilize their waste sludge using anaerobic digestion (AD) and produce biogas. Laboratory studies were therefore conducted to determine the effect of magnetite on biogas production (mainly methane and carbon dioxide) and on hydrogen sulfide (H2S) gas reduction. Results showed no significant differences in biogas production, contrary to some studies that reported increases in methane yield with magnetite addition. H2S in the biogas was reduced below the concentration that is immediately dangerous to life and health (IDLH). An increase in dissolved iron was also noted. Some recent studies that used magnetite and other conductive materials in AD experiments reported elemental sulfur (S0) formation in the digesters, whereas previous research using iron compounds reported iron sulfide (FeS) formation as the mechanism of H2S reduction.
A bioenergetics model was therefore used to determine whether the oxidation of H2S to S0 is theoretically possible in the AD environment; S0 formation could also occur due to the presence or leakage of air in the digesters. Results showed that the reaction leading to S0 formation is exergonic, implying that energy is released which could support microbial growth. However, a conductive material may be required to initiate this reaction by facilitating electron transfer.
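The forecasting flavor of intervention analysis boils down to estimating the level shift a known intervention introduces into a monitored series. The sketch below is the simplest possible version, a mean shift at a known changepoint with none of the time-series noise modeling the study's model would include, and the effluent numbers are invented for illustration:

```python
def level_shift(series, t0):
    """Estimate the step change at a known intervention time t0:
    mean(after) - mean(before)."""
    pre, post = series[:t0], series[t0:]
    return sum(post) / len(post) - sum(pre) / len(pre)

# hypothetical effluent total-N readings (mg/L); magnetite added at index 4
total_n = [8.1, 7.9, 8.3, 8.0, 4.2, 4.0, 4.4, 3.8]
effect = level_shift(total_n, t0=4)   # negative value => removal improved
```

A real intervention analysis would additionally model autocorrelated noise (e.g. an ARIMA error term) so that the estimated shift is not confounded with ordinary process drift.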

    Music similarity analysis using the big data framework spark

    A parameterizable recommender system based on the Big Data processing framework Spark is introduced, which takes multiple tonal properties of music into account and is capable of recommending music based on a user's personal preferences. The implemented system is fully scalable: more songs can be added to the dataset, the cluster size can be increased, and different kinds of audio features and more state-of-the-art similarity measurements can be added. This thesis also deals with the extraction of the required audio features in parallel on a computer cluster. The extracted features are then processed by the Spark-based recommender system, and song recommendations for a dataset of approximately 114,000 songs are retrieved in less than 12 seconds on a 16-node Spark cluster, combining eight different audio feature types and similarity measurements.
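The combination step at the heart of such a recommender can be sketched without Spark: compute one similarity per audio feature type and blend them with user-chosen weights (the parameterizable part). The feature names, vectors, and weights below are invented stand-ins; the thesis distributes this computation over a Spark cluster rather than running it locally:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(query, catalog, weights, k=2):
    """Rank catalog songs by a weighted blend of per-feature similarities."""
    def blended(feats):
        return sum(w * cosine(query[f], feats[f]) for f, w in weights.items())
    ranked = sorted(catalog, key=lambda name: blended(catalog[name]), reverse=True)
    return ranked[:k]

catalog = {
    "song_a": {"chroma": [1.0, 0.0, 0.0], "timbre": [0.9, 0.1]},
    "song_b": {"chroma": [0.0, 1.0, 0.0], "timbre": [0.1, 0.9]},
    "song_c": {"chroma": [1.0, 0.1, 0.0], "timbre": [1.0, 0.0]},
}
query = {"chroma": [1.0, 0.0, 0.0], "timbre": [1.0, 0.0]}
top = recommend(query, catalog, weights={"chroma": 0.5, "timbre": 0.5})
```

Changing the weights reweights the notion of similarity per user, which is exactly what makes such a system parameterizable; scaling it is then a matter of mapping the similarity computation over a distributed dataset.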

    Improving Power, Performance and Area with Test: A Case Study

    As more low-power devices are needed for applications such as the Internet of Things, reducing power and area is becoming more critical. Reducing the power consumption and area caused by full-scan design-for-test should be considered as a way to help achieve these stricter requirements; this is especially important for designs that use near-threshold technology. In this work, we use partial scan to improve power, performance and area on a graphics processing unit shader block. We present our non-scan D flip-flop (DFF) selection algorithm, which maximizes the non-scan DFF count while achieving automatic test pattern generation results close to those of the full-scan design. We identify a category of stuck-at faults that are unique to partial-scan designs and propose a check to identify and contain them. Our final test coverage of the partial-scan design is within 0.1% of the full-scan test coverage for both stuck-at and transition delay fault models. In addition, we present the PPA (power, performance and area) results for both the full-scan and partial-scan designs. The most noteworthy improvement is seen in the hold total negative slack.

    Railway Transport Planning and Management

    Railway engineering faces varied and complex challenges due to the growing demand for travel, new technologies, and new mobility paradigms. All these issues require a clear understanding of the existing technologies, and it is crucial to identify the real opportunities that the current technological revolution may offer. As railway transportation planning processes change and pursue a multi-objective vision, diagnostic and maintenance issues, as well as alternative fuel solutions, are becoming ever more crucial for overall system performance.

    Cross-layer Soft Error Analysis and Mitigation at Nanoscale Technologies

    This thesis addresses the challenge of soft error modeling and mitigation at nanoscale technology nodes and pushes the state of the art forward by proposing novel modeling, analysis, and mitigation techniques. The proposed soft error sensitivity analysis platform accurately models both error generation and propagation, starting from technology-dependent device-level simulations all the way up to workload-dependent application-level analysis.
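The cross-layer idea, that only a fraction of raw device-level upsets survives masking and corrupts application output, can be illustrated with a toy Monte Carlo fault-injection campaign. The workload and the single-bit-flip fault model below are deliberately trivial stand-ins, not the platform described here:

```python
import random

def workload(data):
    """Toy application layer: only the maximum is observable at the output,
    so many injected bit flips are logically masked."""
    return max(data)

def injection_campaign(data, trials=2000, seed=1):
    """Flip one random bit per trial and count how often the fault
    propagates to a wrong output (a crude silent-data-corruption rate)."""
    rng = random.Random(seed)
    golden = workload(data)
    corrupted = 0
    for _ in range(trials):
        faulty = list(data)
        i = rng.randrange(len(faulty))
        faulty[i] ^= 1 << rng.randrange(32)   # single-bit upset in one word
        if workload(faulty) != golden:
            corrupted += 1
    return corrupted / trials

rate = injection_campaign([5, 17, 9, 3])
```

A real platform replaces both stand-ins: device-level simulations supply the raw fault rates, and the full workload determines how much of that rate is derated away at each layer before it becomes an application-visible error.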