    Design of High-Speed Multiplier with Optimised Built-In Self-Test

    Current trends in Integrated Circuit (IC) implementation, such as System-on-Chip, offer significant advantages in electronic products: high circuit performance with a large number of functions, small physical area, and high reliability. Because a System-on-Chip integrates subsystems supplied by various Intellectual Property (IP) block vendors, its design time is shorter than that of a full-custom IC implementation. However, testing each internal subsystem with the common scan-path method, where test data are generated and analyzed externally, becomes too time consuming when the number of subsystems is high. Including a Built-In Self-Test (BIST) facility in each subsystem is therefore considered a good solution. A BIST structure is commonly based on random test data generated by a Linear Feedback Shift Register (LFSR), owing to its simple, small, and economical circuit structure. As the number of subsystems per IC chip continues to grow, the BIST approach must be improved to shorten testing time while keeping the good features of the LFSR. For this reason, generating BIST test patterns from a combination of the LFSR and a deterministic approach offers one way to reduce testing time. This research investigated the possibility of combining LFSR features with deterministic test patterns. A parallel high-speed multiplier, one of the more demanding subsystems, was chosen to verify the performance of the proposed BIST. Results show that the testing time (with 100% fault coverage) was reduced significantly compared with a BIST based entirely on random test data generation. One reason for this achievement is that only one basic cell of the multiplier is required to determine the test pattern, by considering the data flow from one cell to another; identical test data can then be applied to both multiplier inputs simultaneously. This is the significant finding of the research, and further work based on it is also identified.
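
    As an illustration of the LFSR-based pattern generation this abstract builds on, here is a minimal sketch in Python; the width, seed, and tap positions are illustrative choices, not the thesis design:

        def lfsr_patterns(seed, taps, width, count):
            """Yield pseudo-random test patterns from a Fibonacci LFSR.

            seed  -- non-zero initial register state
            taps  -- zero-indexed bit positions XORed into the feedback bit
            width -- register width in bits
            count -- number of patterns to produce
            """
            state = seed
            for _ in range(count):
                yield state
                feedback = 0
                for t in taps:
                    feedback ^= (state >> t) & 1
                state = ((state << 1) | feedback) & ((1 << width) - 1)

        # Example: 8-bit maximal-length LFSR (polynomial x^8 + x^6 + x^5 + x^4 + 1).
        # As in the abstract, the same stream could feed both multiplier inputs.
        for p in lfsr_patterns(seed=0x01, taps=(7, 5, 4, 3), width=8, count=5):
            print(f"{p:08b}")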

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely on the continued functioning of many electronic devices for our everyday welfare, and the integrated circuits they embed keep becoming cheaper and smaller while gaining features. Microelectronics can now integrate a complete computer, with CPU, memories, and even GPUs, on a single die: a System-on-Chip (SoC). SoCs are also employed in safety-critical automotive applications and must be tested thoroughly to comply with reliability standards, in particular the ISO 26262 functional safety standard for road vehicles. The goal of this PhD thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the order they appear in this thesis, are as follows:
    1. Embedded Memory Diagnosis: Memories are dense and complex circuits that are susceptible to design and manufacturing errors, so it is important to understand how faults occur in the memory array. In practice, the logical and physical array representations differ because of design optimizations, namely scrambling. This part proposes accurate memory diagnosis through a software tool that analyzes test results, unscrambles the memory array, maps failing syndromes to cell locations, performs cumulative analysis, and produces a final fault-model hypothesis. Several SRAM failing syndromes, gathered from an industrial automotive 32-bit SoC developed by STMicroelectronics, were analyzed as case studies. The tool displayed the defects virtually, and the results were confirmed by microscope photographs.
    2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional: the former usually rely on embedded test modules targeting manufacturing errors and are only applicable before shipping the component to the client; the latter can be applied in mission mode with minimal performance impact but suffer from long generation times. Functional test patterns can, however, serve different goals in mission mode. Part III of this thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, each targeting a different test purpose:
    a. Functional Stress Patterns: suitable for maximizing functional stress during operational-life tests and burn-in screening, for an optimal device reliability characterization.
    b. Functional Power-Hungry Patterns: suitable for determining the functional peak power, used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage.
    c. Software-Based Self-Test (SBST) Patterns: combine the potential of structural patterns with functional ones, allowing periodic execution in mission mode. In addition, external hardware communicating with a devised SBST scheme was proposed; it increases fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns.
    An automatic functional test pattern generator exploiting an evolutionary algorithm that maximizes metrics related to stress, power, and fault coverage was employed in the above approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% compared with older methodologies, while the desired metrics increased significantly.
    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs provide the fast parallel computation used in high-performance computing and advanced driver assistance, where reliability is key. However, GPGPU manufacturers do not release design descriptions, so commercial fault injectors that rely on a device model cannot be used, leaving costly radiation tests as the only available resource. In the last part of this thesis, we propose a software-implemented fault injector able to inject bit-flips into the memory elements of a real GPGPU. It exploits a software debugger and the C-CUDA grammar to determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or by activating any redundancy mechanisms they embed. The effectiveness of the tool was evaluated on two robust applications: a redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
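
    The bit-flip operation at the core of such an injector is simple to sketch; the following minimal Python illustration emulates a single-event upset in a program variable (the thesis tool drives a software debugger on a real GPGPU, which is not reproduced here):

        import random
        import struct

        def flip_bit_int(value, width=32):
            """Flip one randomly chosen bit of an integer variable."""
            return value ^ (1 << random.randrange(width))

        def flip_bit_float(value):
            """Flip one randomly chosen bit of a 32-bit float by
            reinterpreting its bit pattern, then converting back."""
            bits = struct.unpack("<I", struct.pack("<f", value))[0]
            bits ^= 1 << random.randrange(32)
            return struct.unpack("<f", struct.pack("<I", bits))[0]

        print(f"{flip_bit_int(0b1010, width=4):04b}")  # e.g. 0010 or 1011
        print(flip_bit_float(1.0))                     # e.g. 1.0000001 or -1.0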

    Testing reactive systems with data: enumerative methods and constraint solving

    Software faults are a well-known phenomenon. In most cases they are merely annoying, as when a computer game does not work as expected, or expensive, as when yet another space project fails due to a faulty data conversion. In critical systems, however, faults can have life-threatening consequences. It is the task of software quality assurance to avoid such faults, but this is a cumbersome, expensive, and itself error-prone undertaking. For this reason, research in recent years has aimed to automate this task as much as possible.
    This thesis investigates the connection of constraint solving techniques with formal methods, with the goal of finding faults in the models and implementations of reactive systems with data, such as automatic teller machines (ATMs). To do so, we first develop a translation of formal specifications in the process algebra ”CRL into a constraint logic program (CLP). In the course of this translation, we pay special attention to ensuring that the CLP, together with the constraint solver, correctly simulates the underlying term rewriting system.
    One way to validate a system is to test whether it conforms to its specification. In this thesis, we develop a test process to automatically generate and execute test cases for the conformance test of data-oriented systems. The applicability of this process to process-oriented software systems is demonstrated in a case study with an ATM as the system under test; its applicability to document-centered applications is shown by means of the open source web browser Mozilla Firefox. The test process is partially based on the tool TGV, an enumerative test case generator that generates test cases from a system specification and a test purpose. An enumerative approach to the analysis of system specifications tries to enumerate all possible combinations of values for the system's data elements, i.e. the system's states. The states of the systems we consider here are influenced by data from possibly infinite domains. Hence, the state space of such systems grows beyond all limits (it explodes) and can no longer be handled by enumerative algorithms. For this reason, the state space is limited prior to test case generation by a data abstraction. We use a chaotic abstraction here, replacing all possible input data from the system's environment by a single constant. In parallel, we generate a CLP from the system specification; with this CLP, we reintroduce the actual data at the time of test execution. This approach not only limits the state space of the system, but also separates system behavior from data, which allows test cases to be reused by varying only their data parameters. In the developed process, tests are executed by the tool BAiT, which was also created in the course of this thesis. Some systems do not always show identical behavior under the same circumstances; this phenomenon is known as nondeterminism. In most cases it arises because input from the system's environment is processed asynchronously by several components of the system, which do not always terminate in the same order. BAiT works as follows: the tool chooses a trace through the system behavior from the set of traces in the generated test cases, parameterizes this trace with data, and tries to execute it. When the nondeterministic system digresses from the selected trace, BAiT tries to adapt the trace appropriately. If this can be done in accordance with the system specification, the test can continue and a possibly false test verdict is avoided.
    Testing an implementation significantly reduces the number of faults in a system, but the system is only tested against its specification, and in many cases the specification itself does not completely fulfill the customer's expectations. To further reduce the risk of faults, the models of the system themselves must also be verified; this happens during model checking, prior to testing the software. Again, the explosion of the system's state space must be avoided by a suitable abstraction of the models. One consequence of model abstraction in the context of model checking is so-called false negatives: counterexample traces that point out a fault in the abstracted model but do not exist in the concrete one. Usually, these false negatives are ignored. In this thesis, we also develop a methodology to reuse this knowledge of potential faults by abstracting the counterexamples further and deriving a violation pattern from them. Afterwards, we search for a concrete counterexample using a constraint solver.
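
    The adaptation step can be illustrated with a toy sketch in Python; all names here are hypothetical, and the specification is reduced to a set of allowed (input, output) pairs, unlike the ”CRL-based machinery of the thesis:

        import random
        from dataclasses import dataclass

        @dataclass
        class Step:
            stimulus: str
            expected: str

        def execute_trace(sut, allowed, trace):
            """Drive the system under test (a callable stimulus -> output)
            along a trace. On a digression, accept the step only if the
            specification allows the observed pair, avoiding a spurious
            fail verdict for a nondeterministic but correct system."""
            for step in trace:
                observed = sut(step.stimulus)
                if observed == step.expected or (step.stimulus, observed) in allowed:
                    continue
                return "fail"
            return "pass"

        # Toy nondeterministic ATM: it may ask about a receipt first.
        def atm(stimulus):
            if stimulus == "withdraw":
                return random.choice(["dispense", "ask_receipt"])
            return "idle"

        trace = [Step("withdraw", "dispense")]
        allowed = {("withdraw", "ask_receipt")}  # spec-permitted alternative
        print(execute_trace(atm, allowed, trace))  # "pass" either way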

    Smart Distributed Generation System Event Classification using Recurrent Neural Network-based Long Short-term Memory

    High penetration of distributed generation (DG) sources into a decentralized power system causes several disturbances, complicating the monitoring and operational control of the system. Moreover, being passive, modern DG systems are unable to intelligently detect and report these power-quality disturbances. This paper proposes a novel, intelligent technique capable of making real-time decisions on the occurrence of different DG events, such as islanding, capacitor switching, unsymmetrical faults, load switching, and loss of parallel feeder, and of distinguishing these events from the normal mode of operation. The classification technique diagnoses the distinctive pattern of the time-domain signal representing a measured electrical parameter, such as the voltage at the DG point of common coupling (PCC), during such events. Power system events were then classified into their root causes using long short-term memory (LSTM), a deep learning algorithm for sequence-to-label classification. A total of 1100 events covering islanding, faults, and other DG events were generated from a model of a smart distributed generation system in a MATLAB/Simulink environment. Classifier performance was evaluated using 5-fold cross-validation. A genetic algorithm (GA) was used to determine the optimal values of the classification hyperparameters and the best combination of features. Simulation results indicated that the events were classified with high precision and specificity within ten cycles of their occurrence, achieving a 99.17% validation accuracy. The performance of the proposed technique does not degrade in the presence of noise in the test data, with multiple DG sources in the model, or with the inclusion of a motor-starting event in the training samples.
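
    The paper's models are built in MATLAB/Simulink; purely as an illustration of the sequence-to-label idea, an equivalent LSTM classifier could be sketched with Keras (window length, hidden size, and optimizer below are assumptions, not the paper's GA-tuned settings):

        import tensorflow as tf

        # Assumed input: each event is a window of the PCC voltage waveform
        # (600 samples, 1 channel); 6 classes: islanding, capacitor switching,
        # unsymmetrical fault, load switching, loss of parallel feeder, normal.
        timesteps, channels, n_classes = 600, 1, 6

        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(64, input_shape=(timesteps, channels)),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(x_train, y_train, validation_split=0.2, epochs=30)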

    OpenForensics: a digital forensics GPU pattern matching approach for the 21st century

    Pattern matching is a crucial component of many digital forensic (DF) analysis techniques, such as file carving. The capacity of storage available on modern consumer devices has increased substantially in the past decade, making the pattern matching approaches of current-generation DF tools increasingly ineffective at performing timely analyses of data seized in a DF investigation. As pattern matching is a trivially parallelisable problem, general-purpose programming on graphics processing units (GPGPU) is a natural fit. This paper presents a pattern matching framework, OpenForensics, that demonstrates substantial performance improvements from the use of modern parallelisable algorithms and graphics processing units (GPUs) to search for patterns within forensic images and local storage devices.
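
    The decomposition that makes the problem parallelisable is easy to sketch: split the data into chunks, extend each chunk's search window by len(pattern) - 1 bytes so matches straddling a boundary are not lost, and search the chunks independently. The Python below runs the chunks sequentially; a GPU implementation would assign them to threads (a sketch of the idea, not the OpenForensics code):

        def find_pattern(data, pattern, chunk=1 << 20):
            """Return all offsets of `pattern` in `data`, chunk by chunk.
            Matches starting in a chunk's overlap region are left to the
            next chunk, so no match is missed or reported twice."""
            hits, overlap = [], len(pattern) - 1
            start = 0
            while start < len(data):
                end = min(start + chunk + overlap, len(data))
                pos = data.find(pattern, start, end)
                while pos != -1 and pos < start + chunk:
                    hits.append(pos)
                    pos = data.find(pattern, pos + 1, end)
                start += chunk
            return hits

        # Example: locate JPEG signatures (FF D8 FF) in a raw disk image.
        # with open("disk.img", "rb") as f:
        #     print(find_pattern(f.read(), b"\xff\xd8\xff"))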

    Description of the Chinese-to-Spanish rule-based machine translation system developed with a hybrid combination of human annotation and statistical techniques

    Two of the most popular Machine Translation (MT) paradigms are rule-based (RBMT) and corpus-based, the latter including statistical systems (SMT). When only a scarce parallel corpus is available, RBMT becomes particularly attractive; this is the case for the Chinese-Spanish language pair. This article presents the first RBMT system for Chinese to Spanish. We describe a hybrid method for constructing this system that takes advantage of available resources such as parallel corpora, which are used to extract dictionaries as well as lexical and structural transfer rules. The final system is freely available online and open source. Although its performance lags behind standard SMT systems on an in-domain test set, the results show that the RBMT system's coverage is competitive and that it outperforms the SMT system on an out-of-domain test set. This RBMT system is available to the general public, it can be further enhanced, and it opens up the possibility of creating future hybrid MT systems.
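
    As a toy illustration of lexical and structural transfer, the Python below translates a two-token Chinese phrase with a dictionary lookup and a single reordering rule; the entries and the rule are invented for the example, not taken from the system's extracted resources:

        # Lexical transfer: a dictionary of the kind extracted from parallel corpora.
        lexicon = {"çșą": "rojo", "èœŠ": "coche"}

        def transfer(tokens):
            """Translate token by token, then apply one structural rule:
            a Chinese ADJ+NOUN pair becomes Spanish NOUN+ADJ."""
            words = [lexicon.get(t, t) for t in tokens]
            if len(words) == 2:  # toy trigger for the reordering rule
                words.reverse()
            return " ".join(words)

        print(transfer(["çșą", "èœŠ"]))  # -> "coche rojo"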
    • 

    corecore