
    A design for testability study on a high performance automatic gain control circuit.

    A comprehensive testability study on a commercial automatic gain control circuit is presented, which aims to identify design for testability (DfT) modifications that both reduce production test cost and improve test quality. A fault simulation strategy based on layout-extracted faults has been used to support the study. The paper proposes a number of DfT modifications at the layout, schematic and system levels, together with testability guidelines that may well have generic applicability. Proposals for using the modifications to achieve partial self-test are made, and estimates of achieved fault coverage and quality levels are presented.

    SRAT-Distribution Voltage Sags and Reliability Assessment Tool

    Interruptions to supply and sags of distribution system voltage are the main causes of customer complaints. There is a need for analysis of supply reliability and voltage sags that relates system performance to network structure and equipment design parameters. Such analysis can also predict voltage dips, as well as relating traditional reliability and momentary-outage measures to the properties of protection systems and to network impedances. Existing reliability analysis software often requires substantial training, lacks automated facilities, and suffers from limited data availability; it therefore requires time-consuming manual intervention for the study of large networks. A user-friendly sag and reliability assessment tool (SRAT) has been developed based on existing impedance data, protection characteristics, and a model of failure probability. The new features included in SRAT are: (a) efficient reliability and sag assessments for a radial network with limited loops; (b) reliability evaluation associated with realistic protection and restoration schemes; (c) inclusion of momentary outages in the same model as permanent-outage evaluation; (d) evaluation of sag transfer through the meshed subtransmission network; and (e) a simplified probability distribution model determined from available fault records. Examples of the application of the tool to an Australian distribution network illustrate the use of this model.

    Mutation Testing as a Safety Net for Test Code Refactoring

    Refactoring is an activity that improves the internal structure of the code without altering its external behavior. When refactoring is performed on production code, the tests can be used to verify that the external behavior of the production code is preserved. However, when refactoring is performed on test code, there is no safety net that assures that the external behavior of the test code is preserved. In this paper, we propose to adopt mutation testing as a means to verify whether the behavior of the test code is preserved after refactoring. Moreover, we also show how this approach can be used to identify the parts of the test code that are improperly refactored.

    Sources of uncertainties and artefacts in back-projection results

    Back-projecting high-frequency (HF) waves is a common procedure for imaging rupture processes of large earthquakes (i.e. M_w > 7.0). However, the resulting back-projection (BP) images can suffer from large uncertainties, since high-frequency seismic waveforms are strongly affected by factors such as source depth, focal mechanisms, and the Earth's 3-D velocity structure. So far, these uncertainties have not been thoroughly investigated. Here, we use synthetic tests to investigate the influencing factors, designing scenarios with various source and/or velocity set-ups and using the Tohoku-Oki (Japan), Kaikoura (New Zealand), and Java/Wharton Basin (Indonesia) regions as test areas. For these scenarios, we generate either 1-D or 3-D teleseismic synthetic data, which are then back-projected using a representative BP method, MUltiple SIgnal Classification (MUSIC). We also analyse corresponding real cases to verify the synthetic test results. The Tohoku-Oki scenario shows that depth phases of a point source can be back-projected as artefacts at their bounce points on the Earth's surface, with these artefacts located far away from the epicentre if earthquakes occur at large depths, which could significantly contaminate BP images of large intermediate-depth earthquakes. The Kaikoura scenario shows that for complicated earthquakes, composed of multiple subevents with varying focal mechanisms, BP tends to image subevents emitting large-amplitude coherent waveforms, while missing subevents whose P nodal directions point to the arrays, leading to discrepancies either between BP images from different arrays, or between BP images and other source models. Using the Java event, we investigate the impact of 3-D source-side velocity structures. The 3-D bathymetry together with a water layer can generate strong and long-lasting coda waves, which are mirrored as artefacts far from the true source location. Finally, we use a Wharton Basin outer-rise event to show that the wavefields generated by 3-D near-trench structures contain frequency-dependent coda waves, leading to frequency-dependent BP results. In summary, our analyses indicate that depth phases, focal mechanism variations and 3-D source-side structures can affect various aspects of BP results. Thus, we suggest that target-oriented synthetic tests, for example synthetic tests for subduction earthquakes using more realistic 3-D source-side velocity structures, should be conducted to understand the uncertainties and artefacts before we interpret detailed BP images to infer earthquake rupture kinematics and dynamics.

    Analysis of the Correlation Between Majority Voting Error and the Diversity Measures in Multiple Classifier Systems

    Combining classifiers by majority voting (MV) has recently emerged as an effective way of improving the performance of individual classifiers. However, the usefulness of applying MV is not always observed and is subject to the distribution of classification outputs in a multiple classifier system (MCS). Evaluation of the MV error (MVE) for all combinations of classifiers in an MCS is a process of exponential complexity. This complexity can be reduced provided an explicit relationship between the MVE and some other, less complex function operating on the classifier outputs is found. Diversity measures operating on binary classification outputs (correct/incorrect) are studied in this paper as potential candidates for such functions. Their correlation with the MVE, interpreted as the quality of a measure, is thoroughly investigated using artificial and real-world datasets. Moreover, we propose a new diversity measure that efficiently exploits information coming from the whole MCS, rather than only the part to which it is applied.
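The quantities discussed in this abstract are easy to make concrete. The sketch below computes the majority-voting error over a binary correct/incorrect output table and one standard diversity measure (mean pairwise disagreement); the data matrix is made up for illustration, and the abstract's proposed new measure is not reproduced here.

```python
from itertools import combinations

# Rows = samples, columns = classifiers; 1 = classifier correct, 0 = incorrect.
# This matrix is invented purely for illustration.
outputs = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
]

def majority_voting_error(outputs):
    """Fraction of samples on which the majority of classifiers is incorrect."""
    wrong = sum(1 for row in outputs if sum(row) <= len(row) // 2)
    return wrong / len(outputs)

def mean_disagreement(outputs):
    """Average over classifier pairs of the fraction of samples on which
    exactly one of the two classifiers is correct (disagreement measure)."""
    n_clf = len(outputs[0])
    pairs = list(combinations(range(n_clf), 2))
    total = 0.0
    for i, j in pairs:
        total += sum(1 for row in outputs if row[i] != row[j]) / len(outputs)
    return total / len(pairs)

print(majority_voting_error(outputs))  # only one of five majorities is wrong
print(mean_disagreement(outputs))
```

Searching over all classifier subsets for the one minimizing `majority_voting_error` is the exponential-cost step the paper refers to; a diversity measure like `mean_disagreement` is a cheap per-pair proxy whose correlation with the MVE is what the paper investigates.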

    Formal Analysis of CRT-RSA Vigilant's Countermeasure Against the BellCoRe Attack: A Pledge for Formal Methods in the Field of Implementation Security

    In our paper at PROOFS 2013, we formally studied a few known countermeasures to protect CRT-RSA against the BellCoRe fault injection attack. However, we left Vigilant's countermeasure and its allegedly repaired version by Coron et al. as future work, because the arithmetical framework of our tool was not sufficiently powerful. In this paper we bridge this gap and then use the same methodology to formally study both versions of the countermeasure. We obtain surprising results, which we believe demonstrate the importance of formal analysis in the field of implementation security. Indeed, the original version of Vigilant's countermeasure is actually broken, but not as much as Coron et al. thought it was. As a consequence, the repaired version they proposed can be simplified. It can actually be simplified even further, as two of the nine modular verifications happen to be unnecessary. Fortunately, we could formally prove the simplified repaired version to be resistant to the BellCoRe attack, which was considered a "challenging issue" by the authors of the countermeasure themselves.
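For readers unfamiliar with why CRT-RSA needs such countermeasures at all: the classic BellCoRe observation is that a fault in one of the two CRT half-exponentiations yields a faulty signature that is still correct modulo one prime but wrong modulo the other, so a gcd with the modulus recovers a prime factor. The toy sketch below demonstrates this with textbook-sized parameters (no padding, no countermeasure); it illustrates the attack, not Vigilant's scheme.

```python
from math import gcd

# Toy CRT-RSA key (insecure textbook sizes, for illustration only).
p, q = 61, 53
N = p * q
e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)          # Garner recombination constant

def crt_sign(m, fault_in_p=False):
    """Sign m with CRT-RSA; optionally inject a fault into the mod-p half."""
    sp = pow(m, dp, p)
    if fault_in_p:
        sp = (sp + 1) % p      # injected fault: corrupt one half-exponentiation
    sq = pow(m, dq, q)
    h = (q_inv * (sp - sq)) % p
    return sq + h * q          # Garner recombination

m = 42
S = crt_sign(m)
S_faulty = crt_sign(m, fault_in_p=True)

assert pow(S, e, N) == m       # the correct signature verifies

# BellCoRe attack: S and S_faulty agree mod q but differ mod p,
# so their difference is a multiple of q and the gcd leaks a factor of N.
factor = gcd(S - S_faulty, N)
print(factor)                  # recovers q = 53
```

Countermeasures such as Vigilant's add redundant modular verifications precisely so that a faulted half-result is detected before a recombined signature like `S_faulty` ever leaves the device.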

    The Detection of Defects in a Niobium Tri-layer Process

    Niobium (Nb) LTS processes are emerging as the technology for future ultra-high-speed systems, especially in the digital domain. As the number of Josephson junctions (JJs) per chip has recently increased to around 90,000, the quality of the process has to be assured in order to realize these complex circuits. Until now, little or no information has been available in the literature on how to achieve this. In this paper we present an approach and the results of a study conducted on an RSFQ process. Measurements and SEM inspection were carried out on sample chips, and a list of possible defects has been identified and described in detail. We have also developed test structures for detection of the top-ranking defects, which will be used for yield analysis and the determination of the probability distribution of faults in the process. A test chip has been designed based on the results of this study, and certain types of defects were introduced in the design to study the behavior of faulty junctions and interconnections.

    Constraints on the active tectonics of the Friuli/NW Slovenia area from CGPS measurements and three-dimensional kinematic modeling

    We use site velocities from continuous GPS (CGPS) observations and kinematic modeling to investigate the active tectonics of the Friuli/NW Slovenia area. Data from 42 CGPS stations around the Adriatic indicate an oblique collision, with southern Friuli moving NNW toward northern Friuli at a relative speed of 1.6 to 2.2 mm/a. We investigate the active tectonics using 3DMove, a three-dimensional kinematic modeling tool. The model consists of one indenter-shaped fault plane that approximates the Adriatic plate boundary. Using the "fault-parallel flow" deformation algorithm, we move the hanging wall along the fault plane in the direction indicated by the GPS velocities. The resulting strain field is used for structural interpretation. We identify a pattern of coincident strain maxima and high vorticity that correlates well with groups of hypocenters of major earthquakes (including their aftershocks) and indicates the orientation of secondary, active faults. The pattern reveals structures both parallel and perpendicular to the strike of the primary fault. In the eastern sector, which shows more complex tectonics, these two sets of faults probably form an interacting strike-slip system.