
    Acceleration of Seed Ordering and Selection for High Quality Delay Test

    Seed ordering and selection is a key technique for providing high test quality with limited resources in a Built-In Self-Test (BIST) environment. We present a hard-to-detect delay fault selection method that accelerates the seed ordering and selection process. The selection method restricts the faults targeted by test generation in an early stage of seed ordering and selection, reducing the test pattern count and therefore the computation time. We evaluate the impact of the selection method both in deterministic BIST, where one test pattern is decoded from one seed, and in mixed-mode BIST, where one seed is expanded into two or more patterns. The statistical delay quality level (SDQL) is adopted as the test quality measure, representing the ability to detect small delay defects (SDDs). Experimental results show that the proposed method reduces computation time by 28% to 63% and base seed set counts by 21% to 67% while only slightly sacrificing test quality.
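    The early-stage fault restriction can be viewed as a greedy covering problem. A minimal sketch of that flavour of seed selection, with hypothetical seeds, faults and detection sets (the paper's actual flow uses ATPG-generated seeds and SDQL weighting):

```python
# Greedy seed selection as set cover: pick the seed whose decoded
# pattern detects the most still-uncovered faults, repeat until done.
# Seed names and fault sets below are invented for illustration.

def select_seeds(seed_to_faults, target_faults):
    remaining = set(target_faults)
    chosen = []
    while remaining:
        best = max(seed_to_faults,
                   key=lambda s: len(seed_to_faults[s] & remaining))
        gained = seed_to_faults[best] & remaining
        if not gained:  # no seed covers anything that is left
            break
        chosen.append(best)
        remaining -= gained
    return chosen, remaining

seed_to_faults = {
    "s0": {"f1", "f2", "f3"},
    "s1": {"f3", "f4"},
    "s2": {"f4", "f5"},
}
picked, uncovered = select_seeds(seed_to_faults,
                                 {"f1", "f2", "f3", "f4", "f5"})
```

Restricting `target_faults` to hard-to-detect delay faults, as the abstract proposes, shrinks both the seed count and the work each greedy iteration does.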

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
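    The importance-sampling idea mentioned above fits in a few lines: sample from a distribution under which the rare event is common, then reweight each hit by the likelihood ratio. The example below estimates the tail probability P(X > 4) of a standard normal; the numbers are illustrative and not from the survey:

```python
# Importance sampling for a rare event. Direct Monte Carlo would need
# ~10^5 samples per hit; shifting the sampling mean to the threshold
# makes hits common, and the likelihood ratio corrects the bias.
import math
import random

def tail_prob_is(n=50_000, threshold=4.0, shift=4.0, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Draw from the proposal N(shift, 1) instead of N(0, 1).
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # Likelihood ratio N(0,1)/N(shift,1) = exp(-shift*x + shift^2/2)
            total += math.exp(-shift * x + shift * shift / 2.0)
    return total / n

est = tail_prob_is()  # true value is about 3.17e-5
```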

    Development and evaluation of a fault detection and identification scheme for the WVU YF-22 UAV using the artificial immune system approach

    A failure detection and identification (FDI) scheme is developed for a small remotely controlled jet aircraft based on the Artificial Immune System (AIS) paradigm. Pilot-in-the-loop flight data are used to develop and test a scheme capable of identifying known and unknown aircraft actuator and sensor failures. Negative selection is used as the main mechanism for self/non-self definition; however, an alternative approach using positive selection to enhance performance is also presented. Tested failures include aileron and stabilator locked at trim and angular rate sensor bias. Hyper-spheres are chosen to represent detectors. Different definitions of distance for the matching rules are applied and their effect on the behavior of hyper-bodies is discussed. All the steps involved in the creation of the scheme are presented, including design selections embedded in the different algorithms applied to generate the detector set. The evaluation of the scheme is performed in terms of detection rate, false alarms, and detection time for normal conditions and upset conditions. The proposed detection scheme achieves good detection performance for all flight conditions considered. This approach shows promising potential to cope with the multidimensional characteristics of integrated/comprehensive detection for aircraft sub-system failures.

    A preliminary performance comparison between an AIS-based FDI scheme and a Neural Network and Floating Threshold based one is presented, including groundwork on assessing possible improvements in pilot situational awareness aided by FDI schemes. Initial results favor the AIS approach to FDI due to its relatively undemanding adaptation to new environments. The presence of the FDI scheme suggests benefits for the interaction between the pilot and the upset conditions by improving the accuracy of the identification of each particular failure and decreasing the detection delays.
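    The negative-selection mechanism described above can be sketched compactly: generate random hypersphere detectors, discard any that cover a "self" (normal-flight) sample, and flag as non-self any sample a surviving detector matches. The self samples, radius, and dimensionality below are invented for illustration; the scheme itself works on multidimensional flight data:

```python
# Negative selection with hypersphere detectors and Euclidean matching.
# A detector is kept only if it lies farther than `radius` from every
# self sample, so by construction it never fires on normal data.
import math
import random

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_detectors(self_set, n_detectors, radius, dim, seed=0):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.uniform(0.0, 1.0) for _ in range(dim))
        if all(euclid(cand, s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, radius):
    """A sample matched by any detector is flagged as non-self."""
    return any(euclid(sample, d) <= radius for d in detectors)

# Invented self region: normal flight data clustered in one corner.
self_set = [(0.1, 0.1), (0.15, 0.2), (0.2, 0.1)]
detectors = generate_detectors(self_set, n_detectors=50,
                               radius=0.15, dim=2)
```

Swapping `euclid` for another distance definition changes the matching rule, which is exactly the design choice the abstract says is studied.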

    The expedition of the Research Vessel "Polarstern" to the Arctic in 2010 (ARK-XXV/3)


    Bio-Inspired Load Balancing In Large-Scale WSNs Using Pheromone Signalling

    Wireless sensor networks (WSNs) consist of multiple, distributed nodes, each with limited resources. With their strict resource constraints and application-specific characteristics, WSNs contain many challenging tradeoffs. This paper proposes a bio-inspired load balancing approach, based on pheromone signalling mechanisms, to resolve the tradeoff between service availability and energy consumption. We explore the performance consequences of the pheromone-based load balancing approach using (1) a system-level simulator and (2) deployment on real sensor testbeds, providing a competitive analysis of these evaluation methodologies. The effectiveness of the proposed algorithm is evaluated with different scenario parameters, and the required performance evaluation techniques are investigated in case studies based on sound sensors.
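    The pheromone-signalling idea can be illustrated with a toy duty-cycling loop: active nodes deposit pheromone that diffuses to neighbours and decays over time, and nodes in pheromone "valleys" wake up to keep the area served. The decay rate, deposit amounts, threshold, and line topology below are invented for illustration, not the paper's parameters:

```python
# Toy pheromone-based duty cycling on a line of sensor nodes.
DECAY = 0.5        # fraction of pheromone retained each round
DEPOSIT = 1.0      # pheromone an active node adds at its own position
THRESHOLD = 0.4    # nodes sensing less than this become active

def step(pheromone, active):
    """One round: decay, deposits from active nodes, then re-election."""
    n = len(pheromone)
    pheromone = [p * DECAY for p in pheromone]
    for i in range(n):
        if active[i]:
            pheromone[i] += DEPOSIT
            # Deposit spreads to immediate neighbours at half strength.
            if i > 0:
                pheromone[i - 1] += DEPOSIT / 2
            if i + 1 < n:
                pheromone[i + 1] += DEPOSIT / 2
    # Nodes in pheromone 'valleys' wake up; well-served nodes sleep.
    active = [p < THRESHOLD for p in pheromone]
    return pheromone, active

# Start with only the middle of seven nodes active: after one round,
# its neighbourhood stays asleep while the distant ends wake up.
pher, act = step([0.0] * 7, [False] * 3 + [True] + [False] * 3)
```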

    High Quality Compact Delay Test Generation

    Delay testing is used to detect timing defects and ensure that a circuit meets its timing specifications. The growing need for delay testing is a result of the advances in deep submicron (DSM) semiconductor technology and the increase in clock frequency. Small delay defects that previously were benign now produce delay faults, due to reduced timing margins. This research focuses on the development of new test methods for small delay defects, within the limits of affordable test generation cost and pattern count. First, a new dynamic compaction algorithm has been proposed to generate compacted test sets for K longest paths per gate (KLPG) in combinational circuits or scan-based sequential circuits. This algorithm uses a greedy approach to compact paths with non-conflicting necessary assignments together during test generation. Second, to make this dynamic compaction approach practical for industrial use, a recursive learning algorithm has been implemented to identify more necessary assignments for each path, so that the path-to-test-pattern matching using necessary assignments is more accurate. Third, a realistic low-cost fault coverage metric targeting both global and local delay faults has been developed. The metric suggests the test strategy of generating a different number of longest paths for each line in the circuit while maintaining high fault coverage. The number of paths and type of test depend on the timing slack of the paths under this metric. Experimental results for ISCAS89 benchmark circuits and three industry circuits show that the pattern count of KLPG can be significantly reduced using the proposed methods. The pattern count is comparable to that of transition fault test, while achieving higher test quality. Finally, the proposed ATPG methodology has been applied to an industrial quad-core microprocessor. FMAX testing has been done on many devices, and silicon data has shown the benefit of the KLPG test.
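    The greedy compaction step can be sketched as merging per-path "necessary assignment" cubes whenever they do not conflict, so one pattern exercises many paths. The cubes below, mapping input names to 0/1, are invented; in KLPG they come from path sensitization during ATPG:

```python
# Greedy dynamic compaction: fold each path's necessary-assignment
# cube into the first existing pattern it does not conflict with.

def compatible(pattern, cube):
    """Conflict only if some input is assigned 0 in one and 1 in the other."""
    return all(cube.get(k, v) == v for k, v in pattern.items())

def compact(cubes):
    patterns = []
    for cube in cubes:
        for pat in patterns:
            if compatible(pat, cube):
                pat.update(cube)  # merge: pattern now tests both paths
                break
        else:
            patterns.append(dict(cube))  # no fit: start a new pattern
    return patterns

cubes = [
    {"a": 1, "b": 0},   # path 1
    {"b": 0, "c": 1},   # path 2: merges with path 1's pattern
    {"a": 0},           # path 3: conflicts on 'a', needs its own pattern
]
patterns = compact(cubes)
```

The recursive learning step the abstract mentions would enlarge each cube with additional implied assignments, making `compatible` reject merges that would actually destroy a path's sensitization.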

    SkyMapper Southern Survey: First Data Release (DR1)

    We present the first data release (DR1) of the SkyMapper Southern Survey, a hemispheric survey carried out with the SkyMapper Telescope at Siding Spring Observatory in Australia. Here, we present the survey strategy, data processing, catalogue construction and database schema. The DR1 dataset includes over 66,000 images from the Shallow Survey component, covering an area of 17,200 deg² in all six SkyMapper passbands uvgriz, while the full area covered by any passband exceeds 20,000 deg². The catalogues contain over 285 million unique astrophysical objects, complete to roughly 18 mag in all bands. We compare our griz point-source photometry with Pan-STARRS1 DR1 and note an RMS scatter of 2%. The internal reproducibility of SkyMapper photometry is on the order of 1%. Astrometric precision is better than 0.2 arcsec based on comparison with Gaia DR1. We describe the end-user database, through which data are presented to the world community, and provide some illustrative science queries. (31 pages, 19 figures, 10 tables; accepted to PASA.)
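    The cross-catalogue scatter quoted above is the kind of statistic computed over matched sources: take magnitude differences between the two catalogues and measure their RMS about the mean offset. The magnitudes below are invented; the real comparison uses SkyMapper and Pan-STARRS1 DR1 point sources:

```python
# RMS scatter of magnitude differences between two matched catalogues.
import math

def rms_scatter(mags_a, mags_b):
    """RMS of (a - b) about its mean, for position-matched sources."""
    diffs = [a - b for a, b in zip(mags_a, mags_b)]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

# Invented matched magnitudes for four sources.
skymapper = [15.01, 16.52, 17.03, 15.48]
ps1 = [15.00, 16.50, 17.05, 15.50]
scatter = rms_scatter(skymapper, ps1)  # ~0.02 mag, i.e. ~2%
```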

    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 Tb of storage, supercomputer processing, video target detection and