1,834 research outputs found

    Transition-fault test generation

    Get PDF
    After an integrated circuit is manufactured, it must be tested to ensure that it is not defective. Timing defects in particular are becoming increasingly important to detect because of shrinking process geometries and rising clock rates. One way to detect these defects is to apply test patterns generated using the transition-fault model. Unfortunately, industry's current transition-fault test generation schemes produce test sets that are too large to store in the memory of the tester. The proposed methods of test generation utilize stuck-at-fault tests to create smaller transition-fault test sets. Greedy algorithms are used in the generation of both the stuck-at-fault and transition-fault tests. In addition, various methods of test set compaction are explored to further reduce the size of the test sets. This research demonstrates an effective way to generate compact transition-fault test sets for a benchmark circuit and holds great promise for application to large commercial circuits.
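    The abstract does not spell out the algorithms, but the compaction step can be illustrated with a standard greedy set-cover heuristic: repeatedly keep the test that detects the most still-undetected faults. A minimal sketch follows; the test names, fault names, and the `detects` mapping are hypothetical placeholders for what a fault simulator would report.

```python
# Greedy set-cover style test compaction: keep the test covering the most
# still-undetected faults until everything detectable is covered.
# The fault-coverage data below is invented for illustration.

def greedy_compact(detects):
    """detects: dict mapping test id -> set of fault ids it detects."""
    undetected = set().union(*detects.values())
    compacted = []
    while undetected:
        # Greedy choice: the test with the largest remaining coverage.
        best = max(detects, key=lambda t: len(detects[t] & undetected))
        gain = detects[best] & undetected
        if not gain:
            break  # remaining faults are undetectable by this test set
        compacted.append(best)
        undetected -= gain
    return compacted

tests = {
    "t1": {"f1", "f2", "f3"},
    "t2": {"f2", "f4"},
    "t3": {"f4", "f5"},
}
print(greedy_compact(tests))  # ['t1', 't3'] covers all five faults
```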

    Towards Effective Codebookless Model for Image Classification

    Full text link
    The bag-of-features (BoF) model for image classification has been thoroughly studied over the last decade. Unlike the widely used BoF methods, which model images with a pre-trained codebook, the alternative codebook-free image modeling approach, which we call the Codebookless Model (CLM), has attracted little attention. In this paper, we present an effective CLM that represents an image with a single Gaussian for classification. By embedding the Gaussian manifold into a vector space, we show that simply incorporating our CLM into a linear classifier achieves very competitive accuracy compared with state-of-the-art BoF methods (e.g., Fisher Vector). Since our CLM lies in a high-dimensional Riemannian manifold, we further propose a method for jointly learning a low-rank transformation with a support vector machine (SVM) classifier on the Gaussian manifold, in order to reduce computational and storage cost. To study and alleviate the side effect of background clutter on our CLM, we also present a simple yet effective partial background removal method based on saliency detection. Experiments are conducted extensively on eight widely used databases to demonstrate the effectiveness and efficiency of our CLM method.
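    As a rough illustration of the codebookless idea (not necessarily the paper's exact embedding), the sketch below models an image's local descriptors with a single Gaussian and flattens it into a vector via a log-Euclidean mapping of the covariance, one common way to embed SPD matrices into a vector space. The feature array is random placeholder data.

```python
# Toy codebookless descriptor: mean + log-Euclidean-vectorized covariance
# of an image's local features, usable directly by a linear classifier.
import numpy as np

def clm_descriptor(features, eps=1e-6):
    """features: (n, d) array of local descriptors (e.g. dense SIFT)."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    # Matrix logarithm via eigendecomposition (cov is symmetric positive definite).
    w, v = np.linalg.eigh(cov)
    log_cov = (v * np.log(w)) @ v.T
    # Concatenate the mean with the upper triangle of log(cov).
    iu = np.triu_indices(features.shape[1])
    return np.concatenate([mu, log_cov[iu]])

X = np.random.randn(500, 8)     # 500 hypothetical 8-D local features
print(clm_descriptor(X).shape)  # (8 + 8*9/2,) = (44,)
```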

    Application of artificial intelligence to evaluate the fresh properties of self-consolidating concrete

    Get PDF
    This paper numerically investigates the superplasticizer (SP) demand required for self-consolidating concrete (SCC) as a valuable source of information for obtaining a durable SCC. To this end, an adaptive neuro-fuzzy inference system (ANFIS) is integrated with three metaheuristic algorithms to evaluate a dataset from non-destructive tests. Five non-destructive testing methods were performed: the J-ring test, the V-funnel test, the U-box test, the 3 min slump value and the 50 min slump (T50) value. Three metaheuristic algorithms, namely particle swarm optimization (PSO), ant colony optimization (ACO) and differential evolution optimization (DEO), were then used to predict the SP demand of SCC mixtures. To compare the optimization algorithms, the ANFIS parameters were kept constant (clusters = 10, train samples = 70%, test samples = 30%), while the metaheuristic parameters were adjusted and each algorithm was tuned for best performance. In general, the ANFIS method was found to be a good basis for combination with other optimization algorithms. The results indicate that the hybrid algorithms (ANFIS-PSO, ANFIS-DEO and ANFIS-ACO) can serve as reliable prediction methods and as an alternative to experimental techniques. To compare the developed algorithms reliably, three evaluation criteria were employed: root mean square error (RMSE), Pearson correlation coefficient (r) and coefficient of determination (R²). The ANFIS-PSO algorithm produced the most accurate prediction of SP demand, with RMSE = 0.0633, r = 0.9387 and R² = 0.9871 in the testing phase.
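    For concreteness, the three evaluation criteria named above have standard definitions, sketched below; the arrays are placeholders, not data from the study.

```python
# Standard definitions of the three reported evaluation criteria.
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient."""
    return np.corrcoef(y_true, y_pred)[0, 1]

def r_squared(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([0.40, 0.55, 0.62, 0.71])  # measured SP demand (placeholder)
y_pred = np.array([0.42, 0.53, 0.65, 0.69])  # model prediction (placeholder)
print(rmse(y_true, y_pred), pearson_r(y_true, y_pred), r_squared(y_true, y_pred))
```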

    PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform

    Full text link
    Computing with high-dimensional (HD) vectors, also referred to as hypervectors, is a brain-inspired alternative to computing with scalars. Key properties of HD computing include a well-defined set of arithmetic operations on hypervectors, generality, scalability, robustness, fast learning, and ubiquitous parallel operations. HD computing manipulates and compares large patterns (binary hypervectors with 10,000 dimensions), making its efficient realization on minimalistic ultra-low-power platforms challenging. This paper describes the acceleration of HD computing, optimizing its memory accesses and operations, on a silicon prototype of the PULPv3 4-core platform (1.5 mm², 2 mW), surpassing state-of-the-art classification accuracy (92.4% on average) with a simultaneous 3.7× end-to-end speed-up and 2× energy saving compared to single-core execution. We further explore the scalability of our accelerator by increasing the number of inputs and the classification window on a new generation of the PULP architecture featuring bit-manipulation instruction extensions and a larger number of cores (eight). Together, these enable a near-ideal speed-up of 18.4× compared to the single-core PULPv3.
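    The arithmetic operations referred to above are well established in the HD-computing literature; a minimal NumPy sketch of three of them (XOR binding, majority-vote bundling, Hamming similarity) follows. It illustrates the math only; an efficient platform implementation would operate on packed bit vectors rather than byte arrays.

```python
# Core HD-computing operations on binary hypervectors.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
rand_hv = lambda: rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """XOR binding: associates two hypervectors."""
    return a ^ b

def bundle(*hvs):
    """Bitwise majority vote: superposes several hypervectors."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):
    """1 - normalized Hamming distance."""
    return 1.0 - np.count_nonzero(a ^ b) / D

x, y, z = rand_hv(), rand_hv(), rand_hv()
proto = bundle(x, y, z)              # class prototype from three examples
print(similarity(proto, x))          # ~0.75: a member is close to its bundle
print(similarity(proto, rand_hv()))  # ~0.5: a random vector is far
```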

    Histone H1 Subtypes Differentially Modulate Chromatin Condensation without Preventing ATP-Dependent Remodeling by SWI/SNF or NURF

    Get PDF
    Although ubiquitously present in chromatin, the function of the linker histone subtypes is partly unknown, and contradictory studies on their properties have been published. To explore whether the various H1 subtypes play differential roles in the organization and dynamics of chromatin, we have incorporated all of the somatic human H1 subtypes into minichromosomes and compared their influence on nucleosome spacing, chromatin compaction and ATP-dependent remodeling. The H1 subtypes exhibit different affinities for chromatin and different abilities to promote chromatin condensation, as studied with the atomic force microscope. By this criterion, the H1 subtypes can be classified as weak condensers (H1.1 and H1.2), intermediate condensers (H1.3) and strong condensers (H1.0, H1.4, H1.5 and H1x). The variable C-terminal domain is required for nucleosome spacing by H1.4 and is likely responsible for the chromatin condensation properties of the various subtypes, as shown using chimeras between H1.4 and H1.2. In contrast to previous reports with isolated nucleosomes or linear nucleosomal arrays, linker histones at a ratio of one per nucleosome do not preclude remodeling of minichromosomes by yeast SWI/SNF or Drosophila NURF. We hypothesize that the linker histone subtypes are differential organizers of chromatin, rather than general repressors.

    Optimization approaches for standard repairs in a concrete element

    Get PDF
    After extreme events, structural elements can be damaged, and repairs are needed to restore the safe functionality of the structure. This thesis investigates design approaches for typical repairs in concrete elements. A methodology for the automatic design and cost estimation of common elements will be developed, which is useful for decision making in performance-based design.

    Efficient Test Compaction for Combinational Circuits Based on Fault Detection Count-Directed Clustering

    Get PDF
    Test compaction is an effective technique for reducing test data volume and test application time. In this paper, we present a new static test compaction algorithm based on test vector decomposition and clustering. Test vectors are decomposed and clustered in increasing order of fault detection count. This clustering order provides more degrees of freedom and results in better compaction. Experimental results demonstrate the effectiveness of the proposed approach in achieving higher compaction with significantly less CPU time than previous clustering-based test compaction approaches.
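    The abstract leaves the clustering details out; one plausible reading is sketched below: faults are processed in increasing order of detection count (hardest-to-detect first), and each fault's test cube is merged into the first compatible cluster. The cube strings, fault names, and counts are invented for illustration, real cubes would come from an ATPG tool, and this sketch omits the paper's test vector decomposition step.

```python
# Detection-count-ordered clustering of test cubes ('0'/'1'/'x' per bit).

def compatible(a, b):
    """Two cubes are compatible if no position has conflicting specified bits."""
    return all(p == 'x' or q == 'x' or p == q for p, q in zip(a, b))

def merge(a, b):
    """Intersect two compatible cubes, keeping every specified bit."""
    return ''.join(q if p == 'x' else p for p, q in zip(a, b))

def compact(cubes_by_fault, detection_count):
    # Hardest faults (fewest detecting tests) are clustered first, which
    # leaves the most freedom for merging the easier ones later.
    order = sorted(cubes_by_fault, key=lambda f: detection_count[f])
    clusters = []
    for fault in order:
        cube = cubes_by_fault[fault]
        for i, c in enumerate(clusters):
            if compatible(c, cube):
                clusters[i] = merge(c, cube)
                break
        else:
            clusters.append(cube)
    return clusters

cubes = {"f1": "1xx0", "f2": "x1x0", "f3": "0xx1"}
counts = {"f1": 1, "f2": 3, "f3": 2}
print(compact(cubes, counts))  # ['11x0', '0xx1']: f1 and f2 share one vector
```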
