
    A Neural Network Architecture for Syntax Analysis

    Artificial neural networks (ANNs), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of syntax analyzers. This paper proposes a modular neural network architecture for syntax analysis on a continuous input stream of characters. The components of the proposed architecture include neural network designs for a stack, a lexical analyzer, a grammar parser, and a parse tree construction module. The proposed NN stack allows simulation of a stack of large depth, needs no training, and hence is not application-specific. The proposed NN lexical analyzer provides a relatively efficient, high-performance alternative to current computer systems for lexical analysis, especially in natural language processing applications. The proposed NN parser generates parse trees by parsing strings from widely used subsets of deterministic context-free languages (generated by LR grammars). The estimated performance of the proposed neural network architecture (based on current CMOS VLSI technology) for syntax analysis is compared with that of commonly used approaches to syntax analysis in current computer systems. The results of this comparison suggest that the proposed neural network architecture offers an attractive approach for syntax analysis in a wide range of practical applications such as programming language compilation and natural language processing.
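    The architecture described above maps a conventional compiler front end (lexer, stack, parser, tree builder) onto neural hardware. As a point of reference for what those modules compute, the following is a minimal, non-neural sketch of the same pipeline: a lexer turns a character stream into tokens, and a stack-based shift-reduce parser builds a parse tree for a toy grammar. The grammar, regular expression, and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal, non-neural sketch of the pipeline the proposed architecture maps onto
# hardware: a lexer feeding a stack-based (shift-reduce) parser that emits a
# parse tree. The toy grammar and names are illustrative only.
import re

TOKEN_RE = re.compile(r"\s*(?:(?P<ID>[A-Za-z_]\w*)|(?P<PLUS>\+))")

def lex(text):
    """Lexical-analyzer stand-in: turn a character stream into (kind, value) tokens."""
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected character at {pos}")
        kind = m.lastgroup
        tokens.append((kind, m.group(kind)))
        pos = m.end()
    return tokens

def parse(tokens):
    """Stack-based parser stand-in for the grammar  E -> E '+' ID | ID.
    Returns a nested-tuple parse tree."""
    stack = []                                   # the role played by the proposed NN stack
    for kind, value in tokens:
        if kind == "ID":
            if stack and stack[-1] == "+":
                stack.pop()                      # reduce: E '+' ID -> E
                left = stack.pop()
                stack.append(("E", left, ("ID", value)))
            else:
                stack.append(("ID", value))      # shift operand
        elif kind == "PLUS":
            stack.append("+")                    # shift operator
    if len(stack) != 1:
        raise SyntaxError("incomplete input")
    return stack[0]

print(parse(lex("a + b + c")))
# ('E', ('E', ('ID', 'a'), ('ID', 'b')), ('ID', 'c'))
```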

    Cascaded face detection using neural network ensembles

    We propose a fast face detector using an efficient architecture based on a hierarchical cascade of neural network ensembles, with which we achieve enhanced detection accuracy and efficiency. First, we propose a way to form a neural network ensemble from a number of neural network classifiers, each of which is specialized in a subregion of the face-pattern space. These classifiers complement each other and, together, perform the detection task. Experimental results show that the proposed neural network ensembles significantly improve the detection accuracy compared to traditional neural-network-based techniques. Second, to reduce the total computation cost of face detection, we organize the neural network ensembles into a pruning cascade. In this way, simpler and more efficient ensembles used at earlier stages of the cascade reject a majority of nonface patterns in the image backgrounds, thereby significantly improving the overall detection efficiency while maintaining the detection accuracy. An important advantage of the new architecture is its homogeneous structure, which makes it suitable for very efficient implementation using programmable devices. Our proposed approach achieves one of the best detection accuracies in the literature with significantly reduced training and detection cost.
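    To make the cascade-of-ensembles idea concrete, the sketch below shows how a window could be rejected at the first stage whose ensemble score falls below a threshold, so the later, larger ensembles only run on surviving candidates. The stage definitions, scoring interface, and thresholds are hypothetical stand-ins, not the configuration used in the paper.

```python
# Hedged sketch of a pruning cascade of ensembles: each stage averages the votes
# of a few member classifiers, and a window is rejected as "nonface" at the
# first stage whose ensemble score falls below its threshold. Stage counts,
# thresholds, and the classifier interface are illustrative assumptions.
from typing import Callable, List, Sequence, Tuple

Classifier = Callable[[Sequence[float]], float]   # maps a window to a score in [0, 1]

def ensemble_score(members: List[Classifier], window: Sequence[float]) -> float:
    """Combine member classifiers (each specialized in a face-pattern subregion)."""
    return sum(m(window) for m in members) / len(members)

def cascade_detect(stages: List[Tuple[List[Classifier], float]],
                   window: Sequence[float]) -> bool:
    """Accept a window as a face only if it passes every stage. Cheap early
    stages discard most background windows, so the more expensive ensembles
    later in the cascade run on few candidates."""
    for members, threshold in stages:
        if ensemble_score(members, window) < threshold:
            return False      # rejected early: no further computation spent
    return True

# Toy usage with stand-in classifiers (real ones would be trained neural nets).
stages = [
    ([lambda w: float(sum(w) > 0)], 0.5),                  # fast, coarse stage
    ([lambda w: 0.8, lambda w: 0.9, lambda w: 0.7], 0.6),  # larger, slower ensemble
]
print(cascade_detect(stages, [0.2, 0.4]))   # True
print(cascade_detect(stages, [-1.0, 0.0]))  # False: rejected by the first stage
```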

    Neural Inheritance Relation Guided One-Shot Layer Assignment Search

    Layer assignment is seldom treated as an independent research topic in neural architecture search. In this paper, for the first time, we systematically investigate the impact of different layer assignments on network performance by building an architecture dataset of layer assignments on CIFAR-100. Through analyzing this dataset, we discover a neural inheritance relation among networks with different layer assignments: the optimal layer assignments for deeper networks always inherit from those for shallower networks. Inspired by this neural inheritance relation, we propose an efficient one-shot layer assignment search approach via inherited sampling. Specifically, the optimal layer assignment found for a shallow network serves as a strong sampling prior for training and searching deeper ones in the supernet, which drastically reduces the search space. Comprehensive experiments on CIFAR-100 demonstrate the efficiency of the proposed method: our search results are strongly consistent with the optimal ones selected directly from the architecture dataset. To further confirm the generalization of the proposed method, we also conduct experiments on Tiny-ImageNet and ImageNet, where our searched results are markedly superior to handcrafted ones under unchanged computational budgets. The neural inheritance relation discovered in this paper can provide insights into universal neural architecture search.
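    A rough illustration of inherited sampling under the stated inheritance relation: the best layer assignment found at one depth seeds the candidates evaluated at the next depth, so the search grows the assignment one layer at a time rather than enumerating all assignments at the target depth. The evaluation function below is a random placeholder standing in for supernet-based accuracy estimation; the stage count and names are assumptions for illustration.

```python
# Hedged sketch of inherited sampling for layer-assignment search. A layer
# assignment is the number of layers given to each stage of the network; the
# depth-D optimum seeds the candidate set at depth D+1.
from typing import List, Tuple
import random

Assignment = Tuple[int, ...]   # layers per stage, e.g. (2, 3, 1) for 3 stages

def evaluate(assignment: Assignment) -> float:
    """Placeholder for estimating an assignment's accuracy with shared supernet
    weights; here just a deterministic random proxy to keep the sketch runnable."""
    random.seed(hash(assignment))
    return random.random()

def inherited_candidates(parent: Assignment) -> List[Assignment]:
    """All assignments one layer deeper that inherit the parent assignment."""
    return [tuple(v + (1 if i == j else 0) for i, v in enumerate(parent))
            for j in range(len(parent))]

def search(num_stages: int, max_depth: int) -> Assignment:
    """Grow the best assignment one layer at a time instead of enumerating the
    exponentially many assignments at the target depth."""
    best: Assignment = tuple(1 for _ in range(num_stages))   # one layer per stage
    for _ in range(num_stages, max_depth):
        best = max(inherited_candidates(best), key=evaluate)
    return best

print(search(num_stages=3, max_depth=12))   # a 3-stage assignment with 12 layers total
```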