
    Multi-turn Inference Matching Network for Natural Language Inference

    Natural Language Inference (NLI) is a fundamental and challenging task in Natural Language Processing (NLP). Most existing methods apply a single one-pass inference process to a mixed matching feature, i.e., a concatenation of different matching features between a premise and a hypothesis. In this paper, we propose a new model, the Multi-turn Inference Matching Network (MIMN), which performs multi-turn inference over the different matching features. In each turn, the model focuses on one particular matching feature rather than the mixed one. To strengthen the interaction between matching features, a memory component stores the inference history, and each turn's inference is performed on the current matching feature together with the memory. Experiments on three NLI datasets show that our model matches or outperforms the state of the art on all three.
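
    As a minimal sketch (not the authors' exact architecture), the multi-turn loop below processes one matching feature per turn and carries the history forward with a GRU cell standing in for the memory component; all names and dimensions are illustrative.

        # Hypothetical sketch of multi-turn inference with a memory component.
        import torch
        import torch.nn as nn

        class MultiTurnInference(nn.Module):
            def __init__(self, feat_dim, hidden_dim, n_features, n_labels=3):
                super().__init__()
                # One projection per matching feature (illustrative choice).
                self.projs = nn.ModuleList(
                    nn.Linear(feat_dim, hidden_dim) for _ in range(n_features)
                )
                self.memory_cell = nn.GRUCell(hidden_dim, hidden_dim)
                self.classifier = nn.Linear(hidden_dim, n_labels)

            def forward(self, matching_feats):
                # matching_feats: list of (batch, feat_dim) tensors, one per feature.
                batch = matching_feats[0].size(0)
                memory = matching_feats[0].new_zeros(batch, self.memory_cell.hidden_size)
                for proj, feat in zip(self.projs, matching_feats):
                    # Each turn reasons over ONE feature plus the running memory.
                    memory = self.memory_cell(torch.tanh(proj(feat)), memory)
                return self.classifier(memory)  # entailment/contradiction/neutral logits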

    The simplification of fuzzy control algorithm and hardware implementation

    The conventional inference composition algorithm of a fuzzy controller is very time- and memory-consuming. As a result, real-time fuzzy inference is difficult, and most fuzzy controllers are realized with look-up tables. Here, the researchers derive a simplified algorithm using mean-of-maximum defuzzification. The algorithm takes less computation time and needs less memory, making it possible to compute the fuzzy inference in real time and easy to tune the control rules online. A hardware implementation based on the simplified fuzzy inference algorithm is described.
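
    A minimal sketch of mean-of-maximum (MOM) defuzzification, the step the simplification relies on: rather than integrating over the whole output membership function as centroid-style methods do, it averages only the output values whose membership grade is maximal, so a single scan suffices. Variable names are illustrative, not taken from the paper.

        # Mean-of-maximum defuzzification over a discretized output universe.
        def mean_of_maximum(universe, membership, tol=1e-9):
            peak = max(membership)
            maximizers = [u for u, m in zip(universe, membership) if m >= peak - tol]
            return sum(maximizers) / len(maximizers)

        # Example: a clipped output set whose plateau spans [4, 6].
        xs = [0, 2, 4, 6, 8]
        mu = [0.0, 0.5, 0.9, 0.9, 0.2]
        print(mean_of_maximum(xs, mu))  # -> 5.0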

    Energy-Efficient Inference Accelerator for Memory-Augmented Neural Networks on an FPGA

    Memory-augmented neural networks (MANNs) are designed for question-answering tasks. It is difficult to run a MANN effectively on accelerators designed for other neural networks (NNs), in particular on mobile devices, because MANNs require recurrent data paths and various types of operations for external memory access. We implement an accelerator for MANNs on a field-programmable gate array (FPGA) based on a data-flow architecture. Inference times are further reduced by inference thresholding, a data-based maximum inner-product search specialized for natural language tasks. Measurements on the bAbI data show that the energy efficiency of the accelerator (FLOPS/kJ) was higher than that of an NVIDIA TITAN V GPU by a factor of about 125, rising to about 140 with inference thresholding.
    Comment: Accepted to DATE 201
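
    As an illustrative sketch only: the paper's inference thresholding is a data-based maximum inner-product search (MIPS), and the fragment below stands in for it with a simple early-exit scan that stops once a memory slot's score clears a fixed threshold tau (an assumed criterion, not the authors' learned one).

        # Early-exit maximum inner-product search over memory slots.
        import numpy as np

        def thresholded_mips(query, keys, tau):
            best_idx, best_score = -1, float("-inf")
            for i, key in enumerate(keys):
                score = float(query @ key)
                if score > best_score:
                    best_idx, best_score = i, score
                if best_score >= tau:   # confident enough: skip remaining slots
                    break
            return best_idx

        # Usage: keys is a (num_slots, dim) array, query a (dim,) vector.
        rng = np.random.default_rng(0)
        keys = rng.standard_normal((128, 64))
        query = rng.standard_normal(64)
        print(thresholded_mips(query, keys, tau=10.0))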

    Trimming and tapering semi-parametric estimates in asymmetric long memory time series

    This paper considers semi-parametric frequency-domain inference for seasonal or cyclical time series with asymmetric long-memory properties. It is shown that tapering the data reduces the bias caused by the asymmetry of the spectral density at the cyclical frequency. We provide a joint treatment of different tapering schemes and of the log-periodogram regression and Gaussian semi-parametric estimates of the memory parameters. Tapering allows a less restrictive trimming of frequencies in the analysis of the asymptotic properties of both estimates when allowing for asymmetries. Simple rules for inference are feasible thanks to tapering, and their validity in finite samples is investigated in a simulation exercise and in an empirical example.
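
    For reference, a standard form of the log-periodogram regression around a cyclical frequency, with trimming, written in generic notation (not necessarily the paper's): the memory parameter d on one side of the cyclical frequency \omega is estimated by least squares on the regression below, where the first l frequencies are trimmed out and tapering (replacing I with a tapered periodogram) is what permits a smaller l.

        \[
          \log I(\omega + \lambda_j) = c - 2d \log \lambda_j + u_j,
          \qquad \lambda_j = \frac{2\pi j}{n}, \quad j = l+1, \dots, m,
        \]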