486 research outputs found

    Answer Sequence Learning with Neural Networks for Answer Selection in Community Question Answering

    In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach based on a recurrent architecture is proposed for this problem. Our approach first applies convolutional neural networks (CNNs) to learn the joint representation of each question-answer pair, and then feeds the joint representations into a long short-term memory (LSTM) network that models the answer sequence of a question and labels the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset show the effectiveness of our approach. (Comment: 6 pages)
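    As background, the CNN-then-LSTM pipeline the abstract describes can be sketched with plain numpy. This is a minimal illustration under stated assumptions, not the paper's implementation: `conv1d_max` stands in for the CNN that builds a joint representation (here, max-over-time pooled 1D convolution features), and `lstm_step` is one standard LSTM cell step; all weight names and shapes are hypothetical.

```python
import numpy as np

def conv1d_max(x, W, b):
    # x: (seq_len, emb_dim) token embeddings of a question-answer pair
    # W: (window, emb_dim, filters) convolution filters, b: (filters,)
    window, _, filters = W.shape
    n = x.shape[0] - window + 1
    feats = np.empty((n, filters))
    for i in range(n):
        patch = x[i:i + window]                       # (window, emb_dim)
        feats[i] = np.tensordot(patch, W, axes=([0, 1], [0, 1])) + b
    return np.maximum(feats, 0).max(axis=0)           # ReLU + max-over-time pooling

def lstm_step(x, h, c, Wx, Wh, bias):
    # One LSTM cell step; gate pre-activations stacked as [input, forget, output, cell]
    z = Wx @ x + Wh @ h + bias
    H = h.size
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new
```

    In the abstract's setup, each answer to a question would yield one pooled vector from `conv1d_max`, and the LSTM would consume those vectors in answer order, emitting a matching-quality label per step.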

    Hesitant Triangular Fuzzy Information Aggregation Operators Based on Bonferroni Means and Their Application to Multiple Attribute Decision Making

    We investigate multiple attribute decision-making (MADM) problems with hesitant triangular fuzzy information. First, the definition and some operational laws of hesitant triangular fuzzy elements are introduced. Then, we develop some hesitant triangular fuzzy aggregation operators based on Bonferroni means and discuss their basic properties; some existing operators can be viewed as special cases of them. Next, we apply the proposed operators to multiple attribute decision-making problems in a hesitant triangular fuzzy environment. Finally, an illustrative example is given to show the developed method and demonstrate its practicality and effectiveness.
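    For context, the classical (crisp) Bonferroni mean that these operators extend to hesitant triangular fuzzy elements is B^{p,q}(a_1, ..., a_n) = ( (1/(n(n-1))) * sum over i != j of a_i^p a_j^q )^(1/(p+q)). A minimal sketch of the crisp case only (the hesitant triangular fuzzy operators in the paper work on fuzzy elements, not plain numbers):

```python
def bonferroni_mean(a, p, q):
    """Crisp Bonferroni mean B^{p,q} of a list of non-negative numbers."""
    n = len(a)
    s = sum(a[i] ** p * a[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

    With p = q = 1 it captures pairwise interrelationships between attribute values, which is the property the paper's aggregation operators carry over to the hesitant triangular fuzzy setting.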

    Reliability model of organization management chain of South-to-North Water Diversion Project during construction period

    In order to analyze the indispensability of the organization management chain of the South-to-North Water Diversion Project (SNWDP), two basic forms of the chain can be abstracted: a series connection state and a mixed state of both series and parallel connections. The indispensability of each form has been studied and is described in this paper. Through analysis of the reliability of the two basic forms, reliability models of the organization management chain in the series connection state and in the mixed series-parallel state have been set up.
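    As background, the two basic forms rest on the standard reliability identities: a series chain works only if every link works (R = product of R_i), while a parallel group fails only if every redundant link fails (R = 1 - product of (1 - R_i)). A minimal sketch with hypothetical link reliabilities (the paper's actual model parameters are not reproduced here):

```python
def series_reliability(rs):
    """Series chain: fails if any single link fails."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel_reliability(rs):
    """Parallel group: fails only if every redundant link fails."""
    fail = 1.0
    for r in rs:
        fail *= (1.0 - r)
    return 1.0 - fail

def mixed_chain_reliability(stages):
    """Mixed series-parallel chain: a series of stages, where each
    stage is a list of reliabilities of its (possibly redundant) links."""
    return series_reliability(parallel_reliability(stage) for stage in stages)
```

    The mixed form shows why redundancy helps: duplicating one weak stage raises the whole chain's reliability, whereas lengthening a pure series chain can only lower it.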

    Vision Transformer with Super Token Sampling

    Vision transformers have achieved impressive performance on many vision tasks. However, they may suffer from high redundancy in capturing local features in shallow layers. Local self-attention or early-stage convolutions are thus utilized, which sacrifice the capacity to capture long-range dependencies. A challenge then arises: can we achieve efficient and effective global context modeling at the early stages of a neural network? To address this issue, we draw inspiration from the design of superpixels, which reduce the number of image primitives in subsequent processing, and introduce super tokens into the vision transformer. Super tokens attempt to provide a semantically meaningful tessellation of visual content, thus reducing the token number in self-attention while preserving global modeling. Specifically, we propose a simple yet strong super token attention (STA) mechanism with three steps: the first samples super tokens from visual tokens via sparse association learning, the second performs self-attention on the super tokens, and the last maps them back to the original token space. STA decomposes vanilla global attention into multiplications of a sparse association map and a low-dimensional attention, leading to high efficiency in capturing global dependencies. Based on STA, we develop a hierarchical vision transformer. Extensive experiments demonstrate its strong performance on various vision tasks. In particular, without any extra training data or labels, it achieves 86.4% top-1 accuracy on ImageNet-1K with fewer than 100M parameters. It also achieves 53.9 box AP and 46.8 mask AP on the COCO detection task, and 51.9 mIoU on the ADE20K semantic segmentation task. Code will be released at https://github.com/hhb072/SViT. (Comment: 12 pages, 4 figures, 8 tables)
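    The three STA steps the abstract names can be illustrated with a dense numpy sketch. This is only a shape-level illustration under stated assumptions: the paper learns a sparse association map, whereas this toy version uses a dense softmax association against evenly sampled token centers, and all weight names (`Wq`, `Wk`, `Wv`) and the center-initialization scheme are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def super_token_attention(X, m, Wq, Wk, Wv):
    # X: (n, d) visual tokens; m super tokens with m << n
    n, d = X.shape
    # Step 1: associate tokens with super-token centers (dense stand-in
    # for the paper's sparse association learning)
    C = X[np.linspace(0, n - 1, m).astype(int)]        # (m, d) initial centers
    A = softmax(X @ C.T / np.sqrt(d), axis=1)          # (n, m) association map
    S = (A / A.sum(axis=0, keepdims=True)).T @ X       # (m, d) super tokens
    # Step 2: self-attention among the m super tokens (low-dimensional attention)
    Q, K, V = S @ Wq, S @ Wk, S @ Wv
    S2 = softmax(Q @ K.T / np.sqrt(d), axis=1) @ V     # (m, d)
    # Step 3: map super tokens back to the original token space
    return A @ S2                                      # (n, d)
```

    The efficiency claim is visible in the shapes: the quadratic attention runs over m super tokens instead of n visual tokens, and the n-sized work reduces to two multiplications by the (n, m) association map.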