22,272 research outputs found

    Edge-Fault Tolerance of Hypercube-like Networks

    This paper considers a generalized measure $\lambda_s^{(h)}$ of fault tolerance in hypercube-like graphs $G_n$, a class that contains several well-known interconnection networks such as hypercubes, varietal hypercubes, twisted cubes, crossed cubes and Möbius cubes, and proves $\lambda_s^{(h)}(G_n) = 2^h(n-h)$ for any $h$ with $0 \leqslant h \leqslant n-1$ by induction on $n$ and a new technique. This result shows that at least $2^h(n-h)$ edges of $G_n$ have to be removed to obtain a disconnected graph that contains no vertices of degree less than $h$. Compared with previous results, this result theoretically strengthens the fault-tolerant ability of the above-mentioned networks.
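    As a quick worked instance of the bound (an illustration added here, with $n = 4$ chosen arbitrarily; these numbers are not reported in the abstract):

```latex
% Evaluating \lambda_s^{(h)}(G_n) = 2^h(n-h) for n = 4, e.g. the hypercube Q_4.
% Simple arithmetic from the stated formula, not results quoted from the paper.
\[
\lambda_s^{(0)}(G_4) = 2^0(4-0) = 4, \qquad
\lambda_s^{(1)}(G_4) = 2^1(4-1) = 6, \qquad
\lambda_s^{(2)}(G_4) = 2^2(4-2) = 8.
\]
```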

    Hashing based Answer Selection

    Answer selection is an important subtask of question answering (QA), where deep models usually achieve better performance. Most deep models adopt question-answer interaction mechanisms, such as attention, to get vector representations for answers. When these interaction-based deep models are deployed for online prediction, the representations of all answers need to be recalculated for each question. This procedure is time-consuming for deep models with complex encoders like BERT, which usually have better accuracy than simple encoders. One possible solution is to store the matrix representation (encoder output) of each answer in memory to avoid recalculation, but this brings a large memory cost. In this paper, we propose a novel method, called hashing based answer selection (HAS), to tackle this problem. HAS adopts a hashing strategy to learn a binary matrix representation for each answer, which can dramatically reduce the memory cost of storing the matrix representations of answers. Hence, HAS can adopt complex encoders like BERT in the model, while its online prediction remains fast with a low memory cost. Experimental results on three popular answer selection datasets show that HAS can outperform existing models and achieve state-of-the-art performance.
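    As a rough sketch of why binary matrix representations save memory (my own illustration under a simple sign-binarization assumption, not the authors' released code or their learned hashing layer):

```python
import numpy as np

def binarize_and_pack(answer_matrix: np.ndarray) -> np.ndarray:
    """Sign-binarize a float answer representation (seq_len x dim) and pack it
    into bits, cutting storage roughly 32x versus float32.
    Illustrative only: HAS learns its binary codes during training."""
    bits = (answer_matrix > 0).astype(np.uint8)   # one {0, 1} value per element
    return np.packbits(bits, axis=-1)             # 8 elements per stored byte

def unpack(packed: np.ndarray, dim: int) -> np.ndarray:
    """Recover a {-1, +1} matrix for downstream question-answer scoring."""
    bits = np.unpackbits(packed, axis=-1)[..., :dim]
    return bits.astype(np.float32) * 2.0 - 1.0

# Example with a BERT-sized output: 128 tokens x 768 dimensions.
m = np.random.randn(128, 768).astype(np.float32)  # 393,216 bytes as float32
packed = binarize_and_pack(m)                     # 12,288 bytes once packed
print(m.nbytes, packed.nbytes)
```

    The packed codes can be precomputed and cached for every candidate answer, so only the question needs to be encoded at query time; this is the memory/latency trade-off the abstract describes, shown here with an assumed sign-based hash rather than the paper's trained hashing strategy.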