
    A multi-candidate electronic voting scheme with unlimited participants

    In this paper, a new multi-candidate electronic voting scheme is constructed that supports an unlimited number of participants. The main idea is to express a ballot so that it allows voting for up to k out of m candidates while placing no limit on the number of participants. The purpose of the vote is to select more than one winner among the m candidates. Our result is complementary to Sun Peiyong's scheme, in the sense that their scheme is not suitable for large-scale electronic voting due to a flaw in its ballot structure. In our scheme the vote is split and hidden, tallying is performed over a Gödel encoding in decimal base without any trusted third party, and the result does not rely on traditional cryptography or any computational intractability assumption. Thus the proposed scheme not only solves the problem of ballot structure, but also achieves security properties including perfect ballot secrecy, receipt-freeness, robustness, fairness, and dispute-freeness.
    Comment: 6 pages
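    The abstract does not spell out the construction, but the general idea of tallying ballots through a numeric encoding can be illustrated with a classic Gödel-style prime-power encoding. The sketch below is only an assumption-laden illustration, not the paper's scheme: it omits the vote splitting and hiding, uses prime products rather than the decimal-base encoding mentioned above, and all function names are hypothetical.

```python
# Illustrative sketch (NOT the paper's protocol): give each candidate a distinct
# prime, encode a ballot for up to k candidates as the product of their primes,
# and tally by multiplying all ballots and reading each candidate's count off
# the exponent of its prime. The number of voters is unbounded.

def first_primes(m):
    """Return the first m primes by trial division."""
    primes, n = [], 2
    while len(primes) < m:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def encode_ballot(choices, primes, k):
    """Encode a vote for up to k distinct candidates as a product of primes."""
    assert len(set(choices)) == len(choices) <= k
    ballot = 1
    for c in choices:
        ballot *= primes[c]
    return ballot

def tally(ballots, primes):
    """Multiply all ballots; each candidate's vote count is its prime's exponent."""
    total = 1
    for b in ballots:
        total *= b
    counts = []
    for p in primes:
        c = 0
        while total % p == 0:
            total //= p
            c += 1
        counts.append(c)
    return counts

if __name__ == "__main__":
    m, k = 4, 2                     # 4 candidates, vote for up to 2
    primes = first_primes(m)        # [2, 3, 5, 7]
    votes = [[0, 2], [1], [0, 1], [2, 3]]
    ballots = [encode_ballot(v, primes, k) for v in votes]
    print(tally(ballots, primes))   # [2, 2, 2, 1]
```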

    Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

    While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at the word level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that integrates linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
    Comment: Appeared in the ACL 2017 proceedings as a long paper. Corrects a calculation mistake in Table 1 (E-bow & A-bow), which results in higher scores.
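    As a rough illustration of the mechanism the abstract describes (a context-conditioned Gaussian latent variable over conversational intents plus an auxiliary bag-of-words loss), here is a minimal PyTorch sketch. It is not the authors' code: the layer shapes, names, and the single-vector context/response encodings are assumptions, and the word-level decoder itself is omitted.

```python
import torch
import torch.nn as nn

class DialogCVAE(nn.Module):
    """Minimal sketch of a conditional VAE for dialog with a bag-of-words head.

    Assumes the dialog context and the gold response have already been encoded
    into fixed-size vectors (e.g. by utterance/context RNNs, not shown here).
    """

    def __init__(self, vocab_size, ctx_dim=300, latent_dim=64, hid_dim=300):
        super().__init__()
        self.recog = nn.Linear(ctx_dim * 2, latent_dim * 2)       # q(z | context, response)
        self.prior = nn.Linear(ctx_dim, latent_dim * 2)           # p(z | context)
        self.bow = nn.Linear(latent_dim + ctx_dim, vocab_size)    # bag-of-words logits from z
        self.dec_init = nn.Linear(latent_dim + ctx_dim, hid_dim)  # decoder initial state

    def forward(self, ctx, resp_enc):
        # Recognition and prior networks each output Gaussian mean and log-variance.
        mu_q, logvar_q = self.recog(torch.cat([ctx, resp_enc], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(ctx).chunk(2, -1)

        # Reparameterized sample of the latent "intent" variable.
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()

        # KL(q || p) between the two diagonal Gaussians, summed over latent dims.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum(-1)

        # Auxiliary bag-of-words prediction keeps z informative about the response.
        bow_logits = self.bow(torch.cat([z, ctx], -1))

        # Initial hidden state for a (greedy) word-level decoder, omitted here.
        h0 = torch.tanh(self.dec_init(torch.cat([z, ctx], -1)))
        return h0, bow_logits, kl
```

    In training, the total loss would combine the decoder's reconstruction loss, the KL term, and a cross-entropy bag-of-words loss over the response's word set; at test time, z is sampled from the prior network so different samples yield different greedy decodes.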