
    Critical Analysis of Various Symmetric Key Cryptographic Algorithms

    The current era is digital, with rapid advances in technology; in almost every field, processes are becoming technology-driven. In this fast-growing technical world, sensitive information needs protection. Cryptography is a popular field of network security that deals with converting sensitive data into a form that cannot be understood by anyone without the secret key. Many researchers have worked in the area of cryptography and, over different periods, have developed many algorithms for securing data in transit. The strengths and weaknesses of each algorithm should be understood before using it or recommending it to others. After critically analyzing existing standard cryptographic algorithms (DES, TDES, AES, Blowfish, Twofish, Threefish, RC2, RC4, RC5 and RC6) on parameters such as throughput, power consumption and memory usage, conclusions are drawn about which algorithm suits which situation. DOI: 10.17762/ijritcc2321-8169.150616
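    The throughput comparison the abstract describes can be reproduced in miniature. Below is a minimal sketch, assuming the PyCryptodome package (`Crypto`) is installed; it times AES and Blowfish encryption of the same payload and reports MB/s. The payload size, key lengths and cipher modes are illustrative choices, not values taken from the paper.

```python
# Minimal throughput benchmark sketch (assumes PyCryptodome: pip install pycryptodome).
# Payload size, key lengths and modes are illustrative, not taken from the paper.
import time
from Crypto.Cipher import AES, Blowfish
from Crypto.Random import get_random_bytes

PAYLOAD = get_random_bytes(16 * 1024 * 1024)  # 16 MiB of random data

def throughput_mb_s(make_cipher, block_size):
    # Pad the payload to a multiple of the cipher's block size, then time encryption.
    pad = (-len(PAYLOAD)) % block_size
    data = PAYLOAD + b"\0" * pad
    start = time.perf_counter()
    make_cipher().encrypt(data)
    elapsed = time.perf_counter() - start
    return len(data) / (1024 * 1024) / elapsed

aes_key = get_random_bytes(16)   # AES-128
bf_key = get_random_bytes(16)    # Blowfish with a 128-bit key

results = {
    "AES-128-CBC": throughput_mb_s(
        lambda: AES.new(aes_key, AES.MODE_CBC, get_random_bytes(16)), AES.block_size),
    "Blowfish-CBC": throughput_mb_s(
        lambda: Blowfish.new(bf_key, Blowfish.MODE_CBC, get_random_bytes(8)), Blowfish.block_size),
}
for name, mbps in results.items():
    print(f"{name}: {mbps:.1f} MB/s")
```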

    Diversity driven Attention Model for Query-based Abstractive Summarization

    Abstractive summarization aims to generate a shorter version of a document covering all the salient points in a compact and coherent fashion. Query-based summarization, on the other hand, highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc., but it suffers from the drawback of generating repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions: (i) a query attention model (in addition to the document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query), and (ii) a new diversity-based attention model which aims to alleviate the problem of repeating phrases in the summary. To enable testing of this model, we introduce a new query-based summarization dataset built on Debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models, with a gain of 28% (absolute) in ROUGE-L scores. Comment: Accepted at ACL 2017.
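    As a rough illustration of the diversity idea, the sketch below computes a standard attention context vector and then removes its component along the previous step's context, so that successive decoding steps attend to different content. This is a minimal numpy sketch under that assumption; the function names and the orthogonalization rule are illustrative and not the authors' exact formulation.

```python
# Sketch of a diversity-driven attention step (numpy only).
# Assumption: diversity is enforced by subtracting from the current context
# vector its projection onto the previous context vector. Names are
# illustrative; this is not the authors' exact formulation.
import numpy as np

def attention_context(enc_states, dec_state, W):
    """Bilinear attention: scores = enc_states @ W @ dec_state, softmax, weighted sum."""
    scores = enc_states @ (W @ dec_state)        # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over encoder time steps
    return weights @ enc_states                  # (d,) context vector

def diverse_context(context, prev_context):
    """Remove the component of `context` that lies along `prev_context`."""
    if prev_context is None or np.allclose(prev_context, 0):
        return context
    proj = (context @ prev_context) / (prev_context @ prev_context) * prev_context
    return context - proj

# Toy usage: 5 encoder states of dimension 8, two decoding steps.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 8))
prev = None
for step in range(2):
    dec_state = rng.normal(size=8)
    ctx = diverse_context(attention_context(enc, dec_state, W), prev)
    prev = ctx
    print(f"step {step}: context norm = {np.linalg.norm(ctx):.3f}")
```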

    The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT

    Multi-headed attention is a mainstay of transformer-based models. Different methods have been proposed to classify the role of each attention head based on the relations between tokens that have high pair-wise attention. These roles include syntactic (tokens with some syntactic relation), local (nearby tokens), block (tokens in the same sentence) and delimiter (the special [CLS] and [SEP] tokens). There are two main challenges with existing methods for classification: (a) there are no standard scores across studies or across functional roles, and (b) these scores are often average quantities measured across sentences without capturing statistical significance. In this work, we formalize a simple yet effective score that generalizes to all the roles of attention heads and employ hypothesis testing on this score for robust inference. This provides us with the right lens to systematically analyze attention heads and confidently comment on many commonly posed questions about the BERT model. In particular, we comment on the co-location of multiple functional roles in the same attention head, the distribution of attention heads across layers, and the effect of fine-tuning for specific NLP tasks on these functional roles. Comment: Accepted at AAAI 2021 (main conference).
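    A minimal sketch of the kind of analysis described: compute, per sentence, the fraction of one head's attention mass that falls on a given relation (here the "local" role, i.e. adjacent tokens), then run a hypothesis test across sentences against a chance-level baseline. The score definition, the baseline value, and the use of a one-sample t-test are assumptions for illustration, not the paper's exact statistic.

```python
# Sketch: score a single attention head for the "local" role and test whether
# its per-sentence scores significantly exceed a chance baseline. The score
# (attention mass on adjacent tokens) and the one-sample t-test are
# illustrative assumptions, not the paper's exact formulation.
import numpy as np
from scipy import stats

def local_role_score(attn):
    """attn: (T, T) attention matrix for one head on one sentence (rows sum to 1).
    Returns the average attention mass each token places on its immediate neighbors."""
    T = attn.shape[0]
    mass = 0.0
    for i in range(T):
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < T]
        mass += attn[i, neighbors].sum()
    return mass / T

def head_is_local(per_sentence_attn, baseline=0.2, alpha=0.05):
    """One-sided test: are the head's scores significantly above `baseline`?"""
    scores = np.array([local_role_score(a) for a in per_sentence_attn])
    t_stat, p_two_sided = stats.ttest_1samp(scores, popmean=baseline)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1.0
    return p_one_sided < alpha, scores.mean(), p_one_sided

# Toy usage with random "attention" matrices standing in for real BERT outputs.
rng = np.random.default_rng(1)
sentences = []
for _ in range(30):
    T = int(rng.integers(5, 12))
    raw = rng.random((T, T))
    sentences.append(raw / raw.sum(axis=1, keepdims=True))  # normalize rows
print(head_is_local(sentences))
```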