6 research outputs found

    Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation

    Non-Autoregressive Neural Machine Translation (NAT) achieves a significant decoding speedup by generating target words independently and simultaneously. However, in the non-autoregressive setting, the word-level cross-entropy loss cannot properly model the target-side sequential dependency, so it correlates weakly with translation quality. As a result, NAT tends to generate disfluent translations with over-translation and under-translation errors. In this paper, we propose to train NAT to minimize the Bag-of-Ngrams (BoN) difference between the model output and the reference sentence. The BoN training objective is differentiable and can be calculated efficiently; it encourages NAT to capture the target-side sequential dependency and correlates well with translation quality. We validate our approach on three translation tasks and show that it outperforms the NAT baseline by about 5.0 BLEU on WMT14 En↔De and about 2.5 BLEU on WMT16 En↔Ro.
    Comment: AAAI 202
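As a toy illustration of the bag-of-ngrams idea (not the paper's differentiable training objective, which is defined over the model's output distribution), one can count the n-grams on both sides and take the L1 difference between the two bags; the sentences and function names below are invented:

```python
from collections import Counter

def bag_of_ngrams(tokens, n=2):
    """Count all n-grams (as tuples) in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bon_l1_difference(hyp, ref, n=2):
    """L1 distance between the two bags: totals the over- and under-generated n-grams."""
    bh, br = bag_of_ngrams(hyp, n), bag_of_ngrams(ref, n)
    return sum(abs(bh[g] - br[g]) for g in set(bh) | set(br))

ref = "the cat sat on the mat".split()
hyp = "the the cat sat mat".split()  # exhibits over- and under-translation
print(bon_l1_difference(hyp, ref))  # prints 5
```

A perfect hypothesis yields a difference of 0; each repeated or missing n-gram adds to the penalty, which is why the objective correlates with over- and under-translation errors.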

    The N-Grams Based Text Similarity Detection Approach Using Self-Organizing Maps and Similarity Measures

    In this paper, a word-level n-grams based approach is proposed to measure similarity between texts. The approach combines two separate, independent techniques: self-organizing maps (SOM) and text similarity measures. SOM is distinctive in that the results of data clustering, as well as the dimensionality reduction, are presented in visual form. Four similarity measures have been evaluated: cosine, Dice, extended Jaccard, and overlap. First, the texts have to be converted into a numerical representation: each text is split into word-level n-grams, and a bag of n-grams is created from them. The n-gram frequencies are calculated and the frequency matrix of the dataset is formed. Various filters are used when creating the bag of n-grams: stemming algorithms, number and punctuation removal, stop-word lists, etc. All experimental investigations were performed on a corpus of plagiarized short answers.
    This article belongs to the Special Issue Advances in Deep Learning.
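The pipeline described above, word-level n-grams turned into frequency vectors and then compared with a similarity measure, can be sketched as follows (a minimal stand-in showing only the cosine measure of the four evaluated; function names are illustrative):

```python
import math
from collections import Counter

def word_ngrams(text, n=2):
    """Split a text into word-level n-grams (joined back into strings)."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def cosine_similarity(grams_a, grams_b):
    """Cosine similarity between two bags of n-grams (frequency vectors)."""
    fa, fb = Counter(grams_a), Counter(grams_b)
    dot = sum(fa[g] * fb[g] for g in fa.keys() & fb.keys())
    norm = (math.sqrt(sum(v * v for v in fa.values()))
            * math.sqrt(sum(v * v for v in fb.values())))
    return dot / norm if norm else 0.0
```

The Dice, extended Jaccard, and overlap measures differ only in how the shared counts are normalized; the stemming and stop-word filters mentioned above would be applied to `words` before the n-grams are formed.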

    From feature engineering and topics models to enhanced prediction rates in phishing detection

    Phishing is a type of fraud in which the attacker, usually by e-mail, pretends to be a trusted person or entity in order to obtain sensitive information from a target. Most recent phishing detection research has focused on obtaining highly distinctive features from the metadata and text of these e-mails. The obtained attributes are then fed to classification algorithms to determine whether messages are phishing or legitimate. In this paper, an approach based on machine learning is proposed to detect phishing e-mail attacks. The methods that compose this approach are built on a feature engineering process involving natural language processing, lemmatization, topic modeling, improved learning techniques for resampling and cross-validation, and hyperparameter configuration. The first proposed method uses all the features obtained from the Document-Term Matrix (DTM) in the classification algorithms. The second uses Latent Dirichlet Allocation (LDA) as an operation to deal with the "curse of dimensionality", the sparsity, and the textual-context portion included in the obtained representation. The proposed approach reached an F1-measure of 99.95% using the XGBoost algorithm. It outperforms state-of-the-art phishing detection research on an accredited dataset, in applications based only on the body of the e-mails, without using other e-mail features such as the header, IP information, or the number of links in the text.
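The first stage of such a pipeline, building the Document-Term Matrix, can be sketched in a few lines (a toy stand-in for what a library vectorizer does; the e-mails below are invented):

```python
from collections import Counter

def build_dtm(documents):
    """Build a Document-Term Matrix: one row per document, one column per term."""
    vocab = sorted({w for doc in documents for w in doc.lower().split()})
    index = {term: j for j, term in enumerate(vocab)}
    dtm = []
    for doc in documents:
        row = [0] * len(vocab)
        for term, count in Counter(doc.lower().split()).items():
            row[index[term]] = count
        dtm.append(row)
    return vocab, dtm

emails = ["verify your account now",
          "meeting notes attached",
          "account suspended verify now"]
vocab, dtm = build_dtm(emails)
```

Even in this tiny example most entries of each row are zero; on a real corpus the matrix has one column per distinct term, which is exactly the dimensionality and sparsity problem that the LDA-based second method is meant to address.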

    Phishing detection : methods based on natural language processing

    Doctoral thesis (doutorado), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2020. In phishing attempts, the attacker pretends to be a trusted person or entity and, through this false impersonation, tries to obtain sensitive information from a target.
    A typical example is one in which a scammer tries to pass off as a known institution, claiming the need to update a register or to take immediate action on the client side; for this, personal and financial data are requested. A variety of resources, such as fake web pages, the installation of malicious code, or form filling, are employed along with the e-mail itself to perform this type of action. A phishing campaign usually starts with an e-mail, so the detection of this type of e-mail is critical. Since phishing aims to appear to be a legitimate message, detection techniques based only on filtering rules, such as blacklists and heuristics, have limited effectiveness and can potentially be forged. Therefore, with the use of data-driven techniques, mainly those focused on text processing, features can be extracted from the e-mail body and header that explain the similarity and significance of the words in a specific e-mail, as well as across the entire set of message samples. The most common approach for this type of feature engineering is based on Vector Space Models (VSM). However, since VSMs derived from the Document-Term Matrix (DTM) have as many dimensions as the number of terms used in a corpus, and since not all terms are present in each e-mail, the feature engineering step of the phishing e-mail detection process has to address issues related to the "curse of dimensionality", the sparsity, and the information that can be obtained from the context (how to improve it and reveal its latent features). This thesis proposes an approach to detect phishing that consists of four methods. They use combined techniques to obtain more representative features from the e-mail texts, which feed ML classification algorithms to correctly detect phishing e-mails.
    They are based on natural language processing (NLP) and machine learning (ML), with feature engineering strategies that increase the precision, recall, and accuracy of the predictions of the adopted algorithms and that address the VSM/DTM problems. Method 1 uses all the features obtained from the DTM in the classification algorithms, while the other methods use different dimensionality reduction strategies to deal with the posed issues. Method 2 uses feature selection through the Chi-Square and Mutual Information measures. Method 3 implements feature extraction through the Principal Component Analysis (PCA), Latent Semantic Analysis (LSA), and Latent Dirichlet Allocation (LDA) techniques. Method 4 is based on word embeddings, with representations obtained from the Word2Vec, FastText, and Doc2Vec techniques. The approach was employed on three datasets (Dataset 1, the main dataset, plus Dataset 2 and Dataset 3). All four proposed methods had excellent marks. Using the main dataset (Dataset 1), among the respective best results of the four methods, an F1 Score of 99.74% was achieved by Method 1, whereas the other three methods attained a remarkable 100% in all main utility measures, which is, to the best of our knowledge, the highest result obtained in phishing detection research for an accredited dataset based only on the body of the e-mails. The methods/perspectives that obtained 100% on Dataset 1 (the Chi-Square perspective of Method 2, using one hundred features; the LSA perspective of Method 3, using twenty-five features; and the Word2Vec and FastText perspectives of Method 4) were evaluated in two further contexts. Considering both the e-mail bodies and headers, using the first additional dataset (Dataset 2), a 99.854% F1 Score was obtained with the Word2Vec perspective, its best mark, surpassing the current best result for this dataset.
    Using just the e-mail bodies, as done for Dataset 1, the evaluation employing Dataset 3 also reached the best marks for this data collection. All four perspectives outperformed the state-of-the-art results, with an F1 Score of 98.43% through the FastText perspective being the best mark. Therefore, for both additional datasets, these results are, to the best of our knowledge, the highest in phishing detection research for these accredited datasets. The results are not only due to the excellent performance of the classification algorithms, but also to the proposed combination of feature engineering techniques, such as text processing procedures (for instance, the lemmatization step), improved learning techniques for re-sampling and cross-validation, and hyper-parameter configuration estimation. Thus, the proposed methods, their perspectives, and the complete plan of action demonstrated relevant performance when distinguishing between ham and phishing e-mails. The methods also proved to contribute substantially to this area and to other natural language processing research that needs to address or avoid problems related to VSM/DTM representation, since they generate a dense, low-dimensional representation of the evaluated texts.
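Method 2's feature selection can be illustrated with the chi-square statistic for a single term against the phishing/ham label, computed from a 2x2 contingency table (a simplified sketch; the thesis relies on library implementations applied over the full DTM, and the variable names here are illustrative):

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic from a 2x2 term/class contingency table:
    n11 = phishing e-mails containing the term, n10 = ham e-mails containing it,
    n01 = phishing e-mails without it,          n00 = ham e-mails without it."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0
```

Terms are then ranked by this score and only the top k columns of the DTM are kept (one hundred in the Method 2 perspective above), yielding a much denser, lower-dimensional feature set.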

    Neural Bag-of-Ngrams

    Bag-of-ngrams (BoN) models are commonly used for representing text. One of the main drawbacks of traditional BoN is that it ignores n-gram semantics. In this paper, we introduce the concept of Neural Bag-of-ngrams (Neural-BoN), which replaces the sparse one-hot n-gram representation of traditional BoN with dense, semantically rich n-gram representations. We first propose a context-guided n-gram representation, obtained by adding n-grams to a word-embedding model. However, the context-guided learning strategy of word embeddings is likely to miss some semantics needed for text-level tasks. Text-guided and label-guided n-gram representations are therefore proposed to capture more semantics, such as topic or sentiment tendencies. Neural-BoN with the latter two n-gram representations achieves state-of-the-art results on 4 document-level classification datasets and 6 semantic-relatedness categories, and is on par with some sophisticated DNNs on 3 sentence-level classification datasets. Like traditional BoN, Neural-BoN is efficient, robust, and easy to implement. We expect it to be a strong baseline and to be used in more real-world applications.
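The contrast the paper draws can be sketched as follows: traditional BoN represents a text by sparse n-gram counts, while Neural-BoN sums a dense vector per n-gram (the random vectors below are stand-ins for the learned embeddings; all names are illustrative):

```python
import random
from collections import Counter

random.seed(0)
DIM = 8  # toy embedding size; real models use hundreds of dimensions

def ngrams(tokens, n=2):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sparse_bon(tokens):
    """Traditional BoN: one-hot counts, one dimension per distinct n-gram."""
    return Counter(ngrams(tokens))

embeddings = {}
def embed(gram):
    """Dense vector per n-gram (random here; learned in Neural-BoN)."""
    if gram not in embeddings:
        embeddings[gram] = [random.gauss(0, 1) for _ in range(DIM)]
    return embeddings[gram]

def neural_bon(tokens):
    """Neural-BoN text representation: the sum of its n-gram vectors."""
    vec = [0.0] * DIM
    for gram in ngrams(tokens):
        for j, x in enumerate(embed(gram)):
            vec[j] += x
    return vec
```

The sparse representation grows with the vocabulary of n-grams and treats every n-gram as unrelated to every other, whereas the dense sum stays fixed-size and lets semantically similar n-grams receive similar vectors, which is the property the context-, text-, and label-guided training schemes aim to produce.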