89 research outputs found

    Finite-context Indexing of Restricted Output Space for NLP Models Facing Noisy Input

    NLP models excel on tasks with clean inputs but are less accurate on noisy ones. In particular, character-level noise such as human-written typos and adversarially engineered, realistic-looking misspellings often appears in text and can easily trip up NLP models. Prior solutions to character-level noise often alter the content of the inputs (low fidelity), inadvertently lowering model accuracy on clean inputs. We propose FiRo, an approach that boosts NLP model performance on noisy inputs without sacrificing performance on clean inputs. FiRo sanitizes the input text while preserving its fidelity by inferring the noise-free form of each token in the input. It uses finite-context aggregation to obtain contextual embeddings, which are then used to find the noise-free form within a restricted output space. The output space is restricted to a small cluster of probable candidates so that noise-free tokens can be predicted more accurately. Although the clusters are small, FiRo's effective vocabulary (the union of all clusters) can be scaled up to better preserve the input content. Experimental results show that NLP models using FiRo outperform baselines on six classification tasks and one sequence labeling task at various degrees of noise.
    Comment: Accepted at IJCNLP-AACL 202
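The core idea of restricting the output space can be sketched in a few lines. This is an illustrative toy, not FiRo's implementation: each noisy token is scored only against its own small candidate cluster (here by plain edit distance rather than learned contextual embeddings), and tokens without a cluster pass through unchanged, preserving fidelity. The clusters and function names below are our own hypothetical stand-ins.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def sanitize(tokens, clusters):
    """Map each token to the closest candidate in its restricted cluster;
    tokens without a cluster are left untouched (preserving fidelity)."""
    out = []
    for tok in tokens:
        cands = clusters.get(tok)
        out.append(min(cands, key=lambda c: edit_distance(tok, c)) if cands else tok)
    return out

# Hypothetical candidate clusters (in FiRo these would come from the model).
clusters = {"noize": ["noise", "maze"], "accuray": ["accuracy", "array"]}
print(sanitize(["noize", "accuray", "clean"], clusters))
# → ['noise', 'accuracy', 'clean']
```

Because each token is only compared against a handful of candidates, prediction stays cheap even when the effective vocabulary (the union of all clusters) grows large.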

    BERT Probe: A python package for probing attention based robustness evaluation of BERT models

    Transformer models built on attention-based architectures have been remarkably successful at establishing state-of-the-art results in natural language processing (NLP). However, recent work on the adversarial robustness of attention-based models shows that they are susceptible to adversarial inputs that cause spurious outputs, raising questions about the trustworthiness of such models. In this paper, we present BERT Probe, a Python-based package for evaluating the robustness of attention attribution under character-level and word-level evasion attacks and for empirically quantifying potential vulnerabilities on sequence classification tasks. Additionally, BERT Probe provides two out-of-the-box defenses against character-level, attention attribution-based evasion attacks.
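To make the notion of a character-level evasion attack concrete, here is a minimal generic sketch — not BERT Probe's API, and `classify` is a toy keyword-spotting stand-in rather than a BERT model. The attack tries single-character deletions until the classifier's label flips:

```python
def classify(text: str) -> str:
    # Toy classifier: keyword spotting stands in for a trained model.
    return "positive" if "great" in text else "negative"

def char_evasion(text: str):
    """Try single-character deletions; return the first perturbed input
    that flips the classifier's label, or None if no deletion succeeds."""
    original = classify(text)
    for i in range(len(text)):
        perturbed = text[:i] + text[i + 1:]
        if classify(perturbed) != original:
            return perturbed
    return None

print(char_evasion("this movie is great"))
# → "this movie is reat" (deleting one character of "great" flips the label)
```

A real attack would query the model under test instead of a keyword rule, but the search loop — perturb, re-classify, check for a label flip — has the same shape.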

    Context-aware Adversarial Attack on Named Entity Recognition

    In recent years, large pre-trained language models (PLMs) have achieved remarkable performance on many natural language processing benchmarks. Despite their success, prior studies have shown that PLMs are vulnerable to adversarial examples. In this work, we focus on the named entity recognition task and study context-aware adversarial attack methods to examine model robustness. Specifically, we propose perturbing the words most informative for recognizing entities to create adversarial examples, and we investigate different candidate replacement methods for generating natural and plausible adversarial examples. Experiments and analyses show that our methods are more effective than strong baselines at deceiving the model into making wrong predictions.
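The "perturb the most informative words" recipe can be sketched generically. This is not the paper's method: `score_fn` below is a hypothetical stand-in for a real NER model's confidence, importance is estimated by simple leave-one-out scoring, and the candidate table is invented for illustration.

```python
def rank_by_importance(words, score_fn):
    """Leave-one-out importance: how much the score drops when a word is removed."""
    base = score_fn(words)
    drops = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        drops.append((base - score_fn(reduced), i))
    # Most informative (largest score drop) first.
    return [i for _, i in sorted(drops, reverse=True)]

def attack(words, score_fn, candidates):
    """Replace the single most informative word with the candidate that
    lowers the model score the most (a one-word evasion attack)."""
    target = rank_by_importance(words, score_fn)[0]
    best = min(candidates.get(words[target], [words[target]]),
               key=lambda c: score_fn(words[:target] + [c] + words[target + 1:]))
    return words[:target] + [best] + words[target + 1:]

# Toy scoring function: counts capitalized words (crude "entity evidence").
toy_score = lambda ws: sum(w[0].isupper() for w in ws)
cands = {"Paris": ["paris", "city"]}
print(attack(["She", "visited", "Paris", "today"], toy_score, cands))
# → ['She', 'visited', 'paris', 'today']
```

In the actual attack setting, `score_fn` would be the NER model's probability for the gold entity labels, and candidates would come from a replacement method that keeps the sentence natural and plausible.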

    Another Dead End for Morphological Tags? Perturbed Inputs and Parsing

    The usefulness of part-of-speech tags for parsing has been heavily questioned due to the success of word-contextualized parsers. Yet most studies are limited to coarse-grained tags and high-quality written content, and we know little about their influence on models in production that face lexical errors. We expand these setups and design an adversarial attack to verify whether the morphological information used by parsers (i) contributes to error propagation or (ii) can instead help correct the mistakes that word-only neural parsers make. Results on 14 diverse UD treebanks show that under such attacks, morphological information makes transition- and graph-based models degrade even faster, while it helps the (lower-performing) sequence labeling parsers. We also show that if morphological tags were utopically robust against lexical perturbations, they would be able to correct parsing mistakes.
    Comment: Accepted at Findings of ACL 202
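The lexical perturbations such attacks rely on can be as simple as character swaps. The sketch below is our own illustration of that kind of typo injection, not the paper's attack; the function names and the adjacent-character-swap choice are assumptions.

```python
import random

def swap_typo(word: str, rng: random.Random) -> str:
    """Swap two adjacent interior characters, a common human-typo pattern."""
    if len(word) < 4:
        return word  # too short to perturb without touching first/last char
    i = rng.randrange(1, len(word) - 2)  # keep first and last characters intact
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(sentence: str, rate: float, seed: int = 0):
    """Perturb each word with probability `rate`; seeded for reproducibility."""
    rng = random.Random(seed)
    return [swap_typo(w, rng) if rng.random() < rate else w
            for w in sentence.split()]

print(perturb("the parser handles morphological information", rate=1.0))
```

Running the attacked sentences through parsers with and without morphological features is then what separates hypothesis (i), faster error propagation, from hypothesis (ii), correction of word-level mistakes.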