Chalmers tekniska högskola / Institutionen för fysik
Abstract
Large Language Models (LLMs) are increasingly employed to anonymize texts containing
Personal Identifiable Information (PII), often relying on Named Entity Recognition
(NER) to identify and remove sensitive data. This thesis explores the privacy
risks associated with such text masking models by evaluating their vulnerability to
Membership Inference Attacks (MIAs) and extraction attacks. MIAs are attempting
to identify whether or not a data point was part of the training dataset, knowledge
of the membership can in certain scenarios be a breach of privacy. Two state-of-theart
MIAs have been used to conduct attacks on text masking models. This study
also proposes a framework based on multi-armed bandits for performing extraction
attacks and evaluates two different strategies within this framework. The results
from the MIAs indicate that there is some risk of revealing information regarding
the training data. The extraction attacks did not yield great results in terms of
performance but indicate that the concept could possibly be useful if developed
further
Is data on this page outdated, violates copyrights or anything else? Report the problem now and we will take corresponding actions after reviewing your request.