Semantic-Preserving Linguistic Steganography by Pivot Translation and Semantic-Aware Bins Coding
Linguistic steganography (LS) aims to embed secret information into text for
covert communication. It can be roughly divided into two main categories:
modification-based LS (MLS) and generation-based LS (GLS). MLS hides secret
data by slightly modifying a given text without impairing its meaning, whereas
GLS uses a trained language model to directly generate a text carrying the
secret data. A common disadvantage of MLS methods is a very low embedding
payload, the price paid for preserving the semantic quality of the text. In
contrast, GLS allows the data hider to embed a high payload, but at the cost of
uncontrollable semantics. In this paper, we propose a novel LS method that
modifies a given text by pivoting it between two different languages and embeds
secret data with a GLS-like information encoding strategy. Our aim is to alter
the expression of the given text so that a high payload can be embedded while
the semantic information is kept unchanged. Experimental results show that the
proposed method not only achieves a high embedding payload, but also performs
better at maintaining semantic consistency and resisting linguistic
steganalysis.
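The "bins coding" idea the abstract mentions can be sketched in a few lines. This is an illustrative toy, not the paper's actual algorithm: the candidate words, chunk size, and helper names are all hypothetical. The principle is that the candidate words available at each generation step are split into 2^k bins, each k-bit chunk of the secret selects a bin, and the receiver recovers the bits by checking which bin each emitted word belongs to.

```python
def make_bins(candidates, bits_per_word):
    """Split a ranked candidate list into 2**bits_per_word equal bins."""
    n_bins = 2 ** bits_per_word
    size = len(candidates) // n_bins
    return [candidates[i * size:(i + 1) * size] for i in range(n_bins)]

def embed(secret_bits, candidates_per_step, bits_per_word=2):
    """Map each k-bit chunk of the secret to a word from the chosen bin."""
    words = []
    chunks = zip(*[iter(secret_bits)] * bits_per_word)  # group bits by k
    for step, bits in enumerate(chunks):
        bins = make_bins(candidates_per_step[step], bits_per_word)
        index = int("".join(bits), 2)   # k bits -> bin index
        words.append(bins[index][0])    # emit the top word of that bin
    return words

def extract(words, candidates_per_step, bits_per_word=2):
    """Recover the secret bits from the bin each word falls into."""
    bits = []
    for step, word in enumerate(words):
        bins = make_bins(candidates_per_step[step], bits_per_word)
        index = next(i for i, b in enumerate(bins) if word in b)
        bits.append(format(index, f"0{bits_per_word}b"))
    return "".join(bits)
```

Because the embedded bits are determined only by bin membership, any word in the selected bin yields the same payload; a semantic-aware variant can therefore pick whichever word in the bin best fits the context.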
Generating Steganographic Text with LSTMs
Motivated by concerns for user privacy, we design a steganographic system
("stegosystem") that enables two users to exchange encrypted messages without
an adversary detecting that such an exchange is taking place. We propose a new
linguistic stegosystem based on a Long Short-Term Memory (LSTM) neural network.
We demonstrate our approach on the Twitter and Enron email datasets and show
that it yields high-quality steganographic text while significantly improving
capacity (encrypted bits per word) relative to the state-of-the-art.

Comment: ACL 2017 Student Research Workshop
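The selection step of such an LSTM-based stegosystem can be sketched as follows. This is a minimal illustration, not the paper's implementation: the vocabulary, the fixed logit values standing in for trained LSTM scores, and the four-bin partition are all assumptions. At each step the secret bits pick a bin, and the model's most probable word *within that bin* is emitted, which keeps the text fluent while delivering a fixed capacity of 2 bits per word.

```python
import math

# Hypothetical toy vocabulary and a fixed bin partition shared by both parties.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "here", "now"]
BINS = [VOCAB[i::4] for i in range(4)]   # 4 bins -> 2 bits per word

def next_word(logits, two_bits):
    """Emit the most probable word inside the bin the secret bits select."""
    chosen_bin = BINS[int(two_bits, 2)]
    probs = {w: math.exp(s) for w, s in zip(VOCAB, logits)}
    return max(chosen_bin, key=probs.get)

def recover_bits(word):
    """The receiver needs only the shared bin partition, not the LSTM."""
    index = next(i for i, b in enumerate(BINS) if word in b)
    return format(index, "02b")
```

Capacity here is fixed by the partition (log2 of the bin count per word); using more bins raises the bits per word but shrinks each bin, leaving the model fewer fluent choices.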