    Detection of classifier inconsistencies in image steganalysis

    In this paper, a methodology to detect inconsistencies in classification-based image steganalysis is presented. The proposed approach uses two classifiers: the usual one, trained on a set of cover and stego images, and a second one, trained on the set obtained after embedding additional random messages into the original training set. When the decisions of these two classifiers are not consistent, we know that the prediction is not reliable. The number of inconsistencies in the predictions over a testing set may indicate that the classifier is not performing correctly in the testing scenario. This occurs, for example, in the case of cover source mismatch, or when we are trying to detect a steganographic method that the classifier is not capable of modelling accurately. We also show how the number of inconsistencies can be used to predict the reliability of the classifier (its classification error).
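    The two-classifier inconsistency check described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the "images" are synthetic feature vectors, the classifier is a simple nearest-class-mean rule, and the additional random message embedding is simulated as a small perturbation of the training features.

```python
import numpy as np

def nearest_mean_classifier(X_train, y_train):
    """Minimal nearest-class-mean classifier; returns a predict function."""
    mean0 = X_train[y_train == 0].mean(axis=0)
    mean1 = X_train[y_train == 1].mean(axis=0)
    def predict(X):
        d0 = np.linalg.norm(X - mean0, axis=1)
        d1 = np.linalg.norm(X - mean1, axis=1)
        return (d1 < d0).astype(int)  # 0 = cover, 1 = stego
    return predict

rng = np.random.default_rng(0)

# Synthetic stand-ins for cover and stego feature vectors (assumption).
covers = rng.normal(0.0, 1.0, size=(200, 8))
stegos = rng.normal(0.6, 1.0, size=(200, 8))
X = np.vstack([covers, stegos])
y = np.array([0] * 200 + [1] * 200)

# Classifier A: trained on the usual cover/stego set.
clf_a = nearest_mean_classifier(X, y)

# Classifier B: trained after embedding an additional random message into
# every training sample (simulated here as a small extra perturbation).
clf_b = nearest_mean_classifier(X + rng.normal(0.3, 0.2, size=X.shape), y)

# Disagreements between the two classifiers on a test set flag unreliable
# predictions, e.g. under cover source mismatch.
X_test = rng.normal(0.3, 1.0, size=(100, 8))
inconsistent = clf_a(X_test) != clf_b(X_test)
print(f"inconsistent predictions: {int(inconsistent.sum())} / {len(X_test)}")
```

    The fraction of inconsistent test predictions then serves as the reliability indicator: the higher it is, the less the classifier's error estimate can be trusted in that testing scenario.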

    Simulating suboptimal steganographic embedding

    Researchers who wish to benchmark the detectability of steganographic distortion functions typically simulate stego objects. However, the difference (coding loss) between simulated and real stego objects is significant and depends on multiple factors. In this paper, we first identify some factors affecting the coding loss, then propose a method to estimate and correct for it by sampling a few covers and messages. This allows us to simulate suboptimally coded stego objects that are more accurate representations of real stego objects. We test our results against real embeddings and naive PLS simulation, showing that our simulated stego objects are closer to real embeddings in terms of both distortion and detectability. This holds even when only a single image and message are used to estimate the loss.
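    The estimate-and-correct idea can be sketched in a few lines. This is a hedged toy model, not the paper's method: per-pixel embedding costs are drawn at random, the "optimal" embedder changes the cheapest pixels, and the "real" suboptimal coder is simulated by ranking pixels with noisy costs. The coding-loss factor measured on a few sampled covers is then applied to correct the optimal simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def optimal_distortion(costs, payload_frac):
    # Optimal embedding (toy model): change the cheapest pixels first.
    n_changes = int(payload_frac * costs.size)
    return np.sort(costs, axis=None)[:n_changes].sum()

def real_distortion(costs, payload_frac):
    # Suboptimal coder (assumption): ranks pixels by noisy costs, so some
    # costlier-than-necessary changes are made.
    n_changes = int(payload_frac * costs.size)
    noisy_rank = costs + rng.normal(0, costs.std() * 0.5, size=costs.shape)
    idx = np.argsort(noisy_rank, axis=None)[:n_changes]
    return costs.flat[idx].sum()

# Estimate the coding loss from a handful of sampled covers and messages.
losses = []
for _ in range(5):
    costs = rng.exponential(1.0, size=(64, 64))  # per-pixel embedding costs
    losses.append(real_distortion(costs, 0.1) / optimal_distortion(costs, 0.1))
loss_factor = float(np.mean(losses))

# Corrected simulation: inflate the optimally simulated distortion by the
# estimated loss factor to approximate a real, suboptimally coded embedding.
costs = rng.exponential(1.0, size=(64, 64))
corrected = loss_factor * optimal_distortion(costs, 0.1)
print(f"estimated coding loss factor: {loss_factor:.3f}")
```

    Since the optimal embedder achieves the minimum possible distortion for a given number of changes, the measured factor is at least 1, and the abstract's point is that even a single sampled cover and message can give a usable estimate of it.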