Analysis of adversarial attacks against CNN-based image forgery detectors
With the ubiquitous diffusion of social networks, images are becoming a
dominant and powerful communication channel. Not surprisingly, they are also
increasingly subject to manipulations aimed at distorting information and
spreading fake news. In recent years, the scientific community has devoted
major efforts to countering this threat, and many image forgery detectors have
been proposed. Currently, owing to the success of deep learning in many
multimedia processing tasks, there is strong interest in CNN-based
detectors, and early results are already very promising. Recent studies in
computer vision, however, have shown CNNs to be highly vulnerable to
adversarial attacks, small perturbations of the input data which drive the
network towards erroneous classification. In this paper we analyze the
vulnerability of CNN-based image forensics methods to adversarial attacks,
considering several detectors and several types of attack, and testing
performance on a wide range of common manipulations, both easy and hard to
detect.
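The "small perturbations" mentioned above can be sketched with the fast gradient sign method (FGSM), one standard adversarial attack. The toy logistic "detector" below is a made-up stand-in for illustration, not one of the CNN detectors the paper analyzes:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Craft an FGSM-style adversarial example against a logistic
    classifier p(y=1|x) = sigmoid(w.x + b): step x by eps in the
    sign of the loss gradient so the loss for y_true increases."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # current confidence
    grad = (p - y_true) * w                 # d(cross-entropy)/dx
    return x + eps * np.sign(grad)          # small sign-based step

# hypothetical "forgery detector": y=1 means "manipulated"
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.5, 0.2])                    # confidently flagged as forged
x_adv = fgsm_perturb(x, w, b, y_true=1, eps=0.5)
```

After the attack the detector's confidence that the image is forged drops, even though each pixel (feature) moved by at most eps.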
k-Nearest Neighbour Classifiers: 2nd Edition (with Python examples)
Perhaps the most straightforward classifier in the arsenal of machine
learning techniques is the Nearest Neighbour Classifier -- classification is
achieved by identifying the nearest neighbours to a query example and using
those neighbours to determine the class of the query. This approach to
classification remains important because poor run-time
performance is much less of a problem these days, given the computational
power that is available. This paper presents an overview of techniques for
Nearest Neighbour classification, focusing on: mechanisms for assessing similarity
(distance), computational issues in identifying nearest neighbours and
mechanisms for reducing the dimension of the data.
This paper is the second edition of a paper previously published as a
technical report. Sections on similarity measures for time-series, retrieval
speed-up and intrinsic dimensionality have been added. An Appendix is included
providing access to Python code for the key methods.
Comment: 22 pages, 15 figures: An updated edition of an older tutorial on kN
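A minimal sketch of the approach just described (Euclidean distance, majority vote over the k nearest neighbours); this is illustrative code, not the implementation from the paper's Appendix:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training examples under Euclidean distance."""
    dists = np.linalg.norm(X_train - query, axis=1)  # distance to every example
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]                # majority class

# toy training set with two well-separated classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array(['a', 'a', 'b', 'b'])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))  # → a
```

The two computational issues the paper surveys are visible even here: every prediction scans the whole training set, and the distance computation degrades as dimension grows.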
Spam Filter Improvement Through Measurement
This work supports the thesis that sound quantitative evaluation for
spam filters leads to substantial improvement in the classification
of email. To this end, new laboratory testing methods and datasets
are introduced, and evidence is presented that their adoption at the Text
REtrieval Conference (TREC) and elsewhere has led to an improvement in the
state of the art
spam filtering. While many of these improvements have been discovered
by others, the best-performing method known at this time -- spam filter
fusion -- was demonstrated by the author.
This work describes four principal dimensions of spam filter evaluation
methodology and spam filter improvement. An initial study investigates
the application of twelve open-source filter configurations in a laboratory
environment, using a stream of 50,000 messages captured from a single
recipient over eight months. The study measures the impact of user
feedback and on-line learning on filter performance using methodology
and measures which were released to the research community as the
TREC Spam Filter Evaluation Toolkit.
The toolkit was used as the basis of the TREC Spam Track, which the
author co-founded with Cormack. The Spam Track, in addition to evaluating
a new application (email spam), addressed the issue of testing systems
on both private and public data. While streams of private messages
are most realistic, they are not easy to come by and cannot be shared
with the research community as archival benchmarks. Using the toolkit,
participant filters were evaluated on both, and the differences were found
not to confound evaluation substantially; as a result, public corpora
were validated as research tools. Over the course of TREC and similar
evaluation efforts, a dozen or more archival benchmarks --
some private and some public -- have become available.
The toolkit and methodology have spawned improvements in the state
of the art every year since their deployment in 2005. In 2005, 2006,
and 2007, the spam track yielded new best-performing systems based
on sequential compression models, orthogonal sparse bigram features,
logistic regression and support vector machines. Using the TREC participant
filters, we develop and demonstrate methods for on-line filter fusion
that outperform all other reported on-line personal spam filters.
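One simple way to fuse several filters' outputs, shown here as an illustration only (the function and probabilities are hypothetical, not the author's actual fusion method), is to average their log-odds scores, which amounts to an unweighted logistic-regression stack:

```python
import math

def fuse_scores(scores):
    """Fuse per-filter spamminess probabilities by averaging
    their log-odds, then mapping back to a probability."""
    logodds = [math.log(p / (1.0 - p)) for p in scores]
    avg = sum(logodds) / len(logodds)
    return 1.0 / (1.0 + math.exp(-avg))  # sigmoid back to [0, 1]

# three hypothetical filters scoring the same message
fused = fuse_scores([0.9, 0.8, 0.95])
```

Averaging in log-odds space rather than probability space keeps a single over-confident filter from dominating the fused score.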
BlogForever D2.4: Weblog spider prototype and associated methodology
The purpose of this document is to present the evaluation of different solutions for capturing blogs, the established methodology, and a description of the developed blog spider prototype.
Recasting Residual-based Local Descriptors as Convolutional Neural Networks: an Application to Image Forgery Detection
Local descriptors based on the image noise residual have proven extremely
effective for a number of forensic applications, like forgery detection and
localization. Nonetheless, motivated by promising results in computer vision,
the focus of the research community is now shifting to deep learning. In this
paper we show that a class of residual-based descriptors can actually be
regarded as a simple constrained convolutional neural network (CNN). Then, by
relaxing the constraints and fine-tuning the net on a relatively small
training set, we obtain a significant performance improvement with respect to
the conventional detector.
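The idea can be illustrated by applying one classic high-pass residual kernel as a frozen convolution: because its weights sum to zero, it suppresses image content and keeps only the noise residual, exactly what a first constrained CNN layer would compute. The specific kernel below is a standard example assumed for illustration, not necessarily the one used in the paper:

```python
import numpy as np

# A classic noise-residual (high-pass) kernel: its weights sum to
# zero, so smooth image content is cancelled and noise is kept.
RESIDUAL_KERNEL = np.array([[-1,  2, -1],
                            [ 2, -4,  2],
                            [-1,  2, -1]], dtype=float) / 4.0

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution: the equivalent of a first
    CNN layer whose weights are frozen to `kernel`."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

flat = np.full((8, 8), 7.0)                      # constant image
residual = conv2d_valid(flat, RESIDUAL_KERNEL)   # content is cancelled
```

Relaxing the constraints, as the paper proposes, would mean letting these frozen weights become trainable parameters of the network.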