ARCHANGEL: Tamper-proofing Video Archives using Temporal Content Hashes on the Blockchain
We present ARCHANGEL, a novel distributed-ledger-based system for assuring
the long-term integrity of digital video archives. First, we describe a novel
deep network architecture for computing compact temporal content hashes (TCHs)
from audio-visual streams with durations of minutes or hours. Our TCHs are
sensitive to accidental or malicious content modification (tampering) but
invariant to the codec used to encode the video. This is necessary due to the
curatorial requirement for archives to format shift video over time to ensure
future accessibility. Second, we describe how the TCHs (and the models used to
derive them) are secured via a proof-of-authority blockchain distributed across
multiple independent archives. We report on the efficacy of ARCHANGEL within
the context of a trial deployment in which the national government archives of
the United Kingdom, Estonia, and Norway participated.
Comment: Accepted to CVPR Blockchain Workshop 2019
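The overall shape of the pipeline can be illustrated with a minimal sketch: segment the stream, derive a compact per-segment fingerprint, record the fingerprint sequence, and recompute it at verification time. In the sketch below the learned, codec-invariant deep feature extractor is replaced by simple temporal pooling and coarse quantisation as a stand-in, and the helper names (`segment_features`, `temporal_content_hash`, `verify`) are hypothetical rather than the authors' API.

```python
# Minimal sketch of a temporal content hash (TCH) pipeline, not the paper's
# architecture: the deep, codec-invariant feature extractor is replaced by a
# simple per-segment average as a stand-in.
import hashlib
import numpy as np

def segment_features(frames: np.ndarray, seg_len: int = 30) -> list:
    """Split a (T, H, W) grayscale frame stack into segments and pool each one."""
    return [frames[i:i + seg_len].mean(axis=0)          # temporal pooling
            for i in range(0, len(frames), seg_len)]

def temporal_content_hash(frames: np.ndarray) -> list:
    """One short hash per segment; the sequence of hashes is the TCH."""
    hashes = []
    for feat in segment_features(frames):
        # Quantise coarsely so small codec noise does not flip the hash
        # (the real system learns this invariance instead of hard-coding it).
        q = (feat // 32).astype(np.uint8)
        hashes.append(hashlib.sha256(q.tobytes()).hexdigest()[:16])
    return hashes

def verify(archived_tch: list, candidate_frames: np.ndarray) -> bool:
    """Tamper check: recompute the TCH and compare segment by segment."""
    return archived_tch == temporal_content_hash(candidate_frames)

if __name__ == "__main__":
    video = np.random.randint(0, 256, size=(300, 36, 64), dtype=np.uint8)
    tch = temporal_content_hash(video)      # this sequence would go to the ledger
    print(len(tch), "segment hashes;", "intact:", verify(tch, video))
```

In the deployed system, the resulting hash sequence (and the model used to derive it) is what gets committed to the proof-of-authority chain shared by the participating archives.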
Quality Classified Image Analysis with Application to Face Detection and Recognition
Motion blur, out-of-focus blur, insufficient spatial resolution, lossy
compression, and many other factors can all cause an image to have poor
quality. However,
image quality is a largely ignored issue in traditional pattern recognition
literature. In this paper, we use face detection and recognition as case
studies to show that image quality is an essential factor which will affect the
performance of traditional algorithms. We demonstrate that what matters most
is not image quality itself, but rather that the images in the training set
have quality similar to those in the testing set. To handle real-world
application scenarios where images with different
kinds and severities of degradation can be presented to the system, we have
developed a quality classified image analysis framework to deal with images of
mixed qualities adaptively. We use deep neural networks first to classify
images based on their quality classes and then design a separate face detector
and recognizer for images in each quality class. We present experimental
results showing that our quality-classified framework can accurately classify
images based on the type and severity of image degradation and can
significantly boost the performance of state-of-the-art face detectors and
recognizers on datasets containing images of mixed quality.
Comment: 6 pages
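As a rough illustration of the routing idea, the sketch below (PyTorch, with untrained placeholder models and assumed class counts rather than the paper's networks) classifies an image into a quality class and dispatches it to the recognizer associated with that class.

```python
# Minimal sketch of quality-classified routing: a quality classifier assigns
# each image to a degradation class, and a separate recognizer handles each
# class. The models here are untrained placeholders, not the paper's CNNs.
import torch
import torch.nn as nn

NUM_QUALITY_CLASSES = 3   # e.g. blur / low resolution / compression (assumed)
NUM_IDENTITIES = 10       # illustrative number of face identities

class SmallCNN(nn.Module):
    def __init__(self, num_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, num_out))
    def forward(self, x):
        return self.net(x)

quality_classifier = SmallCNN(NUM_QUALITY_CLASSES)
recognizers = nn.ModuleList([SmallCNN(NUM_IDENTITIES)
                             for _ in range(NUM_QUALITY_CLASSES)])

def recognize(image: torch.Tensor) -> torch.Tensor:
    """Route one image (1, 3, H, W) to the recognizer for its predicted quality class."""
    q = quality_classifier(image).argmax(dim=1).item()
    return recognizers[q](image)

logits = recognize(torch.randn(1, 3, 64, 64))
print(logits.shape)   # torch.Size([1, 10])
```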
Test Set Diameter: Quantifying the Diversity of Sets of Test Cases
A common and natural intuition among software testers is that test cases need
to differ if a software system is to be tested properly and its quality
ensured. Consequently, much research has gone into formulating distance
measures for how test cases, their inputs and/or their outputs differ. However,
common to these proposals is that they are data type specific and/or calculate
the diversity only between pairs of test inputs, traces or outputs.
We propose a new metric to measure the diversity of sets of tests: the test
set diameter (TSDm). It extends our earlier, pairwise test diversity metrics
based on recent advances in information theory regarding the calculation of the
normalized compression distance (NCD) for multisets. An advantage is that TSDm
can be applied regardless of data type and on any test-related information, not
only the test inputs. A downside is the increased computational time compared
to competing approaches.
Our experiments on four different systems show that the test set diameter can
help select test sets with higher structural and fault coverage than random
selection even when only applied to test inputs. This can enable early test
design and selection, prior to even having a software system to test, and
complement other types of test automation and analysis. We argue that this
quantification of test set diversity creates a number of opportunities to
better understand software quality and provides practical ways to increase it.
Comment: In submission
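A minimal sketch of the underlying idea, assuming zlib as the compressor and the basic multiset NCD ratio NCD1(X) = (C(X) - min_x C({x})) / max_x C(X \ {x}); the compressor and the exact formulation used for TSDm may differ.

```python
# Minimal sketch of a compression-based set diversity measure in the spirit of
# TSDm: the normalized compression distance for multisets, computed with zlib
# as the compressor.
import zlib
from itertools import combinations

def C(blobs: list) -> int:
    """Compressed size of the concatenation of a multiset of byte strings."""
    return len(zlib.compress(b"\x00".join(blobs), 9))

def ncd_multiset(blobs: list) -> float:
    """NCD1(X) = (C(X) - min_x C({x})) / max_x C(X minus {x})."""
    whole = C(blobs)
    min_single = min(C([b]) for b in blobs)
    max_leave_one_out = max(C(list(rest))
                            for rest in combinations(blobs, len(blobs) - 1))
    return (whole - min_single) / max_leave_one_out

diverse = [b"sort a list of integers", b"parse an RFC 3339 date", b"invert a sparse matrix"]
similar = [b"sort a list of integers", b"sort a list of integers!", b"sort a list of floats"]
print(ncd_multiset(similar), ncd_multiset(diverse))  # the similar set should score lower
```

Because the measure only needs a byte representation of each test artefact, the same function can be applied to inputs, traces, or outputs alike.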
An Evaluation of Popular Copy-Move Forgery Detection Approaches
A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA, and
Zernike features, perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
Comment: Main paper: 14 pages, supplemental material: 12 pages; main paper appeared in IEEE Transactions on Information Forensics and Security
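For a flavour of the keypoint-based family, the sketch below matches SIFT descriptors of an image against themselves with OpenCV and keeps strong, spatially separated matches as copy-move candidates; the pipelines evaluated in the paper add matching, filtering, outlier removal, and affine-transformation estimation stages on top of this. The input file name is a placeholder.

```python
# Minimal sketch of keypoint-based copy-move detection: SIFT keypoints are
# matched against other keypoints of the SAME image, and strong non-trivial
# matches hint at duplicated regions.
import cv2
import numpy as np

def copy_move_matches(image: np.ndarray, ratio: float = 0.6, min_dist: float = 20.0):
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(image, None)
    if desc is None or len(kps) < 2:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # k=3: the best match of a keypoint is itself, so look at the next two.
    matches = matcher.knnMatch(desc, desc, k=3)
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        _, first, second = m                         # drop the trivial self-match
        if first.distance < ratio * second.distance:
            p1 = np.array(kps[first.queryIdx].pt)
            p2 = np.array(kps[first.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:   # ignore near-identical locations
                pairs.append((tuple(p1), tuple(p2)))
    return pairs

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
if img is not None:
    print(len(copy_move_matches(img)), "candidate copy-move keypoint pairs")
```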
Learning to detect dysarthria from raw speech
Speech classifiers of paralinguistic traits traditionally learn from diverse
hand-crafted low-level features, by selecting the relevant information for the
task at hand. We explore an alternative to this selection by jointly learning
the classifier and the feature extraction. Recent work on speech recognition
has shown improved performance over speech features by learning from the
waveform. We extend this approach to paralinguistic classification and propose
a neural network that can learn a filterbank, a normalization factor and a
compression power from the raw speech, jointly with the rest of the
architecture. We apply this model to dysarthria detection from sentence-level
audio recordings. Starting from a strong attention-based baseline on which
mel-filterbanks outperform standard low-level descriptors, we show that
learning the filters or the normalization and compression improves over fixed
features by 10% absolute accuracy. We also observe a gain over OpenSmile
features by learning jointly the feature extraction, the normalization, and the
compression factor with the architecture. This constitutes a first attempt at
jointly learning all of these operations from raw audio for a speech
classification task.
Comment: 5 pages, 3 figures, submitted to ICASSP
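A minimal PyTorch sketch of the general idea rather than the authors' architecture: a 1-D convolutional filterbank applied to the raw waveform, with a learnable per-channel gain standing in for the normalization factor and a learnable exponent for the compression power, followed by a toy classifier head. All sizes are illustrative.

```python
# Minimal sketch of a learnable speech front-end trained jointly with the
# classifier; sizes are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class LearnableFrontEnd(nn.Module):
    def __init__(self, n_filters: int = 40, win: int = 400, hop: int = 160):
        super().__init__()
        self.filters = nn.Conv1d(1, n_filters, kernel_size=win, stride=hop, bias=False)
        self.gain = nn.Parameter(torch.ones(n_filters))    # learnable normalization
        self.power = nn.Parameter(torch.tensor(0.3))       # learnable compression

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> (batch, n_filters, frames)
        energy = self.filters(wav.unsqueeze(1)) ** 2
        energy = energy * self.gain.view(1, -1, 1)
        return (energy + 1e-6) ** self.power               # compressed filterbank output

class DysarthriaClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.front_end = LearnableFrontEnd()
        self.backend = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                     nn.Linear(40, 2))     # binary decision

    def forward(self, wav):
        return self.backend(self.front_end(wav))

model = DysarthriaClassifier()
print(model(torch.randn(2, 16000)).shape)   # torch.Size([2, 2]) for 1 s of 16 kHz audio
```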
Aligned and Non-Aligned Double JPEG Detection Using Convolutional Neural Networks
Due to the wide diffusion of JPEG coding standard, the image forensic
community has devoted significant attention to the development of double JPEG
(DJPEG) compression detectors through the years. The ability of detecting
whether an image has been compressed twice provides paramount information
toward image authenticity assessment. Given the momentum recently gained by
convolutional neural networks (CNNs) in many computer vision tasks, in this
paper we propose to use CNNs for aligned and non-aligned double JPEG
compression detection. In particular, we explore the capability of CNNs to
capture DJPEG artifacts directly from images. Results show that the proposed
CNN-based detectors achieve good performance even with small images (i.e.,
64x64 pixels), outperforming state-of-the-art solutions, especially in the
non-aligned case. Moreover, good results are also achieved in the commonly
recognized challenging case in which the first quality factor is larger than
the second one.
Comment: Submitted to Journal of Visual Communication and Image Representation
(first submission: March 20, 2017; second submission: August 2, 2017)
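As a sketch of what a patch-level detector of this kind might look like (layer sizes are illustrative, not the architecture reported in the paper), the following PyTorch model maps 64x64 patches to a single- versus double-compression decision.

```python
# Minimal sketch of a patch-level CNN for double-JPEG detection: a few
# convolutional layers over 64x64 patches and a binary output
# (single vs. double compression). Layer sizes are illustrative.
import torch
import torch.nn as nn

class DJPEGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),    # 64 -> 32
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))  # 16 -> 8
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, x):
        return self.classifier(self.features(x))

patches = torch.randn(4, 3, 64, 64)          # a batch of 64x64 patches
print(DJPEGNet()(patches).shape)             # torch.Size([4, 2])
```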