18 research outputs found
Automatic discovery of image families: Global vs. local features
Gathering a large collection of images has been made quite
easy by social and image sharing websites, e.g. flickr.com.
However, such collections typically contain a large number of duplicates and highly similar images.
This work tackles the problem of how to automatically
organize image collections into sets of similar images,
called image families hereinafter. We thoroughly compare the
performance of two approaches to measure image similarity:
global descriptors vs. a set of local descriptors. We assess
the performance of these approaches as the problem scales up
to thousands of images and hundreds of families. We present
our results on a new dataset of CD/DVD game covers.
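The global-vs-local comparison described above can be illustrated with a toy sketch; these are not the paper's actual descriptors. Here the "global" descriptor is a plain intensity histogram over the whole image, the "local" descriptors are per-patch histograms (cheap stand-ins for e.g. GIST or SIFT), and the matching threshold is an arbitrary choice for this example:

```python
import numpy as np

def global_descriptor(img, bins=32):
    """Global descriptor: normalized intensity histogram of the whole image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def global_similarity(a, b):
    """Cosine similarity between two global descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def local_descriptors(img, patch=8, bins=16):
    """Local descriptors: one normalized histogram per non-overlapping patch."""
    h, w = img.shape
    descs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            hist, _ = np.histogram(img[y:y + patch, x:x + patch],
                                   bins=bins, range=(0, 256))
            descs.append(hist / max(hist.sum(), 1))
    return np.array(descs)

def local_similarity(descs_a, descs_b, thresh=0.05):
    """Fraction of patches in A whose nearest patch in B is within `thresh`.

    The tight threshold is tuned for near-exact duplicates in this toy;
    real systems use robust descriptors and ratio tests instead.
    """
    matched = 0
    for d in descs_a:
        if np.linalg.norm(descs_b - d, axis=1).min() < thresh:
            matched += 1
    return matched / len(descs_a)

# Two near-duplicate images: identical except for one small edited region.
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
img_b = img_a.copy()
img_b[:8, :8] = 255  # local edit

print(global_similarity(global_descriptor(img_a), global_descriptor(img_b)))
print(local_similarity(local_descriptors(img_a), local_descriptors(img_b)))
```

The local measure pinpoints exactly which fraction of the image changed, while the global histogram barely moves; that asymmetry is the crux of the global-vs-local trade-off the paper evaluates at scale.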
Copy move Forgery Detection Approaches: A Survey
Copy-move forgery is one of the most popular image forgery techniques: a part of a digital image is copied and pasted to another part of the same image, typically with the intention of making an object "disappear" by covering it with a small block copied from elsewhere in the same image. Hence, the main task of copy-move forgery detection is to detect image areas that are identical or nearly identical within an image. Detection methods generally follow two approaches, namely keypoint-based and block-based. This paper provides a review of various copy-move forgery detection techniques.
Keywords: Copy move forgery, Lexicographical Sorting, Digital Image Forgery, Duplicated Region
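The block-based approach with lexicographic sorting named in the keywords can be sketched in a few lines; this is a minimal illustration on raw pixel blocks, whereas practical detectors sort robust block features (DCT, PCA, Zernike) so matches survive noise and compression:

```python
import numpy as np

def detect_copy_move(img, block=8):
    """Naive block-based copy-move detection via lexicographic sorting.

    Extracts every overlapping block, sorts the flattened blocks
    lexicographically, and reports pairs of distinct positions whose
    blocks are identical (identical blocks become adjacent after sorting).
    """
    h, w = img.shape
    rows = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            rows.append((tuple(img[y:y + block, x:x + block].ravel()), (y, x)))
    rows.sort()  # lexicographic sort on pixel values
    pairs = []
    for (va, pa), (vb, pb) in zip(rows, rows[1:]):
        if va == vb and pa != pb:
            pairs.append((pa, pb))
    return pairs

# Forge an image: copy the block at (0, 0) and paste it at (20, 20).
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
img[20:28, 20:28] = img[0:8, 0:8]

matches = detect_copy_move(img)
print(matches)  # positions of duplicated blocks
```

Sorting makes duplicate discovery O(n log n) over the n overlapping blocks instead of the O(n²) of all-pairs comparison, which is why lexicographic sorting recurs throughout the block-based literature.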
Enhanced Approximated SURF Model For Object Recognition
ABSTRACT Computer vision applications like camera calibration, 3D reconstruction, object recognition, and image registration are becoming widely popular nowadays. In this paper an enhanced model for speeded-up robust features (SURF) is proposed, by which the object recognition process becomes three times faster than the common SURF model. The main idea is to use efficient data structures for both the detector and the descriptor. The detection of interest regions is considerably sped up by using an integral image for scale-space computation. The descriptor, which is based on orientation histograms, is accelerated by the use of an integral orientation histogram. We present an analysis of the computational costs, comparing both parts of our approach to the conventional method. Extensive experiments show a speed-up by a factor of eight, while the matching and repeatability performance decreases only slightly.
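The integral-image trick behind SURF's fast scale-space computation can be shown directly; a minimal sketch, independent of any SURF library, demonstrating why any box-filter sum costs only four lookups regardless of box size:

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] = sum of img[:y, :x] (zero-padded top/left)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] with four lookups, independent of box size."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64))
ii = integral_image(img)
print(box_sum(ii, 10, 10, 30, 30) == img[10:30, 10:30].sum())  # True
```

Because evaluation cost no longer depends on filter size, larger scales of the scale space cost the same as small ones, which is exactly what SURF exploits for its speed-up.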
An Evaluation of Popular Copy-Move Forgery Detection Approaches
A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show, that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
Comment: Main paper: 14 pages, supplemental material: 12 pages; main paper appeared in IEEE Transactions on Information Forensics and Security.
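One processing step of the common pipeline named above, affine transformation estimation from matched keypoint pairs, can be sketched as a plain least-squares fit; real pipelines typically wrap such an estimator in RANSAC to reject outlier matches:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine estimate (A, t) with dst ≈ src @ A.T + t.

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 3.
    """
    n = len(src)
    # Design matrix for the 6 affine parameters: [x, y, 1] per point.
    X = np.hstack([src, np.ones((n, 1))])             # (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    A = params[:2].T  # 2x2 linear part (rotation/scale/shear)
    t = params[2]     # translation
    return A, t

# Synthetic check: points related by a known rotation plus translation.
theta = np.deg2rad(30)
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
rng = np.random.default_rng(3)
src = rng.uniform(0, 100, size=(20, 2))
dst = src @ A_true.T + t_true

A, t = estimate_affine(src, dst)
print(np.allclose(A, A_true), np.allclose(t, t_true))  # True True
```

Recovering the affine map between a copied region and its source is what lets a detector verify that two matched areas really are rotated or scaled copies rather than coincidental feature matches.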
Image Replica Detection based on Binary Support Vector Classifier
In this paper, we present a system for image replica detection. More specifically, the technique is based on the extraction of 162 features corresponding to texture, color, and gray-level characteristics. These features are then weighted and statistically normalized. To improve training and performance, the dimensionality of the feature space is reduced. Lastly, a decision function is generated to classify the test image as a replica or non-replica of a given reference image. Experimental results show the effectiveness of the proposed system. Target applications include search for copyright infringement (e.g. variations of copyrighted images) and illicit content (e.g. pedophile images).
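The normalize-then-classify structure described above can be sketched as follows; the paper's 162 texture/color/gray-level features and trained support-vector weights are not reproduced here, so this toy uses 4-dimensional stand-in features and hand-set linear weights:

```python
import numpy as np

def zscore_fit(train):
    """Per-feature mean/std from training vectors (statistical normalization)."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0) + 1e-12
    return mu, sigma

def zscore_apply(x, mu, sigma):
    return (x - mu) / sigma

def decision(x, w, b):
    """Linear decision function: +1 = replica, -1 = non-replica."""
    return 1 if np.dot(w, x) + b > 0 else -1

# Toy stand-in for the 162-dimensional feature vectors: replicas cluster
# near the reference image's features, non-replicas lie far away.
rng = np.random.default_rng(4)
replicas = rng.normal(0.0, 1.0, size=(50, 4))
others = rng.normal(5.0, 1.0, size=(50, 4))
mu, sigma = zscore_fit(np.vstack([replicas, others]))

# Hand-set weights separating the two normalized clusters; a trained
# support vector classifier would learn these from labeled data.
w = np.array([-1.0, -1.0, -1.0, -1.0])
b = 0.0
print(decision(zscore_apply(replicas[0], mu, sigma), w, b))
```

Normalizing before classification keeps no single raw feature (with its arbitrary scale) from dominating the decision function, which is why the paper weights and statistically normalizes its 162 features before training.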
Adaptation of the SIFT and LSH algorithms for differentiating image files
Digital storage of information has become an everyday process for anyone with an electronic device. Since it is such a frequent process, it is very common for large amounts of data to be stored, making their administration burdensome. This applies to all types of digital data.

This project focuses on the storage problems of image files, such as the heavy duplication of files, and develops a solution to mitigate the problem. The goal of the project is to build a tool that facilitates the search for image files with similar contents. To achieve this goal, we evaluated tools that allow the information in image files to be manipulated so that the data needed for a comparison process can be obtained. We decided to use the SIFT and LSH tools and adapted them to work according to the criteria established during the research.

Finally, we produced a solution that compares a group of images, showing similarity percentages between them so as to determine which images are similar to one another.

The first chapter of this document develops the problem to be addressed and explains the terms used throughout the document. The next chapter presents the objectives of the thesis, as well as the expected results and the tools used to build the solution. The following chapters develop the objectives achieved one by one, and the last chapter contains the conclusions and comments on the work carried out.
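The SIFT-plus-LSH idea, hashing local descriptors into buckets and reporting a similarity percentage between images, can be sketched with random-hyperplane LSH; the descriptors below are random stand-ins for real SIFT vectors, and the plane count is an arbitrary choice for this example:

```python
import numpy as np

def lsh_signature(desc, planes):
    """Hash a descriptor to a bit signature: one bit per random hyperplane."""
    return tuple((desc @ planes.T) > 0)

def similarity_percent(descs_a, descs_b, planes):
    """Percentage of descriptors in A whose LSH bucket also occurs in B."""
    buckets_b = {lsh_signature(d, planes) for d in descs_b}
    hits = sum(lsh_signature(d, planes) in buckets_b for d in descs_a)
    return 100.0 * hits / len(descs_a)

rng = np.random.default_rng(5)
planes = rng.normal(size=(12, 16))        # 12 hyperplanes -> 12-bit buckets
descs_a = rng.normal(size=(100, 16))      # stand-in for SIFT descriptors
descs_b = descs_a + rng.normal(scale=0.01, size=descs_a.shape)  # near-duplicate
descs_c = rng.normal(size=(100, 16))      # unrelated image

print(similarity_percent(descs_a, descs_b, planes))  # high
print(similarity_percent(descs_a, descs_c, planes))  # low
```

Because nearby descriptors usually fall on the same side of each hyperplane, near-duplicate images share most buckets while unrelated images collide only by chance, yielding the similarity percentages the thesis reports to the user.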