15 research outputs found

    Recasting Residual-based Local Descriptors as Convolutional Neural Networks: an Application to Image Forgery Detection

    Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting to deep learning. In this paper we show that a class of residual-based descriptors can actually be regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.
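    The core observation — that a linear residual filter is just a convolution with fixed weights, so it can be seen as the first layer of a constrained CNN — can be sketched in a few lines of numpy. The 3x3 high-pass kernel below is illustrative (a standard Laplacian-style residual filter), not the specific descriptor the paper constrains; "relaxing the constraints" would mean making these weights trainable.

    ```python
    import numpy as np

    # A 3x3 high-pass kernel of the kind used to extract noise residuals.
    # This particular kernel is an illustrative stand-in, not the paper's.
    KERNEL = np.array([[ 0,  1,  0],
                       [ 1, -4,  1],
                       [ 0,  1,  0]], dtype=float)

    def conv2d_valid(image, kernel):
        """Plain 'valid' 2-D convolution: exactly what the first layer of a
        constrained CNN computes when its weights are fixed to the kernel."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    # On a perfectly smooth image the residual is zero everywhere;
    # only texture and noise survive the high-pass filtering.
    flat = np.full((8, 8), 7.0)
    residual = conv2d_valid(flat, KERNEL)
    ```

    Since the kernel weights sum to zero, any constant region maps to a zero residual, which is what makes such filters sensitive to the sensor noise and local inconsistencies that forgery detectors exploit.
    
    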

    Height Measurement System Based on Edge Detection Technique and Analysis of Digital Image Processing

    Height measurement is important in health, athletics, and other fields. Conventionally, height is measured manually with a tape measure or ruler, which takes considerable time. To address these weaknesses, this research designs a height measurement system based on digital image processing techniques. The parameters considered are camera distance, camera height, camera angle, shirt color, and object position. Testing is carried out through digital image processing and analysis; with these techniques, measurement and results take only 35 seconds. In addition, the test results can be stored in the system database, simplifying data archiving. Across all tests with the above parameters, the system produces more objective and accurate height measurements, with an accuracy of 99.5%.
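    The abstract does not spell out the measurement step, but a common edge-detection-based approach is to locate the topmost and bottommost foreground rows of the person's silhouette and convert the pixel span to centimetres with a calibration factor. The sketch below assumes exactly that; `cm_per_pixel` is a hypothetical calibration constant, which in the paper's setup would be derived from camera distance, height, and angle.

    ```python
    import numpy as np

    def measure_height(mask, cm_per_pixel):
        """Estimate object height from a binary silhouette mask.

        mask: 2-D boolean array, True where the object is.
        cm_per_pixel: hypothetical calibration factor (depends on the
        camera geometry in a real system).
        """
        rows = np.where(mask.any(axis=1))[0]   # rows containing the object
        if rows.size == 0:
            return 0.0
        pixel_span = rows[-1] - rows[0] + 1    # top edge to bottom edge
        return pixel_span * cm_per_pixel

    # Toy example: a silhouette spanning rows 10..129 (120 px tall);
    # with a calibration of 1.4 cm/px this gives 168 cm.
    mask = np.zeros((200, 80), dtype=bool)
    mask[10:130, 20:60] = True
    height_cm = measure_height(mask, cm_per_pixel=1.4)
    ```

    In practice the silhouette would come from an edge detector or background subtraction, and the calibration factor is where the camera-distance, camera-height, and camera-angle parameters studied in the paper enter.
    
    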

    Multi-type Noise Removal in Lead Frame Image Using Enhanced Hybrid Median Filter

    Image filtering plays a very important role in digital image processing and is one of the major steps in image enhancement and restoration. Filtering can remove noise while preserving image details for feature extraction. However, it remains a significant challenge: common noise types such as salt-and-pepper, Gaussian, speckle, and Poisson noise degrade both the quality and the originality of images, while a single filtering technique is usually suited to only one specific noise type. This paper presents an enhanced Hybrid Median Filter (H6F) technique to improve the image filtering process. The technique uses a 3x3 sub-mask and determines the new pixel value as the median of three values: the median of the '+'-neighbours, the median of all sub-masks, and the centre pixel value. The H6F technique has been deployed on a lead frame inspection system. The results show that it removes multiple types of noise efficiently and produces exceptionally low Mean-Square Error (MSE) while consuming an acceptable amount of execution time compared to other filtering techniques.
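    The three-step median described above can be sketched directly. This is one reading of the abstract (taking "median of all sub-masks" as the median of the whole 3x3 window, as in the classic hybrid median filter), not a reproduction of the paper's exact H6F algorithm:

    ```python
    import numpy as np

    def hybrid_median_3x3(image):
        """Hybrid median filter along the lines of the H6F description:
        each output pixel is the median of three values computed over a
        3x3 sub-mask -- the median of the '+'-neighbours, the median of
        the whole sub-mask, and the centre pixel itself."""
        padded = np.pad(image, 1, mode='edge')
        out = np.empty(image.shape, dtype=float)
        h, w = image.shape
        for i in range(h):
            for j in range(w):
                win = padded[i:i+3, j:j+3]
                plus = np.array([win[0, 1], win[1, 0], win[1, 1],
                                 win[1, 2], win[2, 1]])
                m_plus = np.median(plus)     # median of '+'-neighbours
                m_all = np.median(win)       # median of the whole sub-mask
                centre = win[1, 1]           # centre pixel value
                out[i, j] = np.median([m_plus, m_all, centre])
        return out

    # A single salt-noise spike in a flat image is removed completely.
    img = np.full((5, 5), 10.0)
    img[2, 2] = 255.0   # impulse ('salt') noise
    filtered = hybrid_median_3x3(img)
    ```

    Taking the median of the centre pixel against two window medians is what lets hybrid median filters kill impulse noise while preserving fine lines and corners better than a plain median filter.
    
    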

    A Full-Image Full-Resolution End-to-End-Trainable CNN Framework for Image Forgery Detection

    Due to limited computational and memory resources, current deep learning models accept only rather small input images, calling for preliminary resizing. This is not a problem for high-level vision tasks, where discriminative features are barely affected by resizing. In image forensics, on the contrary, resizing tends to destroy precious high-frequency details, heavily impacting performance. Resizing can be avoided by means of patch-wise processing, at the cost of renouncing whole-image analysis. In this work, we propose a CNN-based image forgery detection framework which makes decisions based on full-resolution information gathered from the whole image. Thanks to gradient checkpointing, the framework is trainable end-to-end with limited memory resources and weak (image-level) supervision, allowing for the joint optimization of all parameters. Experiments on widespread image forensics datasets prove the good performance of the proposed approach, which largely outperforms all baselines and all reference methods. Comment: 13 pages, 12 figures, journal
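    Gradient checkpointing is the key enabler here: instead of storing every intermediate activation for backpropagation, only activations at segment boundaries are kept and the rest are recomputed during the backward pass, trading compute for memory. A minimal pure-Python sketch of the idea (with toy scalar "layers", not the paper's network):

    ```python
    # Toy "layers": each doubles its input; the gradient of each layer is 2.
    def layer(x):       return 2.0 * x
    def layer_grad(x):  return 2.0   # d layer(x) / dx

    def backward_checkpointed(x0, n_layers, segment=4):
        """Backprop through a chain of n_layers while storing only one
        activation per segment (the checkpoint) instead of all n.
        Activations inside a segment are recomputed on the fly --
        the memory/compute trade-off behind gradient checkpointing."""
        # Forward pass: keep only segment-boundary activations.
        checkpoints = [x0]
        x = x0
        for i in range(n_layers):
            x = layer(x)
            if (i + 1) % segment == 0 and i + 1 < n_layers:
                checkpoints.append(x)
        # Backward pass: walk segments in reverse, recomputing activations.
        grad = 1.0   # d out / d out
        for s in reversed(range(len(checkpoints))):
            start = s * segment
            end = min(start + segment, n_layers)
            # Recompute this segment's activations from its checkpoint.
            acts = [checkpoints[s]]
            for _ in range(start, end):
                acts.append(layer(acts[-1]))
            # Accumulate local gradients in reverse order.
            for a in reversed(acts[:-1]):
                grad *= layer_grad(a)
        return grad

    g = backward_checkpointed(1.0, n_layers=10)   # chain rule gives 2**10
    ```

    With a segment length of k over n layers, peak memory drops from O(n) stored activations to O(n/k + k), at the cost of roughly one extra forward pass — which is what makes full-resolution, whole-image training feasible on limited hardware.
    
    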

    Digital forensic techniques for the reverse engineering of image acquisition chains

    In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer their history. Images, however, may go through a series of processing and modification steps over their lifetime, which makes tampering difficult to detect because the footprints can be distorted or removed by a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of acquisition and reproduction. This thesis presents two approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve at different stages of the chain using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain. It also makes it possible to estimate important parameters of the chain from the acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents a new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built using the dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high-quality recaptured images. Our results show that the method achieves a detection rate exceeding 99% for recaptured images and 94% for single-captured images.
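    The "dictionary approximation error" feature from the second part can be sketched as follows. The two random bases below are stand-ins for the K-SVD dictionaries the thesis trains on single-captured and recaptured edge patches, and the nearest-dictionary rule stands in for the SVM; everything here is illustrative, assuming only that a patch is better approximated by the dictionary trained on its own class.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for the two learned dictionaries (K-SVD-trained in the
    # thesis; just two random 4-atom bases here for illustration).
    D_single = rng.standard_normal((16, 4))
    D_recap  = rng.standard_normal((16, 4))

    def approx_error(D, x):
        """Residual norm of the least-squares approximation of x in
        span(D) -- the 'dictionary approximation error' feature."""
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        return np.linalg.norm(D @ coeffs - x)

    def classify(x):
        """Label a patch by whichever dictionary approximates it better
        (the real system feeds both errors, plus the mean edge spread
        width, into an SVM instead)."""
        if approx_error(D_recap, x) < approx_error(D_single, x):
            return "recaptured"
        return "single"

    # A patch synthesised from D_recap is reconstructed with near-zero
    # error by D_recap, so it is labelled 'recaptured'.
    patch = D_recap @ np.array([1.0, -2.0, 0.5, 3.0])
    label = classify(patch)
    ```

    The real pipeline would additionally enforce sparse coding (e.g. orthogonal matching pursuit) rather than unconstrained least squares, but the classification signal — asymmetric reconstruction error between the two dictionaries — is the same.
    
    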

    Security of Forensic Techniques for Digital Images

    Digital images are used everywhere in modern life and have largely replaced traditional photographs. At the same time, due to the popularity of image editing tools, digital images can be altered, often leaving no obvious evidence, so evaluating image authenticity is indispensable. Image forensic techniques detect forgeries in digital images in the absence of embedded watermarks or signatures. Nevertheless, some legitimate or illegitimate post-processing operations can affect the quality of the forensic results, and the reliability of forensic techniques therefore needs to be investigated. Reliability is understood here as robustness against image post-processing operations and security against deliberate attacks. In this work, we first develop a general test framework for assessing the effectiveness and security of image forensic techniques under common conditions; as part of the framework we design evaluation metrics, image datasets, and several image post-processing operations. Secondly, we build several image forensic tools based on selected algorithms for detecting copy-move forgeries, re-sampling artifacts, and manipulations in JPEG images, and evaluate their effectiveness and robustness using the developed test framework. Thirdly, for each selected technique we develop several targeted attacks, whose aim is to remove the forensic evidence present in forged images; using the test framework together with the targeted attacks, we can then thoroughly evaluate the security of each forensic technique. We show that image forensic techniques are often sensitive and can be defeated when their algorithms are publicly known. Finally, we develop new forensic techniques which achieve higher security than state-of-the-art forensic techniques.
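    Of the three forgery types mentioned, copy-move detection is the most self-contained to sketch. The naive block-matching detector below — find identical fixed-size blocks at two different positions — illustrates the principle only; practical detectors (and the ones such targeted attacks are designed to defeat) compare robust block features such as DCT or PCA coefficients rather than raw pixels, precisely so that mild post-processing does not erase the match.

    ```python
    import numpy as np

    def detect_copy_move(image, block=4):
        """Naive copy-move detector: report pairs of positions whose
        block x block pixel windows are byte-identical. Real detectors
        match robust block features (DCT, PCA) instead of raw pixels."""
        seen = {}
        matches = []
        h, w = image.shape
        for i in range(h - block + 1):
            for j in range(w - block + 1):
                key = image[i:i+block, j:j+block].tobytes()
                if key in seen:
                    matches.append((seen[key], (i, j)))
                else:
                    seen[key] = (i, j)
        return matches

    # Forge an image by copying one region onto another, then detect it.
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
    img[10:14, 10:14] = img[2:6, 2:6]          # the copy-move forgery
    found = detect_copy_move(img, block=4)
    ```

    A targeted attack of the kind studied in this work would perturb the copied region just enough (noise, slight rescaling, recompression) that the duplicated blocks no longer match under the detector's feature comparison.
    
    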