
    A Modified Fourier-Mellin Approach for Source Device Identification on Stabilized Videos

    To decide whether a digital video has been captured by a given device, multimedia forensic tools usually exploit characteristic noise traces left by the camera sensor on the acquired frames. This analysis requires that the noise pattern characterizing the camera and the noise pattern extracted from the video frames under analysis are geometrically aligned. However, in many practical scenarios this does not occur, so a re-alignment or synchronization has to be performed. Current solutions often require a time-consuming search for the realignment transformation parameters. In this paper, we propose to overcome this limitation by searching for the scaling and rotation parameters in the frequency domain. The proposed algorithm, tested on real videos from a well-known state-of-the-art dataset, shows promising results.
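
    A minimal sketch of the frequency-domain idea described above, assuming a generic Fourier-Mellin registration between a camera noise reference and a noise residual extracted from a frame: rotation and scaling become translations in the log-polar transform of the magnitude spectrum, which can then be recovered by phase correlation. Function and variable names are illustrative and not taken from the paper.

```python
# Generic Fourier-Mellin rotation/scale estimation sketch (not the paper's
# exact algorithm). `reference` and `residual` are placeholder array names.
import numpy as np
import cv2


def rotation_scale_fourier_mellin(reference: np.ndarray, residual: np.ndarray):
    """Estimate rotation (degrees) and scale between two noise patterns."""

    def log_polar_spectrum(img):
        # The magnitude spectrum is translation-invariant; a log-polar
        # mapping turns rotation and scaling into plain translations.
        f = np.fft.fftshift(np.fft.fft2(img))
        mag = np.abs(f).astype(np.float32)
        center = (mag.shape[1] / 2.0, mag.shape[0] / 2.0)
        max_radius = min(center)
        k_log = mag.shape[1] / np.log(max_radius)  # radial log scaling factor
        lp = cv2.warpPolar(mag, mag.shape[::-1], center, max_radius,
                           cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
        return lp, k_log

    lp_ref, k_log = log_polar_spectrum(reference)
    lp_res, _ = log_polar_spectrum(residual)

    # Phase correlation in the log-polar domain: the angular shift gives the
    # rotation, the radial shift gives log(scale). Sign conventions may need
    # flipping depending on which pattern is taken as the reference.
    (dx, dy), _ = cv2.phaseCorrelate(lp_ref, lp_res)
    rotation_deg = 360.0 * dy / lp_ref.shape[0]
    scale = float(np.exp(dx / k_log))
    return rotation_deg, scale
```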

    Automatic aerial target detection and tracking system in airborne FLIR images based on efficient target trajectory filtering

    Common strategies for detection and tracking of aerial moving targets in airborne Forward-Looking Infrared (FLIR) images offer accurate results in images composed of a non-textured sky. However, when cloud and terrain regions appear in the image sequence, those strategies produce over-detections that significantly increase the false alarm rate. In addition, the airborne camera induces a global motion in the image sequence that further complicates detection and tracking. In this work, an automatic detection and tracking system with an innovative and efficient target trajectory filtering is presented. It robustly compensates the global motion to accurately detect and track potential aerial targets. Their trajectories are analyzed by a curve fitting technique to reliably validate real targets; this strategy filters out false targets with stationary or erratic trajectories. The proposed system places special emphasis on low-complexity video analysis techniques to achieve real-time operation. Experimental results using real FLIR sequences show a dramatic reduction of the false alarm rate while maintaining the detection rate.
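
    As an illustration of the trajectory-filtering idea, the sketch below validates candidate tracks by fitting low-order polynomials to their coordinates over time and rejecting stationary or erratic ones. The thresholds and polynomial degree are assumptions, not values from the paper.

```python
# Hedged sketch: real aerial targets are assumed to follow smooth,
# non-stationary paths, so tracks that barely move or that fit a low-order
# polynomial poorly are discarded. Thresholds are illustrative only.
import numpy as np


def is_valid_trajectory(points: np.ndarray,
                        min_displacement: float = 5.0,
                        max_residual: float = 2.0,
                        degree: int = 2) -> bool:
    """points: (N, 2) array of (x, y) target positions over consecutive frames."""
    if len(points) <= degree + 1:
        return False  # too short to judge

    # Reject (near-)stationary tracks, e.g. clutter locked to the background.
    if np.linalg.norm(points[-1] - points[0]) < min_displacement:
        return False

    # Fit x(t) and y(t) with low-order polynomials; erratic tracks caused by
    # noise or cloud edges leave large fitting residuals.
    t = np.arange(len(points))
    residuals = []
    for coord in (points[:, 0], points[:, 1]):
        coeffs = np.polyfit(t, coord, degree)
        residuals.append(np.mean((np.polyval(coeffs, t) - coord) ** 2))
    return float(np.sqrt(np.mean(residuals))) < max_residual
```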

    Real-time low-complexity digital video stabilization in the compressed domain

    Analyzing Digital Image by Deep Learning for Melanoma Diagnosis

    Image classification is an important task in many medical applications, in order to achieve an adequate diagnosis of different lesions. Melanoma is a frequent kind of skin cancer, most cases of which can be detected by visual inspection. Heterogeneity and database size are the most important difficulties to overcome in order to obtain good classification performance. In this work, a deep learning based method for accurate classification of wound regions is proposed. Raw images are fed into a Convolutional Neural Network (CNN), producing a probability of being a melanoma or a non-melanoma. AlexNet and GoogLeNet were used due to their well-known effectiveness. Moreover, data augmentation was used to increase the number of input images. Experiments show that the compared models can achieve high performance in terms of mean accuracy with very little data and without any preprocessing.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
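
    A rough sketch of the kind of transfer-learning pipeline the abstract outlines, using torchvision's AlexNet (GoogLeNet can be swapped in) with basic data augmentation; the dataset path, augmentation choices, and hyperparameters are assumptions rather than the authors' settings.

```python
# Binary melanoma vs. non-melanoma classification sketch with a pretrained
# CNN and simple data augmentation. Paths and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),       # augmentation to enlarge a small dataset
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a folder layout like data/train/{melanoma,non_melanoma}/ (hypothetical path).
train_set = datasets.ImageFolder("data/train", transform=augment)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights="IMAGENET1K_V1")   # or models.googlenet(weights=...)
model.classifier[6] = nn.Linear(4096, 2)          # 2 outputs: melanoma / non-melanoma

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                     # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```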

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction focusing on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In the process of developing our framework we also focus on two other topics: motion trajectory estimation toward global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and recognition of individuals and threats with the help of image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, which employs GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction, which utilizes the computed background model. This technique produces superior results to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of only slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. In this respect, a framework for SR Image Reconstruction of moving objects with such high levels of displacement is developed. Our assumption is that the LR images differ from each other due to local motion of the objects and the global motion of the scene imposed by the non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps: suppression of the global motion; motion segmentation accompanied by background subtraction to extract moving objects; suppression of the local motion of the segmented regions; and super-resolving the accumulated information coming from the moving objects rather than the whole scene. This results in a reliable offline SR Image Reconstruction tool which handles several types of dynamic scene changes, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proved to be superior to state-of-the-art algorithms, which put no significant effort toward dynamic scene representation with non-stationary camera systems.
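
    The sketch below illustrates only the background-modeling step: a per-pixel Gaussian mixture whose number of components is chosen by an information criterion (BIC is used here as a stand-in for the dissertation's information complexity measure), followed by a simple foreground test. Names and thresholds are illustrative.

```python
# Per-pixel GMM background model with information-criterion model selection
# (BIC as a proxy), plus a foreground test via the model's log-density.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_pixel_background_model(history: np.ndarray, max_components: int = 5):
    """history: (T,) intensity samples of one pixel over T frames."""
    samples = history.reshape(-1, 1).astype(np.float64)
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(samples)
        bic = gmm.bic(samples)            # information-criterion model selection
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model


def is_foreground(model: GaussianMixture, value: float,
                  log_density_threshold: float = -8.0) -> bool:
    """Pixels poorly explained by the background mixture are labelled foreground."""
    return model.score_samples(np.array([[value]]))[0] < log_density_threshold
```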

    Video alignment to a common reference

    Handheld videos often include unintentional motion (jitter) and intentional motion (pan and/or zoom). Human viewers prefer to see jitter removed, creating a smoothly moving camera. For video analysis, in contrast, aligning to a fixed stable background is sometimes preferable. This thesis presents an algorithm that removes both forms of motion using a novel and efficient way of tracking background points while ignoring moving foreground points. The approach is related to image mosaicing, but the result is a video rather than an enlarged still image. It is also related to multiple object tracking approaches, but simpler, since moving objects need not be explicitly tracked. The algorithm takes a video as input and returns one or several stabilized videos. Videos are broken into parts when the algorithm detects a background change and it becomes necessary to fix upon a new background. We present two techniques in this thesis: one stabilizes the video with respect to the first available frame; the other stabilizes it with respect to a best frame. Our approach assumes the person holding the camera is standing in one place and that objects in motion do not dominate the image. Our algorithm performs better than previously published approaches when compared on 1,401 handheld videos from the recently released Point-and-Shoot Face Recognition Challenge (PaSC).
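
    The following sketch mirrors the general idea of aligning frames to a fixed reference by tracking background points: sparse corners are tracked with pyramidal Lucas-Kanade optical flow, and a RANSAC homography rejects points on moving foreground objects as outliers. It is not the thesis's exact algorithm, and all parameters are illustrative.

```python
# Align every frame to the first frame by tracking background corners and
# estimating a RANSAC homography (moving foreground points become outliers).
import cv2
import numpy as np


def align_to_reference(frames):
    """frames: list of BGR images; returns frames warped into the first frame's view."""
    ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    ref_pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=400,
                                      qualityLevel=0.01, minDistance=8)
    h, w = ref_gray.shape
    aligned = [frames[0]]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, gray, ref_pts, None)
        good = status.ravel() == 1
        # RANSAC keeps the dominant (background) motion; points on moving
        # foreground objects are rejected as outliers.
        H = None
        if np.count_nonzero(good) >= 4:
            H, _ = cv2.findHomography(pts[good], ref_pts[good], cv2.RANSAC, 3.0)
        aligned.append(cv2.warpPerspective(frame, H, (w, h)) if H is not None else frame)
    return aligned
```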