    Spatio-temporal rich model-based video steganalysis on cross sections of motion vector planes.

    A rich model-based motion vector (MV) steganalysis method that benefits from both temporal and spatial correlations of MVs is proposed in this paper. The proposed method achieves substantially higher detection accuracy than previous methods, including the targeted ones. The improvement stems from several novel approaches introduced in this paper. First, it is shown that neighbouring MVs are strongly correlated, not only spatially but also temporally, over long distances; therefore, temporal MV dependency is exploited alongside the spatial dependency for rigorous MV steganalysis. Second, unlike previously used filters, which were heuristically designed against a specific MV steganography scheme, a diverse set of filters able to capture aberrations introduced by various MV steganography methods is used. Both the variety and the number of the filter kernels are substantially greater than in previous work. In addition, filters up to fifth order are employed, whereas previous methods use at most second-order filters. As a result, the proposed system captures various decorrelations over a wide spatio-temporal range and provides a better cover model. The proposed method is tested against the most prominent MV steganalysis and steganography methods. To the best of the authors' knowledge, the experiments section contains the most comprehensive tests in the MV steganalysis field, covering five stego and seven steganalysis methods. Test results show that the proposed method yields around a 20% increase in detection accuracy at low payloads and 5% at higher payloads. Funded by the Engineering and Physical Sciences Research Council through the CSIT 2 Project under Grant EP/N508664/1.
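
    As a rough illustration of the residual-filtering idea sketched in this abstract (not the authors' actual filter bank), the following Python snippet applies finite-difference filters of increasing order along both the temporal and the spatial axes of a motion-vector field and histograms the quantized residuals; the array layout, quantization step and truncation threshold are assumptions made for the example.

        # Hypothetical layout: mv is a numpy array of shape (frames, rows, cols, 2)
        # holding the horizontal/vertical MV components of each macroblock.
        import numpy as np

        def residual(mv, order, axis):
            """n-th order finite-difference residual along one axis (temporal or spatial)."""
            return np.diff(mv, n=order, axis=axis)

        def quantized_histogram(res, q=1.0, t=3):
            """Quantize and truncate residuals to [-t, t], then return a normalized histogram."""
            r = np.clip(np.round(res / q), -t, t).astype(int)
            hist = np.bincount((r + t).ravel(), minlength=2 * t + 1)
            return hist / max(hist.sum(), 1)

        def rich_model_features(mv, max_order=5):
            """Concatenate residual histograms over the temporal (axis 0) and spatial (axes 1, 2) directions."""
            feats = []
            for axis in (0, 1, 2):
                for order in range(1, max_order + 1):
                    feats.append(quantized_histogram(residual(mv, order, axis)))
            return np.concatenate(feats)

    In the actual rich model the filters are far more varied and the features are co-occurrence based, but the sketch shows how temporal and spatial dependencies can both feed a single feature vector.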

    A Comprehensive Review of Video Steganalysis

    Steganography is the art of secret communication, and steganalysis is the art of detecting hidden messages embedded in digital media covers. One of the covers gaining interest in the field is video. Presently, global IP video traffic forms the major part of all consumer Internet traffic. Video is also gaining attention in digital forensics and homeland security, where threats of covert communication hold serious consequences. Thus, steganography practitioners will prefer video to other types of covers such as audio files, still images or texts. Moreover, video steganography is of particular interest because it provides more concealing capacity. In contrast, research into video steganalysis does not seem to follow this momentum, even though law enforcement agencies and governments around the world support and encourage investigation in this field. In this paper, we review the most important methods used so far in video steganalysis and sketch the future trends. To the best of our knowledge, this is the most comprehensive review of video steganalysis produced so far.

    Steganalysis of 3D objects using statistics of local feature sets

    3D steganalysis aims to identify the subtle, invisible changes produced in graphical objects by digital watermarking or steganography. Sets of statistical representations of 3D features, extracted from both cover and stego 3D mesh objects, are used as inputs to machine learning classifiers in order to decide whether any information is hidden in a given graphical object. The features proposed in this paper include those representing the local object curvature, the vertex normals, and the local geometry representation in the spherical coordinate system. The effectiveness of these features is tested in various combinations with other features used for 3D steganalysis. The relevance of each feature for 3D steganalysis is assessed using the Pearson correlation coefficient. Six different 3D watermarking and steganographic methods are used for creating the stego-objects used in the evaluation study.
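
    To make the feature pipeline concrete, here is a minimal, hypothetical sketch of two of its ingredients: simple statistics of vertex positions in spherical coordinates and Pearson-correlation-based feature relevance. The specific statistics are illustrative and not the paper's exact feature set.

        # Assumes vertices is an (N, 3) float array of mesh vertex coordinates
        # and labels is a 0/1 vector marking cover vs. stego objects.
        import numpy as np
        from scipy.stats import pearsonr

        def spherical_statistics(vertices):
            """Mean/std statistics of vertex positions expressed in spherical coordinates."""
            x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
            r = np.sqrt(x**2 + y**2 + z**2)
            theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
            phi = np.arctan2(y, x)
            stats = lambda v: [v.mean(), v.std()]
            return np.array(stats(r) + stats(theta) + stats(phi))

        def feature_relevance(feature_matrix, labels):
            """Pearson correlation of each feature column with the cover/stego label."""
            return np.array([pearsonr(feature_matrix[:, j], labels)[0]
                             for j in range(feature_matrix.shape[1])])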

    Employing optical flow on convolutional recurrent structures for deepfake detection

    Deepfakes, or artificially generated audiovisual renderings, can be used to defame a public figure or influence public opinion. With the recent advent of generative adversarial networks, an attacker using a normal desktop computer fitted with an off-the-shelf graphics processing unit can make renditions realistic enough to easily fool a human observer. Detecting deepfakes is thus becoming vital for reporters, social networks, and the general public. Preliminary research introduced simple, yet surprisingly efficient, digital forensic methods for visual deepfake detection. These methods combined convolutional latent representations with bidirectional recurrent structures and entropy-based cost functions. The latent representations for the video are carefully chosen to extract semantically rich information from the recordings. By feeding these into a recurrent framework, we were able to sequentially detect both spatial and temporal signatures of deepfake renditions. The entropy-based cost functions work well in isolation as well as in combination with traditional cost functions. However, re-enactment-based forgery is becoming harder to detect as newer-generation techniques improve temporal smoothing and background stability. As these generative models involve a learnable flow-mapping network from the driving video to the target face, we hypothesized that including edge maps, in addition to dense flow maps near the facial region, provides the model with finer details to make an informed classification. The methods were demonstrated on the FaceForensics++, Celeb-DF, and DFDC-mini (custom-made) video datasets, achieving new benchmarks in all categories. We also perform extensive studies to evaluate performance against adversaries and demonstrate generalization to new domains, thereby gaining further insight into the effectiveness of the new architectures.
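
    A rough PyTorch sketch of the convolutional-recurrent pipeline described above follows; the backbone choice, the stacked channel layout (RGB plus dense flow and edge maps) and the layer sizes are illustrative assumptions rather than the paper's exact architecture.

        import torch
        import torch.nn as nn
        from torchvision.models import resnet18

        class ConvRecurrentDetector(nn.Module):
            def __init__(self, in_channels=6, hidden=256):
                super().__init__()
                backbone = resnet18(weights=None)
                # Accept stacked RGB + optical-flow + edge-map channels.
                backbone.conv1 = nn.Conv2d(in_channels, 64, 7, stride=2, padding=3, bias=False)
                backbone.fc = nn.Identity()              # keep the 512-d per-frame latent
                self.encoder = backbone
                self.rnn = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, 2)     # real vs. fake

            def forward(self, clips):                    # clips: (B, T, C, H, W)
                b, t = clips.shape[:2]
                latents = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
                seq, _ = self.rnn(latents)               # spatial then temporal modelling
                return self.head(seq.mean(dim=1))        # temporal pooling before the classifier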

    System steganalysis with automatic fingerprint extraction

    This paper tackles the modern challenge of practical steganalysis over large data sets by presenting a novel approach that aims to perform with perfect accuracy and in a completely automatic manner. The objective is to detect changes introduced by the steganographic process in data objects, including signatures related to the tools being used. Our approach first extracts reliable regularities by analyzing pairs of modified and unmodified data objects; it then combines these findings into general patterns present in the training data. Finally, we construct a Naive Bayes model that performs classification on attributes extracted using the aforementioned patterns. This technique has been applied to different steganographic tools that operate on media files of several types. We are able to replicate or improve on a number of previously published results and, more importantly, we also present new steganalytic findings for a number of popular tools for which no attacks were previously known.
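
    The following simplified sketch illustrates the fingerprint idea: byte-level patterns that appear in stego files but never in their cover counterparts become binary attributes for a Naive Bayes classifier. The substring-mining heuristic, the pattern length and the use of scikit-learn's BernoulliNB are assumptions for illustration, not the paper's actual extraction procedure.

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        def mine_patterns(cover_stego_pairs, length=8):
            """Collect fixed-length byte substrings present in a stego file but absent from its cover."""
            patterns = set()
            for cover, stego in cover_stego_pairs:       # each item is a pair of bytes objects
                for i in range(0, len(stego) - length, length):
                    chunk = stego[i:i + length]
                    if chunk not in cover:
                        patterns.add(chunk)
            return sorted(patterns)

        def to_attributes(blobs, patterns):
            """Binary attribute vector: does each mined pattern occur in the file?"""
            return np.array([[1 if p in blob else 0 for p in patterns] for blob in blobs])

        # Hypothetical usage: mine patterns from training pairs, fit the model, classify new files.
        # patterns = mine_patterns(pairs)
        # model = BernoulliNB().fit(to_attributes(train_files, patterns), train_labels)
        # predictions = model.predict(to_attributes(unseen_files, patterns))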

    Visual SLAM in dynamic environments

    The problem of visual Simultaneous Localization and Mapping (visual SLAM) consists of localizing a camera within a map that is built online. This technology allows robots to localize themselves in unknown environments and to build a map of the area using only their on-board sensors, i.e., without relying on any external infrastructure. Unlike odometry approaches, in which incremental motion is integrated over time, a map allows the sensor to continually localize itself in the same environment without accumulating drift. Assuming that the observed scene is static is common in visual SLAM algorithms. Although the static assumption holds for some applications, it limits their usefulness in crowded real-world scenes for autonomous driving, service robots, or augmented and virtual reality, among others. Detecting and studying dynamic objects is a requirement for accurately estimating the sensor pose and building stable maps that are useful for robotic applications operating over the long term. The main contributions of this thesis are threefold: 1. We detect dynamic objects with the help of semantic segmentation from deep learning and multi-view geometry approaches. This allows us to estimate the camera trajectory in highly dynamic scenes with an accuracy comparable to that achieved in static environments, and to build 3D maps containing only the static and stable structure of the environment. 2. We hallucinate, with realistic images, the static structure of the scene behind dynamic objects. This allows us to deliver complete maps with a plausible representation of the scene, free of discontinuities or holes caused by the occlusions of dynamic objects. Visual place recognition also benefits from these advances in image processing. 3. We develop a joint framework for solving both the SLAM problem and multi-object tracking in order to obtain a spatio-temporal map with information about the sensor trajectory and the surroundings. Understanding the surrounding dynamic objects is of crucial importance for the new requirements of emerging augmented/virtual reality and autonomous navigation applications. These three contributions advance the state of the art in visual SLAM. As a by-product of our research and for the benefit of the scientific community, we have released the code that implements the proposed solutions.
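
    As a hedged illustration of the first contribution, the sketch below combines a semantic segmentation mask over a-priori movable classes with a multi-view reprojection-error test and keeps only the keypoints classified as static for pose estimation; the class ids, the threshold and the pinhole projection model are assumptions made for the example, not the thesis implementation.

        import numpy as np

        DYNAMIC_CLASSES = {11, 12, 13}   # hypothetical ids for person, rider, car in some label map

        def reprojection_error(points_3d, keypoints_2d, K, R, t):
            """Pixel distance between observed keypoints and their projected map points."""
            cam = (R @ points_3d.T + t.reshape(3, 1)).T          # world -> camera frame
            proj = (K @ cam.T).T
            proj = proj[:, :2] / proj[:, 2:3]                    # perspective division
            return np.linalg.norm(proj - keypoints_2d, axis=1)

        def static_keypoint_mask(keypoints_2d, seg_labels, errors, err_thresh=2.0):
            """Keep a keypoint only if it is neither semantically nor geometrically dynamic."""
            u = keypoints_2d[:, 0].astype(int)
            v = keypoints_2d[:, 1].astype(int)
            semantic_dynamic = np.isin(seg_labels[v, u], list(DYNAMIC_CLASSES))
            geometric_dynamic = errors > err_thresh
            return ~(semantic_dynamic | geometric_dynamic)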