9 research outputs found

    Theoretical model of the FLD ensemble classifier based on hypothesis testing theory

    The FLD ensemble classifier is a widely used machine learning tool for steganalysis of digital media due to its efficiency with high-dimensional feature sets. This paper shows how the classifier can be formulated within the framework of optimal detection, using an accurate statistical model of the base learners' projections together with hypothesis testing theory. A substantial advantage of this formulation is that the test's properties, including the probability of false alarm and the test power, can be established theoretically, and that criteria of optimality other than the conventional total probability of error can be used. Numerical results on real images show the sharpness of the theoretical results and the relevance of the proposed methodology.
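    The formulation can be illustrated with a minimal sketch (an illustration only, not the paper's exact construction): if the aggregated base-learner projection is modeled as Gaussian with known means under each hypothesis and a common variance, the likelihood-ratio test reduces to thresholding the projection, and the false-alarm probability and power follow from Gaussian tails. All names and numbers below (mu0, mu1, sigma, alpha) are illustrative assumptions.

```python
from scipy.stats import norm

def lrt_threshold(mu0, mu1, sigma, alpha):
    """Threshold with false-alarm probability alpha for
    H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2), mu1 > mu0."""
    return mu0 + sigma * norm.ppf(1.0 - alpha)

def test_power(mu0, mu1, sigma, alpha):
    """Probability of detection when using the alpha-level threshold."""
    tau = lrt_threshold(mu0, mu1, sigma, alpha)
    return 1.0 - norm.cdf((tau - mu1) / sigma)

# Illustrative values: cover projections ~ N(0, 1), stego ~ N(1.5, 1)
print(test_power(mu0=0.0, mu1=1.5, sigma=1.0, alpha=0.05))  # ~0.44
```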

    Is ensemble classifier needed for steganalysis in high-dimensional feature spaces?

    The ensemble classifier based on Fisher Linear Discriminant (FLD) base learners was introduced specifically for steganalysis of digital media, which currently uses high-dimensional feature spaces. It is at present probably the most widely used method for designing supervised classifiers for steganalysis of digital images, owing to its good detection accuracy and small computational cost. The community has assumed that the classifier implements a non-linear boundary by pooling the binary decisions of the individual classifiers within the ensemble. This paper challenges that assumption by showing that a linear classifier obtained by various regularizations of the FLD can perform as well as the ensemble. Moreover, it demonstrates that with state-of-the-art solvers, linear classifiers can be trained more efficiently and offer potential advantages over the original ensemble, leading to much lower computational complexity. All claims are supported experimentally on a wide spectrum of stego schemes operating in both the spatial and JPEG domains, with a multitude of rich steganalysis feature sets.
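    As a rough sketch of the kind of linear alternative the paper studies, a ridge-regularized FLD can be computed in closed form; the feature matrices X0 (cover) and X1 (stego) and the regularization weight lam below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def regularized_fld(X0, X1, lam=1e-2):
    """Weight vector w and bias b of a ridge-regularized FLD."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter, regularized so it stays invertible in
    # high-dimensional feature spaces (dimension may exceed sample count).
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += lam * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)  # decision threshold halfway between means
    return w, b

# Classify a feature vector x as stego iff x @ w + b > 0.
```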

    Advances in Syndrome Coding based on Stochastic and Deterministic Matrices for Steganography

    Steganography is the art of covert communication. Unlike in cryptography, where the exchange of confidential data is evident to third parties, in a steganographic system the confidential data are embedded into other, inconspicuous cover data (e.g., images) and transmitted to the recipient in that form. The goal of a steganographic algorithm is to change the cover data only slightly, so as to preserve its statistical properties, and to embed preferably in inconspicuous parts of the cover. To achieve this goal, several approaches to so-called minimum-embedding-impact steganography based on syndrome coding are presented, distinguishing between approaches based on stochastic and on deterministic matrices. The algorithms are then evaluated in order to highlight the advantages of applying syndrome coding.
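    The core idea of syndrome coding can be sketched with the classic deterministic example, matrix embedding with a binary Hamming code: three message bits are embedded into seven cover bits (e.g., pixel LSBs) while changing at most one of them. This illustrates the principle only; the thesis covers more general stochastic and deterministic matrices.

```python
import numpy as np

# Parity-check matrix H whose columns are the binary representations of 1..7.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def embed(cover_bits, message_bits):
    """Flip at most one of the 7 cover bits so that H @ x = m (mod 2)."""
    x = cover_bits.copy()
    d = (H @ x + message_bits) % 2          # syndrome mismatch
    if d.any():
        pos = int(d @ (1 << np.arange(3)))  # index of the column equal to d
        x[pos - 1] ^= 1                     # single embedding change
    return x

def extract(stego_bits):
    return (H @ stego_bits) % 2             # receiver needs only H

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
assert np.array_equal(extract(embed(cover, msg)), msg)
```

    With this code, 3 message bits per 7 cover symbols are embedded at an average cost below one change per block, which is the minimum-embedding-impact principle in its simplest form.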

    Natural Image Statistics for Digital Image Forensics

    We describe a set of natural image statistics built upon two multi-scale image decompositions: the quadrature mirror filter (QMF) pyramid decomposition and the local angular harmonic decomposition. These image statistics consist of first- and higher-order statistics that capture statistical regularities of natural images. We propose to apply these statistics, together with classification techniques, to three problems in digital image forensics: (1) differentiating photographic images from computer-generated photorealistic images, (2) generic steganalysis, and (3) rebroadcast image detection. We also apply these statistics to traditional art authentication, for forgery detection and artist identification in works of art. For each application we show the effectiveness of the image statistics and analyze their sensitivity and robustness.
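    A minimal sketch of the first-order part of such a feature vector is given below, with PyWavelets' separable wavelet decomposition standing in for the QMF pyramid (an assumption for illustration; the paper's exact decompositions and higher-order statistics are not reproduced here).

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def first_order_features(image, wavelet="db4", levels=3):
    """Mean, variance, skewness, and kurtosis of each detail subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:             # (horizontal, vertical, diagonal)
        for band in detail:
            c = band.ravel()
            feats += [c.mean(), c.var(), skew(c), kurtosis(c)]
    return np.array(feats)                # 4 stats x 3 orientations x levels
```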

    The role of side information in steganography

    The goal of digital steganography is to hide a secret communication in digital media. The common approach is to hide the secret message in empirical cover objects. We are the first to define Steganographic Side Information (SSI); our definition captures all relevant properties of SSI, we justify it information-theoretically, and we explain the common usage of SSI. All recent steganographic schemes use SSI to identify suitable areas for the embedding changes. We develop a targeted attack on four widely used variants of SSI and show that our attack detects them almost perfectly. We argue that the steganographic competition must therefore be framed in terms of game theory. We present a game-theoretical framework that captures the relevant properties of such a steganographic system, instantiate it with five different models, and solve each model for its game-theoretically optimal strategies. Inspired by these solutions, we give a new paradigm for secure adaptive steganography, the so-called equalizer embedding strategies.
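    For a finite zero-sum game between a steganographer and a steganalyst, game-theoretically optimal (minimax) mixed strategies can be computed by linear programming. The sketch below is generic and illustrative; the payoff matrix A is a placeholder, not one of the five models solved in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_strategy(A):
    """Row player's optimal mixed strategy p and game value v for payoff
    matrix A: maximize v subject to A.T @ p >= v, sum(p) = 1, p >= 0."""
    m, n = A.shape
    c = np.r_[np.zeros(m), -1.0]              # linprog minimizes, so use -v
    A_ub = np.c_[-A.T, np.ones(n)]            # v - (A.T @ p)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0][None, :]    # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], -res.fun

# Toy 2x2 payoff matrix (illustrative): optimal p = [0.4, 0.6], value 0.5
p, v = minimax_strategy(np.array([[0.2, 0.8], [0.7, 0.3]]))
```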

    System Steganalysis: Implementation Vulnerabilities and Side-Channel Attacks Against Digital Steganography Systems

    Steganography is the process of hiding information in plain sight; it is a technology that can be used to hide data and facilitate secret communications. Steganography is commonly seen in the digital domain, where the pervasive nature of media content (image, audio, video) provides an ideal avenue for hiding secret information. In recent years, video steganography has been shown to be a highly suitable alternative to image and audio steganography due to its potential advantages (capacity, flexibility, popularity). Increased interest in video steganography research has led to the development of video stego-systems that are now available to the public. Many of these stego-systems have not yet been subjected to analysis or evaluation, and their capabilities for performing secure, practical, and effective video steganography are unknown. This thesis presents a comprehensive analysis of the state of the art in practical video steganography. Video-based stego-systems are identified and examined using steganalytic techniques (system steganalysis) to determine their security practices. The research is conducted through a series of case studies that aim to provide novel insights into the capabilities of steganalysis against practical video steganography. The results demonstrate the impact of system attacks on the practical state of the art: video-based stego-systems are highly vulnerable, fail to follow many of the well-understood security practices in the field, and can consequently each be detected with a high rate of accuracy. Current work in practical video steganography thus fails to address key principles and best practices; continued efforts to address this will provide safe and secure steganographic technologies.
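    One classic system-level check, shown here purely as an illustration (it is not claimed to be one of the thesis's case studies), is to look for payload bytes appended after a container's end-of-file marker, a mistake some naive stego tools make.

```python
def bytes_after_jpeg_eoi(path):
    """Number of bytes stored after the last JPEG end-of-image marker.

    A clean JPEG ends at the EOI marker 0xFFD9; trailing bytes are a
    strong system-level signature of an appending stego tool. (A robust
    detector would parse the JPEG structure, since embedded thumbnails
    or the payload itself may contain the byte pair 0xFFD9.)
    """
    with open(path, "rb") as f:
        data = f.read()
    eoi = data.rfind(b"\xff\xd9")
    if eoi == -1:
        raise ValueError("no EOI marker: not a complete JPEG")
    return len(data) - (eoi + 2)
```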

    Digital Image Processing

    Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer's disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer's imagination and understanding. Professor Jonathan Blackledge's erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author's earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field, which has an ever increasing number of areas of application. The strengths of this large book lie in:
    • excellent explanatory introduction to the subject;
    • thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
    • comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
    • discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
    • detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
    • coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
    • discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
    • investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
    • valuable summary of the important results obtained in each Chapter given at its end;
    • suggestions for further reading at the end of each Chapter.
    I warmly commend this text to all readers, and trust that they will find it to be invaluable. Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England
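    As a small taste of two of the techniques listed above, the sketch below implements percentile-based contrast stretching and Fourier-domain low-pass filtering for noise reduction (illustrative only; the book develops these methods, their assumptions, and their limitations in depth).

```python
import numpy as np

def contrast_stretch(img, lo=2, hi=98):
    """Linearly map the [lo, hi] percentile range onto [0, 1]."""
    a, b = np.percentile(img, [lo, hi])
    return np.clip((img - a) / (b - a), 0.0, 1.0)

def lowpass_filter(img, cutoff=0.1):
    """Zero out spatial frequencies above `cutoff` (cycles per pixel)."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    F[np.hypot(fy, fx) > cutoff] = 0.0
    return np.fft.ifft2(F).real
```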

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of nonverbal social signals in addition to the spoken messages. Social signals have to be interpreted through a recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
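    The statistical evaluation such a model supports can be sketched with the classic lens-model statistics (an illustration under assumed data, not the paper's implementation): ecological validities correlate each cue with the true interactant state, cue utilizations correlate each cue with the robot's judgment, and achievement correlates state with judgment.

```python
import numpy as np

def lens_model_stats(criterion, cues, judgment):
    """criterion: (n,) true interactant state; cues: (n, k) observed social
    signals (e.g., gaze features); judgment: (n,) the robot's estimates."""
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    k = cues.shape[1]
    return {
        "achievement": corr(criterion, judgment),
        "ecological_validity": [corr(criterion, cues[:, j]) for j in range(k)],
        "cue_utilization": [corr(judgment, cues[:, j]) for j in range(k)],
    }
```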

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: The 6th International Conference on Information and Computer Technologies, Raleigh, NC, United States, March 24-26, 2023, pages 529-542. https://doi.org/10.1007/978-981-99-3236-
    We provide a model for the systematic implementation of telemedicine within a large evaluation center for COVID-19 in Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for the scalable, data-driven deployment of smartphone, cellular, and video teleconsultation technologies that link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time to inform supply chains and resource allocation in anticipation of a surge in COVID-19 cases.