
    Statistical Detection of LSB Matching Using Hypothesis Testing Theory

    This paper investigates the detection of information hidden by the Least Significant Bit (LSB) matching scheme. In a theoretical context in which the image parameters are known, two important results are presented. First, hypothesis testing theory allows us to design the Most Powerful (MP) test. Second, a study of the MP test lets us analytically calculate its statistical performance in order to guarantee a given false-alarm probability. In practice, the unknown image parameters have to be estimated when detecting LSB matching. Based on the local estimator used in the Weighted Stego-image (WS) detector, a practical test is presented. A numerical comparison with state-of-the-art detectors shows the good performance of the proposed tests and highlights the relevance of the proposed methodology.
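    The embedding operation the detector targets is simple to state: when a cover pixel's LSB disagrees with the message bit, the pixel is randomly incremented or decremented. A minimal sketch of LSB matching (this is the embedding scheme, not the paper's MP test; pixel range and boundary handling are simplifying assumptions):

```python
import random

def lsb_match_embed(cover, bits, seed=0):
    """LSB matching (+/-1 embedding): when a pixel's LSB disagrees with
    the message bit, randomly add or subtract 1 (clamped at 0 and 255)."""
    rng = random.Random(seed)
    stego = list(cover)
    for i, bit in enumerate(bits):
        if (stego[i] & 1) != bit:
            if stego[i] == 0:
                stego[i] += 1
            elif stego[i] == 255:
                stego[i] -= 1
            else:
                stego[i] += rng.choice((-1, 1))
    return stego

def lsb_extract(stego, n):
    """Extraction only reads LSBs, exactly as in LSB replacement."""
    return [p & 1 for p in stego[:n]]
```

    Unlike LSB replacement, the +/-1 choice avoids the pairs-of-values artifact, which is why detecting LSB matching requires the statistical machinery the paper develops.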

    Extracting Tree-structures in CT data by Tracking Multiple Statistically Ranked Hypotheses

    In this work, we adapt a method based on multiple hypothesis tracking (MHT), which has been shown to give state-of-the-art vessel segmentation results in interactive settings, for the purpose of extracting trees. Regularly spaced tubular templates are fit to image data, forming local hypotheses. These local hypotheses are used to construct the MHT tree, which is then traversed to make segmentation decisions. However, some critical parameters in this method are scale-dependent and have an adverse effect when tracking structures of varying dimensions. We propose to use statistical ranking of local hypotheses in constructing the MHT tree, which yields a probabilistic interpretation of scores across scales and helps alleviate the scale-dependence of the MHT parameters. This enables our method to track trees starting from a single seed point. Our method is evaluated on chest CT data to extract airway trees and coronary arteries. In both cases, we show that our method performs significantly better than the original MHT method. Comment: Accepted for publication in the International Journal of Medical Physics and Practice.
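    One way to make raw template-fit scores comparable across scales is to map each score to its empirical percentile within a per-scale reference distribution. The helper below is only an illustrative sketch of that ranking idea; the name `rank_score`, the reference distribution, and the percentile rule are assumptions, not the paper's exact procedure:

```python
from bisect import bisect_left

def rank_score(score, reference_scores):
    """Map a raw template-fit score to its empirical percentile within a
    reference distribution collected at the same scale, so that scores
    obtained at different scales become directly comparable."""
    ref = sorted(reference_scores)
    return bisect_left(ref, score) / len(ref)
```

    Local hypotheses at different scales can then be ranked by percentile rather than by raw score when constructing the MHT tree.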

    Predictions and Outcomes for the Dynamics of Rotating Galaxies

    A review is given of a priori predictions made for the dynamics of rotating galaxies. One theory - MOND - has had many predictions corroborated by subsequent observations. While it is sometimes possible to offer post hoc explanations for these observations in terms of dark matter, it is seldom possible to use dark matter to predict the same phenomena. Comment: 36 pages (10 are references), 9 figures. Invited review for the Galaxies Special Issue "Debate on the Physics of Galactic Rotation and the Existence of Dark Matter." Provides test cases for the importance of prior predictions in the application of the scientific method.

    Information similarity metrics in information security and forensics

    We study two information similarity measures, relative entropy and the similarity metric, and methods for estimating them. Relative entropy can be readily estimated with existing algorithms based on compression. The similarity metric, based on algorithmic complexity, proves more difficult to estimate because algorithmic complexity itself is not computable. We again turn to compression for estimating the similarity metric. Previous studies rely on the compression ratio as an indicator for choosing compressors to estimate the similarity metric. This assumption, however, is fundamentally flawed. We propose a new method to benchmark compressors for estimating the similarity metric. To demonstrate its use, we propose to quantify the security of a stegosystem using the similarity metric. Unlike other measures of steganographic security, the similarity metric is not only a true distance metric, but it is also universal in the sense that it is asymptotically minimal among all computable metrics between two objects. Therefore, it accounts for all similarities between two objects. In contrast, relative entropy, a widely accepted steganographic security definition, only takes into consideration the statistical similarity between two random variables. As an application, we present a general method for benchmarking stegosystems. The method is general in the sense that it is not restricted to any covertext medium and can therefore be applied to a wide range of stegosystems. For demonstration, we analyze several image stegosystems using the newly proposed similarity metric as the security metric. The results show the true security limits of stegosystems regardless of the chosen security metric or the existence of steganalysis detectors. In other words, this makes it possible to show that a stegosystem with a large similarity metric is inherently insecure, even if it has not yet been broken.
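    Compression-based estimation of the similarity metric is usually done through the Normalized Compression Distance (NCD), which replaces uncomputable Kolmogorov complexity with compressed size. A minimal sketch using zlib follows; the choice of compressor is precisely what the proposed benchmark is about, so zlib here is only an illustrative assumption:

```python
import zlib

def C(data: bytes) -> int:
    """Compressed size, used as a computable proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: approximately 0 for near-identical
    inputs and approximately 1 for unrelated inputs."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    In a steganographic setting, one would compare cover and stego objects: a stegosystem whose outputs sit at a large NCD from their covers leaves a large, compressor-visible footprint.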

    Hunting wild stego images, a domain adaptation problem in digital image forensics

    Digital image forensics is a field encompassing camera identification, forgery detection and steganalysis. Statistical modeling and machine learning have been successfully applied in the academic community of this maturing field. Still, large gaps exist between academic results and the applications used by practicing forensic analysts, especially when the target samples are drawn from a different population than the data in a reference database. This thesis contains four published papers aiming at narrowing this gap in three different fields: mobile stego app detection, digital image steganalysis and camera identification. It is the first work to explore a way of extending the academic methods to real-world images created by apps. New ideas and methods are developed for target images with very rich flexibility in embedding rates, embedding algorithms, exposure settings and camera sources. The experimental results show that the proposed methods work well, even for devices not included in the reference database.

    Review of steganalysis of digital images

    Steganography is the science and art of embedding hidden messages into cover multimedia such as text, image, audio and video. Steganalysis is the counterpart of steganography, which aims to determine whether data is hidden inside a digital medium. In this study, specific steganographic schemes such as HUGO and LSB embedding are examined, along with the steganalytic schemes developed to detect their hidden messages. Furthermore, some newer approaches such as deep learning and game theory, which have seldom been utilized in steganalysis before, are studied. In the remainder of the thesis, steganalytic schemes using textural features, including LDP and LTP, are implemented.
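    As an illustration of the textural features mentioned, a Local Ternary Pattern (LTP) codes each neighbour relative to the centre pixel with a tolerance band. The sketch below is a simplified assumption (flat neighbour list, free-form tolerance `t`), not the thesis's implementation:

```python
def ltp_code(center, neighbours, t=5):
    """Local Ternary Pattern: each neighbour is coded +1 if it exceeds
    center + t, -1 if it falls below center - t, and 0 inside the band.
    The ternary code is less sensitive to noise than a binary LBP code."""
    return tuple(1 if n >= center + t else -1 if n <= center - t else 0
                 for n in neighbours)
```

    In practice the ternary code is usually split into an "upper" and a "lower" binary pattern and histogrammed over the image to form a feature vector.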

    Covert voice over internet protocol communications with packet loss based on fractal interpolation

    The last few years have witnessed explosive growth in research on information hiding in multimedia objects, but few studies have taken into account packet loss in multimedia networks. As one of the most popular real-time services on the Internet, Voice over Internet Protocol (VoIP) contributes a large part of network traffic owing to its advantages of real-time delivery, high flow and low cost. Packet loss is therefore inevitable in multimedia networks and degrades the performance of VoIP communications. In this study, a fractal-based VoIP steganographic approach was proposed to realise covert VoIP communications in the presence of packet loss. In the proposed scheme, the secret data to be hidden were divided into blocks after being encrypted with a block cipher, and each block was then embedded into VoIP streaming packets. The VoIP packets went through a packet loss system based on the Gilbert model, which simulates real network conditions, and a prediction model based on fractal interpolation was built to decide whether a VoIP packet was suitable for data hiding. The experimental results indicated that speech quality degradation increased with the escalating packet-loss level. The average difference in speech quality (PESQ score) between the "no-embedding" speech samples and the "with-embedding" stego-speech samples was about 0.717, and the differences narrowed with increasing packet-loss level. Both the average PESQ scores and SNR values of the stego-speech samples and the data retrieval rates showed almost the same trends as the packet-loss level increased, indicating that the success rate of the fractal prediction model played an important role in the performance of covert VoIP communications.
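    The Gilbert model referenced above is a two-state Markov chain that produces bursty rather than independent losses. A minimal simulation sketch; the transition probabilities are illustrative assumptions, not the paper's settings:

```python
import random

def gilbert_loss(n_packets, p_gb=0.05, p_bg=0.5, loss_in_bad=1.0, seed=0):
    """Two-state Gilbert model: a 'good' state with no loss and a 'bad'
    state in which each packet is lost with probability loss_in_bad.
    p_gb and p_bg are the good->bad and bad->good transition probabilities.
    Returns a list of booleans, True meaning the packet was lost."""
    rng = random.Random(seed)
    state = "good"
    lost = []
    for _ in range(n_packets):
        if state == "good":
            if rng.random() < p_gb:
                state = "bad"
        elif rng.random() < p_bg:
            state = "good"
        lost.append(state == "bad" and rng.random() < loss_in_bad)
    return lost
```

    The long-run loss rate is roughly p_gb / (p_gb + p_bg), but losses arrive in bursts, which is what distinguishes this model from a simple Bernoulli drop.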

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of the century from various works in algebraic topology and geometry. The goal of TDA, and of its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to gain topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortions could be caused intentionally (e.g. by morphing and steganography) or naturally, in abnormal human tissue/organ scan images, as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks and build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarized in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers.
We develop an innovative approach to designing persistent homology (PH) based algorithms for automatic detection of the types of image distortion described above. In particular, we develop the first PH detector of morphing attacks on passport face biometric images. We demonstrate significant accuracy for two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Each of these techniques yields several persistent barcodes summarising persistent topological features that help gain insight into complex hidden structures not accessible to other image analysis methods. We also demonstrate the success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images, and we argue through a pilot study that building PH records from digital images can differentiate malignant from benign breast tumours in digital mammographic images. The research presented in this thesis creates new opportunities to build real applications based on TDA and identifies research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to an existing exemplar algorithm, for the reconstruction of missing image regions.
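For the 0-dimensional Betti numbers, the persistent barcode of a Vietoris-Rips filtration over point landmarks can be computed with a union-find over edges sorted by length: every component is born at scale 0 and dies at the edge length that first merges it into another component. The sketch below is the generic construction, not the thesis's landmark-based pipeline:

```python
from itertools import combinations
from math import dist

def h0_barcode(points):
    """0-dimensional persistence barcode of a Vietoris-Rips filtration.
    Each point is born at scale 0; a component dies at the edge length
    that first merges it with another (Kruskal-style union-find).
    Returns (birth, death) pairs; one bar persists to infinity."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    bars = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            bars.append((0.0, w))          # one component dies at scale w
    bars.append((0.0, float("inf")))       # the final component never dies
    return bars
```

The finite death times are exactly the edge weights of a minimum spanning tree on the landmarks, which is why 0-dimensional persistence coincides with single-linkage clustering.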