
    A Study of Data Security on E-Governance using Steganographic Optimization Algorithms

    Steganography has been used extensively in numerous fields to maintain the privacy and integrity of messages transferred via the internet. The need to secure information has grown with the increase in e-governance usage. The wide adoption of e-governance services also opens the door to cybercriminals for fraudulent activities in cyberspace. Dealing with these cybercrimes requires optimized and advanced steganographic techniques. Various optimization techniques, such as particle swarm optimization and genetic algorithms combined with cryptography, can be applied to steganography to better protect information for e-governance services. In this study, a comprehensive review of steganographic algorithms using optimization techniques is presented, along with a new perspective on using these techniques to protect information for e-governance. Deep learning may be the area that can be used to automate the steganography process in combination with other methods.
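    The survey itself contains no code; as a minimal illustration of the kind of spatial-domain embedding these optimization techniques operate on, the sketch below hides a bit string in the least significant bits of a cover image. All names are illustrative, not taken from the paper.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a sequence of bits in the least significant bits of the cover pixels."""
    stego = cover.copy().ravel()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b  # clear the LSB, then set it to the message bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the first n_bits least significant bits back out of the stego image."""
    return [int(p) & 1 for p in stego.ravel()[:n_bits]]
```

    Optimization-based methods in the survey typically replace the fixed pixel order used here with a search (e.g. by a genetic algorithm) over which pixels to modify, so as to minimize perceptual distortion.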

    On the Sensor Pattern Noise Estimation in Image Forensics: A Systematic Empirical Evaluation

    Extracting a fingerprint of a digital camera has fertile applications in image forensics, such as source camera identification and image authentication. In the last decade, Photo Response Non-Uniformity (PRNU) has been well established as a reliable unique fingerprint of digital imaging devices. The PRNU noise appears in every image as a very weak signal, and its reliable estimation is crucial for the success rate of the forensic application. In this paper, we present a novel methodical evaluation of 21 state-of-the-art PRNU estimation/enhancement techniques that have been proposed in the literature in various frameworks. The techniques are classified and systematically compared based on their role/stage in the PRNU estimation procedure, manifesting their intrinsic impacts. The performance of each technique is extensively demonstrated over a large-scale experiment to conclude this case-sensitive study. The experiments have been conducted on our created database and a public image database, the 'Dresden image database'.
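    The baseline maximum-likelihood PRNU estimator that the surveyed techniques build on averages noise residuals over many images: K = Σ(WᵢIᵢ)/Σ(Iᵢ²), where Wᵢ = Iᵢ − F(Iᵢ) is the residual left by a denoising filter F. A minimal sketch follows, with a crude mean filter standing in for the wavelet denoiser used in practice; function names are illustrative.

```python
import numpy as np

def denoise(img, k=3):
    """Crude k-by-k mean filter, a stand-in for the usual wavelet denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_prnu(images):
    """Maximum-likelihood PRNU estimate: K = sum(W_i * I_i) / sum(I_i ** 2)."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img in images:
        img = img.astype(float)
        w = img - denoise(img)  # noise residual carrying the weak PRNU signal
        num += w * img
        den += img * img
    return num / (den + 1e-12)

def ncc(a, b):
    """Normalized cross-correlation, the usual fingerprint-matching statistic."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

    Source camera identification then reduces to correlating a query image's residual against each candidate fingerprint and thresholding the correlation.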

    Assessment of perceptual distortion boundary through applying reversible watermarking to brain MR images

    The digital medical workflow faces many circumstances in which images can be manipulated during viewing, extraction and exchange. Reversible and imperceptible watermarking approaches have the potential to enhance trust within the medical imaging pipeline by ensuring the authenticity and integrity of the images, so that changes can be detected and tracked. This study concentrates on the imperceptibility issue. Unlike reversibility, for which an objective assessment can be easily made, imperceptibility is a factor of human cognition that needs to be evaluated within the human context. By defining a perceptual boundary for detecting the modification, this study enables the formation of objective guidelines for the method of data encoding and the level of image/pixel modification that translates to a specific watermark magnitude. This study implements a relative Visual Grading Analysis (VGA) evaluation of 117 brain MR images (8 original and 109 watermarked), modified by varying techniques and magnitudes of image/pixel modification, to determine where this perceptual boundary exists and to relate the point at which change becomes noticeable to objective measures of image fidelity. The outcomes of the visual assessment were linked to the images' Peak Signal to Noise Ratio (PSNR) values, thereby identifying the visual degradation threshold. The results suggest that, for watermarking applications, if a watermark is applied to the 512x512 pixel (16 bpp grayscale) images used in the study, a subsequent assessment of PSNR=82dB or greater would mean that there would be no reason to suspect that the watermark would be visually detectable. Keywords: Medical imaging; DICOM; Reversible Watermarking; Imperceptibility; Image Quality; Visual Grading Analysis
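    The PSNR figure quoted above follows the standard definition; for the 16 bpp images in the study the peak value is 2^16 − 1. A quick sketch of how such a threshold check might be computed (illustrative, not the authors' code):

```python
import numpy as np

def psnr(original, modified, max_val=2**16 - 1):
    """Peak Signal-to-Noise Ratio in dB; max_val matches the image bit depth."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

    For scale: shifting every pixel of a 16 bpp image by 1 gives MSE = 1 and PSNR ≈ 96.3 dB, comfortably above the 82 dB boundary the study suggests.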

    Oblivious data hiding: a practical approach

    This dissertation presents an in-depth study of oblivious data hiding with the emphasis on quantization based schemes. Three main issues are specifically addressed: 1. Theoretical and practical aspects of embedder-detector design. 2. Performance evaluation, and analysis of performance vs. complexity tradeoffs. 3. Some application specific implementations. A communications framework based on channel adaptive encoding and channel independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding rate performance merits, assuming AWGN attack scenarios and using the mean squared error distortion measure. The performance-complexity tradeoffs available for large and small embedding signal size (availability of high bandwidth and limitation of low bandwidth) cases are examined and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than the hiding rate or payload. In particular, cropping-resampling and lossy compression types of noninvertible attacks are considered in this dissertation work.
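    The quantization-based embedding analyzed in such work can be illustrated with scalar Quantization Index Modulation (QIM): each message bit selects one of two interleaved quantizer lattices, and the detector decodes obliviously (without the original host) by picking the nearer lattice. A minimal sketch under these assumptions, with illustrative parameter names:

```python
import numpy as np

def qim_embed(x, bit, delta=4.0):
    """Quantize the host sample onto the lattice selected by the message bit."""
    offset = bit * delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_detect(y, delta=4.0):
    """Decode by choosing whichever lattice lies closer to the received sample."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1
```

    Decoding stays correct as long as the attack noise magnitude is below delta/4, which is the quantization-based counterpart of the AWGN robustness analysis described above.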

    From Zero to Hero: Detecting Leaked Data through Synthetic Data Injection and Model Querying

    Safeguarding the Intellectual Property (IP) of data has become critically important as machine learning applications continue to proliferate, and their success heavily relies on the quality of training data. While various mechanisms exist to secure data during storage, transmission, and consumption, few studies address detecting whether data have already been leaked for model training without authorization. This issue is particularly challenging due to the absence of information about, and control over, the training process conducted by potential attackers. In this paper, we concentrate on the domain of tabular data and introduce a novel methodology, Local Distribution Shifting Synthesis (LDSS), to detect leaked data that are used to train classification models. The core concept behind LDSS involves injecting a small volume of synthetic data, characterized by local shifts in class distribution, into the owner's dataset. This enables the effective identification of models trained on leaked data through model querying alone, as the synthetic data injection results in a pronounced disparity between the predictions of models trained on leaked and on modified datasets. LDSS is model-oblivious and hence compatible with a diverse range of classification models, such as Naive Bayes, Decision Tree, and Random Forest. We have conducted extensive experiments on seven types of classification models across five real-world datasets. The comprehensive results affirm the reliability, robustness, fidelity, security, and efficiency of LDSS. Comment: 13 pages, 11 figures, and 4 tables
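    LDSS itself is specified in the paper; the following is only a toy reconstruction of the core idea under simplifying assumptions. The owner injects a few synthetic points whose labels conflict with their local neighborhood; a model trained on the leaked (injected) dataset reproduces those labels when queried at the injection points, while a model trained on independent clean data does not. A 1-nearest-neighbor classifier stands in for whatever model the attacker trains; all data and names here are invented for illustration.

```python
import numpy as np

def nn_predict(train_x, train_y, query):
    """1-NN classifier standing in for the attacker's unknown model."""
    dists = np.linalg.norm(train_x - query, axis=1)
    return int(train_y[np.argmin(dists)])

rng = np.random.default_rng(3)
# Owner's dataset: class 0 clustered near the origin, class 1 near (5, 5).
x0 = rng.normal(0, 0.5, (50, 2))
x1 = rng.normal(5, 0.5, (50, 2))
clean_x = np.vstack([x0, x1])
clean_y = np.array([0] * 50 + [1] * 50)

# Inject synthetic points labeled 1 inside territory that is locally class 0.
canaries = rng.normal(0, 0.1, (5, 2)) + np.array([1.5, -1.5])
released_x = np.vstack([clean_x, canaries])
released_y = np.concatenate([clean_y, np.ones(5, dtype=int)])

# Querying at the injection points separates the two training provenances.
leaked_votes = sum(nn_predict(released_x, released_y, c) for c in canaries)
clean_votes = sum(nn_predict(clean_x, clean_y, c) for c in canaries)
```

    A model trained on the released data answers 1 at every injection point, while a model trained on clean data answers 0, giving the pronounced prediction disparity the abstract describes.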

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of predicting future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In this Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can support effective methods of handling the dynamic nature of security attacks by extracting and analyzing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image watermarking, color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.