
    A review and open issues of multifarious image steganography techniques in spatial domain

    Information hiding has become a valuable technique and is attracting increasing attention due to the rapid growth of internet use; it is applied to send secret information using a variety of techniques. Steganography is one of the most important techniques in information hiding: it is the science of concealing secret information within a carrier object to provide secure communication over the internet, so that no one except the sender and receiver can recognize or detect it. Many different carrier formats can be used, such as image, video, protocol, or audio files. The digital image is the most popular carrier owing to its ubiquity on the internet. Many techniques are available for image steganography, each with its own strengths and weaknesses. In this study, we conducted a review of image steganography in the spatial domain by reviewing, collecting, synthesizing and analyzing the challenges identified in studies related to this area published from 2014 to 2017. The aim of this review is to provide an overview of image steganography; the surveyed studies are compared according to pixel selection, payload capacity and embedding algorithm, in order to open important research issues for future work and to move toward a robust method.
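    To make the spatial-domain setting concrete, below is a minimal sketch of least-significant-bit (LSB) substitution, the baseline technique this class of reviews covers. The 8-bit grayscale cover, sequential pixel order, and function names are illustrative assumptions, not a scheme from any of the surveyed papers.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the LSBs of an 8-bit cover image, pixel by pixel."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()  # flatten() copies, so the cover is untouched
    if bits.size > flat.size:
        raise ValueError("message too large for this cover")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite each LSB
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover `n_bytes` hidden by embed_lsb."""
    return np.packbits(stego.flatten()[:n_bytes * 8] & 1).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```

    Changing only the least significant bit alters each pixel value by at most 1, which is why LSB methods score well on imperceptibility yet remain fragile against statistical steganalysis; this is exactly the trade-off the review's comparison criteria (pixel selection, payload capacity, embedding algorithm) capture.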

    An improved image steganography scheme based on distinction grade value and secret message encryption

    Steganography is an emerging and greatly demanded technique for secure information communication over the internet using a secret cover object. It can be used in a wide range of applications, such as the safe circulation of secret data in intelligence, industry, health care, habitat monitoring, online voting, mobile banking and the military. Commonly, digital images are used as covers for steganography owing to the redundancy in their representation, which keeps the hidden data concealed from intruders, hackers, adversaries and unauthorized users. Still, any steganography system launched over the internet can be cracked once the stego cover is recognized. Thus, undetectability, which encompasses data imperceptibility, concealment and security, is the most significant trait of any steganography system. Presently, the design and development of an effective image steganography system faces several challenges, including low capacity, poor robustness and poor imperceptibility. To surmount these limitations, it is important to improve the capacity and security of the steganography system while maintaining a high peak signal-to-noise ratio (PSNR). Based on these factors, this study aimed to design and develop a distinction grade value (DGV) method to effectively embed secret data into a cover image and achieve a robust steganography scheme. The design and implementation of the proposed scheme involved three phases. First, a new encryption method called shuffle the segments of secret message (SSSM) was combined with an enhanced Huffman compression algorithm to improve the text security and payload capacity of the scheme. Second, a Fibonacci-based image transformation (decomposition) method was used to extend each pixel's representation from 8 to 12 bits, improving the robustness of the scheme. Third, an improved embedding method integrating random block/pixel selection with the DGV and implicit secret key generation was used to enhance the imperceptibility of the scheme. The performance of the proposed scheme was assessed experimentally in terms of imperceptibility, security, robustness and capacity. The standard USC-SIPI image dataset was used as the benchmark for the performance evaluation and for comparison of the proposed scheme with previous works. The resistance of the proposed scheme was tested against statistical, chi-square (χ²), histogram and non-structural steganalysis attacks. The obtained PSNR values revealed that the proposed DGV scheme achieves higher imperceptibility and security, together with a higher capacity, than previous works. In short, the proposed steganography scheme outperformed the commercially available data hiding schemes, thereby resolving the existing issues.
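    The Fibonacci bit-plane extension in the second phase can be illustrated with the standard Zeckendorf decomposition, under which every 8-bit value has a unique 12-digit representation with no two adjacent 1s. The sketch below is a hedged illustration of that general construction, not the thesis's exact implementation.

```python
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # 12 planes cover 0..255

def to_fibonacci(value: int) -> list[int]:
    """Greedy Zeckendorf digits, MSB first; no two adjacent digits are 1."""
    digits = [0] * 12
    for i in range(11, -1, -1):        # take the largest Fibonacci that fits
        if FIB[i] <= value:
            digits[i] = 1
            value -= FIB[i]
    return digits[::-1]

def from_fibonacci(digits: list[int]) -> int:
    return sum(f for f, d in zip(FIB, reversed(digits)) if d)

# Round-trip check: every 8-bit pixel value maps to 12 digits and back.
assert all(from_fibonacci(to_fibonacci(v)) == v for v in range(256))
```

    Higher Fibonacci planes carry smaller weights than the corresponding binary planes (plane 5 weighs 13 rather than 32, for example), so embedding beyond the lowest plane distorts the pixel less; that is the usual motivation for this decomposition in steganography.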

    Mobile app with steganography functionalities

    [Abstract]: Steganography is the practice of hiding information within other data, such as images, audio, or video. In this research, we apply this useful technique to create a mobile application that lets users conceal their own secret data inside other media formats, send the encoded data to other users, and even perform analysis on images that may have been subjected to a steganographic attack. For image steganography, lossless compression formats employ Least Significant Bit (LSB) encoding within the Red Green Blue (RGB) pixel values. Conversely, lossy compression formats, such as JPEG, conceal data in the frequency domain by altering the files' quantized matrices. Video steganography follows two similar methods. In lossless video formats, the LSB approach is applied to the RGB pixel values of individual frames. Meanwhile, in lossy High Efficiency Video Coding (HEVC) formats, a displaced-bit modification technique is applied to the YUV components. Bachelor's thesis (UDC.FIC), Computer Engineering, academic year 2022/2023.
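    For the lossy-format branch, the sketch below illustrates the general JSteg-style idea of hiding bits in quantized JPEG DCT coefficients. The JPEG container handling (entropy decoding to reach the coefficients) is omitted, `coeffs` is assumed to be an already-quantized coefficient block, and skipping the values -1, 0 and 1 is a common convention in this family of methods, not necessarily this app's exact scheme.

```python
import numpy as np

def embed_in_dct(coeffs: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write `bits` into the LSBs of usable quantized DCT coefficients."""
    out = coeffs.flatten().copy()
    it = iter(bits)
    for i, c in enumerate(out):
        if c in (-1, 0, 1):      # skip values whose change is easily detected
            continue
        b = next(it, None)
        if b is None:
            break                # message fully embedded
        mag = (abs(c) & ~1) | b  # set the LSB of the magnitude
        out[i] = mag if c > 0 else -mag
    return out.reshape(coeffs.shape)

block = np.array([[-26, -3, 6, 2], [4, 1, -1, 0],
                  [0, 2, 0, 0], [0, 0, 0, 0]])   # toy quantized block
print(embed_in_dct(block, [1, 0, 1, 1]))
```

    Extraction simply reads back `abs(c) & 1` from the same non-skipped positions, in the same scan order.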

    Data Hiding and Its Applications

    Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods, such as digital watermarking and steganography, are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.

    Introductory Computer Forensics

    INTERPOL (International Police) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images are used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where an image came from; and what has been done to the image since its creation, by whom, when and how. This thesis presents two different sets of techniques to address the problem, via intrinsic and extrinsic fingerprints. The first part of this thesis introduces a new methodology based on intrinsic fingerprints for the forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for such forensic applications as building a robust device identifier and identifying potential technology infringement or licensing violations. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. The absence of in-device fingerprints, the presence of new post-device fingerprints, or any inconsistencies in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture. While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for the study of information forensics and helps design optimal input patterns to improve parameter estimation accuracy via semi non-intrusive forensics. The final part of the thesis investigates a complementary extrinsic approach, via image hashing, that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations, and we present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
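    The manipulation-filter idea lends itself to a compact illustration: given a reference device-output signal and an observed post-processed version, the linear time-invariant (LTI) filter coefficients can be recovered by least squares. The 1-D formulation, filter length and variable names below are illustrative assumptions; the thesis works with 2-D image data.

```python
import numpy as np

def estimate_fir(x: np.ndarray, y: np.ndarray, taps: int) -> np.ndarray:
    """Least-squares estimate of h such that y ≈ h * x (convolution)."""
    # Column k holds x delayed by k samples; np.roll wraps, so drop early rows.
    A = np.column_stack([np.roll(x, k) for k in range(taps)])
    valid = slice(taps - 1, None)
    h, *_ = np.linalg.lstsq(A[valid], y[valid], rcond=None)
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)            # reference (direct device output)
h_true = np.array([0.5, 0.3, 0.2])       # unknown manipulation filter
y = np.convolve(x, h_true)[:len(x)]      # observed, post-processed signal
print(estimate_fir(x, y, taps=3))        # ≈ [0.5, 0.3, 0.2]
```

    Large residuals from such a fit, or coefficient estimates that differ across image regions, are exactly the inconsistencies the framework treats as evidence of tampering.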

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that limits traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
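    As a toy illustration of the escape-from-local-optima principle described above, here is a minimal simulated annealing loop on a 1-D multimodal function. It occasionally accepts uphill moves that a greedy low-level heuristic would reject; the objective, neighbourhood and cooling schedule are all illustrative choices, not material from the book.

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999):
    x, best, t = x0, x0, t0
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)       # neighbour move
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                               # accept, possibly uphill
        if f(x) < f(best):
            best = x
        t *= cooling                 # cooling gradually rules out uphill moves
    return best

f = lambda x: x * x + 10 * math.sin(3 * x)         # many local minima
print(simulated_annealing(f, x0=5.0))              # near the global minimum
```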

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats, manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitecture-level attacks. These threats, if realized, are expected to push hardware to abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks; it is implemented on a DDR3 DRAM after showing the DRAM's vulnerability to obscured latency-extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems-on-chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
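    The second framework's core loop can be illustrated with a simple run-time detector: characterize a hardware element's benign latencies offline, then flag observations that are unlikely under that characterization as possible latency-extension or denial-of-service attacks. The robust-statistics rule, the threshold k and the latency values below are illustrative stand-ins for the thesis's machine-learning models.

```python
import numpy as np

class LatencyMonitor:
    """Flag run-time latencies far above the profiled benign distribution."""
    def __init__(self, benign_latencies: np.ndarray, k: float = 5.0):
        # Median and MAD are robust to occasional benign outliers.
        self.median = np.median(benign_latencies)
        self.mad = np.median(np.abs(benign_latencies - self.median))
        self.k = k

    def is_suspicious(self, latency_ns: float) -> bool:
        return latency_ns > self.median + self.k * self.mad

rng = np.random.default_rng(1)
benign = rng.normal(50.0, 3.0, 10_000)   # hypothetical profiled latencies (ns)
monitor = LatencyMonitor(benign)
print(monitor.is_suspicious(52.0))       # False: within normal variation
print(monitor.is_suspicious(120.0))      # True: likely latency extension
```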

    Proceedings of ICMMB2014


    Multi-objective Robust Machine Learning For Critical Systems With Scarce Data

    With the heavy reliance on information technologies in every aspect of our daily lives, Machine Learning (ML) models have become a cornerstone of these technologies' rapid growth and pervasiveness, in particular in the most critical and fundamental technologies that handle our economic systems, transportation, health, and even privacy. However, while these systems are becoming more effective, their complexity inherently decreases our ability to understand, test, and assess their dependability and trustworthiness. This problem becomes even more challenging under a multi-objective framework: when the ML model is required to learn multiple tasks together, behave under constrained inputs, or fulfill contradictory concomitant objectives. Our dissertation focuses on robust ML under limited training data, i.e., use cases where it is costly to collect additional training data and/or label it. We study this topic through the prism of three real use cases: fraud detection, pandemic forecasting, and chest X-ray diagnosis. Each use case covers one of the challenges of robust ML with limited data: (1) robustness to imperceptible perturbations, or (2) robustness to confounding variables. We provide a study of the challenges for each case and propose novel techniques to achieve robust learning. As the first contribution of this dissertation, we collaborate with BGL BNP Paribas. We demonstrate that their overdraft and fraud detection systems are prima facie robust to adversarial attacks because of the complexity of their feature engineering and domain constraints. However, we show that gray-box attacks that take domain knowledge into account can easily break their defense. We propose CoEva2 adversarial fine-tuning, a new defense mechanism based on multi-objective evolutionary algorithms, to augment the training data and mitigate the system's vulnerabilities. Next, we investigate how domain knowledge can protect against adversarial attacks through multi-task learning. We show that adding domain constraints in the form of additional tasks can significantly improve the robustness of models to adversarial attacks, particularly for the robot navigation use case. We propose a new set of adaptive attacks and demonstrate that adversarial training combined with such attacks can improve robustness. While the raw data available in the BGL and robot navigation cases is vast, it is heavily cleaned, feature-engineered, and annotated by domain experts (which is expensive), so the final training data is scarce. In contrast, raw data is scarce when dealing with an outbreak, and designing robust ML systems to predict, forecast, and recommend mitigation policies is challenging, in particular for small countries like Luxembourg. Contrary to common techniques that forecast new cases from previous time-series data, we propose a novel surrogate-based optimization as an integrated loop: it combines a neural network prediction of the infection rate based on mobility attributes with a model-based simulation that predicts the cases and deaths. Our approach has been used by the Luxembourg government's task force and was recognized with a best paper award at KDD 2020. Our following work focuses on the challenges that confounding factors pose to the robustness and generalization of chest X-ray (CXR) classification. We first investigate the robustness and generalization of multi-task models, then demonstrate that multi-task learning leveraging the confounding variables can significantly improve the generalization and robustness of CXR classification models. Our results suggest that task augmentation with additional knowledge (like extraneous variables) outperforms state-of-the-art data augmentation techniques in improving test and robust performance. Overall, this dissertation provides insights into the importance of domain knowledge for the robustness and generalization of models. It shows that instead of building data-hungry ML models, particularly for critical systems, a better understanding of the system as a whole and its domain constraints yields improved robustness and generalization performance. This dissertation also proposes theorems, algorithms, and frameworks to effectively assess and improve the robustness of ML systems for real-world cases and applications.
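    The multi-task recipe that recurs throughout the dissertation, i.e. a shared encoder trained jointly on the main task and on an auxiliary head that predicts a domain variable such as a confounder, can be sketched as follows. The layer sizes, the 0.3 auxiliary weight and all names are illustrative assumptions, not the dissertation's actual architecture.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=2, n_aux=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, n_classes)  # e.g. diagnosis
        self.aux_head = nn.Linear(hidden, n_aux)       # e.g. confounder (site/device)

    def forward(self, x):
        z = self.encoder(x)
        return self.main_head(z), self.aux_head(z)

model = MultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(16, 32)               # dummy batch of 16 feature vectors
y_main = torch.randint(0, 2, (16,))   # main-task labels
y_aux = torch.randint(0, 4, (16,))    # auxiliary (confounder) labels

logits_main, logits_aux = model(x)
loss = ce(logits_main, y_main) + 0.3 * ce(logits_aux, y_aux)  # joint objective
opt.zero_grad(); loss.backward(); opt.step()
```

    Sharing the encoder forces its representation to account for the confounder explicitly, which is the mechanism behind the robustness and generalization gains the abstract reports.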