9 research outputs found

    Survey on Reversible Data Hiding in Encrypted Images Using POB Histogram Method

    Get PDF
    This paper presents a survey on reversible data hiding in encrypted images. Data hiding is the process of embedding useful data into a cover medium, and invisibility of the embedded data is its primary requirement. Data hiding can be applied to audio, video, text, and images; this survey focuses on digital images and on an existing method, the Histogram Block Shift Based Method (HBSBM), or POB. Reversible data hiding in encrypted images is widely used nowadays because of its excellent property that the original cover image can be recovered without loss after the embedded data is extracted, which also protects the original content. Depending on the level and kind of application, one or more data hiding methods are used. Some data hiding techniques emphasize digital image security, some the robustness of the hiding process, while others focus mainly on the imperceptibility of the marked image; in some applications, the capacity of the information to be hidden is the main concern. The papers surveyed below aim to achieve two or more of these parameters, i.e., security, robustness, imperceptibility, and capacity, but some of these parameters trade off against each other, meaning one can be improved only at the cost of another. A data hiding technique that satisfies the maximum number of requirements, i.e., security, robustness, capacity, imperceptibility, etc., and can be applied across a large domain of applications is therefore desirable. This paper reviews related work on techniques for data hiding in digital images
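    The histogram-shifting idea underlying methods like HBSBM can be illustrated with a minimal sketch of classical histogram-shift embedding in the plain (non-encrypted) domain; this is not the surveyed POB/encrypted-image variant, and the toy image, payload, and the assumption that an empty histogram bin exists above the peak bin are all illustrative:

    ```python
    # Minimal sketch of classical histogram-shift reversible data hiding:
    # find the histogram peak bin, shift bins between the peak and the
    # nearest empty bin by one to free the bin next to the peak, then
    # embed bits by conditionally moving peak-valued pixels into it.
    import numpy as np

    def hs_embed(img, bits):
        """Embed a bit list into a uint8 image; returns (stego, peak, zero)."""
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(np.argmax(hist))                    # most populated gray level
        zero = peak + 1 + int(np.argmin(hist[peak + 1:]))  # empty bin above peak
        assert hist[zero] == 0 and hist[peak] >= len(bits), "toy-case assumptions"
        out = img.astype(np.int16)
        out[(out > peak) & (out < zero)] += 1          # shift to free bin peak+1
        flat = out.ravel()                             # view into `out`
        idx = np.flatnonzero(flat == peak)[:len(bits)]
        flat[idx] += np.asarray(bits, dtype=np.int16)  # bit 1 -> peak+1, bit 0 -> peak
        return out.astype(np.uint8), peak, zero

    def hs_recover(stego, peak, zero, n):
        """Extract n bits and restore the original image losslessly."""
        rec = stego.astype(np.int16)
        flat = rec.ravel()
        idx = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n]
        bits = (flat[idx] == peak + 1).astype(int).tolist()
        flat[flat == peak + 1] = peak                  # undo embedding
        flat[(flat > peak + 1) & (flat <= zero)] -= 1  # undo the histogram shift
        return bits, rec.astype(np.uint8)
    ```

    Reversibility here hinges on the shift being invertible: every pixel moved during embedding can be moved back, which is exactly the lossless-recovery property the surveyed methods guarantee (in the encrypted domain, with extra bookkeeping).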

    Spectrally stable ink variability in a multi-primary printer

    Get PDF
    It was shown previously that a multi-ink printer can reproduce spectral reflectances within a specified tolerance range using many distinct ink combinations. An algorithm was developed to systematically analyze a printer to determine the amount of multi-ink variability throughout its spectral gamut. The advantage of this algorithm is that any spectral difference metric can be used as the objective function. Based on the results of the analysis for one spectral difference metric, six-dimensional density map displays were constructed to illustrate the amount of spectral redundancy throughout the ink space. One CMYKGO ink-jet printer was analyzed using spectral reflectance factor RMS as the spectral difference metric and selecting 0.02 RMS as the tolerance limit. For these parameters, the degree of spectral matching freedom for the printer reduced to five inks because the chromatic inks were able to reproduce spectra within the 0.02 tolerance limit throughout the printer's gamut. Experiments were designed to exploit spectrally stable multi-ink variability within the analyzed printer. The first experiment used spectral redundancy to visually evaluate spectral difference metrics. Using the developed database of spectrally similar samples allows any spectral difference metric to be compared to a visual response. The second experiment demonstrated the impact of spectral redundancy on spectral color management. Typical color image processing techniques use profiles consisting of sparse multi-dimensional lookup tables that interpolate between adjacent nodes to prepare an image for rendering. It was shown that colorimetric error resulted when interpolating between lookup table nodes that were inconsistent in digital count space although spectrally similar. Finally, the analysis was used to enable spectral watermarking of images. To illustrate the significance of this watermarking technique, information was embedded into three images with varying levels of complexity. Prints were made verifying that information could be hidden while preserving the visual and spectral integrity of the original image
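    The matching criterion used throughout the analysis above can be sketched in a few lines: two sampled reflectance spectra count as a spectral match when their RMS difference is at most 0.02 (the wavelength sampling and the example spectra below are invented for illustration):

    ```python
    # Toy sketch of the spectral-match test: reflectance factor RMS
    # difference between two spectra, compared against the 0.02 tolerance.
    import numpy as np

    def rms_spectral_diff(r1, r2):
        """RMS difference between two sampled reflectance spectra."""
        r1, r2 = np.asarray(r1, dtype=float), np.asarray(r2, dtype=float)
        return float(np.sqrt(np.mean((r1 - r2) ** 2)))

    def is_spectral_match(r1, r2, tol=0.02):
        """True when the spectra are within the RMS tolerance."""
        return rms_spectral_diff(r1, r2) <= tol

    # Example: flat 50% reflectance sampled at 31 wavelengths (400-700 nm,
    # 10 nm steps -- an illustrative sampling, not the paper's).
    base = np.full(31, 0.5)
    ```

    Two ink combinations whose predicted spectra pass this test are interchangeable for spectral reproduction purposes, which is the redundancy the watermarking experiment exploits.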

    Symmetry-Adapted Machine Learning for Information Security

    Get PDF
    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that predicts future events by learning from past events, or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In this Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can handle the dynamic nature of security attacks through the extraction and analysis of data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image and color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis

    Image and Video Forensics

    Get PDF
    Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia content is generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated instant distribution and sharing of digital images on social platforms, generating a great volume of exchanged data. Moreover, the pervasiveness of powerful image editing tools has enabled the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with deep learning techniques. In response to these threats, the multimedia forensics community has devoted major research efforts to source identification and manipulation detection. Wherever images and videos serve as critical evidence (e.g., in forensic investigations, fake-news debunking, information warfare, and cyberattacks), forensic technologies that help determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book collects a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges to media authenticity

    Quality of Experience in Immersive Video Technologies

    Get PDF
    Over the last decades, several technological revolutions have impacted the television industry, such as the shifts from black & white to color and from standard to high definition. Nevertheless, considerable improvements can still be achieved to provide a better multimedia experience, for example with ultra-high definition, high dynamic range & wide color gamut, or 3D. These so-called immersive technologies aim at providing better, more realistic, and emotionally stronger experiences. To measure quality of experience (QoE), subjective evaluation is the ultimate means, since it relies on a pool of human subjects. However, reliable and meaningful results can only be obtained if experiments are properly designed and conducted following a strict methodology. In this thesis, we build a rigorous framework for subjective evaluation of new types of image and video content. We propose different procedures and analysis tools for measuring QoE in immersive technologies. As immersive technologies capture more information than conventional technologies, they can provide more detail, enhanced depth perception, and better color, contrast, and brightness. To measure the impact of immersive technologies on viewers' QoE, we apply the proposed framework to design experiments and analyze the collected subjects' ratings. We also analyze eye movements to study human visual attention during immersive content playback. Since immersive content carries more information than conventional content, efficient compression algorithms are needed for storage and transmission over existing infrastructures. To determine the bandwidth required for high-quality transmission of immersive content, we use the proposed framework to conduct meticulous evaluations of recent image and video codecs in the context of immersive technologies. Subjective evaluation is time consuming, expensive, and not always feasible. Consequently, researchers have developed objective metrics to automatically predict quality. To measure the performance of objective metrics in assessing immersive content quality, we perform several in-depth benchmarks of state-of-the-art and commonly used objective metrics, using ground-truth quality scores collected under our subjective evaluation framework. To improve QoE, we propose different systems, in particular for stereoscopic and autostereoscopic 3D displays. The proposed systems can help reduce the artifacts generated at the visualization stage, which affect picture quality, depth quality, and visual comfort. To demonstrate the effectiveness of these systems, we use the proposed framework to measure viewers' preference between them and standard 2D & 3D modes. In summary, this thesis tackles the problems of measuring, predicting, and improving QoE in immersive technologies. To address these problems, we build a rigorous framework and apply it through several in-depth investigations. We place essential concepts of multimedia QoE under this framework. These concepts are not only of fundamental nature, but have also shown their impact in very practical applications. In particular, the JPEG, MPEG, and VCEG standardization bodies have adopted these concepts to select technologies proposed for standardization and to validate the resulting standards in terms of compression efficiency

    Embedding digital watermarks in halftone screens

    No full text