Digital image watermarking: its formal model, fundamental properties and possible attacks
While formal definitions and security proofs are well established in some fields like cryptography and steganography, they are not as evident in digital watermarking research. A systematic development of watermarking schemes is desirable, but at present their development is usually informal, ad hoc, and omits the complete realization of application scenarios. This practice not only hinders the choice and use of a suitable scheme for a watermarking application, but also leads to debate about the state of the art for different watermarking applications. With a view to the systematic development of watermarking schemes, we present a formal generic model for digital image watermarking. Considering possible inputs, outputs, and component functions, the initial construction of a basic watermarking model is developed further to incorporate the use of keys. On the basis of our proposed model, fundamental watermarking properties are defined and their importance exemplified for different image applications. We also define a set of possible attacks using our model, showing different winning scenarios depending on the adversary's capabilities. It is envisaged that, with proper consideration of watermarking properties and adversary actions in different image applications, use of the proposed model would allow a unified treatment of all practically meaningful variants of watermarking schemes.
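The model's keyed embedding and detection functions can be illustrated with a minimal sketch. This is not the paper's formal construction, only an assumed toy instance: a secret key seeds a PRNG that selects pixel positions, and the watermark bits are written into the least significant bits at those positions. The function names `embed` and `extract` are hypothetical.

```python
import random

def embed(image, watermark_bits, key):
    """Embed watermark bits into the LSBs of key-selected pixel positions.

    `image` is a flat list of 8-bit pixel values. The positions are chosen
    by a PRNG seeded with the secret key, so embedding and detection share
    the key, mirroring the keyed model described above."""
    marked = list(image)
    rng = random.Random(key)
    positions = rng.sample(range(len(marked)), len(watermark_bits))
    for pos, bit in zip(positions, watermark_bits):
        marked[pos] = (marked[pos] & ~1) | bit
    return marked

def extract(marked, n_bits, key):
    """Recover the watermark by regenerating the key-derived positions."""
    rng = random.Random(key)
    positions = rng.sample(range(len(marked)), n_bits)
    return [marked[pos] & 1 for pos in positions]
```

A detector without the key cannot reliably locate the marked positions, which is the kind of adversary asymmetry the model's attack scenarios formalize.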
Embedding Authentication and Distortion Concealment in Images – A Noisy Channel Perspective
In multimedia communication, compression of data is essential to improve the transmission rate and minimize storage space. At the same time, authentication of the transmitted data is equally important. The drawback of compression is that the compressed data are vulnerable to channel noise. In this paper, error concealment methodologies with error detection and concealment capabilities are investigated for integration with image authentication in JPEG2000. The image authentication includes digital signature extraction and its diffusion as a watermark. To tackle noise, the error concealment techniques are modified to include edge information of the original image. This edge image is transmitted along with the JPEG2000 compressed image to determine corrupted coefficients and regions. Simulation results are reported for test images at different bit error rates to judge confidence in noise reduction within the received images.
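The edge image transmitted as side information can be sketched as follows. The gradient operator and threshold here are illustrative assumptions, not the paper's exact construction:

```python
def edge_map(image, threshold=16):
    """Binary edge map from horizontal/vertical gradient magnitude.

    `image` is a list of rows of grey-level values. The resulting map is
    the kind of edge side-information the scheme sends alongside the
    JPEG2000 stream; the simple central-difference gradient and the
    threshold value are assumptions for illustration."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            if abs(gx) + abs(gy) >= threshold:
                edges[y][x] = 1
    return edges
```

At the receiver, regions whose decoded content disagrees with the transmitted edge map can be flagged as corrupted and passed to the concealment stage.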
Secure Healthcare Applications Data Storage in Cloud Using Signal Scrambling Method
A body sensor network that consists of wearable and/or implantable biosensors has been an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With the extensive development of healthcare applications, security and privacy issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, where a piece of tiny data is used to scramble healthcare records. The tiny data is kept locally, whereas the scrambled records, along with security protection, are sent to cloud storage. The tiny data can be derived from a random number generator or even from a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance have been investigated through theoretical and experimental analysis to demonstrate the efficiency and effectiveness of the proposed method. The proposed method is applicable to all kinds of data that require extra security protection within complex networks.
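The tiny-data idea can be sketched as a keyed permutation: the locally kept tiny data seeds a shuffle of the record's samples, and only the scrambled sequence goes to the cloud. This is a minimal sketch under that assumption, not the paper's actual scrambling algorithm, and the names `scramble`/`unscramble` are hypothetical.

```python
import random

def scramble(record, tiny_key):
    """Permute the samples of a health record with a permutation derived
    from the locally kept tiny data (modeled here as an integer seed)."""
    order = list(range(len(record)))
    random.Random(tiny_key).shuffle(order)
    return [record[i] for i in order]

def unscramble(scrambled, tiny_key):
    """Regenerate the same permutation from the tiny data and invert it."""
    order = list(range(len(scrambled)))
    random.Random(tiny_key).shuffle(order)
    restored = [0] * len(scrambled)
    for out_pos, src in enumerate(order):
        restored[src] = scrambled[out_pos]
    return restored
```

Because the permutation is fully determined by the tiny data, the cloud copy alone reveals only a shuffled signal, while the rightful owner recovers the record exactly.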
Thermalization after holographic bilocal quench
We study thermalization in the holographic (1+1)-dimensional CFT after the simultaneous generation of two high-energy excitations at antipodal points on the circle. The holographic picture of such a quantum quench is the creation of a BTZ black hole from a collision of two massless particles. We perform a holographic computation of the entanglement entropy and mutual information in the boundary theory and analyze their evolution with time. We show that the equilibration of entanglement in the regions that contained one of the initial excitations is broadly similar to that in other holographic quench models, but with some important distinctions. We observe that entanglement propagates along a sharp effective light cone from the points of the initial excitations on the boundary. The characteristics of entanglement propagation in global quench models, such as the entanglement velocity and the light cone velocity, also have a meaning in the bilocal quench scenario. We also observe the loss of memory about the initial state during the equilibration process. We find that the memory loss affects the time behavior of the entanglement similarly to the global quench case, and that it is related to the universal linear growth of entanglement, which comes from the interior of the forming black hole. We also analyze general two-point correlation functions in the framework of the geodesic approximation, focusing on the late time behavior.
Comment: 75 pages, 41 figures, v2: typos corrected, references and minor comments added, v3: published version
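The two tools named in the abstract are standard holographic relations: entanglement entropy is computed via the Ryu–Takayanagi prescription, and two-point functions via the geodesic approximation, valid for operators of large conformal dimension. These are textbook formulas, not results specific to this paper:

```latex
% Ryu-Takayanagi entanglement entropy: gamma_A is the bulk minimal
% surface anchored on the boundary region A.
S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N},
\qquad
% Geodesic approximation: L_reg is the regularized length of the bulk
% geodesic joining the boundary insertion points x and y.
\langle \mathcal{O}(x)\,\mathcal{O}(y)\rangle
  \simeq e^{-\Delta\, L_{\mathrm{reg}}(x,y)},
\quad \Delta \gg 1 .
```

In the bilocal quench geometry both quantities are evaluated in the time-dependent background of the two colliding particles, which is what produces the light-cone spreading and the linear entanglement growth described above.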
HardSATGEN: Understanding the Difficulty of Hard SAT Formula Generation and A Strong Structure-Hardness-Aware Baseline
Industrial SAT formula generation is a critical yet challenging task. Existing SAT generation approaches can hardly capture the global structural properties and maintain plausible computational hardness simultaneously. We first present an in-depth analysis of the limitations of previous learning methods in reproducing the computational hardness of original instances, which may stem from the inherent homogeneity of their adopted split-merge procedure. Building on the observations that industrial formulae exhibit a clear community structure and that oversplit substructures lead to difficulty in the semantic formation of logical structures, we propose HardSATGEN, which introduces a fine-grained control mechanism into the neural split-merge paradigm for SAT formula generation to better recover the structural and computational properties of industrial benchmarks. Experiments, including evaluations on a private and practical corporate testbed, show the superiority of HardSATGEN as the only method that successfully augments formulae while maintaining similar computational hardness and capturing the global structural properties simultaneously. Compared to the best previous methods, the average performance gains reach 38.5% in structural statistics, 88.4% in computational metrics, and over 140.7% in the effectiveness of guiding solver tuning with our generated instances. Source code is available at http://github.com/Thinklab-SJTU/HardSATGEN
Comment: Published at SIGKDD 2023, see http://dl.acm.org/doi/10.1145/3580305.359983
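The community structure the abstract attributes to industrial formulae can be made concrete with a toy generator: variables are partitioned into communities, and each clause draws most of its literals from its home community, with only a small cross-community fraction. This is an assumed illustration of the structural property, not the HardSATGEN neural split-merge model; all names and parameters here are hypothetical.

```python
import random

def community_cnf(n_communities=4, vars_per_community=10,
                  clauses_per_community=30, p_cross=0.05, seed=0):
    """Generate a random 3-CNF (list of clauses, each a list of signed
    variable indices, DIMACS-style) with planted community structure:
    only a fraction `p_cross` of literals cross community boundaries."""
    rng = random.Random(seed)
    n_vars = n_communities * vars_per_community
    all_vars = list(range(1, n_vars + 1))
    clauses = []
    for c in range(n_communities):
        home = all_vars[c * vars_per_community:(c + 1) * vars_per_community]
        for _ in range(clauses_per_community):
            clause = []
            for _ in range(3):
                pool = all_vars if rng.random() < p_cross else home
                v = rng.choice(pool)
                # Negate each literal with probability 1/2.
                clause.append(v if rng.random() < 0.5 else -v)
            clauses.append(clause)
    return clauses
```

Uniform random 3-CNF generators lack this block-diagonal variable-clause incidence pattern, which is one reason their instances fail to reproduce industrial solver behavior.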