Relaxing the Gaussian AVC
The arbitrarily varying channel (AVC) is a conservative way of modeling an
unknown interference, and the corresponding capacity results are pessimistic.
We reconsider the Gaussian AVC by relaxing the classical model and thereby
weakening the adversarial nature of the interference. We examine three
different relaxations. First, we show how a very small amount of common
randomness between transmitter and receiver is sufficient to achieve the rates
of fully randomized codes. Second, akin to the dirty paper coding problem, we
study the impact of an additional interference known to the transmitter. We
provide partial capacity results that differ significantly from the standard
AVC. Third, we revisit a Gaussian MIMO AVC in which the interference is
arbitrary but of limited dimension.
Comment: Submitted to the IEEE Transactions on Information Theory
A Practical Dirty Paper Coding Applicable for Broadcast Channel
In this paper, we present a practical dirty paper coding scheme using trellis
coded modulation for the dirty paper channel Y = X + S + Z, where Z is white
Gaussian noise with power N, X is the transmit signal with average power P, and
S is the Gaussian interference with power Q that is non-causally known at the
transmitter. We ensure that the
dirt in our scheme remains distinguishable to the receiver, and thus our
designed scheme is applicable to the broadcast channel. Following Costa's idea,
we identify the criterion that the transmit signal must be as orthogonal to the
dirt as possible. Finite-constellation codes satisfying this criterion are
constructed via trellis coded modulation, using a Viterbi algorithm at the
encoder; simulation results with codes constructed from QAM signal sets
illustrate our results.
Comment: 10 pages
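Costa's construction underlying schemes like this one can be illustrated with a scalar modulo-lattice (Tomlinson-Harashima style) precoder. The sketch below is not the paper's TCM scheme; the BPSK mapping, the powers, and the modulo interval are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper).
n = 100_000
P, Q, N0 = 1.0, 10.0, 0.1            # signal, dirt, and noise powers
delta = np.sqrt(12 * P)              # modulo interval: U(-d/2, d/2) has power d^2/12

def mod_centered(a, d):
    """Reduce a into [-d/2, d/2): the scalar modulo-lattice operation."""
    return (a + d / 2) % d - d / 2

v = rng.choice([-1.0, 1.0], size=n)        # BPSK message points
s = rng.normal(0, np.sqrt(Q), size=n)      # dirt, known only at the transmitter
z = rng.normal(0, np.sqrt(N0), size=n)     # channel noise, unknown to both ends
alpha = P / (P + N0)                       # Costa's MMSE scaling factor

x = mod_centered(v - alpha * s, delta)     # pre-subtract scaled dirt, fold back
y = x + s + z                              # channel adds dirt and noise
r = mod_centered(alpha * y, delta)         # the dirt folds away modulo delta
ber = np.mean(np.sign(r) != v)             # stays small despite Q >> P
```

Without precoding, interference of power Q = 10 would overwhelm the unit-power signal; with the modulo pre-subtraction, the residual error rate stays near the dirt-free level.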
Trellis-coded quantization for public-key steganography
This paper deals with public-key steganography in the presence of a passive
warden. The aim is to hide secret messages within cover-documents without
making the warden suspicious, and without any preliminary secret-key sharing.
Although a practical attempt has already been made to solve this problem, it
suffers from poor flexibility (since the embedding and decoding steps depend
heavily on cover-signal statistics) and from limited capacity compared to
recent data hiding techniques. Using the same framework, this paper explores
the use of trellis-coded quantization techniques (TCQ and turbo TCQ) to design
a more efficient public-key scheme. Experiments on audio signals show great
improvements under Cachin's security criterion.
Comment: 4 pages, 5 figures
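The quantization-based embedding that TCQ refines can be illustrated with plain scalar quantization index modulation (QIM), a deliberate simplification of the trellis-coded approach above; the step size and noise level below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
step = 1.0   # quantization step (assumed; sets the robustness/distortion trade-off)

def qim_embed(cover, bit, step):
    """Quantize the cover sample onto the coset of the lattice indexed by the bit."""
    offset = 0.0 if bit == 0 else step / 2
    return np.round((cover - offset) / step) * step + offset

def qim_extract(sample, step):
    """Decode by finding the nearer of the two shifted quantizer lattices."""
    d0 = abs(sample - np.round(sample / step) * step)
    d1 = abs(sample - (np.round((sample - step / 2) / step) * step + step / 2))
    return 0 if d0 <= d1 else 1

cover = rng.normal(0, 4, size=1000)                     # synthetic cover samples
bits = rng.integers(0, 2, size=1000)                    # secret payload
stego = np.array([qim_embed(c, b, step) for c, b in zip(cover, bits)])
noisy = stego + rng.normal(0, 0.05, size=1000)          # mild channel perturbation
decoded = np.array([qim_extract(s, step) for s in noisy])
ber = np.mean(decoded != bits)
```

A larger step tolerates more channel noise at the cost of larger embedding distortion; trellis-coded quantization improves this trade-off by shaping the quantization cells.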
Communication in the Presence of a State-Aware Adversary
We study communication systems over state-dependent channels in the presence
of a malicious state-aware jamming adversary. The channel has a memoryless
state with an underlying distribution. The adversary introduces a jamming
signal into the channel. The state sequence is known non-causally to both the
encoder and the adversary. Taking an Arbitrarily Varying Channel (AVC)
approach, we consider two setups, namely, the discrete memoryless
Gel'fand-Pinsker (GP) AVC and the additive white Gaussian Dirty Paper (DP) AVC.
We determine the randomized coding capacity of both the AVCs under a maximum
probability of error criterion. Similar to other randomized coding setups, we
show that the capacity is the same even under the average probability of error
criterion. We prove that, even with non-causal knowledge of the state, the
state-aware adversary cannot degrade the rate any further than by employing a
memoryless strategy that depends only on the instantaneous state. Thus, the
AVC capacity characterization is given in terms of the capacity of the worst
memoryless channels with state, induced by the adversary employing such
memoryless jamming strategies. For the DP-AVC, it is further shown that among
memoryless jamming strategies, none impact the communication more than a
memoryless Gaussian jamming strategy which completely disregards the knowledge
of the state. Thus, the capacity of the DP-AVC equals that of a standard AWGN
channel with two independent sources of additive white Gaussian noise, i.e.,
the channel noise and the jamming noise.
Comment: 24 pages, 3 figures
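The closing claim admits a one-line numeric illustration: under the characterization above, the DP-AVC behaves like an AWGN channel whose noise power is the sum of the channel and jamming noise powers. A minimal sketch with assumed powers P, N, and J (not values from the paper):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of a real AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

# Illustrative powers (assumptions, not from the paper).
P = 10.0   # transmit power
N = 1.0    # channel noise power
J = 4.0    # jamming power

# Per the DP-AVC result: the Gaussian jammer's power simply adds to the noise.
c_dp_avc = awgn_capacity(P / (N + J))   # capacity facing the worst-case jammer
c_no_jam = awgn_capacity(P / N)         # capacity with no jamming at all
```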
Digital watermark technology in security applications
With the rising emphasis on security and the number of fraud related crimes
around the world, authorities are looking for new technologies to tighten
security of identity. Among many modern electronic technologies, digital
watermarking has unique advantages to enhance the document authenticity.
At the current stage of development, digital watermarking technologies
are not as mature as other competing technologies for supporting identity authentication
systems. This work presents improvements in performance of
two classes of digital watermarking techniques and investigates the issue of
watermark synchronisation.
Optimal performance can be obtained if the spreading sequences are designed
to be orthogonal to the cover vector. In this thesis, two classes of
orthogonalisation methods that generate binary sequences quasi-orthogonal
to the cover vector are presented. One method, namely "Sorting and Cancelling"
generates sequences that have a high level of orthogonality to the
cover vector. The Hadamard Matrix based orthogonalisation method, namely
"Hadamard Matrix Search" is able to realise overlapped embedding, thus the
watermarking capacity and image fidelity can be improved compared to using
short watermark sequences. The results are compared with traditional
pseudo-randomly generated binary sequences. The advantages of both classes
of orthogonalisation methods are significant.
Another watermarking method that is introduced in the thesis is based
on writing-on-dirty-paper theory. The method is presented with biorthogonal
codes that have the best robustness. The advantages and trade-offs of
using biorthogonal codes with this watermark coding method are analysed
comprehensively. The comparisons between orthogonal and non-orthogonal
codes that are used in this watermarking method are also made. It is found
that fidelity and robustness are conflicting requirements and cannot be
optimised simultaneously.
Comparisons are also made between all proposed methods, focusing on three
major performance criteria: fidelity, capacity and robustness. From two
different viewpoints, the conclusions are not the same. From the
fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal
codes has a very strong advantage in preserving image fidelity, and its
advantage in capacity performance is also significant. However, from the
power-ratio point of view, the orthogonalisation methods demonstrate a
significant advantage in capacity and robustness. The conclusions are
contradictory, but together they summarise the performance produced by
different design considerations.
Watermark synchronisation is first provided by high-contrast frames around
the watermarked image. Edge detection filters are used
to detect the high contrast borders of the captured image. By scanning
the pixels from the border to the centre, the locations of detected edges
are stored. The optimal linear regression algorithm is used to estimate the
watermarked image frames. Estimation of the regression function provides
rotation angle as the slope of the rotated frames. The scaling is corrected by
re-sampling the upright image to the original size. A theoretically studied
method that can synchronise the captured image to sub-pixel accuracy is also
presented. Using invariant transforms and the "symmetric phase-only matched
filter", the captured image can be corrected accurately to its original
geometric size. The method uses repeating watermarks to form an array in the
spatial domain of the watermarked image; with two filtering processes, the
locations of the array elements reveal the rotation, translation and scaling.
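The frame-based synchronisation step described above (fit a line to detected edge pixels and read the rotation angle off the slope) can be sketched as follows; the synthetic edge data, the 3-degree tilt, and the noise level are assumptions for illustration, not thesis data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: the top edge of a high-contrast frame, rotated by a
# small angle, located by scanning pixels from the border toward the centre.
true_angle_deg = 3.0
slope = np.tan(np.radians(true_angle_deg))

xs = np.arange(0, 200, dtype=float)                       # column indices
ys = 50 + slope * xs + rng.normal(0, 0.5, size=xs.size)   # noisy detected edge rows

# Least-squares line fit: the slope of the fitted frame edge gives the
# rotation angle, which the upright image is then corrected by.
fit_slope, fit_intercept = np.polyfit(xs, ys, 1)
est_angle_deg = np.degrees(np.arctan(fit_slope))
```

Scaling is then corrected separately, by re-sampling the de-rotated image to the original size as described above.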
Near-capacity dirty-paper code design: a source-channel coding approach
This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
Capacity and Random-Coding Exponents for Channel Coding with Side Information
Capacity formulas and random-coding exponents are derived for a generalized
family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic
upper bounds on the achievable log probability of error. In our model,
information is to be reliably transmitted through a noisy channel with finite
input and output alphabets and random state sequence, and the channel is
selected by a hypothetical adversary. Partial information about the state
sequence is available to the encoder, adversary, and decoder. The design of the
transmitter is subject to a cost constraint. Two families of channels are
considered: 1) compound discrete memoryless channels (CDMC), and 2) channels
with arbitrary memory, subject to an additive cost constraint, or more
generally to a hard constraint on the conditional type of the channel output
given the input. Both problems are closely connected. The random-coding
exponent is achieved using a stacked binning scheme and a maximum penalized
mutual information decoder, which may be thought of as an empirical generalized
Maximum a Posteriori decoder. For channels with arbitrary memory, the
random-coding exponents are larger than their CDMC counterparts. Applications
of this study include watermarking, data hiding, communication in presence of
partially known interferers, and problems such as broadcast channels, all of
which involve the fundamental idea of binning.
Comment: to appear in IEEE Transactions on Information Theory, without Appendices G and
Game-theoretic Analysis to Content-adaptive Reversible Watermarking
While many games were designed for steganography and robust watermarking, few
focused on reversible watermarking. We present a two-encoder game related to
the rate-distortion optimization of content-adaptive reversible watermarking.
In the game, Alice first hides a payload into a cover. Then, Bob hides another
payload into the modified cover. The embedding strategy of Alice affects the
embedding capacity of Bob, and the embedding strategy of Bob may cause
data-extraction errors for Alice. Both want to embed as many pure secret bits
as possible, subject to an upper-bounded distortion. We investigate both the
non-cooperative and the cooperative game between Alice and Bob. When they
cooperate, one may consider them as a whole, i.e., a single encoder uses a
cover for data embedding twice. When they do not cooperate, the game
corresponds to a separable system, i.e., both want to independently hide a
payload within the cover, but recovering the cover may need cooperation. We
find equilibrium strategies for both players under constraints.
Comment: 12 pages, 2 figures
Estimation of the Embedding Capacity in Pixel-pair based Watermarking Schemes
Estimation of the embedding capacity is an important problem, specifically in
reversible multi-pass watermarking, and is required for analysis before any
image can be watermarked. In this paper, we propose an efficient method for
estimating the embedding capacity of a given cover image under multi-pass
embedding, without actually embedding the watermark. We demonstrate this for a
class of reversible watermarking schemes which operate on a disjoint group of
pixels, specifically for pixel pairs. The proposed algorithm iteratively
updates the co-occurrence matrix at every stage, to estimate the multi-pass
embedding capacity, and is much more efficient vis-a-vis actual watermarking.
We also suggest an extremely efficient, pre-computable tree based
implementation which is conceptually similar to the co-occurrence based method,
but provides the estimates in a single iteration, requiring a complexity akin
to that of single pass capacity estimation. We also provide bounds on the
embedding capacity. We finally show how our method can be easily used on a
number of watermarking algorithms and specifically evaluate the performance of
our algorithms on the benchmark watermarking schemes of Tian [11] and Coltuc [6].
Comment: This manuscript was submitted to the Transactions on Image Processing on September 5th, 201
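For Tian-style difference expansion on disjoint pixel pairs, a single-pass capacity estimate reduces to counting expandable pairs. The sketch below applies the standard expandability condition to synthetic images; it is a simplified stand-in for the paper's co-occurrence and tree-based multi-pass estimators:

```python
import numpy as np

rng = np.random.default_rng(4)

def de_capacity_estimate(img):
    """Count pixel pairs expandable under difference expansion (1 bit/pair)."""
    pairs = img.reshape(-1, 2).astype(np.int64)   # disjoint horizontal pairs
    x, y = pairs[:, 0], pairs[:, 1]
    l = (x + y) // 2                              # pair average (kept invariant)
    h = x - y                                     # difference, expanded to 2h + b
    # Expansion must keep both pixels in [0, 255] for either payload bit b.
    limit = np.minimum(2 * (255 - l), 2 * l + 1)
    worst = np.maximum(np.abs(2 * h), np.abs(2 * h + 1))   # worst case over b
    return int(np.sum(worst <= limit))

noisy = rng.integers(0, 256, size=(64, 64))       # synthetic high-variance cover
smooth = np.full((64, 64), 128)                   # synthetic smooth cover
cap_noisy = de_capacity_estimate(noisy)
cap_smooth = de_capacity_estimate(smooth)         # every pair is expandable
```

A smooth cover has small pair differences and hence near-maximal single-pass capacity, matching the intuition that the estimate is driven by the pair-difference (co-occurrence) statistics the paper iterates over for multi-pass embedding.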
On Polar Coding for Binary Dirty Paper
The problem of communication over binary dirty paper (DP) using nested polar
codes is considered. An improved scheme, focusing on low-delay,
short-to-moderate blocklength communication, is proposed. Successive cancellation list
(SCL) decoding with properly defined CRC is used for channel coding, and SCL
encoding without CRC is used for source coding. The performance is compared to
the best achievable rate of any coding scheme for binary DP using nested codes.
A well known problem with nested polar codes for binary DP is the existence of
frozen channel code bits that are not frozen in the source code. These bits
need to be retransmitted in a second phase of the scheme, thus reducing
transmission rate. We observe that the number of these bits is typically either
zero or a small number, and provide an improved analysis, compared to that
presented in the literature, on the size of this set and on its scaling with
respect to the blocklength when the power constraint parameter is sufficiently
large or the channel crossover probability sufficiently small.Comment: Accepted to ISIT 201