Exponential Strong Converse for Successive Refinement with Causal Decoder Side Information
We consider the -user successive refinement problem with causal decoder
side information and derive an exponential strong converse theorem. The
rate-distortion region for the problem can be derived as a straightforward
extension of the two-user case by Maor and Merhav (2008). We show that for any
rate-distortion tuple outside the rate-distortion region of the -user
successive refinement problem with causal decoder side information, the joint
excess-distortion probability approaches one exponentially fast. Our proof
follows by judiciously adapting the recently proposed strong converse technique
by Oohama using the information spectrum method, the variational form of the
rate-distortion region and Hölder's inequality. The lossy source coding
problem with causal decoder side information considered by El Gamal and
Weissman is a special case of the current problem. Therefore, the
exponential strong converse theorem for the El Gamal and Weissman problem
follows as a corollary of our result.
Exact Moderate Deviation Asymptotics in Streaming Data Transmission
In this paper, a streaming transmission setup is considered where an encoder
observes a new message in the beginning of each block and a decoder
sequentially decodes each message after a delay of blocks. In this
streaming setup, the fundamental interplay between the coding rate, the error
probability, and the blocklength in the moderate deviations regime is studied.
For output symmetric channels, the moderate deviations constant is shown to
improve over the block coding or non-streaming setup by exactly a factor of
for a certain range of moderate deviations scalings. For the converse proof, a
more powerful decoder to which some extra information is fed forward is assumed.
The error probability is bounded first for an auxiliary channel and this result
is translated back to the original channel by using a newly developed
change-of-measure lemma, where the speed of decay of the remainder term in the
exponent is carefully characterized. For the achievability proof, a known
coding technique that involves a joint encoding and decoding of fresh and past
messages is applied with some manipulations in the error analysis.
Comment: 23 pages, 1 figure, 1 table. Submitted to IEEE Transactions on Information Theory.
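As a concrete reference point for the moderate deviations discussion above, the sketch below evaluates the capacity and dispersion of a binary symmetric channel, the two quantities that govern the block-coding (non-streaming) baseline. The function names are my own and the example is purely illustrative; it does not implement the streaming scheme of the paper.

```python
import math

def bsc_capacity(p: float) -> float:
    """Capacity C = 1 - h2(p) of a binary symmetric channel with
    crossover probability p, in bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0
    h2 = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
    return 1.0 - h2

def bsc_dispersion(p: float) -> float:
    """Channel dispersion V = p(1-p) * log2((1-p)/p)^2 of the BSC,
    in bits^2 per channel use (the standard closed form)."""
    if p in (0.0, 0.5, 1.0):
        return 0.0
    return p * (1.0 - p) * math.log2((1.0 - p) / p) ** 2

# In the moderate deviations regime, block coding at rate C - eps_n gives
# an error probability decaying roughly as exp(-n * eps_n^2 / (2 V));
# the streaming result above improves this constant by a fixed factor.
```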
Successive Refinement of Shannon Cipher System Under Maximal Leakage
We study the successive refinement setting of the Shannon cipher system (SCS)
under the maximal leakage constraint for discrete memoryless sources under
bounded distortion measures. Specifically, we generalize the threat model for
the point-to-point rate-distortion setting of Issa, Wagner and Kamath (T-IT
2020) to the multiterminal successive refinement setting. Under mild conditions
that correspond to partial secrecy, we characterize the asymptotically optimal
normalized maximal leakage region for both the joint excess-distortion
probability (JEP) and the expected distortion reliability constraints. Under
JEP, in the achievability part, we propose a type-based coding scheme, analyze
the reliability guarantee for JEP and bound the leakage of the information
source through compressed versions. In the converse part, by analyzing a
guessing scheme of the eavesdropper, we prove the optimality of our
achievability result. Under expected distortion, the achievability part is
established similarly to the JEP counterpart. The converse proof proceeds by
generalizing the corresponding results for the rate-distortion setting of SCS
by Schieler and Cuff (T-IT 2014) to the successive refinement setting. Somewhat
surprisingly, the normalized maximal leakage regions under both JEP and
expected distortion constraints are identical under certain conditions,
although JEP appears to be a stronger reliability constraint.
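For readers unfamiliar with the leakage measure, maximal leakage in the sense of Issa, Wagner and Kamath depends on the input distribution only through its support, and reduces to a log-sum of column maxima of the channel matrix. The minimal sketch below (the function name and dict-based channel representation are my own) computes it for a toy channel:

```python
import math

def maximal_leakage(P_cond, support):
    """Maximal leakage L(X -> Y) = log2 sum_y max_{x in supp(P_X)} P(y|x),
    in bits.  P_cond[x][y] is the channel P_{Y|X}; `support` lists the
    inputs x with positive probability (the leakage depends on P_X only
    through its support)."""
    ys = P_cond[support[0]].keys()
    total = sum(max(P_cond[x][y] for x in support) for y in ys)
    return math.log2(total)

# Binary symmetric channel with crossover probability 0.1:
bsc = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
leak = maximal_leakage(bsc, [0, 1])   # log2(0.9 + 0.9) = log2(1.8)
```

A noiseless binary channel gives the maximum of 1 bit, while a useless channel (identical rows) leaks 0 bits, as expected of a leakage measure.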
Successive Refinement of Abstract Sources
In successive refinement of information, the decoder refines its
representation of the source progressively as it receives more encoded bits.
The rate-distortion region of successive refinement describes the minimum rates
required to attain the target distortions at each decoding stage. In this
paper, we derive a parametric characterization of the rate-distortion region
for successive refinement of abstract sources. Our characterization extends
Csiszár's result to successive refinement, and generalizes a result by Tuncel
and Rose, applicable for finite alphabet sources, to abstract sources. This
characterization spawns a family of outer bounds to the rate-distortion region.
It also enables an iterative algorithm for computing the rate-distortion
region, which generalizes Blahut's algorithm to successive refinement. Finally,
it leads to a new nonasymptotic converse bound. In all the scenarios where the
dispersion is known, this bound is second-order optimal.
In our proof technique, we avoid Karush-Kuhn-Tucker conditions of optimality,
and we use basic tools of probability theory. We leverage the Donsker-Varadhan
lemma for the minimization of relative entropy on abstract probability spaces.
Comment: Extended version of a paper presented at ISIT 201
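Since the iterative algorithm above generalizes Blahut's, it may help to recall what the classical Blahut algorithm computes for a single-stage, finite-alphabet source. The following is my own minimal implementation, not the paper's successive refinement generalization; it traces one point of the rate-distortion curve, selected by the slope parameter s:

```python
import numpy as np

def blahut_arimoto_rd(p_x, dist, s, n_iter=500):
    """One point on the rate-distortion curve of a discrete memoryless
    source via Blahut's alternating-minimization algorithm.
    p_x: source pmf (length n); dist[i, j]: distortion between source
    letter i and reproduction letter j; s > 0: slope parameter.
    Returns (R, D) with the rate in bits."""
    n, m = dist.shape
    q = np.full(m, 1.0 / m)          # reproduction distribution
    A = np.exp(-s * dist)            # exponential kernel, fixed across iterations
    for _ in range(n_iter):
        # Optimal test channel W(xhat | x) for the current q
        W = q * A
        W /= W.sum(axis=1, keepdims=True)
        # Re-optimize the reproduction distribution
        q = p_x @ W
    D = float(np.sum(p_x[:, None] * W * dist))
    R = float(np.sum(p_x[:, None] * W *
                     np.log2(np.maximum(W, 1e-300) / np.maximum(q, 1e-300))))
    return R, D
```

For an equiprobable binary source under Hamming distortion, choosing s = ln((1-D)/D) recovers the familiar R(D) = 1 - h(D); e.g. s = ln 9 yields D = 0.1 and R ≈ 0.531 bits.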
Consistent estimation of high-dimensional factor models when the factor number is over-estimated
A high-dimensional -factor model for an -dimensional vector time series
is characterised by the presence of a large eigengap (increasing with )
between the -th and the -th largest eigenvalues of the covariance
matrix. Consequently, Principal Component (PC) analysis is the most popular
estimation method for factor models and its consistency, when is correctly
estimated, is well-established in the literature. However, popular factor
number estimators often suffer from the lack of an obvious eigengap in
empirical eigenvalues and tend to over-estimate due, for example, to the
existence of non-pervasive factors affecting only a subset of the series. We
show that the errors in the PC estimators resulting from the over-estimation of
are non-negligible, which in turn lead to the violation of the conditions
required for factor-based large covariance estimation. To remedy this, we
propose new estimators of the factor model based on scaling the entries of the
sample eigenvectors. We show both theoretically and numerically that the
proposed estimators successfully control for the over-estimation error, and
investigate their performance when applied to risk minimisation of a portfolio
of financial time series.
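To make the role of PC estimation concrete, here is a minimal simulation of the setting described above. All dimensions, the noise level, and the function name are my own choices, and the paper's scaled-eigenvector estimators are not implemented; the sketch only illustrates why including spurious directions degrades the common-component estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, r = 100, 500, 3                # cross-section size, sample size, true factor number

# Simulate a simple static factor model  X_t = Lambda f_t + e_t
Lam = rng.normal(size=(n, r))        # loadings
F = rng.normal(size=(T, r))          # factors
E = 0.5 * rng.normal(size=(T, n))    # idiosyncratic noise
X = F @ Lam.T + E                    # T x n data panel

def pc_estimate(X, k):
    """PC estimator of the common component: the rank-k truncation of
    the centred data, i.e. the top-k principal component subspace."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

C_true = F @ Lam.T
err_correct = np.linalg.norm(pc_estimate(X, r) - C_true) / np.linalg.norm(C_true)
err_over = np.linalg.norm(pc_estimate(X, r + 4) - C_true) / np.linalg.norm(C_true)
# Over-estimating k adds pure noise directions to the estimate, so
# err_over typically exceeds err_correct.
```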
The distribution of height and diameter in random non-plane binary trees
This study is dedicated to precise distributional analyses of the height of
non-plane unlabelled binary trees ("Otter trees"), when trees of a given size
are taken with equal likelihood. The height of a rooted tree of size is
proved to admit a limiting theta distribution, both in a central and local
sense, and to obey moderate and large deviations estimates. The
approximations obtained for height also yield the limiting distribution of the
diameter of unrooted trees. The proofs rely on a precise analysis, in the
complex plane and near singularities, of generating functions associated with
trees of bounded height.
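The trees in question are counted by the Wedderburn-Etherington numbers, whose recurrence carries the symmetry correction that distinguishes non-plane from plane trees and underlies the generating functions mentioned above. A short sketch, assuming size is measured in leaves:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def otter(n: int) -> int:
    """Number of non-plane (unordered) rooted binary trees with n leaves:
    the Wedderburn-Etherington numbers.  A tree is a single leaf or an
    unordered pair of subtrees, hence the symmetry correction when the
    two subtrees have equal size."""
    if n <= 1:
        return n                      # otter(0) = 0, otter(1) = 1
    total = sum(otter(i) * otter(n - i) for i in range(1, (n + 1) // 2))
    if n % 2 == 0:
        m = otter(n // 2)
        total += m * (m + 1) // 2     # unordered pair of equal-size subtrees
    return total
```

The first terms 1, 1, 1, 2, 3, 6, 11, 23 grow much more slowly than the Catalan numbers counting plane binary trees, reflecting the quotient by left-right symmetry.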