End-to-End Error-Correcting Codes on Networks with Worst-Case Symbol Errors
The problem of coding for networks experiencing worst-case symbol errors is
considered. We argue that this is a reasonable model for highly dynamic
wireless network transmissions. We demonstrate that in this setup prior network
error-correcting schemes can be arbitrarily far from achieving the optimal
network throughput. A new transform metric for errors under the considered
model is proposed. Using this metric, we replicate many of the classical
results from coding theory. Specifically, we prove new Hamming-type,
Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity. A
commensurate lower bound is shown based on Gilbert-Varshamov-type codes for
error-correction. The GV codes used to attain the lower bound can be
non-coherent, that is, they do not require prior knowledge of the network
topology. We also propose a computationally-efficient concatenation scheme. The
rate achieved by our concatenated codes is characterized by a Zyablov-type
lower bound. We provide a generalized minimum-distance decoding algorithm which
decodes up to half the minimum distance of the concatenated codes. The
end-to-end nature of our design enables our codes to be overlaid on the
classical distributed random linear network codes [1]. Furthermore, the
potentially intensive computation at internal nodes for the link-by-link
error-correction is unnecessary under our design.
Comment: Submitted for publication. arXiv admin note: substantial text overlap
with arXiv:1108.239
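For a classical point of comparison, the Hamming-type and Gilbert-Varshamov-type bounds mentioned above have well-known analogues for ordinary block codes over a q-ary alphabet. A minimal sketch of those textbook formulas (not the paper's network-error versions, which use the new transform metric):

```python
from math import comb

def hamming_ball(n, r, q=2):
    # Number of length-n words within Hamming distance r of a fixed center.
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def hamming_upper_bound(n, d, q=2):
    # Sphere-packing (Hamming) bound: |C| <= q^n / V(n, floor((d-1)/2)).
    return q ** n // hamming_ball(n, (d - 1) // 2, q)

def gv_lower_bound(n, d, q=2):
    # Gilbert-Varshamov bound: a code with |C| >= q^n / V(n, d-1) exists.
    return q ** n // hamming_ball(n, d - 1, q)
```

For example, for binary codes of length 10 and minimum distance 3, the sphere-packing bound allows at most 93 codewords while GV guarantees at least 18; the paper's contribution is proving bounds of this shape for the network-error metric.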
File Updates Under Random/Arbitrary Insertions And Deletions
A client/encoder edits a file, as modeled by an insertion-deletion (InDel)
process. An old copy of the file is stored remotely at a data-centre/decoder,
and is also available to the client. We consider the problem of throughput- and
computationally-efficient communication from the client to the data-centre, to
enable the server to update its copy to the newly edited file. We study two
models for the source files/edit patterns: the random pre-edit sequence
left-to-right random InDel (RPES-LtRRID) process, and the arbitrary pre-edit
sequence arbitrary InDel (APES-AID) process. In both models, we consider the
regime in which the number of insertions/deletions is a small (but constant)
fraction of the original file. For both models we prove information-theoretic
lower bounds on the best possible compression rates that enable file updates.
Conversely, our compression algorithms use dynamic programming (DP) and entropy
coding, and achieve rates that are approximately optimal.
Comment: The paper is an extended version of our paper to appear at ITW 201
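For intuition about the dynamic-programming component, the minimal number of InDel edits between the old and new files is a standard edit-distance computation (insertions and deletions only, no substitutions); this count bounds how much the encoder must describe, before entropy-coding savings. A textbook sketch, not the paper's algorithm:

```python
def indel_distance(old, new):
    # DP over prefixes: dp[j] = minimal insertions + deletions turning
    # old[:i] into new[:j], kept as a single rolling row.
    m, n = len(old), len(new)
    dp = list(range(n + 1))  # transforming the empty prefix: j insertions
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # dp[0]: delete all i characters of old[:i]
        for j in range(1, n + 1):
            cur = dp[j]
            if old[i - 1] == new[j - 1]:
                dp[j] = prev  # characters match: no edit needed
            else:
                dp[j] = 1 + min(dp[j], dp[j - 1])  # delete old[i-1] or insert new[j-1]
            prev = cur
    return dp[n]
```

In the regime the paper studies, this distance is a small constant fraction of the file length, which is what makes rates far below retransmitting the whole file possible.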
A Comparative Research on Competitiveness of Information Industry of China vs. Korea
This paper explores the competitiveness of the information industries of China and Korea through a comparative study, based on the analysis of statistical data and a set of indicators of competitiveness. On this basis, we analyze the competitive and complementary relations between the information industries of China and Korea, and finally put forward a project for China-Korea cooperation in the information industry.
The Capacity of Private Information Retrieval with Eavesdroppers
We consider the problem of private information retrieval (PIR) with colluding
servers and eavesdroppers (abbreviated as ETPIR). The ETPIR problem is
comprised of $K$ messages and $N$ servers, where each server stores all $K$
messages, a user who wants to retrieve one of the messages without
revealing the desired message index to any set of $T$ colluding servers, and an
eavesdropper who can listen to the queries and answers of any $E$ servers but
is prevented from learning any information about the messages. The information
theoretic capacity of ETPIR is defined to be the maximum number of desired
message symbols retrieved privately per information symbol downloaded. We show
that the capacity of ETPIR is
$\left(1 + \frac{T}{N} + \cdots + \left(\frac{T}{N}\right)^{K-1}\right)^{-1}$
when $E < T$, and $1 - \frac{E}{N}$ when $E \geq T$. To
achieve the capacity, the servers need to share a common random variable
(independent of the messages), whose size must exceed a certain threshold of
symbols per message symbol. With any smaller amount of shared
common randomness, ETPIR is not feasible and the capacity reduces to zero.
An interesting observation is that the ETPIR capacity expression takes
different forms in the two regimes. When $E < T$, the capacity equals the inverse
of the sum of a geometric series with $K$ terms and decreases with $K$; this form
is typical of capacity expressions for PIR. When $E \geq T$, the capacity does
not depend on $K$, a form typical of capacity expressions for SPIR (symmetric
PIR, which further requires data-privacy, i.e., that the user learns no
information about the undesired messages); in this regime the capacity does not
depend on $T$ either. In addition, the ETPIR capacity result includes multiple
previous PIR and SPIR capacity results as special cases.
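The two-regime behavior described above can be evaluated numerically. A small sketch, assuming N servers, K messages, T colluding servers, and E eavesdropped servers, with capacity (1 + T/N + ... + (T/N)^(K-1))^(-1) when E < T and 1 - E/N when E >= T; these closed forms are a reconstruction consistent with the abstract's verbal description, not quoted from the paper:

```python
from fractions import Fraction

def etpir_capacity(N, K, T, E):
    # Assumed two-regime capacity expression:
    #   E < T : inverse of the geometric sum 1 + T/N + ... + (T/N)^(K-1),
    #           which decreases with K (PIR-like form);
    #   E >= T: 1 - E/N, independent of both K and T (SPIR-like form).
    if E < T:
        r = Fraction(T, N)
        return 1 / sum(r ** k for k in range(K))
    return 1 - Fraction(E, N)
```

For instance, with N = 4, K = 2, T = 2, E = 1 the first regime gives 1/(1 + 1/2) = 2/3, while with E = 2 >= T = 1 the second regime gives 1 - 2/4 = 1/2 regardless of K.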
On Gap-dependent Bounds for Offline Reinforcement Learning
This paper presents a systematic study of gap-dependent sample complexity in
offline reinforcement learning. Prior work showed that when the density ratio
between an optimal policy and the behavior policy is upper bounded (the optimal
policy coverage assumption), the agent can achieve an
$O\left(\frac{1}{\epsilon^2}\right)$ rate, which is also minimax optimal. We
show that under the optimal policy coverage assumption, the rate can be improved to
$O\left(\frac{1}{\epsilon}\right)$ when there is a positive sub-optimality gap
in the optimal $Q$-function. Furthermore, we show that when the visitation
probabilities of the behavior policy are uniformly lower bounded for states
where an optimal policy's visitation probabilities are positive (the uniform
optimal policy coverage assumption), the sample complexity of identifying an
optimal policy is independent of $\frac{1}{\epsilon}$. Lastly, we present
nearly-matching lower bounds to complement our gap-dependent upper bounds.
Comment: 33 pages, 1 figure, submitted to NeurIPS 202
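The sub-optimality gap these rates hinge on is, per state, the difference between the optimal action's Q* value and the best Q* value among the remaining actions. A toy computation of the minimum gap from a tabular Q* (illustrative only; the tabular setting and function name are assumptions, not the paper's setup):

```python
def min_suboptimality_gap(q_table):
    # q_table[s][a] holds Q*(s, a). For each state, the gap is the best
    # Q* value minus the second-best; the minimum over states is the
    # quantity that gap-dependent sample-complexity bounds depend on.
    gaps = []
    for row in q_table:
        best, second = sorted(row, reverse=True)[:2]
        gaps.append(best - second)
    return min(gaps)
```

A larger minimum gap makes the optimal action easier to identify at every state, which is the intuition behind improving the 1/epsilon^2 rate to 1/epsilon.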