Delay Constrained Scheduling over Fading Channels: Optimal Policies for Monomial Energy-Cost Functions
A point-to-point discrete-time scheduling problem of transmitting
B information bits within T hard delay deadline slots is considered, assuming
that the underlying energy-bit cost function is a convex monomial. The
scheduling objective is to minimize the expected energy expenditure while
satisfying the deadline constraint based on information about the unserved
bits, channel state/statistics, and the remaining time slots to the deadline.
At each time slot, the scheduling decision is made without knowledge of future
channel state, and thus there is a tension between serving many bits when the
current channel is good versus leaving too many bits for the deadline. Under
the assumption that no other packet is scheduled concurrently and no outage is
allowed, we derive the optimal scheduling policy. We also investigate the
dual problem of maximizing the number of transmitted bits over the time
slots subject to an energy constraint.
Comment: submitted to the IEEE ICC 200
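The deadline-constrained trade-off can be made concrete with a small backward-induction table. The sketch below is illustrative only, not the paper's derivation: it assumes the cost of sending b bits over a channel with gain h is the convex monomial b**m / h, and that the channel is discrete and i.i.d. across slots; all names (B, T, gains, probs) are hypothetical.

```python
def min_expected_energy(B, T, gains, probs, m=2):
    """Minimum expected energy to deliver B bits within a hard deadline
    of T slots (sketch; assumed monomial cost b**m / h, i.i.d. channel
    with gain values `gains` occurring with probabilities `probs`)."""
    INF = float("inf")
    # V[t][r] = expected cost-to-go with r bits unserved and t slots left;
    # leftover bits at the deadline are infeasible (no outage allowed).
    V = [[INF] * (B + 1) for _ in range(T + 1)]
    V[0][0] = 0.0
    for t in range(1, T + 1):
        for r in range(B + 1):
            exp_cost = 0.0
            for h, p in zip(gains, probs):
                # after observing h, choose how many bits b to serve now;
                # b = r (serve everything) is always a feasible fallback
                best = min(b ** m / h + V[t - 1][r - b]
                           for b in range(r + 1))
                exp_cost += p * best
            V[t][r] = exp_cost
    return V[T][B]
```

For example, min_expected_energy(4, 2, [1.0, 2.0], [0.5, 0.5]) exhibits the tension the abstract describes: the scheduler serves more bits when the observed gain is high, but never leaves more work than the remaining slots can absorb.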
Idempotent permutations
Together with a characteristic function, idempotent permutations uniquely
determine idempotent maps as well as their linearly ordered arrangement.
Furthermore, in-place linear-time transformations are possible between them.
Hence, they may be important for succinct data structures, information
storage, sorting, and searching.
In this study, their combinatorial interpretation is given and their
application to sorting is examined. Given an array of n integer keys, each in
[1,n], if it is allowed to modify the keys within the range [-n,n], idempotent
permutations make it possible to obtain a linearly ordered arrangement of the
keys in O(n) time using only 4log(n) bits, achieving the theoretical lower
bound on the time and space complexity of sorting. If it is not allowed to
modify the keys outside the range [1,n], then n+4log(n) bits are required,
n of which are used to tag some of the keys.
Comment: 32 pages
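The flavor of in-place linear-time rearrangement can be shown on a much simpler special case. The sketch below is not the paper's algorithm: it assumes the n keys happen to form a permutation of 1..n (no duplicates), in which case cycle-following alone produces the sorted arrangement in O(n) time with O(1) extra words; handling general keys in [1,n] within the stated bit budget is what the paper's construction addresses.

```python
def sort_permutation_in_place(a):
    """Sort a list that is a permutation of 1..n in O(n) time, in place.
    Each swap moves one key to its final position, so the total number
    of swaps is at most n (illustrative special case only)."""
    n = len(a)
    for i in range(n):
        while a[i] != i + 1:
            j = a[i] - 1          # final position of the key at index i
            a[i], a[j] = a[j], a[i]
    return a
```

Because keys are distinct, the inner loop cannot stall: every swap places the key a[i] at its unique target slot, and positions already holding their own key are skipped.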
When Can Limited Randomness Be Used in Repeated Games?
The central result of classical game theory states that every finite normal
form game has a Nash equilibrium, provided that players are allowed to use
randomized (mixed) strategies. However, in practice, humans are known to be bad
at generating random-like sequences, and true random bits may be unavailable.
Even if the players have access to enough random bits for a single instance of
the game, their randomness might be insufficient if the game is played many
times.
In this work, we ask whether randomness is necessary for equilibria to exist
in finitely repeated games. We show that for a large class of games containing
arbitrary two-player zero-sum games, approximate Nash equilibria of the
n-stage repeated version of the game exist if and only if both players have
Ω(n) random bits. In contrast, we show that there exists a class of
games for which no equilibrium exists in pure strategies, yet the n-stage
repeated version of the game has an exact Nash equilibrium in which each player
uses only a constant number of random bits.
When the players are assumed to be computationally bounded, if cryptographic
pseudorandom generators (or, equivalently, one-way functions) exist, then the
players can base their strategies on "random-like" sequences derived from only
a small number of truly random bits. We show that, in contrast, in repeated
two-player zero-sum games, if pseudorandom generators do not exist, then
random bits remain necessary for equilibria to exist.
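Why mixed strategies are indispensable in zero-sum play can be seen in a toy game such as matching pennies. The sketch below (illustrative only, not from the paper) compares the row player's best guarantee under pure strategies against the guarantee of mixing uniformly over rows.

```python
def pure_and_mixed_value(payoff):
    """For a small zero-sum game given by the row player's payoff matrix,
    return (pure maximin, uniform-mixing guarantee). Illustrative sketch;
    uniform mixing is evaluated against the adversary's best column reply."""
    n = len(payoff)
    # pure maximin: pick the row whose worst-case column reply is best
    pure = max(min(row) for row in payoff)
    # uniform mixing: expected payoff against the best column reply
    mixed = min(sum(payoff[i][j] for i in range(n)) / n
                for j in range(len(payoff[0])))
    return pure, mixed

# Matching pennies: every pure row strategy is exploited down to -1,
# while uniform mixing secures the game's value, 0.
print(pure_and_mixed_value([[1, -1], [-1, 1]]))  # -> (-1, 0.0)
```

The gap between the two numbers is exactly the phenomenon the abstract studies: closing it requires randomness, and in repeated play the number of random bits needed can grow with the number of stages.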
Succinct Partial Sums and Fenwick Trees
We consider the well-studied partial sums problem in succinct space, where one
is to maintain an array of n k-bit integers subject to updates such that
partial sums queries can be answered efficiently. We present two succinct
versions of the Fenwick Tree, a structure known for its simplicity and
practicality. Our results hold in the encoding model where one is allowed to
reuse the space from the input data. Our main result is the first that only
requires nk + o(n) bits of space while still supporting sum/update in O(log_b
n) / O(b log_b n) time where 2 <= b <= log^O(1) n. The second result shows how
optimal time for sum/update can be achieved while only slightly increasing the
space usage to nk + o(nk) bits. Beyond Fenwick Trees, the results are
primarily based on bit-packing and sampling, which makes them very practical,
and they also admit simple optimal parallelization.
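For context on what the succinct variants compress, a classic (non-succinct) Fenwick Tree looks as follows; this baseline stores n full machine words rather than nk + o(n) bits, but already gives the O(log n) sum/update walks via the low-bit trick.

```python
class FenwickTree:
    """Classic Fenwick (binary indexed) tree: point update and prefix sum
    in O(log n) time each, navigating the implicit tree with i & -i."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i            # jump to the next node covering i

    def prefix_sum(self, i):
        """Return the sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i            # strip the lowest set bit
        return s
```

A typical use: ft = FenwickTree(8); ft.update(3, 5); ft.update(5, 2); then ft.prefix_sum(5) returns 7. The succinct versions in the abstract keep this access pattern while packing the tree into the space of the input itself.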