‘Codes are not enough…’: a report of ongoing research
We consider the problem of rate allocation in a fading Gaussian
multiple-access channel (MAC) with fixed transmission powers. Our goal is to
maximize a general concave utility function of transmission rates over the
throughput capacity region. In contrast to earlier works in this context that
propose solutions where a potentially complex optimization problem must be
solved in every decision instant, we propose a low-complexity approximate rate
allocation policy and analyze the effect of temporal channel variations on its
utility performance. To the best of our knowledge, this is the first work that
studies the tracking capabilities of an approximate rate allocation scheme
under fading channel conditions. We build on an earlier work to present a new
rate allocation policy for a fading MAC that implements a low-complexity
approximate gradient projection iteration for each channel measurement, and
explicitly characterize the effect of the speed of temporal channel variations
on the tracking neighborhood of our policy. We further improve our results by
proposing an alternative rate allocation policy for which tighter bounds on the
size of the tracking neighborhood are derived. These proposed rate allocation
policies are computationally efficient in our setting since they implement a
single gradient projection iteration per channel measurement and each such
iteration relies on approximate projections, which have polynomial complexity in
the number of users.
Comment: 9 pages, In proc. of ITA 200
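The policy described above amounts to one gradient step on the utility followed by an approximate projection onto the capacity region for every new channel measurement. As a loose illustration only (not the paper's algorithm), the sketch below uses a log utility, approximates the capacity region by per-user and sum-rate caps, and substitutes alternating projections for the paper's polynomial-complexity approximate projection; every function name, constant, and the fading model here are assumptions.

```python
import numpy as np

def project_box_sum(r, r_max, sum_cap, iters=50):
    """Approximate Euclidean projection onto {0 <= r <= r_max, sum(r) <= sum_cap}
    via alternating projections (a stand-in for an approximate projection step)."""
    for _ in range(iters):
        r = np.clip(r, 0.0, r_max)              # project onto the box
        excess = r.sum() - sum_cap
        if excess > 0:
            r = r - excess / r.size             # project onto the sum half-space
    return np.clip(r, 0.0, r_max)

def rate_allocation_step(r, gains, powers, noise=1.0, step=0.05):
    """One gradient-projection iteration for the concave utility U(r) = sum_i log r_i,
    with single-user and sum-rate caps recomputed from the current channel state."""
    grad = 1.0 / np.maximum(r, 1e-9)            # gradient of the log utility
    snr = gains * powers / noise
    r_max = np.log2(1.0 + snr)                  # single-user capacity bounds
    sum_cap = np.log2(1.0 + snr.sum())          # sum capacity of the Gaussian MAC
    return project_box_sum(r + step * grad, r_max, sum_cap)

# track the optimum across a sequence of fading states, one iteration per measurement
rng = np.random.default_rng(0)
rates = np.full(3, 0.1)
for _ in range(200):
    gains = rng.exponential(1.0, size=3)        # i.i.d. fading power gains (illustrative model)
    rates = rate_allocation_step(rates, gains, powers=np.ones(3))
print(rates)
```

The loop at the bottom is the point of the abstract's tracking analysis: the iterate is never re-optimized from scratch, it only takes a single projected step per fading state.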
On the Performance Bounds of some Policy Search Dynamic Programming Algorithms
We consider the infinite-horizon discounted optimal control problem
formalized by Markov Decision Processes. We focus on Policy Search algorithms
that compute an approximately optimal policy by following the standard Policy
Iteration (PI) scheme via an ε-approximate greedy operator (Kakade and Langford,
2002; Lazaric et al., 2010). We describe existing and a few new performance
bounds for Direct Policy Iteration (DPI) (Lagoudakis and Parr, 2003; Fern et
al., 2006; Lazaric et al., 2010) and Conservative Policy Iteration (CPI)
(Kakade and Langford, 2002). By paying particular attention to the
concentrability constants involved in such guarantees, we notably argue that
the guarantee of CPI is much better than that of DPI, but this comes at the
cost of a relative -- exponential in 1/ε -- increase of time
complexity. We then describe an algorithm, Non-Stationary Direct Policy
Iteration (NSDPI), which can be seen either as 1) an adaptation of Policy Search
by Dynamic Programming (Bagnell et al., 2003) to the infinite-horizon
setting or 2) a simplified version of the Non-Stationary PI with growing
period of Scherrer and Lesner (2012). We provide an analysis of this algorithm
showing in particular that it enjoys the best of both worlds: its
performance guarantee is similar to that of CPI, but with a time complexity
similar to that of DPI.
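To make the CPI update mentioned above concrete, here is a minimal tabular sketch of the conservative mixing step π ← (1 − α)π + α·greedy(Q^π). It is not the authors' code: it uses an exact greedy operator where the abstract assumes an ε-approximate one, and the MDP layout, function names, and the random example are assumptions.

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma):
    """Exact evaluation of a stochastic policy on a tabular MDP.
    P: (A, S, S) transitions, R: (S, A) rewards, pi: (S, A) action probabilities."""
    S = R.shape[0]
    P_pi = np.einsum('sa,ast->st', pi, P)         # state-to-state transitions under pi
    r_pi = np.einsum('sa,sa->s', pi, R)           # expected one-step reward under pi
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    q = R + gamma * np.einsum('ast,t->sa', P, v)  # Q^pi
    return v, q

def cpi_step(P, R, pi, gamma=0.9, alpha=0.1):
    """One Conservative Policy Iteration update: mix the current policy with a
    policy greedy w.r.t. Q^pi (exact greediness stands in here for the
    epsilon-approximate greedy operator discussed in the abstract)."""
    _, q = policy_evaluation(P, R, pi, gamma)
    greedy = np.zeros_like(pi)
    greedy[np.arange(pi.shape[0]), q.argmax(axis=1)] = 1.0
    return (1.0 - alpha) * pi + alpha * greedy

# tiny random MDP: 4 states, 2 actions, uniform initial policy
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(4), size=(2, 4))        # P[a, s, :] is a distribution over next states
R = rng.random((4, 2))
pi = np.full((4, 2), 0.5)
for _ in range(50):
    pi = cpi_step(P, R, pi)
print(pi)
```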
On the Complexity of Value Iteration
Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation which is naturally associated to the MDP. At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXPTIME-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. To obtain this main result, we develop several stepping stones that yield results of an independent interest. For instance, we show that it is EXPTIME-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators. We also provide new complexity results for the bounded halting problem in linear-update counter machines
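The recurrence that value iteration applies n times can be written directly; the sketch below is a plain tabular illustration (not the paper's construction) of the n-step Bellman recurrence, and it makes visible why iterating for a binary-encoded horizon n takes exponentially many steps in the input size. The array layout and names follow the CPI sketch above and are assumptions.

```python
import numpy as np

def finite_horizon_value_iteration(P, R, n):
    """Maximal n-step payoff via the recurrence V_0 = 0,
    V_{k+1}(s) = max_a [ R(s, a) + sum_t P(a, s, t) * V_k(t) ].
    The loop runs n times, i.e. exponentially many times in the
    bit-length of a binary-encoded horizon n."""
    assert n >= 1
    v = np.zeros(R.shape[0])
    for _ in range(n):
        q = R + np.einsum('ast,t->sa', P, v)   # one application of the Bellman recurrence
        v = q.max(axis=1)
    return v, q.argmax(axis=1)                 # n-step values and the first-step optimal actions

# reuse the (A, S, S) / (S, A) layout from the CPI sketch above
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(5), size=(3, 5))
R = rng.random((5, 3))
values, policy = finite_horizon_value_iteration(P, R, n=16)
print(values, policy)
```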