Slow Adaptive OFDMA Systems Through Chance Constrained Programming
Adaptive OFDMA has recently been recognized as a promising technique for
providing high spectral efficiency in future broadband wireless systems. The
research over the last decade on adaptive OFDMA systems has focused on adapting
the allocation of radio resources, such as subcarriers and power, to the
instantaneous channel conditions of all users. However, such "fast" adaptation
requires high computational complexity and excessive signaling overhead. This
hinders the deployment of adaptive OFDMA systems worldwide. This paper proposes
a slow adaptive OFDMA scheme, in which the subcarrier allocation is updated on
a much slower timescale than that of the fluctuation of instantaneous channel
conditions. Meanwhile, the data rate requirements of individual users are
accommodated on the fast timescale with high probability, so that the
requirements are met except for occasional outages. Such an objective has a
natural chance constrained programming formulation, which is known to be
intractable. To
circumvent this difficulty, we formulate safe tractable constraints for the
problem based on recent advances in chance constrained programming. We then
develop a polynomial-time algorithm for computing an optimal solution to the
reformulated problem. Our results show that the proposed slow adaptation scheme
drastically reduces both computational cost and control signaling overhead when
compared with the conventional fast adaptive OFDMA. Our work can be viewed as
an initial attempt to apply the chance constrained programming methodology to
wireless system designs. Given that most wireless systems can tolerate an
occasional dip in the quality of service, we hope that the proposed methodology
will find further applications in wireless communications.
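As a rough illustration of why a chance constraint can admit a deterministic "safe" convex reformulation (a sketch under an assumed Gaussian data model, not the paper's OFDMA formulation):

```python
import numpy as np

# Illustrative sketch only (assumed Gaussian coefficients, not the paper's
# OFDMA model): for a ~ N(mu, Sigma), the chance constraint
#     P(a^T x <= b) >= 1 - eps
# is exactly the convex second-order-cone constraint
#     mu^T x + z_{1-eps} * sqrt(x^T Sigma x) <= b,
# the kind of tractable deterministic reformulation the abstract refers to.

rng = np.random.default_rng(0)
eps = 0.05
z = 1.6449                      # standard normal quantile Phi^{-1}(0.95)
mu = np.array([1.0, 2.0, 0.5])
L = np.array([[0.30, 0.00, 0.00],
              [0.10, 0.20, 0.00],
              [0.00, 0.10, 0.25]])
Sigma = L @ L.T                 # covariance of the random coefficients
b = 4.0
x = np.array([0.5, 0.8, 0.4])   # a candidate allocation to certify

# Deterministic check of the reformulated (cone) constraint
lhs = mu @ x + z * np.sqrt(x @ Sigma @ x)
feasible = lhs <= b

# Monte-Carlo validation that the original chance constraint indeed holds
a = rng.multivariate_normal(mu, Sigma, size=200_000)
empirical = np.mean(a @ x <= b)
print(feasible, empirical)
```

If the deterministic check passes, the sampled violation probability stays below eps, which is what makes such reformulations "safe".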
An interior-point and decomposition approach to multiple stage stochastic programming
No abstract is available for this report.
Volumetric center method for stochastic convex programs using sampling
We develop an algorithm for solving the stochastic convex program (SCP) by combining Vaidya's volumetric center interior point method (VCM) for solving non-smooth convex programming problems with a Monte-Carlo sampling technique to compute a subgradient. A near-central cut variant of VCM is developed, and for this method an approach is given for performing bulk cut translation and adding multiple cuts. We show that by using the near-central VCM the SCP can be solved to any desired accuracy with any given probability. For the two-stage SCP, the solution time is independent of the number of scenarios.
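The sampling step can be sketched on a toy stochastic objective (a hypothetical newsvendor-style cost, not the paper's SCP): per-sample subgradients of the integrand average into a Monte-Carlo estimate of a subgradient of the expectation.

```python
import numpy as np

# Sketch of the Monte-Carlo subgradient idea on a toy objective
#     f(x) = E[ c*max(D - x, 0) + h*max(x - D, 0) ]
# with random demand D (a hypothetical example, not the paper's SCP).
# A subgradient of the integrand at x is -c if D > x else h, and
# averaging per-sample subgradients estimates a subgradient of f.

rng = np.random.default_rng(1)
c, h = 4.0, 1.0                          # illustrative cost coefficients
x = 1.0
D = rng.exponential(scale=1.0, size=100_000)

g_hat = np.mean(np.where(D > x, -c, h))  # sampled subgradient estimate

# Closed form for comparison: f'(x) = -c*P(D > x) + h*P(D <= x);
# for Exp(1) demand, P(D > x) = exp(-x).
g_true = -c * np.exp(-x) + h * (1 - np.exp(-x))
print(g_hat, g_true)
```

Such an estimate is exactly what a cutting-plane oracle needs to produce a cut at the current query point.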
Decomposition and parallel processing techniques for two-time scale controlled Markov chains
This paper deals with a class of ergodic control problems
for systems described by Markov chains with
strong and weak interactions. These systems are composed
of a set of m subchains that are weakly coupled.
Using results recently established by Abbad et
al., one formulates a limit control problem the solution
of which can be obtained via an associated non-differentiable
convex programming (NDCP) problem. The
technique used to solve the NDCP problem is the Analytic
Center Cutting Plane Method (ACCPM) which
implements a dialogue between, on one hand, a master
program computing the analytical center of a localization
set containing the solution and, on the other hand,
an oracle proposing cutting planes that reduce the size
of the localization set at each main iteration. The interesting
aspect of this implementation comes from two
characteristics: (i) the oracle proposes cutting planes
by solving reduced-size Markov Decision Problems
(MDP) via a linear program (LP) or a policy iteration
method; (ii) several cutting planes can be proposed simultaneously
through a parallel implementation on m
processors. The paper concentrates on these two aspects
and shows, on a large-scale MDP obtained from
the numerical approximation "à la Kushner-Dupuis" of
a singularly perturbed hybrid stochastic control problem,
the significant computational speed-up obtained.
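The master/oracle dialogue the abstract describes can be shown in its simplest one-dimensional form (a sketch: the analytic-center machinery of ACCPM is omitted, and the "center" of the localization set is just the interval midpoint):

```python
# Minimal 1-D illustration of the master/oracle dialogue behind
# cutting-plane methods such as ACCPM. The analytic-center computation
# is omitted; the "center" here is simply the interval midpoint.
# Oracle: at a query point, returns a subgradient of a convex objective,
# defining a cut that discards the half of the localization interval
# that cannot contain the minimizer.

def oracle(x):
    # Toy convex objective f(x) = (x - 3)^2; subgradient f'(x) = 2*(x - 3).
    return 2.0 * (x - 3.0)

lo, hi = -10.0, 10.0          # initial localization set
for _ in range(50):           # main iterations
    center = 0.5 * (lo + hi)  # master program: compute a center
    g = oracle(center)        # oracle: propose a cutting plane
    if g > 0:                 # minimizer lies to the left of center
        hi = center
    else:                     # minimizer lies to the right (or at center)
        lo = center

print(lo, hi)                 # interval shrinks around the minimizer x* = 3
```

In the paper's setting the oracle step is itself a reduced-size MDP solved by an LP or policy iteration, and several such cuts can be generated in parallel.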
From Cutting Planes Algorithms to Compression Schemes and Active Learning
Cutting-plane methods are well-studied localization (and optimization)
algorithms. We show that they provide a natural framework to perform
machine learning, and not just to solve the optimization problems posed by
machine learning. In particular, they allow one to learn sparse classifiers
and provide good compression schemes. Moreover, we show that very little
effort is required to turn them into effective active learning methods. This
last property provides a generic way to design a whole family of active
learning algorithms from existing passive methods. We present numerical
simulations testifying to the relevance of cutting-plane methods for passive
and active learning tasks. Comment: IJCNN 2015, Jul 2015, Killarney, Ireland.
<http://www.ijcnn.org/>