Software tools for stochastic programming: A Stochastic Programming Integrated Environment (SPInE)
SP models combine the paradigm of dynamic linear programming with
modelling of random parameters, providing optimal decisions which hedge
against future uncertainties. Advances in hardware as well as software
techniques and solution methods have made SP a viable optimisation tool.
We identify a growing need for modelling systems which support the creation
and investigation of SP problems. Our SPInE system integrates a number of
components which include a flexible modelling tool (based on stochastic
extensions of the algebraic modelling languages AMPL and MPL), stochastic
solvers, as well as special purpose scenario generators and database tools.
We introduce an asset/liability management model and illustrate how SPInE
can be used to create and process this model as a multistage SP application.
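The two-stage structure behind such SP models can be illustrated with a minimal, hypothetical example (not SPInE's actual AMPL/MPL syntax, and not the paper's asset/liability model): a single first-stage production decision hedged against three demand scenarios, solved via its deterministic equivalent with SciPy's `linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-stage stochastic LP: choose production x at unit
# cost 1 before demand is known, then sell y_s = min(x, demand_s) at
# unit price 2 in each scenario s. All numbers are illustrative.
probs   = np.array([0.3, 0.5, 0.2])   # scenario probabilities
demands = np.array([6.0, 8.0, 12.0])  # scenario demands
cost, price = 1.0, 2.0

# Deterministic equivalent over variables [x, y_1, y_2, y_3]:
# minimise cost*x - sum_s probs[s]*price*y_s
# subject to y_s <= x and 0 <= y_s <= demands[s].
c = np.concatenate(([cost], -probs * price))
A_ub = np.hstack([-np.ones((3, 1)), np.eye(3)])   # y_s - x <= 0
b_ub = np.zeros(3)
bounds = [(0, None)] + [(0, d) for d in demands]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[0], -res.fun)  # first-stage decision and expected profit
```

With these numbers the optimal first-stage decision is x = 8: producing up to the highest demand scenario is not worthwhile, which is exactly the hedging-against-uncertainty behaviour the abstract describes.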
Theoretical and Practical Advances on Smoothing for Extensive-Form Games
Sparse iterative methods, in particular first-order methods, are known to be
among the most effective in solving large-scale two-player zero-sum
extensive-form games. The convergence rates of these methods depend heavily on
the properties of the distance-generating function that they are based on. We
investigate the acceleration of first-order methods for solving extensive-form
games through better design of the dilated entropy function---a class of
distance-generating functions related to the domains associated with the
extensive-form games. By introducing a new weighting scheme for the dilated
entropy function, we develop the first distance-generating function for the
strategy spaces of sequential games that has no dependence on the branching
factor of the player. This result improves the convergence rate of several
first-order methods by a factor of O(b^d d), where b is the branching
factor of the player and d is the depth of the game tree.
Thus far, counterfactual regret minimization methods have been faster in
practice, and more popular, than first-order methods despite their
theoretically inferior convergence rates. Using our new weighting scheme and
practical tuning we show that, for the first time, the excessive gap technique
can be made faster than the fastest counterfactual regret minimization
algorithm, CFR+, in practice.
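The regret-matching+ update at the heart of CFR+ can be sketched for the simplest case, a one-shot zero-sum matrix game, where no game-tree traversal (and hence no distance-generating function) is needed; the game and parameters below are illustrative only, not the paper's benchmarks.

```python
import numpy as np

def rm_plus_selfplay(payoff, iters=20000):
    """Regret matching+ (the CFR+ update rule) in self-play on a
    zero-sum matrix game; returns both players' average strategies."""
    n, m = payoff.shape
    r1, r2 = np.zeros(n), np.zeros(m)   # cumulative positive regrets
    s1, s2 = np.zeros(n), np.zeros(m)   # cumulative weighted strategies
    for t in range(1, iters + 1):
        # Play proportionally to positive regret (uniform if none yet).
        x = r1 / r1.sum() if r1.sum() > 0 else np.full(n, 1 / n)
        y = r2 / r2.sum() if r2.sum() > 0 else np.full(m, 1 / m)
        u1 = payoff @ y        # row player's expected action payoffs
        u2 = -payoff.T @ x     # column player's (zero-sum) payoffs
        # RM+ clips cumulative regret at zero after every update.
        r1 = np.maximum(r1 + u1 - x @ u1, 0)
        r2 = np.maximum(r2 + u2 - y @ u2, 0)
        s1 += t * x            # CFR+-style linear averaging
        s2 += t * y
    return s1 / s1.sum(), s2 / s2.sum()

# 2x2 zero-sum game whose unique equilibrium mixes 0.4/0.6 for both.
game = np.array([[2.0, -1.0], [-1.0, 1.0]])
avg1, avg2 = rm_plus_selfplay(game)
```

The average strategies converge to the equilibrium mix; the full CFR+ algorithm applies this same local update at every information set of the extensive-form game tree.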
Three Puzzles on Mathematics, Computation, and Games
In this lecture I will talk about three mathematical puzzles involving
mathematics and computation that have preoccupied me over the years. The first
puzzle is to understand the amazing success of the simplex algorithm for linear
programming. The second puzzle is about errors made when votes are counted
during elections. The third puzzle is: are quantum computers possible?
Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 figures
A Hybrid Multi-GPU Implementation of Simplex Algorithm with CPU Collaboration
The simplex algorithm has been successfully used for many years in solving
linear programming (LP) problems. Due to the intensive computations required,
especially for large LP problems, parallel approaches have also been studied
extensively. The computational power of modern GPUs, together with the rapid
development of multicore CPU systems, has made OpenMP and CUDA the programming
models of choice in recent years.
However, the desired efficient collaboration between CPU and GPU through the
combined use of the above programming models is still considered a hard
research problem. In this context, we demonstrate a highly efficient
implementation of the standard simplex method, targeting the best possible
concurrent exploitation of all available computing resources on a
multicore platform with multiple CUDA-enabled GPUs. More concretely, we present
a novel hybrid collaboration scheme which is based on the concurrent execution
of suitably spread CPU-assigned (via multithreading) and GPU-offloaded
computations. The experimental results, obtained through the cooperative use of
OpenMP and CUDA on a notably powerful modern hybrid platform (consisting of
32 cores and two high-spec GPUs, a Titan RTX and an RTX 2080 Ti), show that the
performance of the hybrid GPU/CPU collaboration scheme presented here is
clearly superior to the GPU-only implementation under almost all conditions.
The corresponding measurements validate the value of using all resources
concurrently, even in the case of a multi-GPU configuration platform.
Furthermore, the given implementations are fully comparable (and in most cases
slightly superior) to related attempts in the literature, and clearly superior
to the native CPU implementation with 32 cores.
Comment: 12 pages
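The data-parallel structure such schemes exploit can be seen in a plain NumPy sketch of the standard (dense tableau) simplex method: the rank-1 tableau update performed at each pivot dominates the running time and is the step that hybrid implementations spread across CPU threads and GPUs. The code below is an illustrative toy under simplifying assumptions (b >= 0, bounded problem, no degeneracy handling), not the paper's implementation.

```python
import numpy as np

def simplex_max(c, A, b):
    """Dense standard simplex for: max c@x  s.t.  A@x <= b, x >= 0,
    assuming b >= 0 and a bounded optimum (toy version, no safeguards)."""
    m, n = A.shape
    # Tableau with slack variables; last row holds the negated objective.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    while True:
        col = np.argmin(T[-1, :-1])
        if T[-1, col] >= -1e-9:      # no negative reduced cost: optimal
            break
        # Ratio test selects the leaving row.
        ratios = np.where(T[:m, col] > 1e-9, T[:m, -1] / T[:m, col], np.inf)
        row = np.argmin(ratios)
        T[row] /= T[row, col]        # normalise the pivot row
        # Rank-1 update of all other rows: the embarrassingly parallel
        # step that GPU/CPU hybrid schemes distribute.
        mask = np.arange(m + 1) != row
        T[mask] -= np.outer(T[mask, col], T[row])
    return T[-1, -1]

# Toy LP: maximise 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
opt = simplex_max(np.array([3.0, 2.0]),
                  np.array([[1.0, 1.0], [1.0, 3.0]]),
                  np.array([4.0, 6.0]))
print(opt)
```

Each pivot updates every tableau row independently of the others, which is why the update scales well when offloaded to GPUs or split across OpenMP threads.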