Massively Parallel Computation Using Graphics Processors with Application to Optimal Experimentation in Dynamic Control
The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields. At the same time, a number of important dynamic optimal policy problems in economics are starved of the computing power needed to overcome the dual curses of complexity and dimensionality. We investigate whether computational economics may benefit from these new tools through a case study of an imperfect-information dynamic programming problem with a learning-and-experimentation trade-off, that is, a choice between controlling the policy target and learning the system parameters. Specifically, we use a model of active learning and control of a linear autoregression with unknown slope that has appeared in a variety of macroeconomic policy and other contexts. The endogeneity of posterior beliefs makes the problem difficult in that the value function need not be convex and the policy function need not be continuous. This complication makes the problem a suitable target for massively parallel computation using graphics processors. Our findings are cautiously optimistic in that the new tools let us easily achieve a factor-of-15 performance gain relative to an implementation targeting single-core processors, and thus establish a better reference point on the computational speed vs. coding complexity trade-off frontier. While further gains and wider applicability may lie behind a steep learning barrier, we argue that the future of many computations belongs to parallel algorithms anyway.
Keywords: Graphics Processing Units, CUDA programming, Dynamic programming, Learning, Experimentation
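The data-parallel structure this abstract alludes to can be sketched in miniature: in value iteration, the Bellman update at each grid point is independent of every other point, which is exactly what GPU hardware exploits. The toy problem below is an illustration only, not the paper's model (which has an unknown slope and endogenous beliefs): a quadratic-loss control of a known linear law of motion, with every name and parameter invented for the sketch, using NumPy vectorization as a stand-in for GPU parallelism.

```python
import numpy as np

# Hypothetical toy problem, NOT the paper's model: quadratic loss, known slope.
beta = 0.95                               # discount factor
states = np.linspace(-2.0, 2.0, 401)      # discretized state grid
actions = np.linspace(-1.0, 1.0, 101)     # discretized control grid

def reward(s, a):
    # quadratic loss around a policy target of zero
    return -(s**2 + 0.1 * a**2)

def next_state(s, a):
    # known linear law of motion, clipped to the grid
    return np.clip(0.9 * s + a, states[0], states[-1])

V = np.zeros_like(states)
for _ in range(500):
    S = states[:, None]                   # shape (n_states, 1)
    A = actions[None, :]                  # shape (1, n_actions)
    # crude nearest-above grid lookup for the continuation value
    idx = np.clip(np.searchsorted(states, next_state(S, A)), 0, len(states) - 1)
    # the whole (n_states, n_actions) table is updated at once: every grid
    # point's Bellman update is independent, hence trivially parallelizable
    Q = reward(S, A) + beta * V[idx]
    V_new = Q.max(axis=1)                 # maximize over actions per state
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new
```

On a GPU, the `Q` table would be computed by one thread per (state, action) pair; the serial bottleneck is only the outer fixed-point loop.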
Financial simulations on a massively parallel connection machine
Includes bibliographical references (p. 23-24). By James M. Hutchinson & Stavros A. Zenios
A Fast Causal Profiler for Task Parallel Programs
This paper proposes TASKPROF, a profiler that identifies parallelism
bottlenecks in task parallel programs. It leverages the structure of a task
parallel execution to perform fine-grained attribution of work to various parts
of the program. TASKPROF's use of hardware performance counters to perform
fine-grained measurements minimizes perturbation. TASKPROF's profile execution
runs in parallel using multi-cores. TASKPROF's causal profile enables users to
estimate improvements in parallelism when a region of code is optimized even
when concrete optimizations are not yet known. We have used TASKPROF to isolate
parallelism bottlenecks in twenty-three applications that use the Intel
Threading Building Blocks library. We have designed parallelization techniques
in five applications to increase parallelism by an order of magnitude using
TASKPROF. Our user study indicates that developers are able to isolate
performance bottlenecks with ease using TASKPROF.
Comment: 11 pages
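The parallelism estimates such a profiler reports can be illustrated with the classic work/span model: total work divided by critical-path length (span) bounds the achievable speedup, and a causal "what-if" rescales one region's work before recomputing. The sketch below is a hypothetical miniature, not TASKPROF's implementation (which attributes hardware-counter measurements to the task structure); all task sizes are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    work: float                  # serial work inside this task (e.g., cycles)
    children: list = field(default_factory=list)  # tasks spawned in parallel

def work(t):
    # total work: everything executed, summed over the whole tree
    return t.work + sum(work(c) for c in t.children)

def span(t):
    # critical path: this task plus the longest parallel child chain
    return t.work + (max(span(c) for c in t.children) if t.children else 0.0)

# toy task tree: root does 10 units, then spawns two parallel subtrees
tree = Task(10, [Task(100), Task(40, [Task(40)])])
W, S = work(tree), span(tree)          # W = 190, S = 10 + max(100, 80) = 110
parallelism = W / S                    # upper bound on speedup

# causal "what-if": suppose the 100-unit task could be optimized to 50 units;
# recompute without needing the concrete optimization to exist yet
tree.children[0].work = 50
W2, S2 = work(tree), span(tree)        # W2 = 140, S2 = 10 + max(50, 80) = 90
```

Here the what-if shows the critical path shifting to the other subtree, which is exactly the kind of insight a causal profile provides before any code is rewritten.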
Computational Complexity and Parallelization in Bayesian Econometric Analysis
Challenging statements have appeared in recent years in the literature on advances in computational procedures. [...]
A general framework for pricing Asian options under stochastic volatility on parallel architectures
In this paper, we present a transform-based algorithm for pricing discretely monitored arithmetic Asian options with remarkable accuracy in a general stochastic volatility framework, including affine models and time-changed Lévy processes. The accuracy is justified both theoretically and experimentally. In addition, to speed up the valuation process, we employ high-performance computing technologies. More specifically, we develop a parallel option pricing system that can be easily reproduced on parallel computers, including clusters of personal computers. Numerical results showing the accuracy, speed and efficiency of the procedure are reported in the paper.
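To illustrate why this kind of valuation parallelizes so naturally (the paper's method is transform-based, not Monte Carlo, so this is a stand-in sketch only), the snippet below prices a discretely monitored arithmetic Asian call under plain Black-Scholes dynamics by simulating independent paths; every symbol and parameter value here is an assumption for illustration.

```python
import numpy as np

# Hedged illustration: arithmetic Asian call by Monte Carlo under Black-Scholes,
# NOT the paper's transform-based stochastic-volatility algorithm.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # made-up contract/market data
n_obs, n_paths = 12, 200_000                        # monthly monitoring dates
dt = T / n_obs

# simulate all monitored prices at once; each path is independent work,
# so paths can be distributed across cores or cluster nodes with no coupling
z = rng.standard_normal((n_paths, n_obs))
log_steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_steps, axis=1))       # prices at monitoring dates
avg = S.mean(axis=1)                                # arithmetic average per path
payoff = np.maximum(avg - K, 0.0)
price = np.exp(-r * T) * payoff.mean()              # discounted expected payoff
```

Because paths never interact, the only cross-worker step in a distributed version is averaging the per-worker payoff means, which is why such pricers map cleanly onto clusters of personal computers.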