A Parallel Dual Fast Gradient Method for MPC Applications
We propose a parallel adaptive constraint-tightening approach to solve a
linear model predictive control problem for discrete-time systems, based on
inexact numerical optimization algorithms and operator splitting methods. The
underlying algorithm first splits the original problem into as many independent
subproblems as the length of the prediction horizon. Then, our algorithm
computes a solution for these subproblems in parallel by exploiting auxiliary
tightened subproblems in order to certify the control law in terms of
suboptimality and recursive feasibility, along with closed-loop stability of
the controlled system. Compared to prior approaches based on constraint
tightening, our algorithm computes the tightening parameter for each subproblem
to handle the propagation of errors introduced by the parallelization of the
original problem. Our simulations show the computational benefits of the
parallelization with positive impacts on performance and numerical conditioning
when compared with a recent nonparallel adaptive tightening scheme.

Comment: This technical report is an extended version of the paper "A Parallel Dual Fast Gradient Method for MPC Applications" by the same authors, submitted to the 54th IEEE Conference on Decision and Control.
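The splitting idea in the abstract can be sketched as follows. This is a hedged, purely illustrative toy, not the authors' method: each stage of the horizon becomes an independent scalar box-constrained quadratic subproblem, solved in parallel by projected gradient steps with a stage-dependent tightening margin `eps`. All costs, bounds, and margins here are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): split a
# horizon-length problem into per-stage subproblems and solve them in
# parallel under tightened constraints.
from concurrent.futures import ThreadPoolExecutor

def solve_stage(args):
    """Projected gradient on min 0.5*q*u^2 + c*u  s.t. |u| <= bound - eps,
    where eps is this stage's constraint-tightening margin."""
    q, c, bound, eps = args
    b = bound - eps                      # tightened constraint
    u, step = 0.0, 1.0 / q               # step size for the scalar quadratic
    for _ in range(100):
        u = u - step * (q * u + c)       # gradient step
        u = max(-b, min(b, u))           # projection onto the tightened box
    return u

def parallel_mpc(stages):
    """Solve all per-stage subproblems concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(solve_stage, stages))

# Horizon of 4 stages; the margin grows with the stage index, mimicking
# how tightening absorbs error propagation from the splitting
# (purely illustrative values).
stages = [(2.0, -3.0, 1.0, 0.05 * k) for k in range(4)]
u_opt = parallel_mpc(stages)
```

Each subproblem clips its unconstrained minimizer (here 1.5) to its own tightened bound, so later stages, which carry a larger margin, end up with more conservative inputs.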
Reducing Parallel Overheads Through Dynamic Serialization
If parallelism can be successfully exploited in a program, significant reductions in execution time can be achieved. However, if sections of the code are dominated by parallel overheads, the overall program performance can degrade. We propose a framework, based on an inspector-executor model, for identifying loops that are dominated by parallel overheads and dynamically serializing these loops. We implement this framework in the Polaris parallelizing compiler and evaluate two portable methods for classifying loops as profitable or unprofitable. We show that for six benchmark programs from the Perfect Club and SPEC 95 suites, parallel program execution times can be improved by as much as 85% on 16 processors of an Origin 2000.

1 Introduction

Identifying parallelism in a program is only a first step in generating efficient parallel code. In a previous study, we have found that even well-structured parallel applications, on current shared-memory machines, may run slower than their serial..
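The inspector-executor scheme described above can be sketched in a few lines. This is a hedged toy, not the Polaris implementation: the first invocation of a loop times both its serial and parallel versions (the inspector), and all later invocations dispatch to whichever was cheaper (the executor). The loop body, worker count, and profitability test are all illustrative assumptions.

```python
# Hedged sketch of dynamic serialization via an inspector-executor model:
# measure parallel overheads once, then serialize the loop if they dominate.
import time
from concurrent.futures import ThreadPoolExecutor

class DynamicLoop:
    def __init__(self, body, n_workers=4):
        self.body = body        # body(i) -> result for iteration i
        self.pool = ThreadPoolExecutor(n_workers)
        self.mode = None        # decided by the inspector on first run

    def _serial(self, n):
        return [self.body(i) for i in range(n)]

    def _parallel(self, n):
        return list(self.pool.map(self.body, range(n)))

    def run(self, n):
        if self.mode is None:   # inspector: time both versions once
            t0 = time.perf_counter()
            result = self._serial(n)
            t1 = time.perf_counter()
            self._parallel(n)
            t2 = time.perf_counter()
            # Serialize the loop if parallel overheads dominate.
            self.mode = "serial" if (t2 - t1) >= (t1 - t0) else "parallel"
            return result
        # executor: dispatch to the version the inspector chose
        return self._serial(n) if self.mode == "serial" else self._parallel(n)

loop = DynamicLoop(lambda i: i * i)
results = loop.run(1000)        # first call inspects; later calls reuse the decision
```

For a body this cheap, thread-dispatch overhead typically dominates, so the inspector tends to lock the loop into serial execution on later calls.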