We describe a convergence acceleration technique for unconstrained
optimization problems. Our scheme computes estimates of the optimum from a
nonlinear average of the iterates produced by any optimization method. The
weights in this average are computed via a simple linear system, whose solution
can be updated online. The acceleration scheme runs in parallel with the base
algorithm, providing improved estimates of the solution on the fly while the
original optimization method is running. We detail numerical experiments on
classical classification problems.
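The abstract does not spell out the linear system, so the sketch below is only one common instance of this idea: an Anderson-type extrapolation in which the weights minimize the norm of a combination of successive iterate differences, with a small Tikhonov term (here `lam`, an assumed value) for stability. The function name `accelerate` and the toy quadratic are illustrative choices, not the authors' code.

```python
import numpy as np

def accelerate(X, lam=1e-10):
    """Combine iterates x_0..x_k (rows of X) into a single estimate.

    The weights solve a small regularized linear system built from
    successive differences of the iterates; since each new iterate only
    adds one row/column to the system, the solution can be updated online.
    """
    R = np.diff(X, axis=0)                 # residuals r_i = x_{i+1} - x_i
    RR = R @ R.T                           # k-by-k Gram matrix of residuals
    RR = RR / np.linalg.norm(RR)           # rescale for numerical stability
    k = RR.shape[0]
    c = np.linalg.solve(RR + lam * np.eye(k), np.ones(k))
    c = c / c.sum()                        # normalize weights to sum to one
    return c @ X[:-1]                      # weighted (nonlinear) average

# Toy check: plain gradient descent on a strongly convex quadratic
# f(x) = 0.5 * x^T A x, whose minimizer is x* = 0.
A = np.diag(np.linspace(1.0, 10.0, 5))     # Hessian with condition number 10
x = np.ones(5)
X = [x]
for _ in range(6):                         # a few gradient steps, step 1/L
    x = x - (1.0 / 10.0) * (A @ x)
    X.append(x)
X = np.array(X)

err_gd = np.linalg.norm(X[-1])             # error of the last plain iterate
err_acc = np.linalg.norm(accelerate(X))    # error of the accelerated estimate
```

On this quadratic the accelerated estimate is far closer to the optimum than the last gradient-descent iterate, illustrating the "improved estimates on the fly" claim: the extrapolation reuses iterates the base method has already produced, at the cost of one small linear solve.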