We consider the weak convergence of numerical methods for stochastic
differential equations (SDEs). Weak convergence is usually expressed in terms
of the convergence of expected values of test functions of the trajectories.
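To fix ideas, recall the usual formulation (the symbols below are generic and not the paper's own notation): a method with step size $h$ converges weakly with order $\beta$ if, for every test function $\varphi$ in a suitable class, typically smooth with derivatives of polynomial growth,
\[
\bigl| \mathbb{E}[\varphi(X_T)] - \mathbb{E}[\varphi(\bar X^h_T)] \bigr| \le C\, h^{\beta},
\]
where $X_T$ is the exact solution at the final time $T$, $\bar X^h_T$ is the numerical approximation, and the constant $C$ may depend on $\varphi$ but not on $h$.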
Here we present an alternative formulation of weak convergence in terms of the
well-known Prokhorov metric on spaces of random variables. For a general class
of methods, we establish bounds on the rate of convergence in this metric.
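For reference, and in notation introduced only here, the Prokhorov metric between probability measures $\mu$ and $\nu$ on a metric space $(S,d)$ is
\[
\pi(\mu,\nu) = \inf\bigl\{ \varepsilon > 0 : \mu(A) \le \nu(A^{\varepsilon}) + \varepsilon \ \text{and} \ \nu(A) \le \mu(A^{\varepsilon}) + \varepsilon \ \text{for all Borel } A \subseteq S \bigr\},
\]
where $A^{\varepsilon}$ denotes the open $\varepsilon$-neighbourhood of $A$; for random variables it is applied to their laws.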
In doing so, we revisit the original proofs of weak convergence and show
explicitly how the bounds on the error depend on the smoothness of the test
functions. As an application of our result, we use the
Strassen–Dudley theorem to show that the numerical approximation and the true
solution to the system of SDEs can be re-embedded in a probability space in
such a way that the method converges there in a strong sense. One corollary of
this last result is that the method converges in the Wasserstein distance,
another metric on spaces of random variables. Another corollary establishes
rates of convergence for expected values of test functions assuming only local
Lipschitz continuity. We conclude with a review of the existing results for
pathwise convergence of weakly converging methods and the corresponding strong
results available under re-embedding.
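For orientation, recall the two standard facts underlying these corollaries (again in generic notation): the Strassen–Dudley theorem asserts, roughly, that $\pi(\mu,\nu) \le \varepsilon$ precisely when $\mu$ and $\nu$ admit a coupling $(X,Y)$ on a common probability space with $\mathbb{P}\bigl(d(X,Y) > \varepsilon\bigr) \le \varepsilon$, and the Wasserstein distance of order one is
\[
W_1(\mu,\nu) = \inf_{(X,Y)} \mathbb{E}\bigl[d(X,Y)\bigr],
\]
the infimum taken over all couplings $(X,Y)$ of $\mu$ and $\nu$ with finite first moments.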