In an optimal control framework, we consider the value $V_T(x)$ of the
problem with finite horizon $T$ starting from state $x$, as well as the value
$V_\lambda(x)$ of the $\lambda$-discounted problem starting from $x$. We prove
that uniform convergence (on the set of states) of the values $V_T(\cdot)$ as
$T$ tends to infinity is equivalent to uniform convergence of the values
$V_\lambda(\cdot)$ as $\lambda$ tends to $0$, and that the limits are identical.
An example is also provided to show that the result does not hold for pointwise
convergence. This work is an extension, using similar techniques, of a related
result in a discrete-time framework \cite{LehSys}.
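For concreteness, the equivalence can be sketched as follows. The normalizations below (Cesàro and Abel means of a running payoff $g$, with $u$ ranging over admissible controls) are the standard ones but are an assumption here, since the abstract does not define them:

```latex
% Assumed normalizations (not given in the abstract):
% finite-horizon value as a Cesàro mean,
%   V_T(x) = \sup_u \frac{1}{T} \int_0^T g(x(t), u(t)) \, dt,
% discounted value as an Abel mean,
%   V_\lambda(x) = \sup_u \lambda \int_0^\infty e^{-\lambda t} g(x(t), u(t)) \, dt.
% The result then reads: V_T converges uniformly on the state space
% as T \to \infty if and only if V_\lambda converges uniformly as
% \lambda \to 0, in which case
\lim_{T \to \infty} V_T(\cdot) \;=\; \lim_{\lambda \to 0} V_\lambda(\cdot)
\quad \text{uniformly on the set of states.}
```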