This paper considers a new class of deterministic finite-time horizon,
two-player, zero-sum differential games (DGs) in which the maximizing player
may use both continuous and impulse controls, whereas the minimizing player
may use impulse controls only. We seek to approximate the value
function, and to provide a verification theorem, for this class of DGs. We
first characterize, by means of the dynamic programming principle (DPP) in the
viscosity solution (VS) framework, the value function as the unique VS of the
related Hamilton-Jacobi-Bellman-Isaacs (HJBI) double-obstacle equation. Next, we prove
that an approximate value function exists, that it is the unique solution to an
approximate HJBI double-obstacle equation, and that it converges locally
uniformly to the value function of each player as the time discretization step
goes to zero. Moreover, we provide a verification theorem that characterizes a
Nash equilibrium for the DG control problem considered. Finally, by applying
our results, we derive a new continuous-time portfolio optimization model, and
we provide related computational algorithms.