Turn-based stochastic games and their important subclass, Markov decision
processes (MDPs), provide models for systems with both probabilistic and
nondeterministic behaviors. We consider turn-based stochastic games with two
classical quantitative objectives: discounted-sum and long-run average
objectives.
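Concretely, using standard notation that the abstract does not itself fix: for
a play $\omega = s_0 s_1 s_2 \ldots$ with per-step reward $r(s_i)$ and discount
factor $\lambda \in (0,1)$, the two objectives assign to $\omega$ the values
\[
  \mathrm{Disc}_\lambda(\omega) \;=\; \sum_{i \ge 0} \lambda^i \, r(s_i),
  \qquad
  \mathrm{Avg}(\omega) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} r(s_i),
\]
where $\liminf$ is one common convention for the long-run average.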
The game models and the quantitative objectives are widely used in
probabilistic verification, planning, optimal inventory control, and network
protocol and performance analysis. Games and MDPs that model realistic systems
often have very large state spaces, and probabilistic abstraction techniques
are necessary to handle the state-space explosion. The commonly used
full-abstraction techniques do not yield space savings for systems that have
many states with similar values but not necessarily similar transition
structure. Magnifying-lens abstraction (MLA), a semi-abstraction technique
that clusters states based only on their values, disregarding differences in
their transition relations, was proposed for qualitative objectives
(reachability and safety). In this paper we extend the
MLA technique to solve stochastic games with discounted-sum and long-run
average objectives. We present an MLA-based abstraction-refinement algorithm
for stochastic games and MDPs with discounted-sum objectives. For long-run
average objectives, our solution works for all MDPs and for a subclass of
stochastic games in which every state has the same value.
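To make the MLA idea concrete, here is a minimal, illustrative sketch of
MLA-style value iteration for discounted-sum objectives on an MDP. It is not
the algorithm of this paper: the interface (mla_discounted, trans, reward,
regions) is hypothetical, rewards are assumed nonnegative, and the refinement
step is only indicated in a comment.

```python
def mla_discounted(states, actions, trans, reward, regions,
                   gamma=0.9, eps=1e-3, sweeps=50, inner_iters=100):
    """Sketch of magnifying-lens value iteration for discounted MDPs.

    trans[s][a]  -- list of (probability, successor) pairs
    reward[s][a] -- immediate reward (assumed nonnegative here)
    regions      -- a partition of `states` into lists of states
    Returns per-region lower and upper bounds on the optimal values.
    """
    region_of = {s: i for i, r in enumerate(regions) for s in r}
    r_max = max(reward[s][a] for s in states for a in actions)
    lo = {i: 0.0 for i in range(len(regions))}                  # sound lower bounds
    hi = {i: r_max / (1 - gamma) for i in range(len(regions))}  # sound upper bounds

    def magnify(i, bound):
        # Concrete value iteration inside region i only; successors outside
        # the region are approximated by their region-level bound. Since the
        # Bellman operator is monotone, iterating from sound bounds keeps
        # the result a sound bound.
        v = {s: bound[i] for s in regions[i]}
        for _ in range(inner_iters):
            for s in regions[i]:
                v[s] = max(
                    reward[s][a] + gamma * sum(
                        p * (v[t] if region_of[t] == i else bound[region_of[t]])
                        for p, t in trans[s][a])
                    for a in actions)
        return v

    for _ in range(sweeps):
        for i in range(len(regions)):
            lo[i] = max(lo[i], min(magnify(i, lo).values()))  # tighten from below
            hi[i] = min(hi[i], max(magnify(i, hi).values()))  # tighten from above
        if max(hi[i] - lo[i] for i in lo) <= eps:
            break
        # A full MLA implementation would now refine: split every region
        # whose gap hi[i] - lo[i] still exceeds eps and repeat; omitted here.
    return lo, hi
```

The point of the region-level bounds is that only one region's concrete
states need to be held in memory at a time, which is where the space savings
over full value iteration come from.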