Cloud gaming is a multi-billion dollar industry. In cloud gaming, a client
sends the player's movements to a game server on the Internet, which renders
the game and transmits the resulting video back. To provide a good gaming
experience, end-to-end latency must stay below 80 ms. This means that video
rendering, encoding, transmission, decoding, and display must all finish within
that time budget, which is especially challenging to achieve under server
overload, network congestion, and packet losses. In this paper, we propose a new method for
recovering lost or corrupted video frames in cloud gaming. Unlike traditional
video frame recovery, our approach uses game states to significantly enhance
recovery accuracy and utilizes partially decoded frames to recover lost
portions. We develop a holistic system that consists of (i) efficiently
extracting game states, (ii) modifying the H.264 video decoder to generate a mask
that indicates which portions of a video frame need recovery, and (iii) designing a
novel neural network to recover either complete or partial video frames. Our
approach is extensively evaluated using iPhone 12 and laptop implementations,
and we demonstrate the utility of game states for game video recovery and
the effectiveness of our overall design.
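
As a rough illustration of the mask-guided recovery idea summarized above (not the paper's actual architecture), the following PyTorch sketch combines a partially decoded frame, a decoder-produced loss mask, and rasterized game-state features, and lets a small convolutional network fill in only the masked regions. The class name, channel counts, and layer sizes are illustrative assumptions.

# A minimal sketch, assuming a binary loss mask from the decoder and
# game-state hints rasterized to per-pixel feature channels; all names
# and sizes below are hypothetical.
import torch
import torch.nn as nn

class MaskGuidedRecoveryNet(nn.Module):
    def __init__(self, state_channels=4):
        super().__init__()
        # Input channels: 3 (partial RGB frame) + 1 (loss mask) + state_channels.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + state_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, partial_frame, loss_mask, state_features):
        # partial_frame: (N, 3, H, W); loss_mask: (N, 1, H, W), 1 = lost pixel;
        # state_features: (N, C, H, W) game-state hints.
        x = torch.cat([partial_frame, loss_mask, state_features], dim=1)
        predicted = self.net(x)
        # Keep correctly decoded pixels; only lost regions come from the network.
        return loss_mask * predicted + (1.0 - loss_mask) * partial_frame

# Example usage with random tensors standing in for real decoder output.
if __name__ == "__main__":
    net = MaskGuidedRecoveryNet()
    frame = torch.rand(1, 3, 144, 256)
    mask = (torch.rand(1, 1, 144, 256) > 0.9).float()
    state = torch.rand(1, 4, 144, 256)
    recovered = net(frame, mask, state)
    print(recovered.shape)  # torch.Size([1, 3, 144, 256])

The key design point this sketch captures is that intact pixels are passed through unchanged and only the regions flagged by the decoder mask are synthesized.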