Joint source-network encoding of object-based video is an important and challenging research topic that has not been adequately explored. In this paper, we propose a robust network-adaptive encoding approach for object-based video. The framework jointly considers source coding, packet loss during transmission, and error concealment at the decoder. The proposed method guarantees the minimum expected distortion for the decoded video by optimally allocating the shape and texture coding parameters at the encoder. The resulting optimization problem is solved by Lagrangian relaxation and dynamic programming. Experimental results demonstrate that the proposed method achieves significant gains over the non-network-adaptive method.
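As a rough illustration of the Lagrangian-relaxation step, the sketch below shows the standard discrete bit-allocation pattern: each object contributes a set of operating points (rate, expected distortion), and for a given multiplier λ each object independently picks the point minimizing D + λR; sweeping λ traces out allocations meeting the rate budget. This is a simplified, hypothetical sketch (the function name `allocate` and the two-column operating-point format are illustrative assumptions), not the paper's full formulation, which also models packet loss and error concealment inside the expected-distortion term.

```python
# Hypothetical sketch of Lagrangian bit allocation for per-object
# coding-parameter selection. Each object is a list of operating
# points (rate, expected_distortion); the expected distortion would,
# in the paper's setting, already account for packet loss and
# concealment at the decoder.

def allocate(objects, rate_budget, lambdas):
    """Sweep the Lagrange multiplier and keep the feasible allocation
    with the smallest total expected distortion."""
    best = None  # (total_distortion, total_rate, per-object choices)
    for lam in lambdas:
        total_rate = 0.0
        total_dist = 0.0
        choice = []
        for points in objects:
            # Per-object minimization of the Lagrangian cost D + lam * R.
            r, d = min(points, key=lambda p: p[1] + lam * p[0])
            total_rate += r
            total_dist += d
            choice.append((r, d))
        # Keep only allocations that satisfy the rate budget.
        if total_rate <= rate_budget and (best is None or total_dist < best[0]):
            best = (total_dist, total_rate, choice)
    return best


# Toy usage: two objects, each with a few (rate, distortion) points.
objects = [[(1, 10), (2, 6), (4, 3)], [(1, 8), (3, 4)]]
best = allocate(objects, rate_budget=5, lambdas=[0.5, 1.0, 1.5, 2.0, 5.0])
```

In practice the per-object minimization would be replaced by a dynamic program over shape and texture parameters, as in the paper, rather than an exhaustive scan of a small point list.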