Style-Consistent Video Augmentation that Maintains Visual Style and Subject Identity

Abstract

A media platform can receive client data from a client device, including an image of a user's face. A model framework can generate input data from the client data and provide the input data to a model trained to generate animation frames as output. The model provides the generated animation frames back to the model framework, which may perform one or more postprocessing operations on them to produce animation data. The animation data is then returned to the client device.
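The abstract describes a three-stage server-side pipeline: preprocessing client data into model input, running the trained model to generate frames, and postprocessing the frames into animation data for the client. The disclosure does not specify an implementation, so the following is only an illustrative sketch in Python; every function and field name (`preprocess`, `run_model`, `postprocess`, `handle_request`, the dictionary keys) is a hypothetical placeholder, and `run_model` stands in for the trained animation model.

```python
# Illustrative sketch of the pipeline described in the abstract.
# All names are hypothetical placeholders, not from the disclosure.

def preprocess(client_data):
    # Model framework step: derive model input from the client data,
    # e.g. the received face image plus any normalization flags.
    return {"face_image": client_data["image"], "normalized": True}

def run_model(input_data):
    # Placeholder for the trained model: consumes the prepared input
    # and returns generated animation frames (here, dummy strings).
    return [f"frame_{i}" for i in range(3)]

def postprocess(frames):
    # Model framework step: package the generated frames as
    # animation data suitable for sending back to the client.
    return {"frames": frames, "frame_count": len(frames)}

def handle_request(client_data):
    # End-to-end flow: client data -> input data -> frames -> animation data.
    input_data = preprocess(client_data)
    frames = run_model(input_data)
    return postprocess(frames)  # returned to the client device

animation_data = handle_request({"image": "user_face.png"})
```

The split into `preprocess`/`run_model`/`postprocess` mirrors the framework-model-framework round trip in the abstract, keeping the model itself swappable behind a single call.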

This paper was published in Technical Disclosure Commons.

Licence: http://creativecommons.org/licenses/by/4.0/