ISML: an interface specification meta-language
In this paper we present an abstract metaphor model situated within a model-based user interface framework. The inclusion of metaphors in graphical user interfaces is a well-established, but mostly craft-based, design strategy. A substantial body of notations and tools can be found within the model-based user interface design literature; however, an explicit treatment of metaphor and its mappings to other design views has yet to be addressed. We introduce the Interface Specification Meta-Language (ISML) framework and demonstrate its use in comparing the semantic and syntactic features of an interactive system. Challenges facing this research are outlined and further work is proposed.
We never go out of Style: Motion Disentanglement by Subspace Decomposition of Latent Space
Real-world objects perform complex motions that involve multiple independent motion components. For example, while talking, a person continuously changes their expressions, head, and body pose. In this work, we propose a novel method to decompose motion in videos by using a pretrained image GAN model. We discover disentangled motion subspaces in the latent space of widely used style-based GAN models that are semantically meaningful and control a single explainable motion component. The proposed method uses only a few ground-truth video sequences to obtain such subspaces. We extensively evaluate the disentanglement properties of motion subspaces on face and car datasets, quantitatively and qualitatively. Further, we present results for multiple downstream tasks such as motion editing and selective motion transfer, e.g. transferring only facial expressions without training for it.
Comment: AI for content creation, CVPRW-202
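To make the subspace idea concrete, the following is a minimal sketch of one plausible way to recover a motion subspace, assuming latent codes for the frames of a driving video have already been obtained (e.g. via GAN inversion). The function names, array shapes, and the choice of PCA are illustrative assumptions, not the paper's actual method.

    # Hypothetical sketch: finding a low-dimensional motion subspace in a
    # style-based GAN latent space from one inverted driving video.
    import numpy as np
    from sklearn.decomposition import PCA

    def motion_subspace(latents: np.ndarray, n_components: int = 5) -> np.ndarray:
        """latents: (num_frames, latent_dim) codes of one video.

        Centering on the per-video mean removes the static identity and
        appearance component; PCA on the residual then yields a basis
        whose directions can each drive one motion component.
        """
        motion = latents - latents.mean(axis=0, keepdims=True)
        pca = PCA(n_components=n_components)
        pca.fit(motion)
        return pca.components_  # (n_components, latent_dim) subspace basis

    def edit(latent: np.ndarray, basis: np.ndarray, component: int, alpha: float) -> np.ndarray:
        """Move one latent code along a single motion direction (motion editing)."""
        return latent + alpha * basis[component]

Under these assumptions, transferring a single motion component between videos amounts to projecting the source video's motion onto one basis direction and adding it to the target's latent codes.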
ICface: Interpretable and Controllable Face Reenactment Using GANs
This paper presents a generic face animator that is able to control the pose and expressions of a given face image. The animation is driven by human-interpretable control signals consisting of head pose angles and Action Unit (AU) values. The control information can be obtained from multiple sources, including external driving videos and manual controls. Due to the interpretable nature of the driving signal, one can easily mix the information between multiple sources (e.g. pose from one image and expression from another) and apply selective post-production editing. The proposed face animator is implemented as a two-stage neural network model that is learned in a self-supervised manner using a large video collection. The proposed Interpretable and Controllable face reenactment network (ICface) is compared to state-of-the-art neural network-based face animation techniques in multiple tasks. The results indicate that ICface produces better visual quality while being more versatile than most of the comparison methods. The introduced model could provide a lightweight and easy-to-use tool for a multitude of advanced image and video editing tasks.
Comment: Accepted in WACV-202
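The mixing of driving signals described above can be illustrated with a small sketch. The data-structure layout and field names below are assumptions for illustration, not taken from the ICface code; only the interpretable signal (head pose angles plus AU intensities) comes from the abstract.

    # Minimal sketch of an interpretable driving signal and of mixing it
    # across sources (pose from one source, expressions from another).
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class DrivingSignal:
        head_pose: np.ndarray     # e.g. (3,) yaw, pitch, roll angles
        action_units: np.ndarray  # Action Unit intensity values

    def mix(pose_src: DrivingSignal, expr_src: DrivingSignal) -> DrivingSignal:
        """Combine head pose from one source with expressions from another."""
        return DrivingSignal(head_pose=pose_src.head_pose.copy(),
                             action_units=expr_src.action_units.copy())

    # A (hypothetical) animator network would then consume the mixed signal:
    # frame = animator(source_image, mix(signal_a, signal_b))

Because the signal is a plain vector of angles and AU values rather than an opaque embedding, selective post-production edits reduce to overwriting individual fields before rendering.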
- …