ICface: Interpretable and Controllable Face Reenactment Using GANs
This paper presents a generic face animator that is able to control the pose
and expressions of a given face image. The animation is driven by human
interpretable control signals consisting of head pose angles and the Action
Unit (AU) values. The control information can be obtained from multiple sources
including external driving videos and manual controls. Due to the interpretable
nature of the driving signal, one can easily mix the information between
multiple sources (e.g. pose from one image and expression from another) and
apply selective post-production editing. The proposed face animator is
implemented as a two-stage neural network model that is learned in a
self-supervised manner using a large video collection. The proposed
Interpretable and Controllable face reenactment network (ICface) is compared to
the state-of-the-art neural network-based face animation techniques in multiple
tasks. The results indicate that ICface produces better visual quality while
being more versatile than most of the comparison methods. The introduced model
could provide a lightweight and easy-to-use tool for a multitude of advanced
image and video editing tasks.
Comment: Accepted in WACV-2020
The shudder of a cinephiliac idea? Videographic film studies practice as material thinking
Long after the advent of the digital era, while most university-based film studies academics still choose to publish their critical, theoretical and historical research in conventional written formats, a small but growing number of scholars working on the moving image have begun to explore the online publication possibilities of the digital video essay. This multimedia form has come to prominence in recent years in much Internet-based cinephile and film-critical culture. In this article, I will consider, above all from a personal perspective looking back at two of the sixty or so videos that I have made, some of the possibilities that these processes offer for the production of new knowledge, forged out of the conjunction of the film object(s) to be studied, digital technologies of reproduction and editing tools, and the facticity of the researcher(s). I will argue that digital video is usefully seen not only as a promising communicative tool with different affordances than those of written text, but also as an important emergent cultural and phenomenological field for the creative practice of our work as film scholars.
Using film cutting in interface design
It has been suggested that computer interfaces could be made more usable if their designers utilized cinematography techniques, which have evolved to guide
the viewer through a narrative despite frequent discontinuities in the presented scene (i.e., cuts between shots). Because of differences between the domains of
film and interface design, it is not straightforward to understand how such techniques can be transferred. May and Barnard (1995) argued that a psychological
model of watching film could support such a transference. This article presents an extended account of this model, which allows identification of the practice of collocation
of objects of interest in the same screen position before and after a cut. To verify that filmmakers do, in fact, use such techniques successfully, eye movements
were measured while participants watched the entirety of a commercially…
Movie Editing and Cognitive Event Segmentation in Virtual Reality Video
Traditional cinematography has relied for over a century on a
well-established set of editing rules, called continuity editing, to create a
sense of situational continuity. Despite massive changes in visual content
across cuts, viewers in general experience no trouble perceiving the
discontinuous flow of information as a coherent set of events. However, Virtual
Reality (VR) movies are intrinsically different from traditional movies in that
the viewer controls the camera orientation at all times. As a consequence,
common editing techniques that rely on camera orientations, zooms, etc., cannot
be used. In this paper we investigate key relevant questions to understand how
well traditional movie editing carries over to VR. To do so, we rely on recent
cognition studies and the event segmentation theory, which states that our
brains segment continuous actions into a series of discrete, meaningful events.
We first replicate one of these studies to assess whether the predictions of
such theory can be applied to VR. We next gather gaze data from viewers
watching VR videos containing different edits with varying parameters, and
provide the first systematic analysis of viewers' behavior and the perception
of continuity in VR. From this analysis we make a series of relevant findings;
for instance, our data suggests that predictions from the cognitive event
segmentation theory are useful guides for VR editing; that different types of
edits are equally well understood in terms of continuity; and that spatial
misalignments between regions of interest at the edit boundaries favor a more
exploratory behavior even after viewers have fixated on a new region of
interest. In addition, we propose a number of metrics to describe viewers'
attentional behavior in VR. We believe the insights derived from our work can
be useful as guidelines for VR content creation.