The Evolution of Stop-motion Animation Technique Through 120 Years of Technological Innovations
Stop-motion animation history has been put on paper by several scholars and practitioners who have tried to organize 120 years of technological innovations and material experiments, drawing on a vast literature. Bruce Holman (1975), Neil Pettigrew (1999), Ken Priebe (2010), Stefano Bessoni (2014), and more recently Adrián Encinas Salamanca (2017), provided the most detailed, even though partial, attempts at systematization, and designed historical reconstructions organized by specific periods of time, film lengths, or the use of stop-motion as a special effect rather than as an animation technique. This article provides another partial historical reconstruction of the evolution of stop-motion and outlines the main events in the development of this technique, following criteria based on the innovations in the technology of materials and manufacturing processes that have influenced the fabrication of puppets up to the present day. The systematization follows a chronological order and takes into account events that changed the puppet manufacturing process through the adoption of either new fabrication processes or new materials. Starting from the accident that led the French film pioneer Georges Méliès to discover the trick of the replacement technique at the end of the nineteenth century, the reconstruction traces 120 years of experiments and films. Among the main events considered are the "build-up" puppets fabricated by the Russian puppet animator Ladislaw Starevicz with insect exoskeletons, the use of clay puppets, the innovations introduced by LAIKA Entertainment in the last decade, such as stereoscopic photography and 3D-printed replacement pieces, and the increasing influence of digital technologies on the process of puppet fabrication.
Technology transfers, new material features, and innovations in the way puppets are animated are the main aspects through which this historical analysis approaches the events mentioned above. This short analysis aims to show that stop-motion animation is an interdisciplinary occasion for both artistic expression and technological experimentation, and that its evolution and aesthetics are tied to cultural, geographical, and technological factors. Lastly, if the technology of materials and processes is a constantly evolving field, what future can be expected for this cinematographic technique? The article closes with this open question and, without providing an answer, implicitly frames stop-motion as a driving force for innovations that come from other fields and are incentivized by the needs of this specific sector.
Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses
This paper describes an experiment developed to study the performance of animated virtual-agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces, measuring the efficiency of agent cues by analyzing participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped two-image agent cues, and 42% faster than with a static one-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, although the differences between conditions were smaller. Responses to the fully animated agent were 17% and 20% faster than to the two-image and one-image cues, respectively.
These results inform techniques aimed at engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
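The two staple acceleration ideas the survey mentions, runtime LoD selection and frustum culling, can be sketched minimally as follows. The `Character` type, the distance thresholds, and the cone-based visibility test are illustrative assumptions for a 2D toy scene, not the method of any particular surveyed system (production renderers typically select LoD from projected screen-space size rather than raw distance).

```python
import math
from dataclasses import dataclass

@dataclass
class Character:
    x: float  # world position (2D plane for simplicity)
    y: float

def select_lod(cam_x, cam_y, char, thresholds=(10.0, 30.0, 80.0)):
    """Pick a level of detail from camera distance.

    Returns 0 (full polygonal mesh) up to 3 (image-based impostor)
    using hypothetical distance thresholds."""
    d = math.hypot(char.x - cam_x, char.y - cam_y)
    for lod, limit in enumerate(thresholds):
        if d < limit:
            return lod
    return len(thresholds)  # beyond all thresholds: cheapest representation

def in_frustum(cam_x, cam_y, dir_x, dir_y, fov_cos, char):
    """Coarse view-cone culling: keep characters whose direction from
    the camera lies within the half-angle of the view cone."""
    vx, vy = char.x - cam_x, char.y - cam_y
    n = math.hypot(vx, vy)
    if n == 0.0:
        return True  # character at the camera position is trivially kept
    return (vx * dir_x + vy * dir_y) / n >= fov_cos

# cull then classify a tiny crowd seen from the origin looking along +x
crowd = [Character(5, 0), Character(50, 0), Character(-20, 0)]
visible = [c for c in crowd
           if in_frustum(0, 0, 1, 0, math.cos(math.radians(45)), c)]
lods = [select_lod(0, 0, c) for c in visible]
```

In a real crowd renderer the per-character LoD index would then choose between the polygon-based, point-based, or image-based representations the survey classifies.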
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose a robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1
Management and display of four-dimensional environmental data sets using McIDAS
Over the past four years, great strides have been made in the management and display of 4-D meteorological data sets. A survey was conducted of available and planned 4-D meteorological data sources. The data types were evaluated for their impact on the data management and display system. The requirements for the database management generated by the 4-D data display system were analyzed. The suitability of the existing database management procedures and file structure was evaluated in light of the new requirements. Where needed, new database management tools and file procedures were designed and implemented. The quality of the basic 4-D data sets was assured. Interpolation and extrapolation techniques for the 4-D data were investigated. The 4-D data from various sources were combined into a uniform and consistent data set for display purposes. Data display software was designed to create abstract line-graphic 3-D displays. Realistic shaded 3-D displays were created. Animation routines for these displays were developed in order to produce a dynamic 4-D presentation. A prototype dynamic color stereo workstation was implemented. A computer functional design specification was produced based on interactive studies and user feedback.
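The temporal interpolation step mentioned above, combining gridded fields sampled at discrete times into a smooth animation, can be sketched minimally. The grids, the linear blend, and the parameter `t` are illustrative assumptions; the report does not specify which interpolation scheme McIDAS used.

```python
def interp_time(grid_a, grid_b, t):
    """Linearly interpolate between two 2-D grids of a meteorological
    field (e.g. temperature) sampled at consecutive times.

    t = 0.0 returns grid_a, t = 1.0 returns grid_b; intermediate t
    yields the in-between frame used when animating the field."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(grid_a, grid_b)]

# two toy 2x2 frames one time step apart, and the halfway frame
frame0 = [[0.0, 10.0], [20.0, 30.0]]
frame1 = [[10.0, 20.0], [30.0, 40.0]]
halfway = interp_time(frame0, frame1, 0.5)
```

Generating many such in-between frames for small increments of `t` is what turns a handful of observation times into a fluid 4-D presentation.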
View-dependent adaptive cloth simulation
This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows, producing a 2× speed-up for a single character and more than 4× for a small group.
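The core refinement criterion, subdivide only where the cloth is visible and large on screen, can be sketched minimally. The pinhole-projection approximation, the focal length in pixels, and the pixel tolerance are illustrative assumptions; the paper's actual criteria (including its anticipative refinement for a prescribed camera path) are more involved.

```python
def projected_edge_px(edge_len, distance, focal_px):
    """Approximate on-screen size in pixels of a cloth edge under a
    pinhole camera: size ~ focal_length * edge_length / depth."""
    return focal_px * edge_len / max(distance, 1e-6)

def should_refine(edge_len, distance, focal_px, visible, tol_px=4.0):
    """Refine an edge only if it is visible to the camera and its
    projected size exceeds a pixel tolerance; off-screen or distant
    cloth stays coarse, saving simulation effort."""
    return visible and projected_edge_px(edge_len, distance, focal_px) > tol_px

# a 5 cm edge seen from 1 m projects to 40 px and refines;
# the same edge at 20 m (2 px) or off-screen does not
near = should_refine(0.05, 1.0, 800.0, visible=True)
far = should_refine(0.05, 20.0, 800.0, visible=True)
offscreen = should_refine(0.05, 1.0, 800.0, visible=False)
```

Driving an adaptive mesh with a test like this is what makes the savings grow with scene complexity: most characters in a group are far away or off-screen at any given moment.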
A methodology for feature based 3D face modelling from photographs
In this paper, a new approach to modelling 3D faces from 2D images is introduced. 3D faces are created from two photographs, from which we extract the crucial feature lines of the face in two views using image manipulation techniques. These feature lines are then used to modify a template base mesh created in 3D. This base mesh, designed with facial animation in mind, is then subdivided to provide the required level of detail. The methodology, as it stands, is semi-automatic; our goal is to automate the process in order to provide an inexpensive and expedient way of producing realistic face models intended for animation purposes. Thus, we show how image manipulation techniques can be used to create binary images which in turn drive the manipulation of a base mesh that can be adapted to a given facial geometry. To explain our approach more clearly, we discuss a series of examples in which we create the 3D facial geometry of individuals from the corresponding image data.
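The first stage of such a pipeline, turning a photograph into a binary image from which feature lines can be located, can be sketched minimally. The threshold value, the row-profile heuristic, and the toy image are illustrative assumptions; the paper's actual feature-line extraction is not specified in this abstract.

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (list of rows of 0-255 ints) into a
    binary mask: 1 where the pixel is darker than the threshold, which
    is where facial feature lines (brows, eyes, lips) tend to fall."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def feature_profile(binary):
    """Count feature pixels per row; peaks in this profile give
    candidate vertical positions of feature lines in the photograph."""
    return [sum(row) for row in binary]

# toy 4x4 image strip: one dark row stands in for a lip line
gray = [
    [200, 210, 205, 200],
    [ 40,  30,  35,  50],  # dark feature row
    [190, 185, 200, 210],
    [220, 230, 215, 225],
]
mask = binarize(gray)
profile = feature_profile(mask)
```

Feature positions recovered this way from two views can then be matched against landmark vertices on the template base mesh to drive its deformation.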
Perceptual Evaluation of Video-Realistic Speech
With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system, called Mary 101. Two types of experiments were performed: (a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests"), and (b) gauging visual speech recognition by comparing lip-reading performance on the real and synthetic image-sequences of the same utterances ("Intelligibility tests"). Subjects who were presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image-sequences, recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels suggested in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head. However, additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. In addition, these two tasks can be considered explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between the synthetic and the real image-sequences. The implicit perceptual discrimination task (b) consists of a comparison between visual recognition of speech from real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.