
    The Evolution of Stop-motion Animation Technique Through 120 Years of Technological Innovations

    The history of stop-motion animation has been written by several scholars and practitioners who have tried to organize 120 years of technological innovations and material experiments across a vast literature. Bruce Holman (1975), Neil Pettigrew (1999), Ken Priebe (2010), Stefano Bessoni (2014), and more recently Adrián Encinas Salamanca (2017) provided the most detailed, even though partial, attempts at systematization, designing historical reconstructions around specific periods of time, film lengths, or the use of stop-motion as a special effect rather than as an animation technique. This article provides another partial historical reconstruction of the evolution of stop-motion, outlining the main events in the development of the technique according to criteria based on the innovations in the technology of materials and manufacturing processes that have influenced the fabrication of puppets up to the present day. The systematization follows a chronological order and takes into account events that changed the puppet manufacturing process through the use of either new fabrication processes or new materials. Starting from the accident through which the French film pioneer Georges Méliès discovered the replacement trick at the end of the nineteenth century, the reconstruction traces 120 years of experiments and films. Among the main events considered are the “build-up” puppets fabricated by the Russian puppet animator Ladislaw Starevicz from insect exoskeletons, the use of clay puppets, the innovations introduced by LAIKA Entertainment in the last decade, such as stereoscopic photography and 3D-printed replacement pieces, and the growing influence of digital technologies on the process of puppet fabrication. Technology transfers, the properties of new materials, and innovations in the way puppets are animated are the main lenses through which this historical analysis approaches those events. The analysis aims to demonstrate that stop-motion animation is an interdisciplinary occasion for both artistic expression and technological experimentation, and that its evolution and aesthetics are tied to cultural, geographical, and technological factors. Finally, if the technology of materials and processes is a constantly evolving field, what future can be expected for this cinematographic technique? The article ends with this open question; without providing an answer, it implicitly affirms the role of stop-motion as a driving force for innovations that originate in other fields and are incentivized by the needs of this specific sector.

    ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing

    We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image. To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek image realism by operating in the geometric warp parameter space. In particular, we exploit an iterative STN warping scheme and propose a sequential training strategy that achieves better results than naive training of a single generator. One of the key advantages of ST-GAN is its indirect applicability to high-resolution images, since the predicted warp parameters are transferable between reference frames. We demonstrate our approach in two applications: (1) visualizing how indoor furniture (e.g. from product images) might be perceived in a room, and (2) hallucinating how accessories such as glasses would look when matched with real portraits. Comment: Accepted to CVPR 2018 (website & code: https://chenhsuanlin.bitbucket.io/spatial-transformer-GAN/)
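
    A minimal sketch may make the warp-space idea concrete. Below, each generator step looks at the current composite and predicts a residual update to a 2x3 affine warp of the foreground, and one generator is used per warp step, mirroring the sequential training the abstract mentions. All layer sizes and the affine parameterization are illustrative assumptions, not the authors' exact architecture (their code is at the URL above).

        # Hedged PyTorch sketch of an ST-GAN-style generator: it outputs warp
        # parameters, never pixels, so realism is sought in geometry space.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class WarpGenerator(nn.Module):
            """Predicts a residual update to the current 2x3 affine warp."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(4 + 3, 32, 4, stride=2, padding=1), nn.ReLU(),  # fg RGBA + bg RGB
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(64, 6)
                nn.init.zeros_(self.head.weight)  # start each step at a zero
                nn.init.zeros_(self.head.bias)    # correction (identity update)

            def forward(self, fg_rgba, bg_rgb, theta):
                # Warp the RGBA foreground by the current parameters ...
                grid = F.affine_grid(theta, fg_rgba.shape, align_corners=False)
                warped = F.grid_sample(fg_rgba, grid, align_corners=False)
                # ... then predict a correction from the warped-fg/bg pair.
                x = torch.cat([warped, bg_rgb], dim=1)
                delta = self.head(self.features(x).flatten(1)).view(-1, 2, 3)
                return theta + delta, warped

        def composite(fg_rgba, bg_rgb):
            alpha = fg_rgba[:, 3:4]
            return alpha * fg_rgba[:, :3] + (1 - alpha) * bg_rgb

        fg = torch.rand(1, 4, 64, 64)             # foreground object with alpha
        bg = torch.rand(1, 3, 64, 64)             # background scene
        theta = torch.eye(2, 3).unsqueeze(0)      # identity affine warp
        generators = [WarpGenerator() for _ in range(4)]  # one per warp step
        for g in generators:                      # iterative warping scheme
            theta, warped = g(fg, bg, theta)
        print(composite(warped, bg).shape)        # torch.Size([1, 3, 64, 64])

    Because the output is just six warp numbers rather than an image, the same predicted warp can be re-applied to a full-resolution source, which is the high-resolution transferability the abstract refers to.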

    Neural Face Editing with Intrinsic Image Disentangling

    Traditional face editing methods often require a number of sophisticated and task-specific algorithms to be applied one after the other, a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e. normals), albedo, and lighting, together with an alpha matte. We show that this network can be trained on "in-the-wild" images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangled latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications. Comment: CVPR 2017 oral
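
    The "in-network physically-based image formation module" can be pictured as a differentiable renderer that reassembles the image from the disentangled factors. The sketch below assumes Lambertian shading under second-order spherical-harmonics lighting with grayscale coefficients, a common choice for such modules; it is a hedged illustration, not the paper's exact formulation.

        # Hedged sketch of a differentiable image formation layer:
        # image = matte * (albedo * shading(normals, light)) + (1 - matte) * bg
        import torch

        def sh_basis(normals):
            # Nine spherical-harmonics basis functions of the per-pixel normal
            # (B,3,H,W); the ordering is arbitrary since coefficients are learned.
            x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
            one = torch.ones_like(x)
            return torch.stack([
                one, x, y, z,
                x * y, x * z, y * z,
                x * x - y * y, 3 * z * z - 1,
            ], dim=1)  # (B,9,H,W)

        def render_face(albedo, normals, light, matte, background):
            """albedo: (B,3,H,W); normals: (B,3,H,W) unit vectors;
            light: (B,9) SH coefficients; matte: (B,1,H,W) in [0,1];
            background: (B,3,H,W)."""
            shading = (sh_basis(normals) * light[:, :, None, None]).sum(1, keepdim=True)
            face = albedo * shading                         # Lambertian reflectance
            return matte * face + (1 - matte) * background  # alpha compositing

    An edit then amounts to changing one factor, e.g. swapping the nine lighting coefficients for those of a new illumination, while albedo, normals, and the matte stay fixed.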

    Little Star's Journey: A Motion Graphics Work about the Value of Life

    Little Star's Journey is a short animated story expressing my personal interpretation of the value of human life. Life is a process of learning through a mix of happiness and suffering; its essential value is to help one become a better person who can inspire others with wisdom and compassion. Through Little Star's adventure from the sea to the sky, he meets many different kinds of things. Some look good, and some look bad. However, all of them are lessons from which he learns to become a real star, lighting up others in the sky. The project combines the practice of motion graphics design theories with experiments in current computer graphics integration technologies. The final presentation exhibits a 2.5-minute visual story set in an imagined world of my mind's eye.

    GazeDirector: Fully articulated eye gaze redirection in video

    We present GazeDirector, a new model-fitting approach to eye gaze redirection. Our method first tracks the eyes by fitting a multi-part eye region model to video frames using analysis-by-synthesis, thereby recovering eye region shape, texture, pose, and gaze simultaneously. It then redirects gaze by (1) warping the eyelids from the original image using a model-derived flow field, and (2) rendering and compositing synthesized 3D eyeballs onto the output image in a photorealistic manner. GazeDirector allows us to change where people are looking without person-specific training data and with full articulation, i.e. we can precisely specify new gaze directions in 3D. Quantitatively, we evaluate both model fitting and gaze synthesis, with experiments on gaze estimation and redirection on the Columbia gaze dataset. Qualitatively, we compare GazeDirector against recent work on gaze redirection, showing better results, especially for large redirection angles. Finally, we demonstrate gaze redirection on YouTube videos by introducing new 3D gaze targets and by manipulating visual behavior.
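
    In code, the two redirection steps reduce to a flow-field warp followed by alpha compositing. The sketch below takes the model-derived flow and the rendered eyeball layers as given inputs (in GazeDirector they come from the fitted multi-part eye region model); the function name and shapes are illustrative assumptions.

        # Hedged PyTorch sketch of the two-step gaze redirection.
        import torch
        import torch.nn.functional as F

        def redirect_gaze(frame, flow, eyeball_rgb, eyeball_alpha):
            """frame: (B,3,H,W); flow: (B,H,W,2) offsets in normalized [-1,1]
            coords; eyeball_rgb: (B,3,H,W); eyeball_alpha: (B,1,H,W)."""
            B, _, H, W = frame.shape
            # Identity sampling grid in normalized coordinates.
            ys = torch.linspace(-1, 1, H)
            xs = torch.linspace(-1, 1, W)
            gy, gx = torch.meshgrid(ys, xs, indexing="ij")
            grid = torch.stack([gx, gy], dim=-1).expand(B, H, W, 2)
            # Step 1: warp the eyelids by the model-derived flow field.
            warped = F.grid_sample(frame, grid + flow, align_corners=False)
            # Step 2: composite the synthesized 3D eyeball rendered for the
            # new gaze direction over the warped frame.
            return eyeball_alpha * eyeball_rgb + (1 - eyeball_alpha) * warped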

    Storm: An Exploration of Home Through Painterly Animation

    Storm is an animated short film that follows a family of Florida Sandhill Cranes. The family experiences a storm that destroys their nest and separates them from their smallest chick. The reunion of the cranes at the end of the film symbolizes that the sense of home comes from the members of the family, not the physical place. The film combines digital two-dimensional (2D) animation with traditional paint-on-glass animation to create visual contrast between the cranes and the storm. My stylistic choices include showcasing paint strokes, adopting a lineless and painterly animation style, using colour to highlight emotions, and giving the animal characters anthropomorphic expressions. The fluidity of the paint strokes reinforces the naturalistic story of the cranes. Allowing the medium to come through strongly in the final animation is a quality I strove for in my thesis short film: I want the paint strokes to be a focus, not hidden within the animation.

    Considerations for Creating a Believable Creature for the Short Film Li Fe

    This thesis illuminates the specific methods undertaken to achieve a realistic computer-animated creature that consumes light and dwells in a cave. The animated short Li Fe contains seven such creatures, whose anatomy required various production techniques to achieve a believable appearance that strongly appeals to viewers. In most animations, all areas of production need to be optimized for fast rendering as well as easy adaptability to change. The modeling, texturing, shading, and lighting methods for Li Fe underwent such optimization to achieve a truly believable creature. The look of the creatures relies on the viewer's real-world experience with both human and animal anatomy. The modeling of the creature resulted in a highly detailed model, which was then optimized with a ZBrush plug-in for efficiency in later phases of the production. The texturing and shading of the creatures used a multi-layered process to achieve maximum detail and customization where desired. Lighting the creatures employed Maya and Nuke to achieve the controlled look of a light source glowing within. The end result was a production that could easily adapt to change and a believable creature to which audiences can better relate.

    The Blanket: Play for social and creative development

    Seeing that many educational toys available today focus too much on the toy itself and not on the play possibilities it generates, this thesis attempts to put the emphasis back on the activity of play. It proposes to bring play back to basics by focusing mainly on face-to-face interactions. The thesis discusses toys and playthings that help children develop creative and social skills through parent-child oriented play. This is done by encouraging pretend play and storytelling as the basic activities.

    Multispace behavioral model for face-based affective social agents

    This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. Personality and mood draw on findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models of personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4 compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide flexible means of designing complicated personality types, facial expressions, and dynamic interactive scenarios.
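
    A toy sketch may clarify how the three spaces could layer. Below, a knowledge-level action selects a set of feature-level targets, and two-dimensional personality and mood coordinates modulate their intensity before they are emitted as normalized, FAP-style geometry parameters. The specific axes, actions, and blend weights are illustrative assumptions, not the paper's model.

        # Hedged sketch: knowledge decides *what* the agent does; personality
        # and mood modulate *how* it looks; geometry parameters come out.
        from dataclasses import dataclass

        @dataclass
        class Personality:
            extraversion: float   # 2D personality model, axis 1
            stability: float      # axis 2

        @dataclass
        class Mood:
            valence: float        # 2D emotion model, axis 1
            arousal: float        # axis 2

        def facial_parameters(action: str, p: Personality, m: Mood) -> dict:
            """Map a knowledge-level action plus behavioral state to
            feature-level geometry parameters (normalized, FAP-style)."""
            base = {"smile": {"lip_corner_raise": 0.6, "eye_open": 0.5},
                    "frown": {"brow_lower": 0.7, "lip_corner_raise": -0.3}}[action]
            # Personality and mood scale the expressiveness of the action.
            gain = 0.5 + 0.25 * p.extraversion + 0.25 * m.arousal
            shift = 0.2 * m.valence
            return {k: max(-1.0, min(1.0, gain * v + shift)) for k, v in base.items()}

        # In the paper's design, the action would be chosen by the XML-defined
        # knowledge space; here we simply call it directly.
        params = facial_parameters("smile", Personality(0.8, 0.3), Mood(0.5, 0.4))
        print(params)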