443 research outputs found

    Creating Procedural Animation for the Terrestrial Locomotion of Tentacled Digital Creatures

    This thesis presents a prototype system to develop procedural animation for the goal-directed terrestrial locomotion of tentacled digital creatures. Creating locomotion for characters with multiple highly deformable limbs is time- and labor-intensive. The prototype offers an interactive, real-time, physically based way to procedurally create tentacled creatures and simulate their goal-directed movement about an environment. Artistic control over both the motion path of the creature and the localized behavior of the tentacles is maintained. The system functions as a stand-alone simulation, and a tool has been created to integrate it into production software. Applications include visual effects and animation wherever generalized behavior of tentacled creatures is required.
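    The abstract gives no implementation details, but goal-directed motion for a deformable limb is commonly built on an inverse-kinematics chain solved every frame. The sketch below is a generic FABRIK solver pulling a single tentacle toward a goal point; it is an illustrative assumption, not the thesis's actual system, and all names and parameters are invented.

```python
# Hypothetical sketch: goal-directed motion for one tentacle modeled as an IK chain,
# solved with FABRIK (forward-and-backward reaching inverse kinematics).
# Illustrates the general idea only; not the thesis system.
import numpy as np

def fabrik(joints, goal, lengths, tol=1e-3, max_iter=20):
    """Pull a chain of joint positions toward `goal`, preserving segment lengths."""
    joints = [np.asarray(j, dtype=float) for j in joints]
    base = joints[0].copy()
    for _ in range(max_iter):
        # Backward pass: place the tip at the goal and work toward the base.
        joints[-1] = np.asarray(goal, dtype=float)
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # Forward pass: re-anchor the base and work back toward the tip.
        joints[0] = base
        for i in range(1, len(joints)):
            d = joints[i] - joints[i - 1]
            joints[i] = joints[i - 1] + d / np.linalg.norm(d) * lengths[i - 1]
        if np.linalg.norm(joints[-1] - goal) < tol:
            break
    return joints

# Example: a five-segment tentacle anchored at the origin reaching for a point.
rest = [np.array([0.0, 0.0, float(i)]) for i in range(6)]
pose = fabrik(rest, goal=np.array([2.0, 1.0, 3.0]), lengths=[1.0] * 5)
```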

    Drawing from motion capture: developing visual languages of animation

    The work presented in this thesis aims to explore novel approaches to combining motion capture with drawing and 3D animation. As the art form of animation matures, hybrid techniques become more feasible, and crosses between traditional and digital media provide new opportunities for artistic expression. 3D computer animation is valued for its keyframing and rendering advancements, which result in complex pipelines where different technical and artistic specialists contribute to the end result. Motion capture is mostly used for realistic animation, more often than not in live-action filmmaking as a visual effect. Realistic animated films depend on retargeting techniques designed to preserve actors' performances with a high degree of accuracy. In this thesis, we investigate alternative production methods that do not depend on retargeting and give animators greater scope for experimentation and expressivity. Because motion capture data is a rich source of naturalistic movement, we aim to combine it with interactive methods such as digital sculpting and 3D drawing. Although drawing is predominantly used in preproduction, in both realistic animation and visual effects, we embed it instead in alternative production methods, where artists can benefit from improvisation and expression while immersed in a three-dimensional environment. Additionally, we apply these alternative methods to the visual development of animation, where they become relevant to the creation of specific visual languages that can articulate concrete ideas for storytelling in animation.

    Digitizing the Corporeal: The Affect of Mediatized Elements in Theatrical Performance

    This paper explores the affect of digital media in live performance. The research is generated from work with integrated digital media in Carol Ann Duffy's 2015 adaptation of Everyman as well as Samuel Beckett's radio play Cascando. Through experimentation with motion-tracked digital elements and actor-manipulated sonic and visual media, achieved with MIDI and OSC mapping, I explore the embodiment of performances by actors when their characters are represented on the physical and the digital stage simultaneously. The research interrogates digital media in storytelling when actors can manipulate imagery and language in real time on stage, and what that means for character and plot in live theatre, for the embodiment of physical and virtual representations of character, and for evolving storytelling through digital means.
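    As a concrete illustration of the kind of actor-driven mapping described above, the sketch below sends a tracked stage position to a media server as an OSC control value using the python-osc library. The address, port, parameter name, and scaling are invented for the example and are not taken from the productions discussed in the abstract.

```python
# Hypothetical actor-to-media mapping: a tracked stage position is scaled to a
# normalized control value and sent over OSC to a media server.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed media-server address and port

def on_actor_position(x_metres, stage_width_metres=10.0):
    """Map an actor's x position on stage to a 0..1 projection-opacity control."""
    value = max(0.0, min(1.0, x_metres / stage_width_metres))
    client.send_message("/projection/opacity", value)  # invented OSC address

on_actor_position(4.2)
```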

    Generative Disco: Text-to-Video Generation for Music Visualization

    Visuals are a core part of our experience of music, owing to the way they can amplify the emotions and messages conveyed through the music. However, creating music visualization is a complex, time-consuming, and resource-intensive process. We introduce Generative Disco, a generative AI system that helps generate music visualizations with large language models and text-to-image models. Users select intervals of music to visualize and then parameterize that visualization by defining start and end prompts. The system warps between these prompts and generates frames according to the beat of the music to produce audioreactive video. We introduce design patterns for improving generated videos: "transitions", which express shifts in color, time, subject, or style, and "holds", which encourage visual emphasis and consistency. A study with professionals showed that the system was enjoyable, easy to explore, and highly expressive. We conclude with use cases of Generative Disco for professionals and a discussion of how AI-generated content is changing the landscape of creative work.
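    The "warp between prompts" idea can be pictured as a per-frame interpolation weight that moves from the start prompt to the end prompt and reacts to the beat. The sketch below is a minimal, hypothetical version of that weighting; the easing curve, beat times, and frame rate are assumptions, not details from the Generative Disco paper.

```python
# Hypothetical beat-aligned interpolation weights for one music interval.
# Each weight could later blend two prompt embeddings in a text-to-image model.
import numpy as np

def beat_aligned_weights(t_start, t_end, beat_times, fps=24):
    frame_times = np.arange(t_start, t_end, 1.0 / fps)
    # Linear progress from the start prompt (0.0) to the end prompt (1.0) ...
    w = (frame_times - t_start) / (t_end - t_start)
    # ... nudged near each beat so the transition feels audioreactive.
    for t_beat in beat_times:
        w += 0.05 * np.exp(-((frame_times - t_beat) ** 2) / 0.01)
    return frame_times, np.clip(w, 0.0, 1.0)

times, weights = beat_aligned_weights(0.0, 4.0, beat_times=[0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
# e.g. frame embedding = (1 - w) * start_prompt_embedding + w * end_prompt_embedding
```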

    Seventh Biennial Report: June 2003 - March 2005


    Functional Animation: Interactive Animation in Digital Artifacts


    Interactive simulation and rendering of fluids on graphics hardware

    Computational fluid dynamics can be used to reproduce the complex motion of fluids for computer graphics, but the simulation and rendering are both highly computationally intensive. In the past, performing these tasks on the CPU could take many minutes per frame, especially for large-scale scenes at high levels of detail, which limited their use to offline applications such as film and media. Using the massive parallelism of GPUs, however, it is now possible to produce fluid visual effects in real time for interactive applications such as games. We present such an interactive simulation using the CUDA GPU computing environment and the OpenGL graphics API. Smoothed Particle Hydrodynamics (SPH) is a popular particle-based fluid simulation technique that has been shown to be well suited to acceleration on the GPU. Our work extends an existing GPU-based SPH implementation by incorporating rigid-body interaction and rendering. Solid objects are represented with particles that accumulate hydrodynamic forces from the surrounding fluid, while motion and collision handling are handled by the Bullet Physics library on the CPU. Our system demonstrates two-way coupling, with multiple objects floating, displacing fluid, and colliding with each other. For rendering we compare the performance and memory consumption of two approaches, splatting and raycasting, and describe the visual characteristics of each. In our evaluation we consider a target of between 24 and 30 fps to be sufficient for smooth interaction and aim to determine the performance impact of our new features. We begin by establishing a performance baseline and find that the original system runs smoothly with up to 216,000 fluid particles, but after introducing rendering this drops to 27,000 particles, with rendering taking up the majority of the frame time for both techniques. We find that the most significant limiting factor for splatting performance is the on-screen area occupied by fluid, while raycasting performance is primarily determined by the resolution of the 3D texture used for sampling. Finally, we find that performing solid interaction on the CPU is a viable approach that does not introduce significant overhead unless solid particles vastly outnumber fluid ones.
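    For readers unfamiliar with SPH, the sketch below shows the textbook per-particle density and pressure computation (poly6 kernel, simple equation of state) on the CPU with a brute-force neighbour search. It is a minimal illustration of the technique the thesis accelerates on the GPU, not the thesis code, and the constants are generic defaults.

```python
# Minimal SPH density/pressure sketch (CPU, O(n^2) neighbour search).
# Kernel choice and constants are textbook defaults, not the thesis's parameters.
import numpy as np

H = 0.1                      # smoothing radius
MASS = 0.02                  # particle mass (kg)
REST_DENSITY = 1000.0        # water (kg/m^3)
STIFFNESS = 3.0              # equation-of-state gas constant
POLY6 = 315.0 / (64.0 * np.pi * H ** 9)

def density_pressure(positions):
    """Return per-particle density and pressure for an (n, 3) array of positions."""
    n = len(positions)
    density = np.zeros(n)
    for i in range(n):
        r2 = np.sum((positions - positions[i]) ** 2, axis=1)
        inside = r2 < H * H
        density[i] = MASS * np.sum(POLY6 * (H * H - r2[inside]) ** 3)
    pressure = STIFFNESS * (density - REST_DENSITY)
    return density, pressure

pts = np.random.rand(500, 3) * 0.5
rho, p = density_pressure(pts)
```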

    Towards an interactive environment for the performance of Dubstep music

    This Masters by Research project explores the integration of different concepts relating to the presence of the human body in Dubstep music performance. Three intended performance systems propose that the body is the logical site for the interactive control of live Dubstep music. The physicality and gestures of instrumentalists, choreographed dancers, and audience members will be examined in order to develop new and exciting ways to perform this genre in a live setting. The systems take a three-tiered hierarchical approach on two levels: the extraction of gestural information from human body movements, and the importance (and length) of the musical phenomena and parameters under control. The characteristics of Dubstep music are defined and maintained within each interactive music system. A model for each proposed system will be examined, including discussion of the technology and methodology employed to apply the two hierarchies and create the interactive environment.
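    As a loose illustration of the tiered gesture-to-parameter mapping the abstract outlines, the sketch below routes a normalized movement-energy value to musical parameters of increasing structural weight. The thresholds and parameter names are invented for the example and do not come from the thesis.

```python
# Hypothetical tiered gesture mapping: small, medium, and large body movements are
# routed to musical parameters of correspondingly short, medium, and long duration.
def route_gesture(movement_energy):
    """Map a normalized movement-energy value (0..1) to a control tier."""
    if movement_energy < 0.3:
        return ("wobble_lfo_rate", movement_energy / 0.3)        # note-level detail
    if movement_energy < 0.7:
        return ("filter_cutoff", (movement_energy - 0.3) / 0.4)  # phrase-level shaping
    return ("section_switch", 1.0)                               # structural change

print(route_gesture(0.15))   # low-energy gesture -> note-level parameter
print(route_gesture(0.85))   # high-energy gesture -> structural change
```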

    Record: from signal to atmosphere, and the spaces between silence and noise

    Record is both a noun and a verb. Its meaning shifts through pronunciation, beginning with our cognitive interpretation and then emerging as a translation that we project from our mouths. This thesis book is an artifact, or record, of the past two years of artistic inquiry. My work, however, lives through movement in time: it records my impulses. This thesis is merely an open archive. A range of stimuli competes for our conscious attention. The signals we choose to notice emerge from the periphery of our atmosphere, which includes everything from noise to silence. I use design to amplify and alter signals and background noise to create unexpected and mysterious new experiences from the everyday. Leveraging the visual, aural, and kinetic, my work celebrates the variation of individual perception while bringing awareness to the exchanges we have with other people and species in the shared environments we inhabit. The works in this thesis employ transposition between digital and analog spaces, cinematic storytelling through installation and documentation, and narrative shifts while moving in and out of different mediums. I use film and audio to capture these experiences. I grasp at the intangible, invisible forces of daily life: sound, light, and the fleeting essence of movement and conversation.