2,075 research outputs found

    Pose-Timeline for Propagating Motion Edits

    Get PDF
    Motion editing often requires repetitive operations to modify similar action units so that they give a similar effect or impression. This paper proposes a system for efficiently and flexibly editing sequences of iterative actions with a few intuitive operations. Our system visualizes a motion sequence on a summary timeline with editable pose-icons, and drag-and-drop operations on the timeline enable intuitive control of the temporal properties of the motion, such as timing, duration, and coordination. This graphical interface is also suited to transferring kinematic and temporal features between two motions through simple interactions with a quick preview of the resulting poses. Our method also integrates the concept of edit propagation, by which the manual modification of one action unit is automatically transferred to the other units, which are robustly detected by a similarity-search technique. We demonstrate the efficiency of our pose-timeline interface with a propagation mechanism for the timing adjustment of mutual actions and for motion synchronization with a music sequence.
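    The edit-propagation idea in this abstract can be sketched as a two-step procedure: locate action units similar to the edited one by a similarity search over pose windows, then apply the same edit to every match. The distance measure and threshold below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def find_similar_units(motion, template, threshold=0.5):
    """Slide the edited action unit over the motion and return the
    start frames whose pose windows lie within `threshold` of it
    (assumed measure: mean per-frame Euclidean distance)."""
    n, w = len(motion), len(template)
    matches = []
    for s in range(n - w + 1):
        window = motion[s:s + w]
        dist = np.mean(np.linalg.norm(window - template, axis=1))
        if dist < threshold:
            matches.append(s)
    return matches

def propagate_edit(motion, matches, edit, w):
    """Apply the same edit function to every matched action unit."""
    out = motion.copy()
    for s in matches:
        out[s:s + w] = edit(out[s:s + w])
    return out
```

    With real motion-capture data the similarity search would operate on joint-angle or velocity features rather than raw positions, but the propagation structure is the same.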

    Using music and motion analysis to construct 3D animations and visualisations

    Get PDF
    This paper presents a study into music analysis, motion analysis, and the integration of music and motion to form natural, creative human motion in a virtual environment. Motion-capture data are extracted to build a motion library, which places the digital motion model at a fixed posture. The first step in this process is to configure the motion path curve for the database and to compute, with an algorithm, the probability that two motions are sequential. Every motion is then analysed for the next possible smooth movement to connect to, and at the same time an interpolation method is used to create the transitions between motions so that the digital motion models move fluently. Lastly, a searching algorithm sifts for possible successive motions from the motion path curve according to the music tempo. It was concluded that the higher the rescaling ratio of a transition, the lower the degree of natural motion.
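    The two core operations described here, scoring how smoothly one motion can follow another and interpolating the transition between them, can be sketched as follows. The pose-distance cost and the linear cross-fade are simplifying assumptions standing in for whatever measure the study actually used.

```python
import numpy as np

def transition_cost(clip_a, clip_b, window=4):
    """Assumed smoothness score for playing clip_b after clip_a:
    mean pose distance between the tail of A and the head of B
    (lower means a more plausible transition)."""
    tail, head = clip_a[-window:], clip_b[:window]
    return float(np.mean(np.linalg.norm(tail - head, axis=1)))

def blend_transition(clip_a, clip_b, window=4):
    """Linearly interpolate the overlapping window so the model
    moves fluently from A into B."""
    tail, head = clip_a[-window:], clip_b[:window]
    t = np.linspace(0.0, 1.0, window)[:, None]
    blended = (1 - t) * tail + t * head
    return np.concatenate([clip_a[:-window], blended, clip_b[window:]])
```

    In a full system the cost would feed the search over candidate successors (filtered by music tempo), and the blend window length is exactly the "rescaling ratio" whose growth the authors found degrades naturalness.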

    A Study on Methods for Improving the Expression of 3DCG Characters and on Real-Time Manipulation

    Get PDF
    Waseda University degree number: Shin 8176. Waseda University.

    Generating audio-responsive video images in real-time for a live symphony performance

    Get PDF
    Multimedia performances, uniting music and interactive images, are a unique form of entertainment that artists have explored for centuries. This audio-visual combination has evolved from rudimentary devices generating visuals for single instruments to cutting-edge video image productions for musical groups of all sizes. Throughout this evolution, a common goal has been to create real-time, audio-responsive visuals that accentuate the sound and enhance the performance. This paper explains the creation of a project that produces real-time, audio-responsive, artist-interactive visuals to accompany a live musical performance by a symphony orchestra. On April 23, 2006, this project was performed live with the Brazos Valley Symphony Orchestra. The artist, onstage during the performance, controlled the visual presentation through a custom, user-interactive computer program. Using the power of current visualization technology, this digital program was written to manipulate and synchronize images to a musical work. The program uses pre-processed video footage chosen to reflect the energy of the music. The integration of the video imagery into the program became an iterative testing process that allowed for important adjustments throughout the visual creation process. Other artists are encouraged to use this as a guideline for creating their own audio-visual projects exploring the union of visuals and music.

    Automated Analysis of Synchronization in Human Full-body Expressive Movement

    Get PDF
    The research presented in this thesis is focused on the creation of computational models for the study of human full-body movement in order to investigate human behavior and non-verbal communication. In particular, the research concerns the analysis of synchronization of expressive movements and gestures. Synchronization can be computed both on a single user (intra-personal), e.g., to measure the degree of coordination between the joints' velocities of a dancer, and on multiple users (inter-personal), e.g., to detect the level of coordination between multiple users in a group. Through a set of experiments and results, the thesis contributes to the investigation of both intra-personal and inter-personal synchronization in support of the study of movement expressivity, and improves on the state of the art of the available methods by presenting a new algorithm for the analysis of synchronization.
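    A minimal proxy for the kind of synchronization measure described above is the correlation between two movement-energy signals derived from joint trajectories. This sketch uses finite-difference velocity magnitude and Pearson correlation as illustrative assumptions; the thesis's actual algorithm is more sophisticated.

```python
import numpy as np

def velocity(positions, dt=1.0 / 30):
    """Finite-difference velocity magnitude of a joint trajectory
    given as an (n_frames, 3) array of positions."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

def synchronization(sig_a, sig_b):
    """Pearson correlation between two movement-energy signals:
    1.0 means perfectly coordinated movement, 0 uncorrelated,
    -1.0 perfectly anti-phase (a simplified stand-in for the
    measures analysed in the thesis)."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    return float(np.mean(a * b))
```

    Intra-personal synchronization would compare velocity signals of two joints of one performer; inter-personal synchronization would compare signals from two performers, optionally with a time lag.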

    Innovative Digital Storytelling with AIGC: Exploration and Discussion of Recent Advances

    Full text link
    Digital storytelling, as an art form, has struggled with the balance between cost and quality. The emergence of AI-generated content (AIGC) is considered a potential solution for efficient digital-storytelling production. However, the specific form, effects, and impacts of this fusion remain unclear, leaving the boundaries of AIGC combined with storytelling undefined. This work explores the current state of integration of AIGC and digital storytelling, investigates the artistic value of their fusion in a sample project, and addresses common issues through interviews. Through our study, we conclude that AIGC, while proficient in image creation, voiceover production, and music composition, at present falls short of replacing humans because of the irreplaceable elements of human creativity and aesthetic sensibility, especially in complex character animations, facial expressions, and sound effects. The research objective is to increase public awareness of the current state, limitations, and challenges arising from combining AIGC and digital storytelling.
    Comment: Project page: https://lsgm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling

    MediaSync: Handbook on Multimedia Synchronization

    Get PDF
    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and the modeling of users' perception (i.e., Quality of Experience, or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention in order to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and who want to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.