Synthesis of variable dancing styles based on a compact spatiotemporal representation of dance
Dance, as a complex expressive form of motion, conveys emotion, meaning, and social idiosyncrasies, opening channels for non-verbal communication and promoting rich cross-modal interactions with music and the environment. As such, realistic dancing characters may incorporate cross-modal information and the variability of dance forms through compact representations that describe the movement structure in terms of its spatial and temporal organization. In this paper, we propose a novel method for synthesizing beat-synchronous dancing motions based on a compact topological model of dance styles, previously captured with a motion capture system. The model is based on Topological Gesture Analysis (TGA), which conveys a discrete three-dimensional point-cloud representation of the dance by describing the spatiotemporal variability of its gestural trajectories as uniform spherical distributions, organized according to classes of the musical meter. The synthesis methodology traces the topological representations, constrained by definable metrical and spatial parameters, back into complete dance instances whose variability is controlled by stochastic processes that consider both the TGA distributions and the kinematic constraints of the body morphology. To assess the relevance and flexibility of each parameter in reproducing the style of the captured dance, we correlated captured and synthesized trajectories of samba dancing sequences in relation to the level of compression of the model, and report on a subjective evaluation over a set of six tests. The results validate our approach, suggesting that a periodic dancing style, and its musical synchrony, can be feasibly reproduced from a suitably parametrized discrete spatiotemporal representation of the gestural motion trajectories, with a notable degree of compression.
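The abstract describes synthesis as stochastic sampling from per-metrical-class spherical distributions under kinematic constraints. The following minimal Python sketch illustrates that idea only in spirit: the model contents, the rejection-sampling scheme, and the single reach constraint are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical TGA-style model: for each metrical class (beat position),
# a list of spherical distributions (center, radius) describing where a
# joint tends to be at that point of the musical meter.
tga_model = {
    0: [(np.array([0.0, 1.0, 0.2]), 0.05)],  # beat class 0
    1: [(np.array([0.1, 1.1, 0.0]), 0.08)],  # beat class 1
}

def sample_pose(beat_class, prev_pos, max_step=0.3):
    """Sample a joint target uniformly from one of this class's spheres,
    rejecting candidates that violate a simple kinematic reach limit."""
    spheres = tga_model[beat_class]
    center, radius = spheres[rng.integers(len(spheres))]
    for _ in range(100):
        v = rng.normal(size=3)
        v *= radius * rng.random() ** (1 / 3) / np.linalg.norm(v)
        candidate = center + v  # uniform sample inside the sphere
        if np.linalg.norm(candidate - prev_pos) <= max_step:
            return candidate
    return center  # fall back to the sphere center

pos = np.array([0.0, 1.0, 0.0])
traj = []
for beat in [0, 1, 0, 1]:  # a short metrical sequence
    pos = sample_pose(beat, pos)
    traj.append(pos)
```

Re-running with a different seed yields a different but stylistically consistent trajectory, which is the sense in which such a discrete representation controls variability.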
SLoMo: A General System for Legged Robot Motion Imitation from Casual Videos
We present SLoMo: a first-of-its-kind framework for transferring skilled motions from casually captured "in the wild" video footage of humans and animals to legged robots. SLoMo works in three stages: 1) synthesize a physically plausible reconstructed key-point trajectory from monocular videos; 2) offline, optimize a dynamically feasible reference trajectory for the robot that includes body and foot motion, as well as contact sequences, and that closely tracks the key points; 3) track the reference trajectory online using a general-purpose model-predictive controller on robot hardware. Traditional motion imitation for legged motor skills often requires expert animators, collaborative demonstrations, and/or expensive motion capture equipment, all of which limit scalability. Instead, SLoMo relies only on easy-to-obtain monocular video footage, readily available in online repositories such as YouTube. It converts videos into motion primitives that can be executed reliably by real-world robots. We demonstrate our approach by transferring the motions of cats, dogs, and humans to example robots including a quadruped (on hardware) and a humanoid (in simulation). To the best knowledge of the authors, this is the first attempt at a general-purpose motion transfer framework that imitates animal and human motions on legged robots directly from casual videos without artificial markers or labels.
Comment: accepted at RA-L 2023, with ICRA 2024 option
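Stage 3 of the pipeline above tracks a reference trajectory with a model-predictive controller. As a hedged toy illustration (not SLoMo's controller: the dynamics, the random-shooting optimizer, and all weights are simplified assumptions), here is a receding-horizon tracker on a 1-D double-integrator "robot":

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])  # state: [position, velocity]
B = np.array([[0.0], [dt]])

def mpc_step(x, ref, horizon=10, n_samples=200, rng=np.random.default_rng(1)):
    """Choose the best constant control over the horizon by random shooting,
    penalizing tracking error plus a small control cost."""
    best_u, best_cost = 0.0, np.inf
    for u in rng.uniform(-5.0, 5.0, n_samples):
        xi, cost = x.copy(), 0.0
        for k in range(min(horizon, len(ref))):
            xi = A @ xi + B.flatten() * u
            cost += (xi[0] - ref[k]) ** 2 + 1e-3 * u**2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# A stand-in for the stage-2 reference trajectory.
ref = np.sin(np.linspace(0, 2 * np.pi, 60))
x = np.array([0.0, 0.0])
track = []
for t in range(50):
    u = mpc_step(x, ref[t:])       # replan at every step (receding horizon)
    x = A @ x + B.flatten() * u    # apply only the first control
    track.append(x[0])
```

The receding-horizon structure, replanning at every step but executing only the first control, is what lets an MPC tracker absorb disturbances that an open-loop replay of the reference could not.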
Reciprocal Waves: Embodied Intersubjective Communication in Dance/Movement Therapy Practice
In this thesis project, the author proposes a framework of empathic communication in Dance/Movement Therapy (DMT) practice. Based on Frans de Waal's Russian doll model of empathy, the author explores three traditional phenomena in DMT practice that cultivate empathy and intersubjectivity: Primitive Mirroring, Shared Intention, and Movement Understanding. For each topic, the author extends the investigation into different areas of study in order to illuminate the profound connectedness of human empathic communication. The term Reciprocal Waves highlights the back-and-forth relationship-building process that occurs daily in dance/movement therapy practice. It is a framework derived from DMT practice that can be applied to any field that would benefit from promoting empathic human relationships.
Towards a Smart Drone Cinematographer for Filming Human Motion
Affordable consumer drones have made capturing aerial footage more convenient and accessible. However, shooting cinematic motion videos with a drone is challenging because it requires users to analyze dynamic scenarios while operating the controller. In this thesis, our task is to develop an autonomous drone cinematography system that captures cinematic videos of human motion. We understand the system's filming performance to be influenced by three key components: 1) the video quality metric, which measures the aesthetic quality -- the angle, the distance, the image composition -- of the captured video; 2) the visual feature, which encapsulates the visual elements that influence the filming style; and 3) camera planning, a decision-making model that predicts the next best movement. By analyzing these three components, we designed two autonomous drone cinematography systems using both heuristic-based and learning-based methods. For the first system, we designed an Autonomous CinemaTography system, "ACT", by proposing a viewpoint quality metric focused on the visibility of the subject's 3D human skeleton. We expanded the application of human motion analysis and simplified manual control by assisting viewpoint selection with a through-the-lens method. For the second system, we designed an imitation-based system that learns the artistic intentions of camera operators by watching professional aerial videos. We designed a camera planner that analyzes the video content and previous camera motion to predict future camera motion. Furthermore, we propose a planning framework that can imitate a filming style by "seeing" only a single demonstration video of that style. We named it "one-shot imitation filming." To the best of our knowledge, this is the first work that extends imitation learning to autonomous filming.
Experimental results in both simulation and field tests exhibit significant improvements over existing techniques, and our approach helped inexperienced pilots capture cinematic videos.
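A viewpoint quality metric based on skeleton visibility can be sketched very simply: project the subject's 3-D joints through a pinhole camera and score the fraction that land inside the image. This is a hedged illustration only; the function, the axis-aligned camera, and the toy skeleton are assumptions, not the thesis's actual metric.

```python
import numpy as np

def viewpoint_score(joints, cam_pos, f=1.0, half_width=1.0):
    """Fraction of skeleton joints visible from cam_pos, assuming the
    camera looks along +z with focal length f and a square image whose
    normalized half-extent is half_width."""
    rel = joints - cam_pos
    visible = 0
    for x, y, z in rel:
        # A joint is visible if it is in front of the camera and its
        # projected image coordinates fall inside the frame.
        if z > 1e-6 and abs(f * x / z) <= half_width and abs(f * y / z) <= half_width:
            visible += 1
    return visible / len(joints)

# Toy skeleton: five joints of a standing figure at z = 3 m.
skeleton = np.array([
    [0.0, 1.7, 3.0],  # head
    [0.0, 1.4, 3.0],  # chest
    [0.0, 0.9, 3.0],  # hips
    [0.0, 0.4, 3.0],  # knee
    [0.0, 0.0, 3.0],  # foot
])
good = viewpoint_score(skeleton, cam_pos=np.array([0.0, 0.9, 0.0]))  # facing the subject
bad = viewpoint_score(skeleton, cam_pos=np.array([0.0, 0.9, 5.0]))   # behind the subject
```

A planner can then evaluate such a score over candidate camera poses and move toward the highest-scoring one, which is the heuristic flavor of viewpoint selection described for the first system.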