13 research outputs found

    Learning Finite State Machine Controllers from Motion Capture Data

    With characters in computer games and interactive media increasingly based on real actors, the individuality of an actor's performance should be reflected not only in the appearance and animation of the character but also in the artificial intelligence that governs the character's behavior and interactions with the environment. Machine learning methods applied to motion capture data provide a way of doing this. This paper presents a method for learning the parameters of a Finite State Machine controller: it learns both the transition probabilities of the Finite State Machine and how to select animations based on the current state.
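The learning step described in the abstract can be sketched as a maximum-likelihood estimate of the transition probabilities: count the state-to-state transitions observed in labeled motion capture sequences and normalize each row. The state names, toy sequences, and function names below are illustrative assumptions, not the paper's actual data or interface.

```python
import random
from collections import defaultdict

def learn_transition_probs(state_sequences):
    """Estimate FSM transition probabilities by counting observed
    state-to-state transitions and normalizing each row."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in state_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

def next_state(probs, state, rng=random):
    """Sample the next FSM state from the learned distribution."""
    targets, weights = zip(*probs[state].items())
    return rng.choices(targets, weights=weights)[0]

# Toy state sequences labeled from motion capture, e.g. idle/walk/run
seqs = [["idle", "walk", "run", "walk", "idle"],
        ["idle", "walk", "walk", "run"]]
P = learn_transition_probs(seqs)
```

A controller built this way would call `next_state` each time an animation clip finishes, with a second learned mapping choosing which clip to play for the sampled state.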

    Expressivity in Natural and Artificial Systems

    Roboticists are trying to replicate animal behavior in artificial systems, yet quantitative bounds on the capacity of a moving platform (natural or artificial) to express information in the environment are not known. This paper presents a measure for the capacity of motion complexity -- the expressivity -- of articulated platforms (both natural and artificial) and shows that this measure is stagnant and unexpectedly limited in extant robotic systems. The analysis indicates trends of increasing capacity in both internal and external complexity for natural systems, while artificial, robotic systems have increased significantly in the capacity of computational (internal) states but remained more or less constant in mechanical (external) state capacity. This work presents a way to analyze trends in animal behavior and shows that robots are not capable of the same multi-faceted behavior in rich, dynamic environments as natural systems. Comment: Rejected from Nature, after review and appeal, July 4, 2018 (submitted May 11, 2018).

    Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems

    As robotic systems are moved out of factory work cells into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system will automatically be interpreted by human beings, through its context, style of movement, and form factor, as an animate element of their environment. The interpretation by this human counterpart is critical to the success of the system's integration: knobs on the system need to make sense to a human counterpart; an artificial agent should have a way of notifying a human counterpart of a change in system state, possibly through motion profiles; and the motion of a human counterpart may carry important contextual clues for task completion. Thus, professional choreographers, dance practitioners, and movement analysts are critical to research in robotics. They have design methods for movement that align with human audience perception, can identify simplified features of movement for human-robot interaction goals, and have detailed knowledge of the capacity of human movement. This article provides approaches employed by one research lab, specific impacts on technical and artistic projects within it, and principles that may guide future such work. The background section reports on choreography, somatic perspectives, improvisation, the Laban/Bartenieff Movement System, and robotics. From this context, methods including embodied exercises, writing prompts, and community-building activities have been developed to facilitate interdisciplinary research. The results of this work are presented as an overview of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help understand how people interpret movement. Finally, guiding principles for other groups to adopt are posited. Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)" http://www.mdpi.com/journal/arts/special_issues/Machine_Artis

    Customizing by Doing for Responsive Video Game Characters

    This paper presents a game in which players can customize the behavior of their characters using their own movements while playing the game. Players’ movements are recorded with a motion capture system. The player then labels the movements and uses them as input to a machine learning algorithm that generates a responsive behavior model. This interface supports a more embodied approach to character design that we call “Customizing by Doing”. We present a user study showing that using their own movements made the users feel more engaged with the game and the design process, due in large part to a feeling of personal ownership of the movement.
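As a minimal sketch of the "Customizing by Doing" loop, a learner could map newly captured movements to the player's labels at play time. A 1-nearest-neighbour classifier over pose feature vectors is an assumption here for illustration, not necessarily the paper's actual behavior model.

```python
import math

def label_movement(query, examples):
    """Return the label of the closest recorded example.
    `examples` is a list of (feature_vector, label) pairs,
    e.g. player-labeled motion capture clips."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Hypothetical 2-D pose features with player-chosen labels
recorded = [([0.0, 1.0], "dodge"), ([1.0, 0.0], "attack")]
```

At runtime the game would extract the same features from live mocap input and call `label_movement` to trigger the matching character behavior.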

    An embodied, platform-invariant architecture for robotic spatial commands

    In contexts such as teleoperation, robot reprogramming, human-robot interaction, and neural prosthetics, conveying spatial commands to a robotic platform is often a limiting factor. Currently, many applications rely on joint-angle-by-joint-angle prescriptions. This inherently requires the user to specify a large number of parameters that scales with the number of degrees of freedom of the platform, creating high bandwidth requirements for interfaces. This thesis presents an efficient representation of high-level spatial commands that specifies many joint angles with relatively few parameters, based on a spatial architecture. To this end, an expressive command architecture is proposed that allows pose generation for simple motion primitives. In particular, a general method for labeling connected platform linkages, generating a databank of user-specified poses, and mapping between high-level spatial commands and specific platform static configurations is presented. Further, this architecture is platform-invariant: the same high-level spatial command can have meaning on any platform. This has the particular advantage that the commands have meaning for human movers as well. In order to achieve this, we draw inspiration from Laban/Bartenieff Movement Studies, an embodied taxonomy for movement description. The final architecture is implemented for twenty-six spatial directions on a Rethink Robotics Baxter and an Aldebaran NAO. Two user studies were conducted to validate the effectiveness of the proposed framework. Lastly, a workload metric is proposed to quantitatively assess the usability of a machine interface.
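The pose-databank idea could be sketched as a lookup from platform-invariant spatial labels to platform-specific joint configurations, so one short command replaces a full joint-angle prescription. The platform names match those in the abstract, but the direction labels and joint values below are illustrative placeholders, not the thesis's actual databank.

```python
# Hypothetical databank: each platform maps the same spatial labels
# (in the spirit of the thesis's twenty-six directions) to its own
# joint-angle vector. Values are made-up placeholders.
POSE_BANK = {
    "baxter": {"forward-high": [0.0, -0.5, 0.3, 1.0, 0.0, 0.2, 0.0],
               "side-middle":  [1.2,  0.0, 0.0, 0.5, 0.0, 0.0, 0.0]},
    "nao":    {"forward-high": [0.1, -0.3, 0.8, 0.4],
               "side-middle":  [1.0,  0.2, 0.0, 0.1]},
}

def spatial_command(platform, label):
    """Resolve one high-level spatial command to a static joint
    configuration; the same label has meaning on every platform."""
    try:
        return POSE_BANK[platform][label]
    except KeyError as exc:
        raise ValueError(f"no pose for {label!r} on {platform!r}") from exc
```

Note how the interface bandwidth stays constant: the user sends one label regardless of whether the target has four or seven degrees of freedom.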

    Augmenting the Creation of 3D Character Motion By Learning from Video Data

    When it comes to character motion, especially articulated character animation, the majority of effort is spent on accurately capturing low-level and high-level action styles. Among the many techniques that have evolved over the years, motion capture (mocap) and keyframe animation are the two popular choices. Both are capable of capturing the low-level and high-level action styles of a particular individual, but at great expense in terms of the human effort involved. In this thesis, we make use of performance data in video format to augment the process of character animation, considerably decreasing the human effort required for both style preservation and motion regeneration. Two new methods, one for high-level and another for low-level character animation, both based on learning from video data to augment the motion creation process, constitute the major contribution of this research. In the first, we take advantage of recent advancements in the field of action recognition to automatically recognize human actions from video data. High-level action patterns are learned and captured using Hidden Markov Models (HMMs) to generate action sequences with the same pattern. For the low-level action style, we present a completely different approach that utilizes user-identified transition frames in a video to enhance transition construction in the standard motion graph technique for creating smooth action sequences. Both methods have been implemented, and a number of results illustrating the concept and applicability of the proposed approach are presented.
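The high-level method's generation step can be sketched by sampling from the hidden-state chain of a learned HMM, so generated sequences follow the same action pattern as the source video. The action names and probabilities below are illustrative placeholders under that assumption, not values learned in the thesis.

```python
import random

# Placeholder hidden states (high-level actions) and transition
# probabilities, standing in for parameters learned from video.
STATES = ["walk", "wave", "sit"]
TRANS = {"walk": {"walk": 0.6, "wave": 0.3, "sit": 0.1},
         "wave": {"walk": 0.7, "wave": 0.2, "sit": 0.1},
         "sit":  {"walk": 0.5, "wave": 0.1, "sit": 0.4}}
START = {"walk": 0.8, "wave": 0.1, "sit": 0.1}

def _sample(dist, rng):
    labels, weights = zip(*dist.items())
    return rng.choices(labels, weights=weights)[0]

def generate_sequence(n, seed=0):
    """Sample an n-step action sequence from the HMM's state chain."""
    rng = random.Random(seed)
    seq = [_sample(START, rng)]
    while len(seq) < n:
        seq.append(_sample(TRANS[seq[-1]], rng))
    return seq
```

Each sampled state would then be rendered by playing a corresponding animation clip, with the low-level motion graph method smoothing the transitions between clips.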

    Measuring, analysing and artificially generating head nodding signals in dyadic social interaction

    Social interaction involves rich and complex behaviours where verbal and non-verbal signals are exchanged in dynamic patterns. The aim of this thesis is to explore new ways of measuring and analysing interpersonal coordination as it naturally occurs in social interactions. Specifically, we want to understand what different types of head nods mean in different social contexts, how they are used during face-to-face dyadic conversation, and whether they relate to memory and learning. Many current methods are limited by time-consuming and low-resolution data, which cannot capture the full richness of a dyadic social interaction. This thesis explores ways to demonstrate how high-resolution data in this area can give new insights into the study of social interaction. Furthermore, we also want to demonstrate the benefit of using virtual reality to artificially generate interpersonal coordination to test our hypotheses about the meaning of head nodding as a communicative signal. The first study aims to capture two patterns of head nodding signals – fast nods and slow nods – and determine what they mean and how they are used across different conversational contexts. We find that fast nodding signals receiving new information and has a different meaning than slow nods. The second study aims to investigate a link between memory and head nodding behaviour. This exploratory study provided initial hints that there might be a relationship, though further analyses were less clear. In the third study, we aim to test whether interactive head nodding in virtual agents can be used to measure how much we like the virtual agent, and whether we learn better from virtual agents that we like. We find no causal link between memory performance and interactivity. In the fourth study, we perform a cross-experimental analysis of how the level of interactivity in different contexts (i.e., real, virtual, and video) impacts memory, and find clear differences between them.

    The Machine as Art/ The Machine as Artist

    The articles collected in this volume from the two companion Arts Special Issues, “The Machine as Art (in the 20th Century)” and “The Machine as Artist (in the 21st Century)”, represent a unique scholarly resource: analyses by artists, scientists, and engineers, as well as art historians, covering not only the current (and astounding) rapprochement between art and technology but also the vital post-World War II period that has led up to it. The collection is also distinguished by several of the contributors being prominent individuals within their own fields, or artists who have actually participated in the still-unfolding events with which it is concerned.