
    Using humanoid robots to study human behavior

    Our understanding of human behavior advances as our humanoid robotics work progresses, and vice versa. This team's work focuses on trajectory formation and planning, learning from demonstration, oculomotor control, and interactive behaviors. They are programming robotic behavior based on how we humans “program” behavior in, or train, each other.

    Text2Action: Generative Adversarial Synthesis from Language to Action

    In this paper, we propose a generative model that learns the relationship between language and human action in order to generate a human action sequence given a sentence describing human behavior. The proposed generative model is a generative adversarial network (GAN) based on the sequence-to-sequence (SEQ2SEQ) model. Using the proposed generative network, we can synthesize various actions for a robot or a virtual agent using a text encoder recurrent neural network (RNN) and an action decoder RNN. The network is trained on 29,770 pairs of actions and sentence annotations extracted from MSR-Video-to-Text (MSR-VTT), a large-scale video dataset. We demonstrate that the network can generate human-like actions that can be transferred to a Baxter robot, so that the robot performs an action based on a provided sentence. Results show that the proposed generative network correctly models the relationship between language and action and can generate a diverse set of actions from the same sentence.
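
    As a rough illustration of the encoder-decoder generator described above (not the authors' code), the sketch below wires a text-encoder RNN into an action-decoder RNN that emits a joint-pose sequence; the layer sizes, the pose dimension, and the way noise is injected for diversity are all assumptions.

        # Minimal sketch of a SEQ2SEQ-style text-to-action generator (illustrative only;
        # layer sizes, pose dimension, and the noise injection are assumptions, not the paper's code).
        import torch
        import torch.nn as nn

        class TextEncoder(nn.Module):
            def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

            def forward(self, token_ids):
                # token_ids: (batch, sentence_len); the final hidden state summarizes the sentence
                _, h = self.rnn(self.embed(token_ids))
                return h  # (1, batch, hidden_dim)

        class ActionDecoder(nn.Module):
            def __init__(self, pose_dim=24, hidden_dim=256):
                super().__init__()
                self.rnn = nn.GRU(pose_dim, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, pose_dim)

            def forward(self, h, noise, steps=30):
                # Unroll from the sentence encoding; the noise vector lets the generator
                # produce different actions for the same sentence (the GAN's source of diversity).
                batch = h.size(1)
                pose = torch.zeros(batch, 1, self.out.out_features) + noise.unsqueeze(1)
                poses = []
                for _ in range(steps):
                    feat, h = self.rnn(pose, h)
                    pose = self.out(feat)
                    poses.append(pose)
                return torch.cat(poses, dim=1)  # (batch, steps, pose_dim)

        # Example forward pass with toy shapes.
        enc, dec = TextEncoder(vocab_size=5000), ActionDecoder(pose_dim=24)
        sentence = torch.randint(0, 5000, (2, 8))  # two tokenized sentences
        z = torch.randn(2, 24)                     # per-sample noise
        action_seq = dec(enc(sentence), z)         # (2, 30, 24) joint-pose sequence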

    Towards automatic extraction of expressive elements from motion pictures: tempo

    This paper proposes a unique computational approach to extracting expressive elements of motion pictures in order to derive high-level semantics of the stories portrayed, thus enabling better video annotation and interpretation systems. The approach is motivated and directed by existing cinematic conventions known as film grammar and, as a first step towards demonstrating its effectiveness, uses the attributes of motion and shot length to define and compute a novel measure of the tempo of a movie. Tempo flow plots are defined and derived for four full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signaled by their unique tempo. The results confirm tempo as a useful attribute in its own right and a promising component of semantic constructs such as the tone or mood of a film.
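
    The sketch below is only a plausible reading of the tempo measure described above, assuming tempo rises with per-shot motion magnitude and falls with shot length after normalizing both over the film; the weights and normalization are illustrative, not the paper's exact formulation.

        # Toy per-shot tempo: higher motion and shorter shots yield higher tempo.
        # The weighting and normalization are assumptions, not the paper's exact formula.
        import numpy as np

        def tempo(shot_lengths, motion_magnitudes, w_motion=1.0, w_length=1.0):
            s = np.asarray(shot_lengths, dtype=float)       # shot durations in seconds
            m = np.asarray(motion_magnitudes, dtype=float)  # average motion per shot
            # Normalize each attribute over the whole film (z-scores).
            s_z = (s - s.mean()) / s.std()
            m_z = (m - m.mean()) / m.std()
            # Tempo grows with motion and shrinks with shot length.
            return w_motion * m_z - w_length * s_z

        # Example: the last two shots (fast cutting, high motion) score the highest tempo.
        print(tempo([12.0, 8.0, 3.0, 2.5], [0.2, 0.4, 0.9, 1.1]))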

    Haptic dancing: human performance at haptic decoding with a vocabulary

    The inspiration for this study is the observation that swing dancing involves coordination of actions between two humans that can be accomplished by pure haptic signaling. This study implements a leader-follower dance to be executed between a human and a PHANToM haptic device. The data demonstrate that the participants' understanding of the motion as a random sequence of known moves informs their following, making this vocabulary-based interaction fundamentally different from closed-loop pursuit tracking. The robot leader does not respond to the follower's movement other than to display error from a nominal path. This work is the first step in an investigation of successful haptic coordination between dancers, which will inform the subsequent design of a truly interactive robot leader.
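
    A minimal sketch of the leader behavior as the abstract describes it: the device only renders the follower's deviation from a pre-scripted nominal path as a restoring force. The stiffness value, the path shape, and the update loop here are illustrative assumptions, not the study's parameters.

        # Sketch of a leader that only renders error from a nominal path as a restoring
        # force and never adapts to the follower; the gain and path are made up here.
        import math

        STIFFNESS = 200.0  # N/m, illustrative spring constant

        def nominal_path(t):
            """Pre-scripted leader trajectory at time t (a simple planar loop)."""
            return (0.1 * math.cos(2.0 * t), 0.1 * math.sin(2.0 * t))

        def leader_force(t, follower_pos):
            """Force displayed to the follower, proportional to deviation from the path."""
            px, py = nominal_path(t)
            fx, fy = follower_pos
            return (STIFFNESS * (px - fx), STIFFNESS * (py - fy))

        # One haptic-loop tick: the follower is slightly off the path, so a corrective force appears.
        print(leader_force(0.5, (0.05, 0.09)))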

    Expressivity in Natural and Artificial Systems

    Roboticists are trying to replicate animal behavior in artificial systems. Yet quantitative bounds on the capacity of a moving platform (natural or artificial) to express information in its environment are not known. This paper presents a measure of the capacity for motion complexity, termed expressivity, of articulated platforms (both natural and artificial) and shows that this measure is stagnant and unexpectedly limited in extant robotic systems. The analysis indicates a trend of increasing capacity in both internal and external complexity for natural systems, while artificial, robotic systems have increased significantly in the capacity of computational (internal) states but remained more or less constant in mechanical (external) state capacity. This work presents a way to analyze trends in animal behavior and shows that robots are not capable of the same multi-faceted behavior in rich, dynamic environments as natural systems.
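
    Purely as an illustration of the internal-versus-external comparison sketched above (not the paper's definition of expressivity), one could count computational state capacity in bits of memory and mechanical state capacity as the log of distinguishable joint configurations; both quantities below are toy assumptions.

        # Illustrative only: contrasting internal (computational) and external (mechanical)
        # state capacity in bits; this is not the paper's definition of expressivity.
        import math

        def internal_capacity_bits(memory_bytes):
            """Computational state capacity: bits of addressable memory."""
            return 8 * memory_bytes

        def external_capacity_bits(num_joints, positions_per_joint):
            """Mechanical state capacity: log2 of distinguishable joint configurations."""
            return num_joints * math.log2(positions_per_joint)

        # Toy comparison: internal capacity dwarfs external capacity for a modern robot,
        # matching the qualitative trend the abstract describes.
        print(internal_capacity_bits(4 * 2**30))  # ~3.4e10 bits for 4 GiB of memory
        print(external_capacity_bits(30, 4096))   # 360 bits for 30 joints at 12-bit resolution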