
    Toward a computational theory for motion understanding: The expert animators model

    Artificial intelligence researchers claim to understand some aspect of human intelligence when their model is able to emulate it. In the context of computer graphics, the ability to go from motion representation to convincing animation should accordingly be treated not simply as a trick for computer graphics programmers but as an important epistemological and methodological goal. In this paper we investigate a unifying model for animating a group of articulated bodies, such as humans and robots, in a three-dimensional environment. The proposed model is considered in the framework of knowledge representation and processing, with special reference to motion knowledge. The model is meant to help lay the basis for a computational theory of motion understanding applied to articulated bodies.

    Curved Path Walking

    Research on biped locomotion has focused on sagittal-plane walking, in which the stepping path is a straight line. Because a walking path is often curved in a three-dimensional environment, a 3D locomotion subsystem is required to provide general walking animation. In building a 3D locomotion subsystem, we tried to utilize pre-existing straight-path (2D) systems. The movement of the center of the body is important in determining the amount of banking and turning. The center site is defined to be the midpoint between the two hip joints. An algorithm to obtain the center site trajectory that realizes the given curved walking path is presented. From the position and orientation of the center site, we compute the stance and swing leg configurations as well as the upper body configuration, based on the underlying 2D system.
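    The center-site idea above is simple to state concretely. The sketch below is illustrative only: `center_site` is the paper's midpoint definition, while `banking_angle` uses a standard centripetal-balance model (not necessarily the paper's formulation), and all function names are assumptions.

```python
import math

def center_site(left_hip, right_hip):
    """Center site: the midpoint between the two hip joints."""
    return tuple((l + r) / 2.0 for l, r in zip(left_hip, right_hip))

def banking_angle(speed, turn_radius, g=9.81):
    """Hypothetical banking estimate for a curved walking path:
    lean toward the turn center so gravity balances the
    centripetal acceleration (a textbook model, not the paper's)."""
    return math.atan2(speed ** 2, g * turn_radius)

# Example: hips 20 cm apart; a 1.4 m/s walk around a 2 m radius turn.
site = center_site((0.0, 0.0, 0.9), (0.2, 0.0, 0.9))
lean = banking_angle(1.4, 2.0)
```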

    Jack: A Toolkit for Manipulating Articulated Figures

    The problem of positioning and manipulating three-dimensional articulated figures is often handled by ad hoc techniques which are cumbersome to use. In this paper, we describe a system which provides a consistent and flexible user interface to a complex representation for articulated figures in a 3D environment. Jack is a toolkit of routines for displaying and manipulating complex geometric figures, and it provides a method of interactively manipulating arbitrary homogeneous transformations with a mouse. These transformations may specify the position and orientation of figures within a scene or the joint transformations within the figures themselves. Jack combines this method of 3D input with a flexible and informative screen management facility to provide a user-friendly interface for manipulating three-dimensional objects.
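    The homogeneous transformations Jack manipulates are ordinary 4x4 matrices that combine rotation and translation. A minimal dependency-free sketch of composing such a transform and applying it to a point (names and structure are illustrative, not Jack's actual API):

```python
import math

def mat_mul(A, B):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous translation by (tx, ty, tz)."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(theta):
    """Homogeneous rotation about the z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(M, p):
    """Apply a homogeneous transform to a 3D point."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(3))

# A joint rotation followed by a placement of the figure in the scene.
M = mat_mul(translation(1.0, 0.0, 0.0), rotation_z(math.pi / 2))
p = apply(M, (1.0, 0.0, 0.0))
```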

    A Conceptual Image-Based Data Glove for Computer-Human Interaction

    Data gloves are devices equipped with sensors that capture the movements of the user's hand in order to select or manipulate objects in a virtual world. Data gloves were introduced three decades ago and since then have been used in many 3D interaction techniques. However, good data gloves are too expensive, and only a few of them can perceive the full set of hand movements. In this paper we describe the design of an image-based data glove (IBDG) prototype suitable for applications that require finger-level sensing, such as virtual object manipulation and interaction approaches. The proposed device uses a camera to track visual markers at the fingertips, and a software module to compute the position of each fingertip and its joints in real time. To evaluate our concept, we built a prototype and tested it with 15 volunteers. We also discuss how to improve the engineering of the prototype, how to turn it into a low-cost interaction device, as well as other relevant issues about this original concept.
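    Recovering joint configurations from a tracked fingertip marker is an inverse-kinematics problem. The sketch below is an assumption-laden illustration, not the IBDG's actual method: it treats a finger as a two-link planar chain and recovers both joint angles from the fingertip position via the law of cosines.

```python
import math

def finger_joint_angles(tip_x, tip_y, l1=0.04, l2=0.03):
    """Hypothetical two-link planar IK for one finger: given a tracked
    fingertip position (metres, in the camera-derived finger plane) and
    segment lengths l1, l2, return the proximal and distal joint angles."""
    d2 = tip_x ** 2 + tip_y ** 2
    # Law of cosines for the distal joint; clamp against rounding error.
    cos_q2 = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, cos_q2)))
    # Proximal angle: direction to the tip minus the distal link's offset.
    q1 = math.atan2(tip_y, tip_x) - math.atan2(l2 * math.sin(q2),
                                               l1 + l2 * math.cos(q2))
    return q1, q2

# A fully extended finger: the tip lies at distance l1 + l2 along the x axis.
q1, q2 = finger_joint_angles(0.07, 0.0)
```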

    Curved Path Human Locomotion That Handles Anthropometrical Variety

    Human locomotion simulation along a curved path is presented. The process adds a small constant cost (O(1)) to any pre-existing straight-line walking algorithm. The input curve is processed by the footprint generator to produce a footprint sequence. The resulting sequence is scanned by the walking motion generator, which generates the walking poses that realize those footprints. The two primitives INITIALIZE_STEP and ADVANCE_STEP are used for walking motion generation. INITIALIZE_STEP is activated with the input parameters walker, next_foot_print, left_or_right, and step_duration, just before each step, to precompute the trajectories of the center of the body and the ankles. ADVANCE_STEP is called with a normalized time to generate the actual pose at that moment. The normalized time is a logical time, covering zero to one during a complete step.
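    The two-primitive interface can be sketched as follows. The signatures mirror the parameters named in the abstract, but the bodies are stand-ins: a simple linear interpolation replaces the paper's precomputed center-of-body and ankle trajectories.

```python
def initialize_step(walker, next_foot_print, left_or_right, step_duration):
    """Called just before each step: precompute the trajectories for the
    coming step. Here a linear path stands in for the paper's trajectory
    computation; `walker` is modeled as a plain dict for illustration."""
    walker["start"] = walker.get("pos", (0.0, 0.0))
    walker["goal"] = next_foot_print
    walker["swing"] = left_or_right
    walker["duration"] = step_duration

def advance_step(walker, t):
    """Generate the pose at normalized (logical) time t in [0, 1],
    where t = 0 is the start of the step and t = 1 its completion."""
    x0, y0 = walker["start"]
    x1, y1 = walker["goal"]
    walker["pos"] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return walker["pos"]

# One step toward a footprint at (1.0, 0.0), sampled mid-step and at the end.
w = {}
initialize_step(w, (1.0, 0.0), "left", 0.6)
mid = advance_step(w, 0.5)
end = advance_step(w, 1.0)
```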

    Motion analysis report

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

    Real-Time Virtual Humans

    The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. We first describe the state of the art, then focus on the particular approach taken at the University of Pennsylvania with the Jack system. Various aspects of real-time virtual humans are considered, such as appearance and motion, interactive control, autonomous action, gesture, attention, locomotion, and multiple individuals. The underlying architecture consists of a sense-control-act structure that permits reactive behaviors to be locally adaptive to the environment, and a PaT-Net parallel finite-state machine controller that can be used to drive virtual humans through complex tasks. We then argue for a deep connection between language and animation and describe current efforts in linking them through two systems: the Jack Presenter and the JackMOO extension to LambdaMOO. Finally, we outline a Parameterized Action Representation for mediating between language instructions and animated actions.
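    The controller idea can be illustrated with a toy parallel finite-state machine in the spirit of PaT-Nets: several small nets advance concurrently over the same event stream, each driving one behavior. The class, net names, and transitions below are all illustrative assumptions, not the Jack system's actual representation.

```python
class Net:
    """A minimal finite-state machine: a transition table plus a current
    state. Unhandled (state, event) pairs leave the state unchanged."""
    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = start

    def step(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Two hypothetical nets run "in parallel" by each consuming every event:
# one drives locomotion, the other drives attention.
locomotion = Net({("stand", "go"): "walk", ("walk", "stop"): "stand"}, "stand")
attention = Net({("idle", "hear"): "look", ("look", "done"): "idle"}, "idle")

for event in ["go", "hear"]:
    locomotion.step(event)
    attention.step(event)
```

    Running one controller per behavior keeps each net small; coordination happens only through the shared event stream, which is what lets the behaviors stay locally adaptive.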