6 research outputs found

    Interactive locomotion animation using path planning


    Long Range Automated Persistent Surveillance

    This dissertation addresses long range automated persistent surveillance with a focus on three topics: sensor planning, size preserving tracking, and high magnification imaging. In sensor planning, sufficient overlapped field of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures a uniform and sufficient overlap between the cameras’ fields of view for an optimal handoff success rate. The algorithm works for environments with multiple dynamic targets observed by different types of cameras. Significantly improved handoff success rates are demonstrated in experiments using floor plans of various scales. Size preserving tracking automatically adjusts the camera’s zoom to maintain a consistent view of the object of interest. Target scale estimation is carried out with the paraperspective projection model, which compensates for the center offset and accounts for system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed; it offers a direct, real-time implementation and improved flexibility in accommodating the target’s 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnifications and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet based enhancement algorithm with automated frame selection is developed and proves effective, considerably elevating the face recognition rate for severely blurred long range face images.
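    The abstract does not detail the adaptive sharpness measures themselves; as a loose, hypothetical illustration of the kind of image-quality score such an assessment stage might compute, the sketch below scores sharpness from gradient energy with NumPy. The function name and normalization are assumptions, not the dissertation's actual measure.

```python
import numpy as np

def gradient_sharpness(image: np.ndarray) -> float:
    """Illustrative sharpness score (not the dissertation's measure): mean gradient
    magnitude of a grayscale image, normalized by mean intensity so the score is
    less sensitive to overall illumination."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)              # finite-difference gradients, rows then columns
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return float(grad_mag.mean() / (img.mean() + 1e-8))

# Example: a sharp random texture scores higher than a smoothed copy of itself.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3.0
print(gradient_sharpness(sharp) > gradient_sharpness(blurred))   # True
```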

    Planning Plausible Human Motions for Navigation and Collision Avoidance

    This thesis investigates the plausibility of computer-generated human motions for navigation and collision avoidance. To navigate a human character through obstacles in a virtual environment, the problem is often tackled by finding the shortest possible path to the destination with the smoothest motions available. Such a solution is regarded as cost-effective and free-flowing in that it implicitly minimises the biomechanical effort and potentially precludes anomalies such as frequent and sudden changes of behaviour, and hence appears more plausible to human eyes. Previous research addresses this problem in two stages: finding the shortest collision-free path (motion planning) and then fitting motions onto this path (motion synthesis). This conventional approach is not optimal because the decoupling of the two stages introduces two problems. First, it forces the motion-planning stage to deliberately simplify the collision model to avoid obstacles. Secondly, it over-constrains the motion-synthesis stage to approximate motions to a sub-optimal trajectory. This often results in implausible animations that travel along long, erratic paths while making frequent and sudden behaviour changes. In this research, I argue that to produce more plausible navigation and collision-avoidance animation, close-proximity interaction with obstacles is crucial. To address this, I propose to combine motion planning and motion synthesis to search for shorter and smoother solutions. The intuition is that by incorporating precise collision detection and avoidance with motion capture database queries, we can plan fine-scale interactions between obstacles and moving crowds. The results demonstrate that my approach discovers shorter paths with steadier behaviour transitions in scene navigation and crowd avoidance. In addition, this thesis proposes a set of metrics that can be used to evaluate the plausibility of computer-generated navigation animations.
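    The thesis couples motion planning with motion synthesis rather than running them as separate stages; as a rough sketch of that idea, not the thesis's actual algorithm, the following Python searches directly over a tiny, made-up set of motion-capture clips and collision-checks each candidate step against circular obstacles. All clip names, displacements, costs, and the obstacle model are illustrative assumptions.

```python
import heapq
import math

# Hypothetical clip table: planar displacement (dx, dy) and heading change per clip.
CLIPS = {
    "walk_straight": (1.0, 0.0, 0.0),
    "turn_left":     (0.7, 0.3, math.radians(30)),
    "turn_right":    (0.7, -0.3, math.radians(-30)),
}

def collision_free(x, y, obstacles, radius=0.4):
    """Placeholder proximity check against circular obstacles (ox, oy, r)."""
    return all(math.hypot(x - ox, y - oy) > r + radius for ox, oy, r in obstacles)

def plan_motion(start, goal, obstacles, max_expansions=200):
    """Greedy best-first search over clip sequences; returns a list of clip names."""
    sx, sy, sh = start
    frontier = [(0.0, sx, sy, sh, [])]
    while frontier and max_expansions > 0:
        max_expansions -= 1
        _, x, y, h, seq = heapq.heappop(frontier)
        if math.hypot(goal[0] - x, goal[1] - y) < 0.5:
            return seq
        for name, (dx, dy, dh) in CLIPS.items():
            nx = x + dx * math.cos(h) - dy * math.sin(h)   # apply clip displacement
            ny = y + dx * math.sin(h) + dy * math.cos(h)   # in the current heading frame
            if collision_free(nx, ny, obstacles):
                cost = math.hypot(goal[0] - nx, goal[1] - ny)
                heapq.heappush(frontier, (cost, nx, ny, h + dh, seq + [name]))
    return None

print(plan_motion((0.0, 0.0, 0.0), (4.0, 1.0), [(2.0, 0.0, 0.5)]))
```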

    On-line locomotion synthesis for virtual humans

    Ever since the development of Computer Graphics in the industrial and academic worlds in the seventies, public knowledge and expertise have grown tremendously, notably because of the increasing fascination with Computer Animation. This specific field of Computer Graphics gathers numerous techniques, especially for the animation of characters or virtual humans in movies and video games. To create such high-fidelity animations, particular interest has been dedicated to motion capture, a technology which records the 3D movement of a live performer. The realism of the resulting motion is convincing. However, this technique offers little control to animators, as the recorded motion can only be played back. Recently, many advances based on motion capture have been published, concerning slight but precise modifications of an original motion or the parameterization of large motion databases. The challenge consists in combining motion realism with intuitive on-line motion control, while preserving real-time performance. In the first part of this thesis, we add a brick to the wall of motion-parameterization techniques based on motion capture by introducing a generic motion model for locomotion and jump activities. For this purpose, we simplify the motion representation using a statistical method in order to facilitate the elaboration of an efficient parametric model. This model is structured in hierarchical levels, allowing intuitive motion synthesis with high-level parameters. In addition, we present a space and time normalization process to adapt our model to characters of various sizes. In the second part, we integrate this motion model into an animation engine, allowing the generation of a continuous stream of motion for virtual humans. We provide two additional tools to improve the flexibility of our engine. Based on the concept of motion anticipation, we first introduce an on-line method for detecting and enforcing foot-ground constraints; hence, a straight-line walking motion can be smoothly modified into a curved one. Secondly, we propose an approach for the automatic and coherent synthesis of transitions between locomotion and jump motions (and back), taking into account their respective properties. Finally, we consider the interaction of a virtual human with its environment. Given initial and final conditions on the locomotion speed and foot positions, we propose a method which computes the corresponding trajectory. To illustrate this method, we present a case study which mirrors as closely as possible the behavior of a human confronted with an obstacle: at any time, obstacles may be interactively created in front of a moving virtual human. Our method computes a trajectory allowing the virtual human to precisely jump over the obstacle in an on-line manner.
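    The abstract only names the statistical simplification and the high-level parametric control; the sketch below shows one common way such a scheme can look, assuming motion cycles are flattened into vectors, reduced with PCA, and re-synthesized by interpolating the low-dimensional weights over a speed parameter. The data layout and function names are assumptions, not the thesis's actual model.

```python
import numpy as np

def fit_pca(cycles: np.ndarray, n_components: int = 8):
    """cycles: (n_examples, n_dofs) matrix, one flattened motion cycle per row.
    Returns the mean cycle, a PCA basis, and per-example low-dimensional weights."""
    mean = cycles.mean(axis=0)
    centered = cycles - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                 # principal directions (n_components, n_dofs)
    weights = centered @ basis.T              # (n_examples, n_components)
    return mean, basis, weights

def synthesize(mean, basis, weights, speeds, target_speed):
    """Interpolate PCA weights between the two example speeds bracketing target_speed,
    then reconstruct a full motion cycle from the blended weights."""
    order = np.argsort(speeds)
    s, w = np.asarray(speeds, dtype=float)[order], weights[order]
    i = int(np.clip(np.searchsorted(s, target_speed), 1, len(s) - 1))
    t = (target_speed - s[i - 1]) / (s[i] - s[i - 1])
    blended = (1 - t) * w[i - 1] + t * w[i]
    return mean + blended @ basis             # reconstructed cycle, shape (n_dofs,)
```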

    Planning human walk in virtual environments

    This paper presents a method for animating human characters, especially dedicated to walk planning problems. The method is integrated into a randomized motion planning scheme that includes a steering method dedicated to human walk. This steering method integrates a character motion controller that produces realistic animations. The navigation of the character through a virtual environment is modeled as a composition of Bézier curves. The controller is based on motion capture data editing techniques. The approach satisfies some essential computer graphics criteria: a realistic result, a low response time, and collision-free motion in possibly constrained 3D environments. The approach has been implemented and successfully demonstrated on several examples.
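    The paper states that the character's navigation is modeled as a composition of Bézier curves but gives no further detail here; as a minimal, hypothetical illustration of that representation, the sketch below evaluates cubic Bézier segments and chains them into a single sampled walk path. The control-point layout and sampling density are assumptions.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1]; points are (x, y) tuples."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def sample_path(segments, samples_per_segment=20):
    """Concatenate samples from successive Bézier segments into one walk path."""
    path = []
    for p0, p1, p2, p3 in segments:
        for k in range(samples_per_segment + 1):
            path.append(cubic_bezier(p0, p1, p2, p3, k / samples_per_segment))
    return path

# Two chained segments sharing an endpoint, forming a gentle S-shaped walk path.
segments = [
    ((0, 0), (1, 0), (2, 1), (3, 1)),
    ((3, 1), (4, 1), (5, 0), (6, 0)),
]
print(len(sample_path(segments)))   # 42 sampled positions
```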

    Planning Human Walk in Virtual Environments

    This paper presents a method for animating human characters, especially dedicated to walk planning problems. The method is integrated into a randomized motion planning scheme that includes a steering method dedicated to human walk. This steering method integrates a character motion controller that produces realistic animations. The navigation of the character through the virtual environment is modeled as a composition of Bézier curves. The controller is based on motion capture data editing techniques. The approach satisfies some essential computer graphics criteria: a realistic result, a low response time, and collision-free motion in possibly constrained 3D environments. The approach has been implemented and successfully demonstrated on several examples.