19 research outputs found

    An energy-driven motion planning method for two distant postures

    In this paper, we present a local motion planning algorithm for character animation. We focus on motion planning between two distant postures where linear interpolation leads to penetrations. Our framework has two stages. The motion planning problem is first solved as a Boundary Value Problem (BVP) on an energy graph which encodes penetrations, motion smoothness and user control. Having established a mapping from the configuration space to the energy graph, a fast and robust local motion planning algorithm is introduced to solve the BVP, generating motions that previously could only be computed by global planning methods. In the second stage, a projection of the solution motion onto a constraint manifold is proposed for finer user control. Our method can be integrated into current keyframing techniques. It also has potential applications in motion planning problems in robotics.
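The two-stage idea — fix the boundary postures and descend an energy mixing penetration penalties with motion smoothness — can be illustrated with a toy sketch. The quadratic penalty, the weights, and the 2-D configuration space with one circular forbidden region below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def penetration_energy(q, center, radius):
    # Quadratic penalty once a configuration enters the forbidden region.
    d = np.linalg.norm(q - center)
    return (radius - d) ** 2 if d < radius else 0.0

def path_energy(path, center, radius, w_smooth=1.0, w_pen=10.0):
    # Smoothness (sum of squared segment lengths) plus penetration penalties.
    smooth = sum(float(np.sum((path[i + 1] - path[i]) ** 2))
                 for i in range(len(path) - 1))
    pen = sum(penetration_energy(q, center, radius) for q in path[1:-1])
    return w_smooth * smooth + w_pen * pen

def plan(start, goal, center, radius, n=12, iters=300, step=0.005, eps=1e-4):
    # Boundary value problem: the endpoints stay pinned while the interior
    # waypoints descend the energy via a central-difference gradient.
    path = np.linspace(start, goal, n)
    for _ in range(iters):
        grad = np.zeros_like(path)
        for i in range(1, n - 1):
            for j in range(path.shape[1]):
                for s in (eps, -eps):
                    p = path.copy()
                    p[i, j] += s
                    grad[i, j] += np.sign(s) * path_energy(p, center, radius)
        path[1:-1] -= step * grad[1:-1] / (2 * eps)
    return path
```

Starting from the linear interpolation that would penetrate the obstacle, the descent bends the interior waypoints around it while keeping both boundary postures fixed.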

    Topology-based character motion synthesis

    This thesis tackles the problem of automatically synthesizing motions of close-character interactions, such as those that appear in animations of wrestling and dancing. Designing such motions is a daunting task even for experienced animators, as the close contacts between the characters can easily result in collisions or penetrations of the body segments. The main problem lies in the conventional representation of character states, which is based on joint angles or joint positions. As the relationships between the body segments are not encoded in such a representation, path-planning for valid motions to switch from one posture to another requires intense random sampling and collision detection in the state space. To tackle this problem, we represent the status of the characters using their spatial relationships. Describing the scene through spatial relationships makes it easier for users and animators to analyze the scene and synthesize close interactions between characters. We first propose a method to encode the relationship of the body segments using the Gauss Linking Integral (GLI), a value that specifies how much the body segments are wound around each other. We present how it can be applied to content-based retrieval of motion data of close interactions, and also to the synthesis of close character interactions. Next, we propose a representation called the Interaction Mesh, a volumetric mesh composed of points located at the joint positions of the characters and at vertices of the environment. This representation is more general than the tangle-based one, as it can describe interactions that involve neither tangling nor contact. We describe how it can be applied to motion editing and retargeting of close character interactions while avoiding penetrations and pass-throughs of the body segments.
The application of our research is not limited to computer animation but extends to robotics, where enabling robots to conduct complex tasks such as tangling, wrapping, holding and knotting is essential for assisting humans in daily life.
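The GLI mentioned above has a standard discrete form: a double sum of the Gauss integrand over pairs of segments of the two curves. A minimal numeric sketch using the midpoint rule (the sampling density and test curves in the usage are arbitrary choices for illustration):

```python
import numpy as np

def gauss_linking_integral(curve_a, curve_b):
    """Discrete Gauss Linking Integral between two closed polylines.

    Approximates (1/4 pi) * double-integral of (r2 - r1) . (dr1 x dr2) / |r2 - r1|^3.
    For two disjoint closed curves this converges to the integer linking number;
    for open curves it still measures how much they wind around each other.
    """
    da = np.roll(curve_a, -1, axis=0) - curve_a   # segment vectors of curve A
    db = np.roll(curve_b, -1, axis=0) - curve_b   # segment vectors of curve B
    mid_a = curve_a + 0.5 * da                    # segment midpoints
    mid_b = curve_b + 0.5 * db
    total = 0.0
    for i in range(len(curve_a)):
        r = mid_b - mid_a[i]                      # vectors between midpoints
        cross = np.cross(da[i], db)               # dr1 x dr2 for all segments of B
        total += np.sum(np.einsum('ij,ij->i', r, cross)
                        / np.linalg.norm(r, axis=1) ** 3)
    return total / (4 * np.pi)
```

For two Hopf-linked rings the sum approaches ±1; for two far-apart rings it approaches 0 — the kind of signal used here to characterize tangled body segments.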

    A topological extension of movement primitives for curvature modulation and sampling of robot motion

    The version of record is available online at: https://doi.org/10.1007/s10514-021-09976-7
    This paper proposes to enrich robot motion data with trajectory curvature information. To do so, we use an approximate implementation of a topological feature named writhe, which measures the curling of a closed curve around itself, and its analog for two closed curves, namely the linking number. Although these features are established for closed curves, their definition allows for a discrete calculation that is well-defined for non-closed curves and can thus provide information about how much a robot trajectory is curling around a line in space. Such lines can be predefined by a user, observed by vision or, in our case, inferred as virtual lines in space around which the robot motion is curling. We use these topological features to augment the data of a trajectory encapsulated as a Movement Primitive (MP). We propose a method to determine how many virtual segments best characterize a trajectory and then find such segments. This results in a generative model that permits modulating curvature to generate new samples, while still staying within the dataset distribution and being able to adapt to contextual variables.
    This work has been carried out within the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations") funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Advanced Grant agreement No 741930). Research at IRI is also supported by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656). Peer reviewed. Postprint (author's final draft).
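As a rough illustration of "how much a trajectory curls around a line in space", the sketch below accumulates the signed winding angle of a trajectory about a given axis — a simpler proxy than the discrete writhe/linking-number approximation the paper itself uses. The helix test curve and axis in the usage are invented for the example:

```python
import numpy as np

def winding_about_line(points, origin, direction):
    """Accumulated winding (in turns) of a 3-D trajectory around a line.

    Projects the trajectory onto the plane normal to the line and sums the
    signed angle swept around the axis between consecutive samples.
    """
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    rel = np.asarray(points, float) - np.asarray(origin, float)
    proj = rel - np.outer(rel @ d, d)        # components perpendicular to the line
    p, q = proj[:-1], proj[1:]
    sin_t = np.cross(p, q) @ d               # signed area terms about the axis
    cos_t = np.einsum('ij,ij->i', p, q)      # per-step dot products
    return float(np.arctan2(sin_t, cos_t).sum() / (2 * np.pi))
```

A helix making three turns about the z-axis yields a winding of 3, the sort of scalar that can be attached to a Movement Primitive as a curvature feature.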

    Humanoid Robotic Manipulation Benchmarking and Bimanual Manipulation Workspace Analysis

    The growing adoption of robots for new applications has led to the use of robots in human environments for human-like tasks, applications well suited to humanoid robots, as they are designed to move like a human and operate in similar environments. However, a user must decide which robot and control algorithm are best suited to the task, motivating the need for standardized performance comparison through benchmarking. Typical humanoid robotic scenarios in many household and industrial tasks involve manipulation of objects with two hands: bimanual manipulation. Understanding how such tasks can be performed in the humanoid's workspace is especially challenging because the workspace is highly constrained by grasp and stability requirements, yet this understanding is very important for introducing humanoid robots into human environments for human-like tasks. The first topic this thesis focuses on is benchmarking manipulation for humanoid robotics. The evaluation of humanoid manipulation can be considered for whole-body manipulation (manipulation while standing and remaining balanced) or loco-manipulation (taking steps during manipulation). As part of the EUROBENCH project, which aims to develop a unified benchmarking framework for robotic systems performing locomotion tasks, benchmarks for whole-body manipulation and loco-manipulation are proposed, consisting of standardized test beds, comprehensive experimental protocols, and insightful key performance indicators. For each of these benchmarks, partial initial benchmarks are performed to begin evaluating the difference in performance of the University of Waterloo's REEM-C, "Seven", using two different motion generation and control strategies. These partial benchmarks showed trade-offs between speed and efficiency on the one hand and placement accuracy on the other. The second topic of interest is bimanual manipulation workspace analysis of humanoid robots.
To evaluate the ability of a humanoid robot to bimanually manipulate a box while remaining balanced, a new metric for combined manipulability-stability is developed based on the volume of the manipulability ellipsoid and the distance of the capture point from the edge of the support polygon. Using this metric, visualizations of the workspace are performed for the following scenarios: when the center of mass of the humanoid has a velocity, manipulating objects of different size and mass, and manipulating objects using various grips. To examine bimanual manipulation with different fixed grasps, the manipulation of two different boxes, a broom, and a rolling pin is visualized to see how grip affects the feasibility and manipulability-stability quality of a task. Visualizations of REEM-C and TALOS are also performed for a general workspace and a box manipulation task to compare their workspaces, as they have different kinematic structures. These visualizations provide a better understanding of how manipulability and stability are impacted in a bimanual manipulation scenario.
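The abstract names the two ingredients of the metric but not the exact formula combining them, so the sketch below joins Yoshikawa's manipulability measure (proportional to the manipulability-ellipsoid volume) with the linear-inverted-pendulum capture point in one plausible product form. The rectangular support polygon, the product combination, and all numbers are assumptions:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def manipulability(J):
    # Yoshikawa's measure: the manipulability-ellipsoid volume is
    # proportional to sqrt(det(J @ J.T)) for a task Jacobian J.
    return np.sqrt(np.linalg.det(J @ J.T))

def capture_point(com, com_vel, z0):
    # Instantaneous capture point of the linear inverted pendulum model:
    # ground-plane CoM position plus velocity scaled by sqrt(z0 / g).
    return com[:2] + com_vel[:2] * np.sqrt(z0 / G)

def support_margin(point, rect_min, rect_max):
    # Signed distance from a point to the edge of a rectangular support
    # polygon (positive inside, negative outside).
    dx = min(point[0] - rect_min[0], rect_max[0] - point[0])
    dy = min(point[1] - rect_min[1], rect_max[1] - point[1])
    return min(dx, dy)

def manip_stability(J, com, com_vel, z0, rect_min, rect_max):
    # Illustrative combination: zero when the capture point leaves the
    # support polygon, otherwise manipulability scaled by the margin.
    margin = support_margin(capture_point(com, com_vel, z0), rect_min, rect_max)
    return max(margin, 0.0) * manipulability(J)
```

A fast-moving center of mass pushes the capture point outside the support polygon and zeroes the score, matching the intuition that dexterity is worthless without balance.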

    Sim2Real Neural Controllers for Physics-based Robotic Deployment of Deformable Linear Objects

    Deformable linear objects (DLOs), such as rods, cables, and ropes, play important roles in daily life. However, manipulation of DLOs is challenging, as large geometrically nonlinear deformations may occur during the manipulation process. The problem is made even more difficult because the different deformation modes (e.g., stretching, bending, and twisting) may result in elastic instabilities during manipulation. In this paper, we formulate a physics-guided data-driven method to solve a challenging manipulation task -- accurately deploying a DLO (an elastic rod) onto a rigid substrate along various prescribed patterns. Our framework combines machine learning, scaling analysis, and physical simulations to develop a physics-based neural controller for deployment. We explore the complex interplay between the gravitational and elastic energies of the manipulated DLO and obtain a control method for DLO deployment that is robust against friction and material properties. Out of the numerous geometrical and material properties of the rod and substrate, we show through physical analysis that only three non-dimensional parameters are needed to describe the deployment process. The essence of the control law for the manipulation task can therefore be constructed with a low-dimensional model, drastically increasing the computation speed. The effectiveness of our optimal control scheme is shown through a comprehensive robotic case study comparing it against a heuristic control method for deploying rods in a wide variety of patterns. In addition, we showcase the practicality of our control scheme by having a robot accomplish challenging high-level tasks such as mimicking human handwriting, cable placement, and tying knots.
    Comment: YouTube video: https://youtu.be/OSD6dhOgyMA?feature=share
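The abstract does not name the three non-dimensional groups. In this class of rod-deployment analyses, however, the usual first step of the scaling analysis is to normalize lengths by the gravito-bending length, at which elastic bending energy and gravitational energy balance. A hedged sketch of that normalization (the material values in the usage are invented, and this is only one of the groups such an analysis would use):

```python
def gravito_bending_length(E, I, rho, A, g=9.81):
    """Characteristic length at which bending and gravity balance for an
    elastic rod: L_gb = (E*I / (rho*A*g))**(1/3).

    E   -- Young's modulus [Pa]
    I   -- area moment of inertia of the cross-section [m^4]
    rho -- material density [kg/m^3]
    A   -- cross-sectional area [m^2]
    Dividing geometric quantities (deployment height, pattern size) by L_gb
    yields dimensionless parameters of the kind the scaling analysis relies on.
    """
    return (E * I / (rho * A * g)) ** (1.0 / 3.0)
```

The cube-root scaling means an 8x stiffer rod has exactly twice the characteristic length, which is why normalized (rather than raw) quantities keep the control law low-dimensional.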

    Walking with virtual humans: understanding human response to virtual humanoids' appearance and behaviour while navigating in immersive VR

    In this thesis, we present a set of studies whose results have allowed us to analyze how to improve the realism, navigation, and behaviour of avatars in an immersive virtual reality environment. In our simulations, participants must perform a series of tasks while we analyze perceptual and behavioural data. The results of the studies have allowed us to deduce which improvements need to be incorporated into the original simulations in order to enhance the perception of realism, the navigation technique, the rendering of the avatars, their behaviour, or their animations. The most reliable technique for simulating avatars' behaviour in a virtual reality environment should be based on the study of how humans behave within the environment. For this purpose, it is necessary to build virtual environments where participants can navigate safely and comfortably with a proper metaphor and, if the environment is populated with avatars, to simulate their behaviour accurately. All these aspects together will make the participants behave in a way that is closer to how they would behave in the real world. In addition, the integration of these concepts could provide an ideal platform to develop different types of applications, with and without collaborative virtual reality, such as emergency simulations, teaching, architecture, or design. In the first contribution of this thesis, we carried out an experiment to study human decision making during an evacuation. We were interested in evaluating to what extent the behaviour of a virtual crowd can affect individuals' decisions. From the second contribution, in which we studied the perception of realism with bots and humans performing either just locomotion or varied animations, we can conclude that combining human-like avatars with animation variety can increase the overall realism of a crowd simulation, its trajectories, and its animations.
The preliminary study presented in the third contribution of this thesis showed that realistic rendering of the environment and the avatars does not appear to increase the participants' perception of realism, which is consistent with previously published work. The preliminary results of our walk-in-place contribution showed a seamless and natural transition between walk-in-place and normal walking. Our system provided a velocity mapping function that closely resembles natural walking, and we observed through a pilot study that it successfully reduces motion sickness and enhances immersion. Finally, the results of the contribution on locomotion in collaborative virtual reality showed that animation synchronism and the footstep sounds of the avatars representing the participants do not seem to have a strong impact in terms of presence and feeling of avatar control. However, in our experiment, incorporating natural animations and footstep sounds resulted in smaller clearance values in VR than in previous work in the literature. The main objective of this thesis was to improve different factors related to virtual reality experiences to make the participants feel more comfortable in the virtual environment. These factors include the behaviour and appearance of the virtual avatars and the navigation through the simulated space. By increasing the realism of the avatars and facilitating navigation, high presence scores are achieved during the simulations. This provides an ideal framework for developing collaborative virtual reality applications or emergency simulations that require participants to feel as if they were in real life. Postprint (published version).

    Topology based representations for motion synthesis and planning

    Robot motion can be described in several alternative representations, including joint configuration or end-effector spaces. These representations are often used for manipulation or navigation tasks, but they are not suitable for tasks that involve close interaction with the environment. In these scenarios, collisions and the relative poses of the robot and its surroundings create a complex planning space. To deal with this complexity, we exploit several representations that capture the state of the interaction, rather than the state of the robot. Borrowing notions of topological invariance and homotopy classes, we design task spaces based on winding numbers and writhe for synthesizing winding motion, and on electrostatic fields for planning reaching and grasping motion. Our experiments show that these representations capture the motion, preserving its qualitative properties, while generalising over finer geometrical detail. Based on the same motivation, we utilise a scale- and rotation-invariant representation for locally preserving distances, called the interaction mesh. The interaction mesh allows for transferring motion between robots of different scales (motion re-targeting), between humans and robots (teleoperation), and between different environments (motion adaptation). To estimate the state of the environment, we employ real-time sensing techniques utilizing dense stereo tracking, magnetic tracking sensors, and inertial measurement units. We combine and exploit these representations for synthesis and generalization of motion in dynamic environments. This method is most beneficial on problems where direct planning in joint space is extremely hard, whereas local optimal control exploiting the topology and metric of these novel representations can efficiently compute optimal trajectories. We formulate this approach in the framework of optimal control as an approximate inference problem. This allows for consistent combination of multiple task spaces (e.g.
end-effector, joint space, and the abstract task spaces we investigate in this thesis). Motion generalization to novel situations and kinematics is similarly performed by projecting motion from abstract representations onto the joint configuration space. This technique, based on operational space control, allows us to adapt the motion in real time. This real-time re-mapping generates robust motion, thus reducing the amount of re-planning. We have implemented our approach as part of an open-source project called the Extensible Optimisation library (EXOTica). This software allows for defining motion synthesis problems by combining task representations and presenting the problem to various motion planners through a common interface. Using EXOTica, we perform comparisons between different representations and different planners to validate that these representations truly improve motion planning.
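The interaction-mesh idea of preserving relative spatial structure can be sketched with Laplacian coordinates: each vertex is expressed relative to a distance-weighted average of its mesh neighbors, and editing penalizes changes to these coordinates rather than to absolute joint positions. The mesh connectivity and the inverse-distance weighting below are illustrative assumptions, not the exact construction used in the thesis:

```python
import numpy as np

def laplacian_coords(points, neighbors):
    """Laplacian coordinate of each vertex of an interaction mesh: the vertex
    minus a weighted average of its neighbors, with weights inversely
    proportional to distance so that closer interactions matter more.

    points    -- (n, 3) array of joint/environment vertex positions
    neighbors -- dict mapping vertex index -> list of neighbor indices
    """
    lap = np.zeros_like(points)
    for i, nb in neighbors.items():
        inv_d = np.array([1.0 / np.linalg.norm(points[i] - points[j]) for j in nb])
        w = inv_d / inv_d.sum()
        lap[i] = points[i] - (w[:, None] * points[nb]).sum(axis=0)
    return lap

def deformation_energy(points, ref_lap, neighbors):
    # Penalize changes in relative spatial structure rather than in absolute
    # positions, which is what makes the representation suitable for
    # re-targeting between differently sized characters.
    return float(np.sum((laplacian_coords(points, neighbors) - ref_lap) ** 2))
```

Translating the whole configuration costs nothing under this energy, while distorting the relative arrangement of the vertices is penalized — the property that lets motion transfer across scales and environments.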

    Control of objects with a high degree of freedom

    In this thesis, I present novel strategies for controlling objects with high degrees of freedom for the purpose of robotic control and computer animation, including articulated objects such as human bodies or robots and deformable objects such as ropes and cloth. Such control is required for common daily movements such as folding arms, tying ropes, wrapping objects and putting on clothes. Although there is demand in computer graphics and animation for generating such scenes, little work has targeted these problems. The difficulty of solving such problems is due to the following two factors: (1) the complexity of the planning algorithms: the computational costs of currently available methods increase exponentially with respect to the degrees of freedom of the objects, and therefore they cannot be applied to full human body structures, ropes and cloth; (2) the lack of abstract descriptors for complex tasks: models for quantitatively describing the progress of tasks such as wrapping and knotting are absent for animation generation. In this work, we employ the concept of a task-centric manifold to quantitatively describe complex tasks, and incorporate a bi-mapping scheme to bridge this manifold and the configuration space of the controlled objects, called an object-centric manifold. The control problem is solved by first projecting the controlled object onto the task-centric manifold, then obtaining the next ideal state of the scenario by local planning, and finally projecting the state back to the object-centric manifold to get the desired state of the controlled object. Using this scheme, complex movements that previously required global path planning can be synthesised by local path planning. Under this framework, we show applications in various fields. An interpolation algorithm for arbitrary postures of a human character is first proposed. Second, a control scheme is suggested for generating Furoshiki wraps with different styles.
Finally, new models and planning methods are given for the quantitative control of wrapping/unwrapping and dressing/undressing problems.
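The project / locally-plan / project-back loop described above can be sketched on a deliberately simple stand-in: a planar two-link arm whose "task-centric manifold" is just end-effector position, with damped least squares mapping task-space steps back to the object-centric (joint) space. All kinematic details here are invented for illustration; the thesis's manifolds encode richer task progress such as wrapping state:

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    # Forward map from the object-centric space (joint angles) to the
    # stand-in task-centric space (planar end-effector position).
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    # Derivative of the task-centric coordinates w.r.t. the joint angles.
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def local_plan(q, target, step=0.1, iters=200, damping=1e-3):
    # Project to the task manifold, take a local step toward the goal there,
    # and map the step back via damped least squares (no global planning).
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < 1e-4:
            break
        J = jacobian(q)
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ (step * err))
        q = q + dq
    return q
```

Each iteration only plans locally in the task space, yet the loop reaches task states that a naive interpolation in joint space would miss — the essence of the bi-mapping scheme.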

    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training data sets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
    Comment: Accepted for EUROGRAPHICS 202