9 research outputs found
Motion Imitation Based on Sparsely Sampled Correspondence
Existing techniques for motion imitation often suffer from latency due to their
computational overhead or the large set of correspondence samples they must
search. To achieve real-time imitation with low latency, we present a framework
in this paper to reconstruct motion on humanoids based on sparsely sampled
correspondence. The imitation problem is formulated as finding the projection
of a point from the configuration space of a human's poses into the
configuration space of a humanoid. An optimal projection is defined as the one
that minimizes the back-projected deviation among a group of candidates, and it
can be determined very efficiently. Benefiting from this formulation, effective
projections can be obtained using sparse correspondence. Methods for generating
these sparse correspondence samples are also introduced. Our method is
evaluated by applying human motion captured by an RGB-D sensor to a humanoid in
real time. Continuous motion can be realized and used in the example
application of tele-operation.
Comment: 8 pages, 8 figures, technical report
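The candidate-and-deviation idea in the abstract can be sketched roughly as follows; the dimensions, sample set, candidate-selection rule, and distance metric here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

np.random.seed(0)

# Hypothetical sparse correspondence set: paired samples from a 10-D human
# configuration space and an 8-D humanoid configuration space.
human_samples = np.random.rand(50, 10)
robot_samples = np.random.rand(50, 8)

def project(q_human, q_robot_prev, k=5):
    """Project a human pose into the humanoid configuration space.

    Candidates are the k correspondence samples whose humanoid poses lie
    closest to the previous robot pose (for motion continuity); among them,
    the one with the smallest back-projected deviation -- here, the distance
    between the query and the candidate's human-side sample -- is chosen.
    """
    cont = np.linalg.norm(robot_samples - q_robot_prev, axis=1)
    candidates = np.argsort(cont)[:k]          # a small candidate group
    back_dev = np.linalg.norm(human_samples[candidates] - q_human, axis=1)
    return robot_samples[candidates[np.argmin(back_dev)]]
```

Because only k distances are compared per query, the per-frame cost stays small even as the correspondence set grows, which is the kind of efficiency the sparse formulation is after.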
Analyzing Whole-Body Pose Transitions in Multi-Contact Motions
When executing whole-body motions, humans are able to use a large variety of
support poses which utilize not only the feet but also the hands, knees and
elbows to enhance stability. While there are many works analyzing the
transitions involved in walking, very few analyze human motion where more
complex supports occur.
In this work, we analyze complex support pose transitions in human motion
involving locomotion and manipulation tasks (loco-manipulation). We have
applied a method for the detection of human support contacts from motion
capture data to a large-scale dataset of loco-manipulation motions involving
multi-contact supports, providing a semantic representation of them. Our
results provide a statistical analysis of the used support poses, their
transitions and the time spent in each of them. In addition, our data partially
validates our taxonomy of whole-body support poses presented in our previous
work.
We believe that this work extends our understanding of human motion for
humanoids, with a long-term objective of developing methods for autonomous
multi-contact motion planning.
Comment: 8 pages, IEEE-RAS International Conference on Humanoid Robots (Humanoids) 201
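A contact-detection step of the kind mentioned above can be sketched as follows; the height and velocity thresholds, and the reduction of a body segment to a single marker, are assumptions for illustration, not the calibrated method used in the paper:

```python
import numpy as np

def detect_contacts(positions, dt=0.01, height_thresh=0.03, vel_thresh=0.05):
    """Label a body segment (foot, hand, knee, elbow) as "in contact".

    positions: (T, 3) marker trajectory in meters, sampled every dt seconds.
    A frame counts as a support contact when the marker is both near the
    ground plane and nearly stationary. Returns a (T,) boolean mask.
    """
    vel = np.gradient(positions, dt, axis=0)   # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    low = positions[:, 2] < height_thresh      # close to the surface
    slow = speed < vel_thresh                  # nearly stopped
    return low & slow
```

Running such a detector over each end-effector of a motion-capture take yields, per frame, the set of active supports, from which a sequence of support poses and their transitions can be read off.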
Is hugging a robot weird? Investigating the influence of robot appearance on users' perception of hugging
Humanoid robots are expected to communicate with humans through physical interaction, including hugging, a common gesture of affection. To achieve that, their physical embodiment has to be carefully planned, as a user-friendly design facilitates interaction and minimises repulsion. In this paper, we investigate the effect of manipulating the visual/tactile appearance of a robot, by covering wires and metallic parts with clothes, and the auditory effect, by enabling or disabling the connector of the hand. The experiment consists of a hugging interaction between the participants and the humanoid robot ARMAR-IIIb. Results from 24 participants confirm the positive effect of using clothes to modify the appearance and the negative effect of noise and vibration.
Adaptive bipedal locomotion from a single demonstration using movement primitives (Locomoção bípede adaptativa a partir de uma única demonstração usando primitivas de movimento)
Doutoramento em Engenharia Eletrotécnica (PhD in Electrical Engineering).

This work addresses the problem of learning to imitate human locomotion actions
through low-level trajectories encoded with motion primitives and generalizing
them to new situations from a single demonstration. In this line of thought,
the main objectives of this work are twofold. The first is to analyze, extract
and encode human demonstrations taken from motion capture data in order to
model biped locomotion tasks. Transferring motion skills from humans to robots,
however, is not limited to simple reproduction; it requires the ability to
adapt to new situations, as well as to deal with unexpected disturbances.
Therefore, the second objective is to develop and evaluate a control framework
for action shaping, such that the single demonstration can be modulated to
varying situations, taking into account the dynamics of the robot and its
environment.
The idea behind the approach is to address the problem of generalization from
a single demonstration by combining two basic structures. The first structure
is a pattern-generator system consisting of movement primitives learned and
modelled by dynamical systems (DS). This encoding approach possesses desirable
properties that make it well suited for trajectory generation, namely the
possibility of changing parameters online, such as the amplitude and frequency
of the limit cycle, and intrinsic robustness against small perturbations. The
second structure, which is embedded in the previous one, consists of coupled
phase oscillators that organize actions into functionally coordinated units.
Changing contact conditions and the associated impacts with the ground lead to
models with multiple phases. Instead of forcing the robot's motion into a
predefined fixed timing, the proposed pattern generator exploits transitions
between phases that emerge from the interaction of the robot with the
environment, triggered by sensor-driven events. The proposed approach is tested
in a dynamics simulation framework, and several experiments are conducted to
validate the methods and to assess the performance of a humanoid robot.
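The coupled-phase-oscillator idea can be illustrated with a minimal sketch; the coupling form, gains, and anti-phase offset below are generic CPG-style assumptions, not the thesis' exact model:

```python
import numpy as np

# Two phase oscillators coupled toward an anti-phase relation (psi = pi):
#   dphi_i/dt = omega + K * sin(phi_j - phi_i - psi)
# A joint-space output A * cos(phi_i) lets the amplitude A and frequency
# omega be changed online without re-encoding the trajectory.
def step(phis, omega, K=2.0, psi=np.pi, dt=1e-3):
    """One explicit-Euler step for the two coupled oscillators."""
    phi1, phi2 = phis
    dphi1 = omega + K * np.sin(phi2 - phi1 - psi)
    dphi2 = omega + K * np.sin(phi1 - phi2 - psi)
    return np.array([phi1 + dt * dphi1, phi2 + dt * dphi2])

def simulate(n_steps=20000, omega=2.0 * np.pi, A=1.0):
    """Integrate for n_steps; the phase difference settles at psi = pi."""
    phis = np.array([0.0, 1.0])          # arbitrary initial phases
    for _ in range(n_steps):
        phis = step(phis, omega)
    return phis, A * np.cos(phis)        # final phases and unit outputs
```

Perturbing one oscillator's phase only transiently disturbs the pattern: the coupling pulls the phase difference back toward pi, which is the kind of intrinsic robustness to small perturbations the abstract refers to.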