Introduction for speech and language for interactive robots
This special issue includes research articles that apply spoken language processing to robots interacting with human users through speech, possibly combined with other modalities. Robots that can listen to human speech, understand it, interact according to the conveyed meaning, and respond represent major research and technological challenges, and work on them shares a common aim: to equip robots with natural interaction abilities. However, robotics and spoken language processing are typically studied within their respective communities, with limited communication across disciplinary boundaries. The articles in this special issue represent examples that address the need for increased multidisciplinary exchange of ideas.
Recognizing Intent in Collaborative Manipulation
Collaborative manipulation is inherently multimodal, with haptic communication playing a central role. When performed by humans, it involves back-and-forth force exchanges between the participants through which they resolve possible conflicts and determine their roles. Much of the existing work on collaborative human-robot manipulation assumes that the robot follows the human, but for a robot to match the performance of a human partner, it needs to be able to take initiative and lead when appropriate. To achieve such human-like performance, the robot needs the ability to (1) determine the intent of the human, (2) clearly express its own intent, and (3) choose its actions so that the dyad reaches consensus. This work proposes a framework for recognizing human intent in collaborative manipulation tasks using force exchanges. Grounded in a dataset collected during a human study, we introduce a set of features that can be computed from the measured signals and report the results of a classifier trained on our collected human-human interaction data. Two metrics are used to evaluate the intent recognizer: overall accuracy and the ability to correctly identify transitions. The proposed recognizer is robust against variations in the partner's actions and against the confounding effects of variability in grasp forces and the dynamic effects of walking. The results demonstrate that the proposed recognizer is well suited for implementation in a physical interaction control scheme.
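As a rough illustration of the kind of pipeline this abstract describes (hand-crafted features computed over windows of measured force signals, then fed to a classifier), here is a minimal sketch in Python. The window length, feature set, random-forest model, and synthetic stand-in data are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a force-based intent-recognition pipeline:
# windowed features are computed from a measured force signal and used
# to train a classifier. Feature choices and the random-forest model
# are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(force, window=50):
    """Split a 1-D force signal into windows and compute simple features."""
    n = len(force) // window
    feats = []
    for i in range(n):
        w = force[i * window:(i + 1) * window]
        feats.append([w.mean(),                      # average interaction force
                      w.std(),                       # force variability
                      np.abs(np.diff(w)).mean()])    # mean rate of change
    return np.array(feats)

# Synthetic stand-in for human-human interaction data: one force channel
# and a per-window intent label (0 = follow, 1 = lead).
rng = np.random.default_rng(0)
force = rng.normal(size=10_000)
X = window_features(force)
y = rng.integers(0, 2, size=len(X))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"overall accuracy: {clf.score(X_test, y_test):.2f}")
```

On real data, the second evaluation metric mentioned in the abstract would additionally compare predicted label changes between consecutive windows against ground-truth role transitions.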
Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data
Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consist of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two approaches. In the first, we show that the multi-modal signal (motion, finger bending, and hand pressure) generated by the action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features, which in turn serve as an abstract representation of the multi-modal signal and can be employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image-classification deep convolutional neural network and are subsequently used to train the classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that the two approaches perform comparably, which implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, to provide training data from which the robot can learn how to perform object manipulation actions, multi-modal data is the better alternative.
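As a sketch of the second approach described above (visual features from a pre-trained CNN, then a separate classifier), the snippet below uses ResNet-18 as a frozen feature extractor and a linear SVM. Both choices, and the random stand-in data, are illustrative assumptions rather than the paper's exact setup.

```python
# Hypothetical sketch: a CNN pre-trained on ImageNet serves as a frozen
# feature extractor, and the pooled visual features train a separate
# classifier. ResNet-18 and LinearSVC are illustrative choices.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# Load the pre-trained CNN and drop its classification head, keeping
# the global-average-pooled feature vector (512-d for ResNet-18).
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(cnn.children())[:-1]).eval()

def visual_features(frames):
    """frames: (N, 3, 224, 224) tensor of preprocessed video frames."""
    with torch.no_grad():
        f = feature_extractor(frames)   # (N, 512, 1, 1)
    return f.flatten(1).numpy()         # (N, 512)

# Stand-in data: random frames and random action labels. Real frames
# would be normalized with the ImageNet statistics before this step.
frames = torch.rand(32, 3, 224, 224)
labels = np.random.randint(0, 5, size=32)   # e.g. 5 manipulation actions

X = visual_features(frames)
clf = LinearSVC().fit(X, labels)            # classifier trained on visual features
print("train accuracy:", clf.score(X, labels))
```

Testing the modality question raised in the abstract would then amount to concatenating, e.g., force or finger-bending features onto X and checking whether the change in accuracy is statistically significant.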
ICS Materials. Towards a re-interpretation of material qualities through interactive, connected, and smart materials.
The domain of materials for design is changing under the influence of increasing technological advancement, miniaturization, and democratization. Materials are becoming connected, augmented, computational, interactive, active, responsive, and dynamic. These are ICS Materials, an acronym that stands for Interactive, Connected and Smart. While labs around the world are experimenting with these new materials, there is a need to reflect on their potential and impact on design. This paper is a first step in that direction: it interprets and describes the qualities of ICS materials, considering their experiential pattern, their expressive sensorial dimension, and their aesthetics of interaction. Through case studies, we analyse and classify these emerging ICS Materials and identify common characteristics and challenges, e.g. their ability to change over time or their programmability by designers and users. On that basis, we argue that existing models need to be reframed and redesigned to describe ICS materials and let their qualities emerge.