Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data
Object manipulation actions represent an important share of the Activities of
Daily Living (ADLs). In this work, we study how to enable service robots to use
human multi-modal data to understand object manipulation actions, and how they
can recognize such actions when humans perform them during human-robot
collaboration tasks. The multi-modal data in this study consists of videos,
hand motion data, applied forces as represented by the pressure patterns on the
hand, and measurements of the bending of the fingers, collected as human
subjects performed manipulation actions. We investigate two different
approaches. In the first, we show that the multi-modal signal (motion, finger
bending, and hand pressure) generated by an action can be decomposed into a set
of primitives that can be seen as its building blocks. These primitives are
used to define 24 multi-modal primitive features. The primitive features can in
turn be used as an abstract representation of the multi-modal signal and
employed for action recognition. In the second approach, visual features
are extracted from the data using a pre-trained image classification deep
convolutional neural network. The visual features are subsequently used to
train the classifier. We also investigate whether adding data from other
modalities produces a statistically significant improvement in the classifier
performance. We show that the two approaches achieve comparable performance.
This implies that image-based methods can successfully recognize human actions
during human-robot collaboration. On the other hand, when the goal is to supply
training data from which the robot can learn to perform object manipulation
actions itself, multi-modal data is the better alternative.
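The primitive-decomposition idea can be sketched on synthetic one-dimensional traces. Everything below is illustrative assumption: the signal shapes, the three primitive labels, and the count-based feature layout are invented for the example and do not reproduce the paper's actual 24 multi-modal primitive features.

```python
import numpy as np

def primitives(signal, eps=0.05):
    """Decompose a 1-D trace into a sequence of primitives:
    'u' (rising), 'd' (falling), 'f' (flat), based on the local slope."""
    d = np.diff(signal)
    labels = np.where(d > eps, 'u', np.where(d < -eps, 'd', 'f'))
    # collapse consecutive repeats so each primitive is one building block
    out = [labels[0]]
    for lab in labels[1:]:
        if lab != out[-1]:
            out.append(lab)
    return out

def feature_vector(signal):
    """Abstract representation of the trace: counts of each primitive type."""
    p = primitives(signal)
    return np.array([p.count('u'), p.count('d'), p.count('f')], float)

t = np.linspace(0, 1, 100)
grasp = np.sin(2 * np.pi * t)   # hypothetical pressure trace for a grasp
hold = np.full(100, 0.5)        # hypothetical trace for a static hold
print(feature_vector(grasp))    # [2. 1. 2.]  (two rises, one fall, two flats)
print(feature_vector(hold))     # [0. 0. 1.]
```

Such fixed-length vectors can then feed any off-the-shelf classifier, which is what makes the primitive representation convenient for action recognition.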
Eye-tracking the emergence of attentional anchors in a mathematics learning tablet activity
Little is known about the micro-processes by which sensorimotor interaction gives rise to conceptual development. Per embodiment theory, these micro-processes are mediated by dynamical attentional structures. Accordingly, this study investigated eye-gaze behaviors during engagement in solving tablet-based bimanual manipulation tasks designed to foster proportional reasoning. Seventy-six elementary- and vocational-school students (9–15 years old) participated in individual task-based clinical interviews. Data gathered included action-logging, eye-tracking, and videography. Analyses revealed the emergence of stable eye-path gaze patterns contemporaneous with first enactments of effective manipulation and prior to verbal articulations of manipulation strategies. Characteristic gaze patterns included consistent or recurring attention to screen locations that bore non-salient stimuli or no stimuli at all, yet bore invariant geometric relations to dynamical salient features. Arguably, this research empirically validates hypothetical constructs from constructivism, particularly reflective abstraction.
Optimal read/write memory system components
Two holographic data storage and display systems, a voltage-gradient ionization system, and a linear strain manipulation system are discussed as means of creating a fast, high-bit-density storage device. Components described include: a novel mounting fixture for photoplastic arrays; a corona discharge device; and a block data composer.
Medical Information Management System (MIMS): An automated hospital information system
A flexible system of computer programs allows manipulation and retrieval of data related to patient care. The system is written in a version of FORTRAN developed for the CDC-6600 computer.
Multi Visualization and Dynamic Query for Effective Exploration of Semantic Data
Semantic formalisms represent content in a uniform way according to ontologies. This enables manipulation and reasoning via automated means (e.g. Semantic Web services), but limits the user's ability to explore the semantic data, whose organization originates from knowledge-representation motivations rather than the user's perspective. We show how, for user consumption, visualizing semantic data along easily graspable dimensions (e.g. space and time) provides effective sense-making of the data. In this paper, we look holistically at the interaction between users and semantic data, and propose multiple visualization strategies and dynamic filters to support the exploration of semantic-rich data.
We discuss a user evaluation and how interaction challenges could be overcome to create an effective user-centred framework for the visualization and manipulation of semantic data. The approach has been implemented and evaluated on a real company archive.
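The core of a dynamic filter over semantic data can be illustrated with a toy example. The triples, predicate names, and filter predicate below are invented for the illustration and are not taken from the system described above; a real implementation would query an RDF store rather than a Python list.

```python
# Toy semantic data as (subject, predicate, object) triples.
triples = [
    ("doc1", "hasDate", 1995), ("doc1", "hasPlace", "Milan"),
    ("doc2", "hasDate", 2003), ("doc2", "hasPlace", "Rome"),
]

def dynamic_filter(triples, predicate, test):
    """Return the subjects whose value for `predicate` passes `test`.
    Re-running this as the user drags a slider (e.g. a date range)
    gives the 'dynamic query' interaction."""
    return {s for s, p, o in triples if p == predicate and test(o)}

print(dynamic_filter(triples, "hasDate", lambda year: year < 2000))  # {'doc1'}
```

Chaining several such filters, one per easily graspable dimension (time, space), yields the progressively narrowed result set that the visualization then renders.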
