On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the
level of mechanical task performance. However, their underlying fundamental
biological vs. engineering mechanisms are, by definition, dramatically
different and can even be antithetical. Even our approach to each is
diametrically opposite: inductive science for the study of biological systems
vs. engineering synthesis for the design and construction of robotic systems.
The past 20 years have seen several conceptual advances in both fields and the
quest to unify them. Chief among these advances is the reluctant recognition
that their underlying mechanisms may share only limited common ground while
exhibiting many fundamental differences. This recognition is particularly
liberating because it allows us to resolve and move beyond multiple paradoxes
and contradictions that arose from the initial reasonable assumption of a large
common ground. Here, we begin by introducing the perspective of neuromechanics,
which emphasizes that real-world behavior emerges from the intimate
interactions among the physical structure of the system, the mechanical
requirements of a task, the feasible neural control actions to produce it, and
the ability of the neuromuscular system to adapt through interactions with the
environment. This allows us to articulate a succinct overview of a few salient
conceptual paradoxes and contradictions regarding under-determined vs.
over-determined mechanics, under- vs. over-actuated control, prescribed vs.
emergent function, learning vs. implementation vs. adaptation, prescriptive vs.
descriptive synergies, and optimal vs. habitual performance. We conclude by
presenting open questions and suggesting directions for future research. We
hope this frank assessment of the state-of-the-art will encourage and guide
these communities to continue to interact and make progress in these important
areas.
A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks
Exploiting interaction with the environment is a promising and powerful way
to enhance the stability and robustness of humanoid robots while executing
locomotion and manipulation tasks. Recently, some works have begun to show
advances in this direction considering humanoid locomotion with multi-contacts,
but to be able to fully develop such abilities in a more autonomous way, we
need to first understand and classify the variety of possible poses a humanoid
robot can achieve to balance. To this end, we propose the adaptation of a
successful idea widely used in the field of robot grasping to the field of
humanoid balance with multi-contacts: a whole-body pose taxonomy classifying
the set of whole-body robot configurations that use the environment to enhance
stability. We have revised the classification criteria used to develop
grasping taxonomies, focusing on structuring and simplifying the large number
of possible poses the human body can adopt. We propose a taxonomy of 46 poses,
organized into three main categories, considering the number and type of
supports as well as possible transitions between poses. The taxonomy induces a
classification of motion primitives based on the pose used for support, and a
set of rules to store and generate new motions. We present preliminary results
that apply known segmentation techniques to motion data from the KIT whole-body
motion database. Using motion capture data with multi-contacts, we can identify
support poses providing a segmentation that can distinguish between locomotion
and manipulation parts of an action.
Comment: 8 pages, 7 figures, 1 table with a full-page figure in landscape
orientation; 2015 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS).
Ground Robotic Hand Applications for the Space Program study (GRASP)
This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.
Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens
rendering, which turns only a fraction of the user's real environment into AR
while the rest of the environment remains unaffected. Since handheld AR devices
are commonly equipped with video see-through capabilities, AR magic lens
applications often suffer from spatial distortions, because the AR environment
is presented from the perspective of the camera of the mobile device. Recent
approaches counteract this distortion based on estimations of the user's head
position, rendering the scene from the user's perspective. To this end,
approaches usually apply face-tracking algorithms on the front camera of the
mobile device. However, this demands high computational resources and therefore
commonly affects the performance of the application beyond the already high
computational load of AR applications. In this paper, we present a method to
reduce the computational demands for user perspective rendering by applying
lightweight optical flow tracking and an estimation of the user's motion before
head tracking is started. We demonstrate the suitability of our approach for
computationally limited mobile devices and compare it to device-perspective
rendering, head-tracked user-perspective rendering, and fixed-point-of-view
user-perspective rendering.
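The abstract describes estimating the user's motion with lightweight tracking before head tracking starts. As an illustration only, and not the paper's implementation, the following is a minimal sketch of estimating a global translation between two grayscale frames via phase correlation; the frame size, random content, and shift values are all assumptions made for the example:

```python
import numpy as np

def estimate_translation(prev, curr):
    """Estimate the global (dx, dy) shift between two grayscale frames
    via phase correlation (peak of the normalized cross-power spectrum)."""
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12           # keep only the phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                           # wrap circular peaks to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)

# Simulated motion: the second frame is the first shifted 5 px right, 3 px down.
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))
print(estimate_translation(frame0, frame1))  # (5, 3)
```

Phase correlation recovers only a single global translation, which is far cheaper than dense optical flow but also far coarser; it is shown here purely to illustrate the idea of lightweight inter-frame motion estimation.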
Dynamic Composite Data Physicalization Using Wheeled Micro-Robots
This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques, and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations based on previous work in the fields of Information Visualization and Human-Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.
Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces
Graph exploration and editing are still mostly considered independently, and
existing systems are not designed for today's interactive surfaces such as
smartphones, tablets, or tabletops. When developing a system for these modern
devices that supports both graph exploration and graph editing, it is necessary
to 1) identify which basic tasks need to be supported, 2) determine which
interactions can be used, and 3) decide how to map these tasks to interactions.
This technical report provides a list of basic interaction tasks for graph
exploration and editing as
a result of an extensive system review. Moreover, different interaction
modalities of interactive surfaces are reviewed according to their interaction
vocabulary, and further degrees of freedom that can be used to make
interactions distinguishable are discussed. Beyond the scope of graph
exploration and editing, we provide a generally applicable approach for finding
and evaluating a mapping from tasks to interactions. Thus, this work acts as a
guideline for developing a system for graph exploration and editing that is
specifically designed for interactive surfaces.
Comment: 21 pages; minor corrections (typos etc.).
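The task-to-interaction mapping the report describes can be pictured as a simple coverage and distinguishability check. This is a hypothetical sketch: the task names and gestures below are illustrative inventions for the example, not taken from the report:

```python
# Illustrative basic tasks for graph editing on a touch surface (assumed names).
BASIC_TASKS = {"add node", "delete node", "add edge", "pan view", "zoom view"}

# One candidate mapping from tasks to touch interactions (assumed gestures).
mapping = {
    "add node": "double tap on empty canvas",
    "delete node": "long press on node",
    "add edge": "drag from node to node",
    "pan view": "one-finger drag on empty canvas",
    "zoom view": "two-finger pinch",
}

def evaluate_mapping(tasks, mapping):
    """Return (uncovered tasks, interactions assigned to more than one task).

    A usable mapping should cover every basic task and keep each
    interaction distinguishable, i.e. bound to exactly one task."""
    uncovered = tasks - mapping.keys()
    seen, conflicts = set(), set()
    for interaction in mapping.values():
        if interaction in seen:
            conflicts.add(interaction)
        seen.add(interaction)
    return uncovered, conflicts
```

A real evaluation, as the report suggests, would also weigh the degrees of freedom each modality offers; this sketch only captures the bookkeeping side of checking a candidate mapping.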