On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the
level of mechanical task performance. However, their underlying fundamental
biological vs. engineering mechanisms are, by definition, dramatically
different and can even be antithetical. Even our approach to each is
diametrically opposite: inductive science for the study of biological systems
vs. engineering synthesis for the design and construction of robotic systems.
The past 20 years have seen several conceptual advances in both fields and the
quest to unify them. Chief among them is the reluctant recognition that their
underlying fundamental mechanisms may actually share limited common ground,
while exhibiting many fundamental differences. This recognition is particularly
liberating because it allows us to resolve and move beyond multiple paradoxes
and contradictions that arose from the initial reasonable assumption of a large
common ground. Here, we begin by introducing the perspective of neuromechanics,
which emphasizes that real-world behavior emerges from the intimate
interactions among the physical structure of the system, the mechanical
requirements of a task, the feasible neural control actions to produce it, and
the ability of the neuromuscular system to adapt through interactions with the
environment. This allows us to articulate a succinct overview of a few salient
conceptual paradoxes and contradictions regarding under-determined vs.
over-determined mechanics, under- vs. over-actuated control, prescribed vs.
emergent function, learning vs. implementation vs. adaptation, prescriptive vs.
descriptive synergies, and optimal vs. habitual performance. We conclude by
presenting open questions and suggesting directions for future research. We
hope this frank assessment of the state of the art will encourage and guide
these communities to continue to interact and make progress in these important
areas.
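To make the over- vs. under-actuation point concrete, a minimal numerical sketch follows; the moment arms and torque below are invented for illustration and are not drawn from the paper. With one joint driven by three muscles, the task fixes a single torque, yet infinitely many muscle-force combinations produce it.

    # Minimal sketch (illustrative values only): one joint torque, three muscles,
    # so the force-sharing problem is under-determined (the joint is over-actuated).
    import numpy as np

    R = np.array([[0.03, -0.02, 0.025]])  # assumed moment arms (m); signs = flexor/extensor
    tau = np.array([1.5])                 # required joint torque (N*m), fixed by the task

    # Minimum-norm force distribution via the pseudoinverse ...
    f_min = np.linalg.pinv(R) @ tau

    # ... plus any null-space component still produces exactly the same torque.
    null_basis = np.linalg.svd(R)[2][1:].T              # columns span the null space of R
    f_alt = f_min + null_basis @ np.array([10.0, 5.0])  # a different but equally valid sharing

    print(np.allclose(R @ f_min, tau), np.allclose(R @ f_alt, tau))  # True True
    # Real muscles add constraints (non-negative, bounded forces), but this null space
    # is what makes "optimal vs. habitual" force-sharing a meaningful question.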
Toward Robots with Peripersonal Space Representation for Adaptive Behaviors
The abilities to adapt and act autonomously in an unstructured, human-oriented
environment are vital for the next generation of robots, which aim to
cooperate safely with humans. While this adaptability comes naturally to
humans, it remains complex and challenging for robots. Observations and
findings from psychology and neuroscience regarding the development of the
human sensorimotor system can inform the development of novel approaches to
adaptive robotics.
Among these is the formation of a representation of the space closely surrounding
the body, the Peripersonal Space (PPS), from multisensory sources
such as vision, hearing, touch and proprioception, which facilitates human
activities within their surroundings.
Taking inspiration from the virtual safety margin formed by the PPS representation
in humans, this thesis first constructs an equivalent model of the
safety zone for each body part of the iCub humanoid robot. This PPS layer
serves as a distributed collision predictor, which translates visually detected
objects approaching a robot's body parts (e.g., arm, hand) into the probabilities
of a collision between those objects and body parts. This leads to
adaptive avoidance behaviors in the robot via an optimization-based reactive
controller. Notably, this visual reactive control pipeline can also seamlessly
incorporate tactile input to guarantee safety in both pre- and post-collision
phases in physical Human-Robot Interaction (pHRI). Concurrently, the controller
is also able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple Cartesian point planner. All components,
namely the PPS, the multi-target motion planner (for manipulation
reaching tasks), the reaching-with-avoidance controller and the human-centred
visual perception, are combined harmoniously to form a hybrid control
framework designed to provide safety for robots' interactions in a cluttered
environment shared with human partners.
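As a hedged illustration of the kind of computation such a PPS layer performs, the sketch below maps an approaching object to a per-body-part collision probability and uses it to scale a repulsive velocity; the function names, the distance/velocity heuristic and all numbers are assumptions for exposition, not the thesis's iCub implementation.

    # Illustrative PPS-style collision predictor and avoidance term (assumed heuristics).
    import numpy as np

    def collision_probability(obj_pos, part_pos, obj_vel, margin=0.3):
        """Score in [0, 1]: high when the object is close to and approaching the body part."""
        offset = part_pos - obj_pos
        dist = np.linalg.norm(offset)
        closing = max(0.0, np.dot(obj_vel, offset / (dist + 1e-9)))  # closing speed (m/s)
        proximity = max(0.0, 1.0 - dist / margin)                    # 1 at contact, 0 beyond margin
        return min(1.0, proximity * (0.5 + 0.5 * np.tanh(closing)))

    def avoidance_velocity(obj_pos, part_pos, p_collision, gain=0.5):
        """Repulsive Cartesian velocity for one body part, scaled by collision probability."""
        away = part_pos - obj_pos
        return gain * p_collision * away / (np.linalg.norm(away) + 1e-9)

    # Example: an object 15 cm from the hand, moving toward it at 0.4 m/s.
    hand = np.array([0.3, 0.10, 0.2])
    obj = np.array([0.3, 0.25, 0.2])
    vel = np.array([0.0, -0.4, 0.0])
    p = collision_probability(obj, hand, vel)
    print(p, avoidance_velocity(obj, hand, p))

In a full reaching-with-avoidance controller, such a repulsive term would be traded off against the attraction toward the reaching target, for instance inside an optimization-based velocity controller.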
Later, motivated by the development of manipulation skills in infants, in
which multisensory integration is thought to play an important role, a
learning framework is proposed that allows a robot to learn the processes of
forming sensory representations, namely visuomotor and visuotactile, from
its own motor activities in the environment. Both multisensory integration
models are constructed with Deep Neural Networks (DNNs) in such a
way that their outputs are represented in motor space to facilitate the robot's
subsequent actions.
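A minimal sketch of such a multisensory network is given below, assuming a generic fusion architecture whose output lives in joint (motor) space; the layer sizes, dimensions and names are illustrative and are not taken from the thesis.

    # Illustrative visuo-tactile fusion network with a motor-space output (assumed sizes).
    import torch
    import torch.nn as nn

    class VisuoTactileToMotor(nn.Module):
        def __init__(self, vis_dim=64, tac_dim=24, joint_dim=7):
            super().__init__()
            self.vis = nn.Sequential(nn.Linear(vis_dim, 32), nn.ReLU())   # visual branch
            self.tac = nn.Sequential(nn.Linear(tac_dim, 16), nn.ReLU())   # tactile branch
            self.head = nn.Sequential(nn.Linear(32 + 16, 32), nn.ReLU(),
                                      nn.Linear(32, joint_dim))           # joint-space output

        def forward(self, vis_feat, tac_feat):
            fused = torch.cat([self.vis(vis_feat), self.tac(tac_feat)], dim=-1)
            return self.head(fused)

    # Training data would come from the robot's own motor activity (e.g. motor babbling),
    # pairing sensory readings with the joint configurations that produced them.
    model = VisuoTactileToMotor()
    pred = model(torch.randn(8, 64), torch.randn(8, 24))  # a batch of 8 samples
    print(pred.shape)  # torch.Size([8, 7])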
Robot control in a message passing environment: theoretical questions and preliminary experiments
The performance of real-time distributed control systems is shown to depend critically on both communication and computation costs. A taxonomy for distributed system performance measurement is introduced. An approximate method of performance prediction for simple systems is presented. Experimental results demonstrate the effects of communication protocols on real-world system performance.
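As a hedged illustration of why both cost types matter, the sketch below bounds a control cycle's latency by summing computation time and per-message communication costs; the additive model and the numbers are assumptions, not the paper's taxonomy or measurements.

    # Rough control-cycle latency estimate (assumed additive cost model).
    def message_cost(size_bytes, latency_s=0.5e-3, bandwidth_bps=10e6):
        """Time to deliver one message: fixed protocol latency plus transmission time."""
        return latency_s + (8 * size_bytes) / bandwidth_bps

    def cycle_time(compute_s, message_sizes):
        """Control-cycle time if communication and computation do not overlap."""
        return compute_s + sum(message_cost(sz) for sz in message_sizes)

    # Example: 2 ms of computation plus a 256-byte sensor message and a 64-byte command.
    print(cycle_time(2e-3, [256, 64]))  # about 3.26e-3 s, which bounds the control rate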
Learning Latent Space Dynamics for Tactile Servoing
To achieve dexterous robotic manipulation, we need to endow our robot with
tactile feedback capability, i.e. the ability to drive action based on tactile
sensing. In this paper, we specifically address the challenge of tactile
servoing: given the current tactile sensing and a target/goal tactile
sensing, memorized from a successful task execution in the past, what is the
action that will bring the current tactile sensing closer to the
target tactile sensing at the next time step? We develop a data-driven approach
to acquiring a dynamics model for tactile servoing by learning from
demonstration. Moreover, our method represents the tactile sensing information
as lying on a surface, or a 2D manifold, and performs manifold learning,
making it applicable to any tactile skin geometry. We evaluate our method on a
contact-point tracking task using a robot equipped with a tactile finger. A
video demonstrating our approach can be seen at https://youtu.be/0QK0-Vx7WkI
Comment: Accepted for publication at the International Conference on Robotics
and Automation (ICRA) 2019. The final version for publication at ICRA 2019 is
7 pages (i.e. 6 pages of technical content (including text, figures, tables,
acknowledgement, etc.) and 1 page of Bibliography/References), while this
arXiv version is 8 pages (added Appendix and some extra details).
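A minimal sketch of the general latent-space servoing idea follows, assuming linear stand-ins for the learned encoder and dynamics and a greedy one-step action choice; it illustrates the control principle only, not the paper's networks or training procedure.

    # Illustrative latent-space tactile servoing (assumed linear models and dimensions).
    import numpy as np

    def encode(tactile, W_enc):
        """Stand-in for the learned encoder: tactile reading -> latent state z."""
        return W_enc @ tactile

    def predict_next(z, action, A, B):
        """Stand-in for the learned latent dynamics: z_next = A z + B a."""
        return A @ z + B @ action

    def servo_action(z, z_goal, candidate_actions, A, B):
        """Greedy one-step servoing: pick the action whose predicted next latent state
        is closest to the goal latent state (memorized from a past successful execution)."""
        errors = [np.linalg.norm(predict_next(z, a, A, B) - z_goal) for a in candidate_actions]
        return candidate_actions[int(np.argmin(errors))]

    # Toy dimensions: 16 taxels, 4-D latent state, 2-D action.
    rng = np.random.default_rng(0)
    W_enc, A, B = rng.normal(size=(4, 16)), np.eye(4), rng.normal(size=(4, 2))
    z = encode(rng.normal(size=16), W_enc)
    z_goal = encode(rng.normal(size=16), W_enc)
    actions = [rng.normal(size=2) for _ in range(8)]
    print(servo_action(z, z_goal, actions, A, B))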
- …