Motion Switching with Sensory and Instruction Signals by designing Dynamical Systems using Deep Neural Network
To ensure that a robot can accomplish an extensive range of tasks, it must be
able to combine multiple behaviors flexibly, because designing a task motion
suited to each situation becomes increasingly difficult as the number of
situations and the types of tasks grow. To handle the switching and combination of multiple
behaviors, we propose a method to design dynamical systems based on point
attractors that accept (i) "instruction signals" for instruction-driven
switching. We incorporate the (ii) "instruction phase" to form a point
attractor and divide the target task into multiple subtasks. By forming an
instruction phase that consists of point attractors, the model embeds a subtask
in the form of trajectory dynamics that can be manipulated using sensory and
instruction signals. Our model comprises two deep neural networks: a
convolutional autoencoder and a multiple time-scale recurrent neural network.
In this study, we apply the proposed method to manipulate soft materials. To
evaluate our model, we design a cloth-folding task that consists of four
subtasks and three patterns of instruction signals, which indicate the
direction of motion. The results show that the robot can perform the required
task by combining subtasks according to sensory and instruction signals, and
that the model captures the relations among these signals through its internal
dynamics.
Comment: 8 pages, 6 figures; accepted for publication in RA-L. An accompanying
video is available at https://youtu.be/a73KFtOOB5
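As a rough illustration of the multiple time-scale recurrent dynamics this abstract relies on, the sketch below implements a leaky-integrator RNN cell with fast and slow context units. All sizes, time constants, and (random) weights are placeholder assumptions for illustration, not the paper's trained model:

```python
import numpy as np

class MTRNNCell:
    """Minimal multiple time-scale RNN cell (toy sketch).

    Fast units (small time constant) track quick sensory changes, while
    slow units (large time constant) hold longer-lived context such as
    the current subtask. Weights here are random placeholders.
    """

    def __init__(self, n_in, n_fast, n_slow, tau_fast=2.0, tau_slow=30.0, seed=0):
        rng = np.random.default_rng(seed)
        n_ctx = n_fast + n_slow
        self.tau = np.concatenate([np.full(n_fast, tau_fast),
                                   np.full(n_slow, tau_slow)])
        self.W_in = rng.normal(0.0, 0.1, (n_ctx, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_ctx, n_ctx))
        self.u = np.zeros(n_ctx)  # internal (pre-activation) state

    def step(self, x):
        # Leaky integration: each unit moves toward its input drive at a
        # rate set by its own time constant, so fast and slow dynamics
        # coexist in one network.
        drive = self.W_in @ x + self.W_rec @ np.tanh(self.u)
        self.u = (1.0 - 1.0 / self.tau) * self.u + (1.0 / self.tau) * drive
        return np.tanh(self.u)

cell = MTRNNCell(n_in=4, n_fast=8, n_slow=4)
for t in range(10):
    h = cell.step(np.ones(4))
print(h.shape)  # (12,)
```

After a few steps, the slow units have barely moved while the fast units have nearly settled, which is the mechanism that lets such models keep a subtask context stable while producing rapid motion.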
Compensation for undefined behaviors during robot task execution by switching controllers depending on embedded dynamics in RNN
Robotic applications require both correct task performance and compensation
for undefined behaviors. Although deep learning is a promising approach to
perform complex tasks, the response to undefined behaviors that are not
reflected in the training dataset remains challenging. In a human-robot
collaborative task, the robot may adopt an unexpected posture due to collisions
and other unexpected events. Therefore, robots should be able to recover from
disturbances in order to complete the intended task. We propose a
compensation method for undefined behaviors by switching between two
controllers. Specifically, the proposed method switches between learning-based
and model-based controllers depending on the internal representation of a
recurrent neural network that learns task dynamics. We applied the proposed
method to a pick-and-place task and evaluated the compensation for undefined
behaviors. Experimental results from simulations and on a real robot
demonstrate the effectiveness and high performance of the proposed method.
Comment: To appear in IEEE Robotics and Automation Letters (RA-L) and the IEEE
International Conference on Robotics and Automation (ICRA 2021).
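The controller-switching idea can be sketched as follows. Both controllers and the error criterion here are hypothetical stand-ins: the paper switches on the internal representation of the trained RNN, whereas this toy uses the distance between the observed and predicted state as a proxy for "the situation is outside the training distribution":

```python
import numpy as np

def learned_controller(state):
    # Stand-in for a policy learned from task demonstrations.
    return -0.5 * state

def recovery_controller(state, home=0.0):
    # Stand-in model-based controller that drives the robot back toward
    # a known recovery posture.
    return -(state - home)

def select_action(state, predicted_state, threshold=1.0):
    """Switch controllers by how far the observed state is from what the
    trained network predicted (a hypothetical criterion)."""
    error = np.linalg.norm(state - predicted_state)
    if error > threshold:
        return recovery_controller(state), "model-based"
    return learned_controller(state), "learning-based"

# Nominal state close to the prediction -> learned controller runs the task.
_, mode = select_action(np.array([0.1]), np.array([0.0]))
print(mode)  # learning-based
# A collision pushes the robot far off the predicted trajectory -> recover.
_, mode = select_action(np.array([5.0]), np.array([0.0]))
print(mode)  # model-based
```

The design point is that the learned controller is only trusted inside the region its training data covers; everything else is delegated to a conservative model-based fallback.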
Stable deep reinforcement learning method by predicting uncertainty in rewards as a subtask
In recent years, a variety of tasks have been accomplished by deep
reinforcement learning (DRL). However, when applying DRL to tasks in a
real-world environment, designing an appropriate reward is difficult. Rewards
obtained via actual hardware sensors may include noise, misinterpretation, or
failed observations. The learning instability caused by these unstable signals
is a problem that remains to be solved in DRL. In this work, we propose an
approach that extends existing DRL models by adding a subtask to directly
estimate the variance contained in the reward signal. The model then takes the
feature map learned by the subtask in a critic network and sends it to the
actor network. This enables stable learning that is robust to the effects of
potential noise. The results of experiments in the Atari game domain with
unstable reward signals show that our method stabilizes training convergence.
We also discuss the extensibility of the model by visualizing feature maps.
This approach has the potential to make DRL more practical for use in noisy,
real-world scenarios.
Comment: Published as a conference paper at ICONIP 202
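One standard way to realize such a variance-estimating subtask is a Gaussian negative-log-likelihood head on the reward: the network predicts both a mean and a log-variance, and the loss automatically down-weights samples it judges noisy. The sketch below shows only this loss, not the paper's critic architecture:

```python
import numpy as np

def gaussian_nll(r, mu, log_var):
    """Per-sample negative log-likelihood of rewards r under N(mu, exp(log_var)).

    Minimizing this makes mu track the expected reward and exp(log_var)
    its noise level; the (r - mu)^2 / var term is what damps the
    influence of unreliable reward observations.
    """
    var = np.exp(log_var)
    return 0.5 * (log_var + (r - mu) ** 2 / var)

rewards = np.array([1.0, 1.2, 0.8, 5.0])  # last sample is a noisy outlier
mu, log_var = 1.0, 0.0                    # candidate head outputs (var = 1)
print(gaussian_nll(rewards, mu, log_var))
```

With a larger predicted log-variance the outlier's loss shrinks, which is the stabilizing effect the abstract describes: the critic learns where the reward signal cannot be trusted.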
The Future of Humanoid Robots
This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines and will have the ability to blend perfectly into environments already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience, and machine learning. The book is designed to be accessible and practical, with an emphasis on useful information for those working in the fields of robotics, cognitive science, artificial intelligence, computational methods, and other fields of science directly or indirectly related to the development and usage of future humanoid robots. The editor of the book has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the editing of the book's content.
Opinions and Outlooks on Morphological Computation
Morphological Computation is based on the observation that biological systems seem to carry out relevant computations with their morphology (physical body) in order to successfully interact with their environments. This can be observed in a whole range of systems and at many different scales. It has been studied in animals – e.g., while running, the functionality of coping with impact and slight unevenness in the ground is "delivered" by the shape of the legs and the damped elasticity of the muscle-tendon system – and plants, but it has also been observed at the cellular and even at the molecular level – as seen, for example, in spontaneous self-assembly. The concept of morphological computation has served as an inspirational resource to build bio-inspired robots, design novel approaches for support systems in health care, and implement computation with natural systems, as well as in art and architecture. As a consequence, the field is highly interdisciplinary, which is also nicely reflected in the wide range of authors featured in this e-book. We have contributions from robotics, mechanical engineering, health, architecture, biology, philosophy, and others.
Towards a normative understanding of higher-order brain activity
Behavioural flexibility is a major hallmark of animal intelligence and higher-order brain areas such as the prefrontal cortex are thought to play a crucial role in it: They are particularly well connected to the rest of the brain, they are disproportionately well developed in higher primates, and they have been shown to be involved in the executive functions that are thought to enable this flexibility, such as decision making, working
memory, planning, and attention. With the advent of novel recording techniques, we are gaining an increasingly complete view of higher-order brain activity while animals perform behavioural tasks. Using these data, we can now try to add to the conceptual understanding of higher-order brain functions that we already have by elucidating their neural basis.
Online Self-Supervised Learning for Object Picking: Detecting Optimum Grasping Position using a Metric Learning Approach
Self-supervised learning methods are attractive candidates for automatic
object picking. However, the trial samples lack the complete ground truth
because the observable parts of the agent are limited. That is, the information
contained in the trial samples is often insufficient to learn the specific
grasping position of each object. Consequently, the training falls into a local
solution, and the grasp positions learned by the robot are independent of the
state of the object. In this study, the optimal grasping position of an
individual object is determined from the grasping score, defined as the
distance in the feature space obtained using metric learning. The closeness of
the solution to the pre-designed optimal grasping position was evaluated in
trials. The proposed method incorporates two types of feedback control: one
increases the grasping score when the grasping position approaches the
optimum, while the other applies negative feedback that lowers the scores of
the other potential grasping candidates. The proposed online self-supervised
learning method employs two deep neural networks: an SSD that detects the
grasping position of an object, and Siamese networks (SNs) that evaluate the
trial sample by the similarity of two inputs in the feature space. Our
method embeds the relation of each grasping position as feature vectors by
training the trial samples and a few pre-samples indicating the optimum
grasping position. By incorporating the grasping score based on the feature
space of SNs into the SSD training process, the method preferentially trains
the optimum grasping position. In the experiment, the proposed method achieved
a higher success rate than the baseline method using simple teaching signals,
and the grasping scores in the feature space of the SNs accurately represented
the grasping positions of the objects.
Comment: 8 pages
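The grasping score defined as a distance in a learned feature space can be illustrated with a toy stand-in for the SN branch. The embedding weights and the "optimum" grasp below are made-up placeholders, not trained values:

```python
import numpy as np

def embed(x, W):
    # Stand-in feature extractor for the Siamese networks (SNs); a real
    # system would run the trained SN branch here.
    return np.tanh(W @ x)

def grasp_score(candidate, optimum, W):
    """Grasping score: negative distance to a pre-sampled optimum grasp
    in the feature space, so a higher score means a closer grasp."""
    return -np.linalg.norm(embed(candidate, W) - embed(optimum, W))

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (8, 4))          # placeholder embedding weights
optimum = np.array([0.5, 0.5, 0.0, 1.0])  # hypothetical optimal grasp
near = optimum + 0.01                     # candidate close to the optimum
far = optimum + 2.0                       # clearly different grasp
print(grasp_score(near, optimum, W) > grasp_score(far, optimum, W))  # True
```

Ranking candidates by this score, and feeding it back into the detector's training, is the general mechanism the abstract describes for preferring the optimum grasping position.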
Toward a formal theory for computing machines made out of whatever physics offers: extended version
Approaching limitations of digital computing technologies have spurred
research in neuromorphic and other unconventional approaches to computing. Here
we argue that if we want to systematically engineer computing systems that are
based on unconventional physical effects, we need guidance from a formal theory
that is different from the symbolic-algorithmic theory of today's computer
science textbooks. We propose a general strategy for developing such a theory,
and within that general view, a specific approach that we call "fluent
computing". In contrast to Turing, who modeled computing processes from a
top-down perspective as symbolic reasoning, we adopt the scientific paradigm of
physics and model physical computing systems bottom-up by formalizing what can
ultimately be measured in any physical substrate. This leads to an
understanding of computing as the structuring of processes, while classical
models of computing systems describe the processing of structures.
Comment: 76 pages. This is an extended version of a perspective article with
the same title that will appear in Nature Communications soon after this
manuscript goes public on arXiv.