
    Transformer-based deep imitation learning for dual-arm robot manipulation

    Deep imitation learning is promising for solving dexterous manipulation tasks because it requires neither an environment model nor pre-programmed robot behavior. However, its application to dual-arm manipulation tasks remains challenging. In a dual-arm setup, the increased number of state dimensions introduced by the additional manipulator distracts the neural network and degrades its performance. We address this issue with a self-attention mechanism that computes dependencies between elements of a sequential input and focuses on the important ones. A Transformer, a variant of the self-attention architecture, is applied to deep imitation learning to solve dual-arm manipulation tasks in the real world. The proposed method was tested on dual-arm manipulation tasks using a real robot. The experimental results demonstrated that the Transformer-based deep imitation learning architecture can attend to the important features among the sensory inputs, thereby reducing distractions and improving manipulation performance compared with a baseline architecture without self-attention. Comment: 8 pages. Accepted at the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
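    The self-attention mechanism the abstract refers to can be sketched as scaled dot-product attention over a sequence of state elements. This is a minimal NumPy illustration of the general technique, not the paper's implementation; all dimensions and weight names are illustrative.

    ```python
    import numpy as np

    def self_attention(x, w_q, w_k, w_v):
        """Scaled dot-product self-attention over a sequence of state vectors.

        x: (seq_len, d_model) sensory elements; w_*: (d_model, d_k) projections.
        Each element attends to every other element, weighting important ones.
        """
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise dependencies
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax over elements
        return weights @ v                                 # attention-weighted sum

    rng = np.random.default_rng(0)
    x = rng.standard_normal((6, 8))                        # 6 state elements, 8-dim each
    w = [rng.standard_normal((8, 8)) for _ in range(3)]
    out = self_attention(x, *w)
    ```

    In a full Transformer this block would be wrapped with multiple heads, residual connections, and layer normalization; the core dependency computation is the matrix of softmax-normalized dot products above.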

    An algebraic theory to discriminate qualia in the brain

    The mind-brain problem is to bridge the relations between higher mental events and lower neural events. To address this, some mathematical models have been proposed to explain how the brain can represent the discriminative structure of qualia, but they remain unresolved for lack of validation methods. To understand the qualia discrimination mechanism, we need to ask how the brain autonomously develops such a mathematical structure, using the constructive approach. Here we show that a brain model that learns to satisfy an algebraic independence between neural networks separates metric spaces corresponding to qualia types. We formulate the algebraic independence so as to link it to the other-qualia-type invariant transformation, a familiar formulation of the permanence of perception. The learning of algebraic independence proposed here explains downward causation, i.e., the macro-level relationship having causal power over its components, because the algebra is a macro-level relationship irreducible to a law of neurons, and a self-evaluation of the algebra is used to control the neurons. Downward causation is required to explain a causal role of mental events on neural events, suggesting that learning algebraic structure between neural networks can contribute to the further development of a mathematical theory of consciousness.

    Training Robots without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer

    Deep imitation learning is promising for robot manipulation because it requires only demonstration samples. In this study, deep imitation learning is applied to tasks that require force feedback. However, existing demonstration methods have deficiencies: bilateral teleoperation requires a complex control scheme and is expensive, and kinesthetic teaching suffers from visual distractions caused by human intervention. This research proposes a new master-to-robot (M2R) policy transfer system that does not require a robot for teaching force feedback-based manipulation tasks. The human directly demonstrates a task using a controller that matches the kinematic parameters of the robot arm and carries the same end-effector with force/torque (F/T) sensors to measure force feedback. Using this controller, the operator can feel force feedback without a bilateral system. The proposed method overcomes domain gaps between the master and the robot using gaze-based imitation learning and a simple calibration method. Furthermore, a Transformer is applied to infer the policy from F/T sensory input. The proposed system was evaluated on a bottle-cap-opening task that requires force feedback. Comment: 8 pages.

    Electrical Excitation of the Pulmonary Venous Musculature May Contribute to the Formation of the Last Component of the High Frequency Signal of the P Wave

    Pulmonary veins (PVs) have been shown to play an important role in the induction and perpetuation of focal atrial fibrillation (AF). Fifty-one patients with AF, and 24 patients without AF as control subjects, were enrolled in this study. Signal-averaged P-wave recording was performed, and the filtered P-wave duration (FPD) and the root-mean-square voltages for the last 20, 30, and 40 ms (RMS20, RMS30, and RMS40, respectively) were compared. In 7 patients with AF, these parameters were compared before and after catheter ablation. The FPD was significantly longer and the RMS20 smaller in the patients with AF than in those without. Because RMS30 was widely distributed between 2 and 10 µV, the AF group was subdivided into two groups: group 1 comprised the patients with an RMS30 ≧5.0 µV, and group 2 those with <5.0 µV. In group 1, short-coupled premature atrial contractions were more frequently documented on Holter monitoring, and exercise testing more readily induced AF. After successful electrical disconnection between the left atrium (LA) and the PVs, each micropotential parameter was significantly attenuated. These results indicate that the high-frequency signal amplitude of the last component of the P wave is relatively high in patients with AF triggered by focal repetitive excitations, most likely originating from the PVs. That is, because these signals are attenuated by LA-PV electrical isolation, the high-frequency signals of the last component of the P wave may contain the electrical excitation of the PV musculature.
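    The RMS20/30/40 parameters above are root-mean-square voltages computed over the terminal portion of the filtered, signal-averaged P wave. A minimal sketch of that computation, assuming a hypothetical sampling rate and a synthetic waveform rather than real ECG data:

    ```python
    import numpy as np

    def rms_last_window(p_wave_uv, fs_hz, window_ms):
        """Root-mean-square voltage over the last `window_ms` of a filtered P wave.

        p_wave_uv: 1-D array, signal-averaged filtered P wave in microvolts.
        fs_hz: sampling rate in Hz. Mirrors the RMS20/30/40 parameters.
        """
        n = int(round(fs_hz * window_ms / 1000.0))  # samples in the window
        tail = p_wave_uv[-n:]                       # terminal segment of the P wave
        return float(np.sqrt(np.mean(tail ** 2)))

    fs = 1000.0                                     # hypothetical 1 kHz sampling
    t = np.arange(0, 0.12, 1 / fs)                  # 120 ms filtered P wave
    p = 5.0 * np.sin(2 * np.pi * 40 * t)            # synthetic signal, illustrative only
    rms30 = rms_last_window(p, fs, 30)              # RMS of the last 30 ms (RMS30)
    ```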

    Modelling human choices: MADeM and decision‑making

    Research supported by FAPESP 2015/50122-0 and DFG-GRTK 1740/2. RP and AR are also part of the Research, Innovation and Dissemination Center for Neuromathematics FAPESP grant (2013/07699-0). RP is supported by a FAPESP scholarship (2013/25667-8). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0)

    A. Nagakubo. Conformable and scalable tactile sensor skin for curved surfaces

    Abstract — We present the design and realization of a conformable tactile sensor skin (patent pending). The skin is organized as a network of self-contained modules consisting of tiny pressure-sensitive elements that communicate through a serial bus. By adding or removing modules it is possible to adjust the area covered by the skin as well as the number (and density) of tactile elements. The skin is therefore highly modular and thus intrinsically scalable. Moreover, because the substrate on which the modules are mounted is sufficiently pliable to be folded and stiff enough to be cut, it is possible to freely distribute the individual tactile elements. A tactile skin composed of multiple modules can also be installed on curved surfaces. Owing to their easy configurability, we call our sensors "cut-and-paste tactile sensors." We describe a prototype implementation of the skin on a humanoid robot.

    Robot peels banana with goal-conditioned dual-action deep imitation learning

    A long-horizon dexterous robot manipulation task on deformable objects, such as banana peeling, is problematic because of difficulties in object modeling and a lack of knowledge about stable and dexterous manipulation skills. This paper presents goal-conditioned dual-action deep imitation learning (DIL), which can learn dexterous manipulation skills from human demonstration data. Previous DIL methods map the current sensory input to a reactive action, which easily fails because of compounding errors in imitation learning caused by the recurrent computation of actions. The proposed method predicts a reactive action when precise manipulation of the target object is required (local action) and generates the entire trajectory when precise manipulation is not required (global action). This dual-action formulation effectively prevents compounding error through the trajectory-based global action while responding to unexpected changes in the target object through the reactive local action. Furthermore, in this formulation, both global and local actions are conditioned on a goal state, defined as the last step of each subtask, for robust policy prediction. The proposed method was tested on a real dual-arm robot and successfully accomplished the banana-peeling task. Comment: 15 pages.
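    The dual-action formulation described above switches between a one-step reactive policy and a whole-trajectory planner. A hedged sketch of that control flow, with entirely hypothetical policy interfaces (the paper's networks and switching criterion are not reproduced here):

    ```python
    import numpy as np

    def dual_action_policy(state, goal, needs_precision, local_policy, global_planner):
        """Goal-conditioned dual-action selection (illustrative interfaces).

        When precise manipulation is required, emit a single reactive local
        action; otherwise emit an entire global trajectory, which avoids the
        compounding errors of recurrent step-by-step action prediction.
        """
        if needs_precision:
            return ("local", local_policy(state, goal))     # one reactive step
        return ("global", global_planner(state, goal))      # whole trajectory

    # Toy stand-ins for the learned policies:
    local = lambda s, g: g - s                              # step toward the goal
    planner = lambda s, g: np.linspace(s, g, num=10)        # 10-waypoint path
    kind, act = dual_action_policy(np.zeros(3), np.ones(3), False, local, planner)
    ```

    Conditioning both branches on the subtask's goal state (here the `goal` argument) is what makes each action prediction robust to where the policy is within the long-horizon task.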

    Multi-task real-robot data with gaze attention for dual-arm fine manipulation

    In the field of robotic manipulation, deep imitation learning is recognized as a promising approach for acquiring manipulation skills, and learning from diverse robot datasets is considered a viable method to achieve versatility and adaptability. In such research, by learning various tasks, robots have achieved generality across multiple objects. However, existing multi-task robot datasets have mainly focused on single-arm tasks that are relatively imprecise, not addressing the fine-grained object manipulation that robots are expected to perform in the real world. This paper introduces a dataset of diverse object manipulations that includes dual-arm tasks and tasks requiring fine manipulation. To this end, we generated a dataset of 224k episodes (150 hours, 1,104 language instructions) that includes dual-arm fine tasks such as bowl-moving, pencil-case opening, and banana-peeling; this data is publicly available. Additionally, the dataset includes visual attention signals, dual-action labels (signals that separate each action into a robust reaching trajectory and a precise interaction with the object), and language instructions, to achieve robust and precise object manipulation. We applied the dataset to our Dual-Action and Attention (DAA) model, designed for fine-grained dual-arm manipulation tasks and robust against covariate shifts. The model was tested with over 7k total trials in real robot manipulation tasks, demonstrating its capability in fine manipulation. Comment: 10 pages. The dataset is available at https://sites.google.com/view/multi-task-fin
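    A common way to use a visual attention (gaze) signal like the one in this dataset is to crop a fixed-size patch around the predicted gaze point so the policy sees the manipulated object at high resolution. This is a generic sketch of that idea; the function name, patch size, and clipping behavior are assumptions, not the paper's preprocessing.

    ```python
    import numpy as np

    def gaze_crop(image, gaze_xy, size=64):
        """Extract a size x size patch centred on a gaze point (x, y).

        The gaze point is clipped so the patch always lies inside the image;
        all sizes here are illustrative defaults.
        """
        h, w = image.shape[:2]
        x = int(np.clip(gaze_xy[0], size // 2, w - size // 2))
        y = int(np.clip(gaze_xy[1], size // 2, h - size // 2))
        return image[y - size // 2 : y + size // 2,
                     x - size // 2 : x + size // 2]

    img = np.zeros((240, 320, 3), dtype=np.uint8)       # dummy camera frame
    patch = gaze_crop(img, (10, 300), size=64)          # clipped to stay in frame
    ```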