
    Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation

    In many contact-rich tasks, force sensing plays an essential role in adapting the motion to the physical properties of the manipulated object. To enable robots to capture the underlying distribution of object properties necessary for generalising learnt manipulation tasks to unseen objects, existing Learning from Demonstration (LfD) approaches require a large number of costly human demonstrations. Our proposed semi-supervised LfD approach decouples the learnt model into a haptic representation encoder and a motion generation decoder. This enables us to pre-train the former on a large amount of easily accessible unsupervised data, while training the latter through few-shot LfD, retaining the benefits of learning skills from humans. We validate the approach on a wiping task using sponges with different stiffness and surface friction. Our results demonstrate that pre-training significantly improves the ability of the LfD model to recognise physical properties and generate the desired wiping motions for unseen sponges, outperforming the LfD method without pre-training. We validate the motions generated by our semi-supervised LfD model on physical robot hardware using a KUKA iiwa robot arm. We also show that the haptic representation encoder, pre-trained in simulation, captures the properties of real objects, explaining its contribution to improving the generalisation of the downstream task.
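    As a rough illustration of the two-stage scheme described above (a sketch only, not the authors' implementation), the snippet below pre-trains a haptic encoder on unlabelled force data via reconstruction and then fits a small motion decoder on a handful of demonstrations while keeping the encoder frozen; all dimensions, network sizes, and the reconstruction objective are assumptions.

        # Sketch of decoupled training: unsupervised encoder pre-training,
        # then few-shot supervised training of the motion decoder.
        import torch
        import torch.nn as nn

        class HapticEncoder(nn.Module):
            def __init__(self, force_dim=6, latent_dim=16):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(force_dim, 64), nn.ReLU(),
                                         nn.Linear(64, latent_dim))
            def forward(self, f):
                return self.net(f)

        class MotionDecoder(nn.Module):
            def __init__(self, latent_dim=16, action_dim=7):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                         nn.Linear(64, action_dim))
            def forward(self, z):
                return self.net(z)

        # Stage 1: pre-train the encoder on unlabelled force signals
        # (here via a simple reconstruction objective).
        encoder, recon_head = HapticEncoder(), nn.Linear(16, 6)
        opt = torch.optim.Adam(list(encoder.parameters()) +
                               list(recon_head.parameters()), lr=1e-3)
        unlabelled_force = torch.randn(256, 6)        # placeholder unlabelled data
        for _ in range(100):
            loss = nn.functional.mse_loss(recon_head(encoder(unlabelled_force)),
                                          unlabelled_force)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Stage 2: few-shot LfD, training only the decoder on a few demos
        # while the pre-trained encoder stays frozen.
        decoder = MotionDecoder()
        opt2 = torch.optim.Adam(decoder.parameters(), lr=1e-3)
        demo_force, demo_action = torch.randn(8, 6), torch.randn(8, 7)
        for _ in range(100):
            with torch.no_grad():
                z = encoder(demo_force)
            loss = nn.functional.mse_loss(decoder(z), demo_action)
            opt2.zero_grad()
            loss.backward()
            opt2.step()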

    Large eddy simulation of smooth-wall, transitional and fully rough-wall channel flow

    Large eddy simulation (LES) is reported for both smooth- and rough-wall channel flows at resolutions for which the roughness is subgrid. The stretched-vortex subgrid-scale model is combined with an existing wall model that calculates the local friction velocity dynamically while providing a Dirichlet-like slip velocity at a slightly raised wall. This wall model is presently extended to include the effects of subgrid wall roughness by incorporating Hama's roughness function ΔU^+(k^+_(s∞)), which depends on some geometric roughness height k_(s∞) scaled in inner variables. Presently Colebrook's empirical roughness function is used, but the model can utilize any given function of an arbitrary number of inner-scaled roughness length parameters. This approach requires no change to the interior LES and can handle both smooth and rough walls. The LES is applied to fully turbulent, smooth- and rough-wall channel flow in both the transitional and fully rough regimes. Both roughness and Reynolds number effects are captured for Reynolds numbers Re_b, based on the bulk flow speed, in the range 10^4–10^(10), with the equivalent Re_τ, based on the wall-drag velocity u_τ, varying from 650 to 10^8. Results include a Moody-like diagram for the friction factor f = f(Re_b, ε), ε = k_(s∞)/δ, mean velocity profiles, and turbulence statistics. In the fully rough regime, at sufficiently large Re_b, the mean velocity profiles collapse in outer variables onto a roughness-modified, universal velocity-deficit profile. Outer-flow stream-wise turbulence intensities scale well with u_τ for both smooth- and rough-wall flow, showing a log-like profile. The infinite Reynolds number limits of both smooth- and rough-wall flows are explored. The assumption that, for smooth-wall flow, the turbulence intensities scaled on u_τ are bounded above by the sum of a logarithmic profile plus a finite function across the whole channel suggests that the infinite-Re_b limit is inviscid slip flow without turbulence; the approach to this asymptote, however, is extremely slow. Turbulent rough-wall flow that conforms to the Hama model shows a finite limit containing turbulence intensities that scale on the friction factor for any small but finite roughness.
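    A minimal sketch of the roughness-function ingredient described above (an illustration, not the paper's code): a commonly quoted form of Colebrook's roughness function is ΔU^+ = (1/κ) ln(1 + 0.26 k^+_(s∞)), which blends smoothly from the hydraulically smooth limit (ΔU^+ → 0) to the fully rough logarithmic offset. The value κ = 0.4 below is an assumed Kármán constant.

        # Colebrook-type roughness function Delta U+(ks+), in a commonly
        # quoted form; kappa = 0.4 is an assumed Karman constant.
        import numpy as np

        KAPPA = 0.4

        def delta_u_plus(ks_plus):
            """Roughness function Delta U+ as a function of inner-scaled ks+."""
            return np.log(1.0 + 0.26 * np.asarray(ks_plus)) / KAPPA

        # Small ks+: transitionally rough, Delta U+ is small.
        # Large ks+: fully rough, Delta U+ grows like (1/kappa) ln(ks+),
        # shifting the mean-velocity log law downward.
        for ks in (1.0, 10.0, 100.0, 1000.0):
            print(f"ks+ = {ks:7.1f}  ->  Delta U+ = {delta_u_plus(ks):.2f}")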

    Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks

    We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors, tools, objects, actions, and effects, at the same time, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint-angle information. To allow the robot to learn tool use, we collect training data by controlling the robot to perform various object operations using several tools with multiple actions that lead to different effects. The tool-use model is thereby trained, learning sensorimotor coordination and acquiring the relationships among tools, objects, actions, and effects in its latent space. We can give the robot a task goal by providing an image showing the target placement and orientation of the object. Using the goal image with the tool-use model, the robot detects the features of the tools and objects and automatically determines how to act to reproduce the target effects. The robot then generates actions that adapt to the real-time situation, even when the tools and objects are unknown and more complicated than the trained ones.
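    A minimal sketch of the kind of multimodal architecture described above (an assumed structure, not the authors' network): separate encoders for the current image, goal image, force, and joint-angle inputs are fused into a single latent vector, from which the next joint command is predicted. All layer sizes and input shapes are placeholders.

        import torch
        import torch.nn as nn

        class ImageEncoder(nn.Module):
            def __init__(self, out_dim=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(16, out_dim))
            def forward(self, img):
                return self.net(img)

        class ToolUseModel(nn.Module):
            def __init__(self, force_dim=6, joint_dim=7, latent_dim=64):
                super().__init__()
                self.img_enc, self.goal_enc = ImageEncoder(), ImageEncoder()
                self.force_enc = nn.Linear(force_dim, 16)
                self.joint_enc = nn.Linear(joint_dim, 16)
                self.fuse = nn.Sequential(nn.Linear(32 + 32 + 16 + 16, latent_dim),
                                          nn.ReLU())
                self.action_head = nn.Linear(latent_dim, joint_dim)
            def forward(self, img, goal_img, force, joints):
                # The fused latent is where relationships among tools, objects,
                # actions, and effects would be encoded during training.
                z = self.fuse(torch.cat([self.img_enc(img), self.goal_enc(goal_img),
                                         self.force_enc(force), self.joint_enc(joints)],
                                        dim=-1))
                return self.action_head(z)            # next joint-angle command

        model = ToolUseModel()
        img = torch.randn(1, 3, 64, 64)               # current camera image
        goal = torch.randn(1, 3, 64, 64)              # goal image (target object pose)
        action = model(img, goal, torch.randn(1, 6), torch.randn(1, 7))
        print(action.shape)                           # torch.Size([1, 7])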

    Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot

    To support humans in their daily lives, robots are required to autonomously learn, adapt to objects and environments, and perform the appropriate actions. We tackled the task of cooking scrambled eggs using real ingredients, in which the robot needs to perceive the state of the egg and adjust its stirring movement in real time while the egg is heated and its state changes continuously. In previous works, handling changing objects was found to be challenging, because the sensory information includes dynamic signals that may be either important or noisy, and the modality that should be attended to changes over time, making it difficult to realize both perception and motion generation in real time. We propose a predictive recurrent neural network with an attention mechanism that weighs the sensor inputs, distinguishing how important and reliable each modality is, and thereby realizes quick and efficient perception and motion generation. The model is trained through learning from demonstration, allowing the robot to acquire human-like skills. We validated the proposed technique using the robot Dry-AIREC; with our learning model, it could cook eggs with unknown ingredients. The robot changed its stirring method and direction depending on the state of the egg: at the beginning it stirred across the whole pot, and after the egg started to be heated it switched to flipping and splitting motions targeting specific areas, although we did not explicitly indicate them.
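    The modality-weighting idea can be sketched as follows (an assumed structure, not Dry-AIREC's actual model): a predictive recurrent network computes attention weights from its recurrent state and rescales each sensory stream (image features, force, joint angles) before the next prediction step, so that whichever modality is currently judged important and reliable dominates the input.

        import torch
        import torch.nn as nn

        class AttentionPredictiveRNN(nn.Module):
            def __init__(self, img_dim=32, force_dim=6, joint_dim=7, hidden=128):
                super().__init__()
                self.hidden = hidden
                self.cell = nn.LSTMCell(img_dim + force_dim + joint_dim, hidden)
                self.attn = nn.Linear(hidden, 3)          # one weight per modality
                self.pred = nn.Linear(hidden, joint_dim)  # next joint command

            def forward(self, img_feat, force, joints, state=None):
                if state is None:
                    h = img_feat.new_zeros(img_feat.size(0), self.hidden)
                    c = img_feat.new_zeros(img_feat.size(0), self.hidden)
                else:
                    h, c = state
                # Attention over modalities: scale each stream by its current weight.
                w = torch.softmax(self.attn(h), dim=-1)
                x = torch.cat([w[:, 0:1] * img_feat,
                               w[:, 1:2] * force,
                               w[:, 2:3] * joints], dim=-1)
                h, c = self.cell(x, (h, c))
                return self.pred(h), (h, c), w

        model = AttentionPredictiveRNN()
        state = None
        for t in range(5):                                # placeholder sensor stream
            cmd, state, w = model(torch.randn(1, 32),     # image features
                                  torch.randn(1, 6),      # force/torque
                                  torch.randn(1, 7),      # joint angles
                                  state)
        print(w)                                          # modality attention weights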