44 research outputs found

    ILoSA: Interactive Learning of Stiffness and Attractors

    Full text link
    Teaching robots how to apply forces according to our preferences is still an open challenge that has to be tackled from multiple engineering perspectives. This paper studies variable impedance policies in which both the Cartesian stiffness and the attractor are learned from human demonstrations and corrections through a user-friendly interface. The presented framework, named ILoSA, uses Gaussian Processes for policy learning, identifying regions of uncertainty and allowing interactive corrections, stiffness modulation, and active disturbance rejection. The experimental evaluation of the framework is carried out on a Franka Emika Panda in three separate cases with distinct force-interaction properties: 1) pulling a plug, where a sudden force discontinuity occurs upon successful removal of the plug, 2) pushing a box, where a sustained force is required to keep the robot in motion, and 3) wiping a whiteboard, where the force is applied perpendicular to the direction of movement.
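    The abstract describes GP-based policy learning in which predictive uncertainty drives stiffness modulation. Below is a minimal, hedged sketch of that general idea using scikit-learn; the mapping and all names (`stiffness_from_uncertainty`, `k_min`, `k_max`, `sigma_max`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the ILoSA code): learn a Cartesian attractor with a GP and
# lower the stiffness where the policy is uncertain, as the abstract outlines.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Demonstrated trajectory: current position -> demonstrated attractor (1-D for brevity)
X_demo = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y_demo = X_demo.ravel() + 0.02 * np.random.randn(50)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-4))
gp.fit(X_demo, y_demo)

def stiffness_from_uncertainty(x, k_min=50.0, k_max=600.0, sigma_max=0.05):
    """Hypothetical mapping: high predictive std -> compliant (low stiffness),
    low predictive std -> confident tracking (high stiffness)."""
    attractor, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    alpha = np.clip(1.0 - sigma[0] / sigma_max, 0.0, 1.0)
    return float(attractor[0]), float(k_min + alpha * (k_max - k_min))

goal, k = stiffness_from_uncertainty([0.3])
print(f"attractor = {goal:.3f}, Cartesian stiffness = {k:.0f} N/m")
```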

    Interactive Imitation Learning of Bimanual Movement Primitives

    Full text link
    Performing bimanual tasks with dual robotic setups can drastically increase the impact of robotics on industrial and daily-life applications. However, performing a bimanual task brings many challenges, such as the synchronization and coordination of the single-arm policies. This article proposes the Safe, Interactive Movement Primitives Learning (SIMPLe) algorithm to teach and correct single- or dual-arm impedance policies directly from human kinesthetic demonstrations. Moreover, it proposes a novel graph encoding of the policy based on Gaussian Process Regression (GPR), in which the single-arm motion is guaranteed to converge close to the trajectory and then towards the demonstrated goal. Regulating the robot stiffness according to the epistemic uncertainty of the policy makes it easy to reshape the motion with human feedback and/or to adapt to external perturbations. We tested the SIMPLe algorithm on a real dual-arm setup where the teacher gave separate single-arm demonstrations and then synchronized them using only kinesthetic feedback, or where the original bimanual demonstration was locally reshaped to pick a box at a different height.
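    As a rough illustration of the graph idea described above (retrieve the next demonstrated node and become compliant far from the demonstration), here is a hedged sketch; the distance-based uncertainty proxy, the shared-node synchronization, and all names (`GraphPolicy`, `beta`) are assumptions made for illustration, not the published SIMPLe formulation.

```python
# Illustrative sketch (assumed details): a demonstrated trajectory stored as graph nodes;
# the commanded attractor is the node after the closest one, and stiffness is attenuated
# when the current state is far from any demonstrated node.
import numpy as np

class GraphPolicy:
    def __init__(self, waypoints, k_max=500.0, beta=20.0):
        self.waypoints = np.asarray(waypoints)   # (N, dof) demonstrated nodes
        self.k_max = k_max                       # stiffness near the demonstration
        self.beta = beta                         # distance sensitivity (1/m)

    def step(self, x):
        d = np.linalg.norm(self.waypoints - x, axis=1)
        i = int(np.argmin(d))                    # closest demonstrated node
        attractor = self.waypoints[min(i + 1, len(self.waypoints) - 1)]
        stiffness = self.k_max * np.exp(-self.beta * d[i])   # compliant far from data
        return attractor, stiffness

# Two arms could share the retrieved node index to stay synchronized (assumed scheme).
left = GraphPolicy(np.linspace([0.0, 0.0, 0.0], [0.4, 0.2, 0.1], 100))
goal, k = left.step(np.array([0.05, 0.03, 0.01]))
print(goal, round(k, 1))
```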

    Unifying Speed-Accuracy Trade-Off and Cost-Benefit Trade-Off in Human Reaching Movements

    No full text
    Two basic trade-offs interact while our brain decides how to move our body. First, in the cost-benefit trade-off, the brain weighs the value of moving faster toward a more rewarding target against the increased muscular cost of a faster movement. Second, in the speed-accuracy trade-off, it weighs how accurate the movement needs to be against the time it takes to achieve that accuracy. So far, these two trade-offs have been well studied in isolation, despite their obvious interdependence. To overcome this limitation, we propose a new model that accounts for both trade-offs simultaneously. The model assumes that the central nervous system maximizes the expected utility resulting from the potential reward and the cost over the repetition of many movements, taking into account the probability of missing the target. The resulting model is able to account for both the speed-accuracy and the cost-benefit trade-offs. To validate the proposed hypothesis, we compare the properties of the computational model with data from an experimental study in which subjects reached for targets by performing arm movements in a horizontal plane. The results qualitatively show that the proposed model successfully accounts for both the cost-benefit and speed-accuracy trade-offs.
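    A hedged sketch of the kind of objective the abstract describes (notation assumed for illustration, not taken from the paper): the movement duration couples reward, effort, and accuracy through the probability of hitting the target.

```latex
% Assumed notation, for illustration only: expected utility of one reaching
% movement of duration T. Accuracy (and thus P_hit) improves with longer
% movements (speed-accuracy trade-off), while reward is time-discounted and
% effort grows with speed (cost-benefit trade-off).
\begin{equation}
  \mathbb{E}[U](T) \;=\;
  \underbrace{P_{\mathrm{hit}}(T)}_{\text{speed--accuracy}}\,
  \underbrace{R(T)}_{\text{time-discounted reward}}
  \;-\;
  \underbrace{C(T)}_{\text{effort cost}},
  \qquad
  T^{*} \;=\; \arg\max_{T}\; \mathbb{E}[U](T).
\end{equation}
```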

    Accelerated Robotic Learning for Interaction with Environment and Human Based on Sensory-Motor Learning

    Full text link
    It is inevitable that robots will move from structured and controlled environments, such as industry and laboratories, into our daily lives. To integrate robots into daily life, two fundamental problems related to such tasks have to be solved. (1) Each robot user has a different environment, needs, tasks, and tools. This demands the acquisition of a vast number of very specific skills, so the transfer of a skill from the user to the robot must be intuitive, adaptable, and fast. (2) Many daily tasks involve physical interaction with an unpredictable and unstructured environment. Acquiring models of such an environment is therefore complex, which makes classical robot control difficult. For robots to operate successfully in such environments, alternative ways of teaching them how to deal with these interactions are needed. The first chapter introduces the research field with an overview of the state of the art, explains the goals of the thesis, and summarises the performed experiments. The second chapter presents the human-in-the-loop teaching method. The method builds on the human sensorimotor learning ability, which allows the demonstrator to first obtain the skill necessary for controlling the robot and then transfer that skill to the robot. This is followed by a description of methods for encoding the control strategy (skill): sensorimotor pairs, motion trajectories, and adaptive oscillators for describing the state of periodic motion. The last part of the chapter presents the two machine learning methods used in our approach, Gaussian Process Regression and Locally Weighted Regression. The third chapter presents the proposed method for teaching humanoid robots how to handle physical interaction of their body with humans and the environment. To this end, a method for converting robot sensory information into human sensory stimulation was developed, giving the demonstrator the necessary feedback about the robot's body dynamics during the teaching process. Within this scope, we developed a dedicated haptic interface that exerted forces on the demonstrator's body. We then present a method for online robot learning in which control of the robot body is shared between the currently learnt strategy and the human demonstrator. This enables a gradual transfer of control responsibility from the demonstrator to the robot and offers additional feedback about the state of the learning. In the last part, we propose a method for merging the human-demonstrated posture-control skill with arm motion control based on an inverse kinematics solution.
    The fourth chapter presents the proposed methods for teaching a robot to manipulate an unstructured and unpredictable environment. These methods build on the demonstrator's ability to directly modulate and teach the impedance of the robotic arm; we developed methods that allow the demonstrator to control the robot's stiffness in real time. This approach was used to solve tasks involving the use of elementary tools, human-robot cooperation, and part assembly. Such capabilities are crucial for the future operation of robots in human daily life or for their participation in space exploration, where available means are limited. The fifth chapter presents a method for exoskeleton control. These devices enclose parts of the human body and directly assist motion in the joints. In the context of integrating robots into daily life, exoskeletons complement humanoid robots, which are designed to provide assistance on a more indirect level. The proposed control method is based on minimising the user's muscle activity through adaptive learning of the assistive joint torques exerted by the exoskeleton. Its main advantage is that it does not require models of the human or the robot; the necessary compensation torques are continuously adapted to the current conditions. The method was validated in experiments on multiple subjects, in which we analysed the mutual adaptation of the exoskeleton and the user. The last chapter recapitulates the main contributions of the dissertation and presents its conclusions.
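    As a rough illustration of the exoskeleton idea summarised above (assistive joint torques adapted online to reduce measured muscle activity, without human or robot models), here is a minimal hedged sketch; the update rule, gains, and names (`adapt_assistive_torque`, `emg_level`) are assumptions for illustration, not the method reported in the thesis.

```python
# Illustrative sketch (assumed update rule): increase the exoskeleton's assistive joint
# torque while the user's muscle activity stays above a target level, with a mild decay,
# so the assistance settles at whatever the current task demands.
import numpy as np

def adapt_assistive_torque(tau, emg_level, emg_target=0.1,
                           gain=2.0, decay=0.01, tau_max=40.0, dt=0.01):
    """One adaptation step.
    tau        -- current assistive torque [Nm]
    emg_level  -- normalised muscle activity in [0, 1]
    emg_target -- activity level the adaptation tries to reach
    """
    error = emg_level - emg_target            # positive -> user is still working hard
    tau += dt * (gain * error - decay * tau)  # integrate assistance, mild forgetting
    return float(np.clip(tau, 0.0, tau_max))

# Toy loop: activity drops as assistance grows (stand-in for the real human response).
tau, emg = 0.0, 0.8
for _ in range(2000):
    tau = adapt_assistive_torque(tau, emg)
    emg = max(0.05, 0.8 - 0.1 * tau)          # hypothetical human adaptation
print(f"assistive torque ~ {tau:.1f} Nm, residual activity ~ {emg:.2f}")
```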

    Holding a handle for balance during continuous postural perturbations – immediate and transitionary effects on whole body posture

    No full text
    When balance is exposed to perturbations, hand contacts are often used to assist postural control. We investigated the immediate and the transitionary effects of supportive hand contacts during continuous anteroposterior perturbations of stance by automated waist-pulls. Ten young adults were perturbed for five minutes and required to maintain balance while holding a stationary, shoulder-high handle and, subsequently, after its removal. Centre of pressure (COP) displacement, hip, knee, and ankle angles, leg and trunk muscle activity, and handle contact forces were acquired. The results show that COP excursions are significantly smaller when the subjects utilize supportive hand contact, and that the displacement of COP is strongly correlated with the perturbation force and significantly larger in the anterior than the posterior direction. Regression analysis of hand forces revealed that subjects utilized the hand support significantly more during posterior than anterior perturbations. Moreover, kinematic analysis showed that utilization of supportive hand contacts alters the posture of the whole body and that postural readjustments after the release of the handle occur at different time scales in the hip, knee, and ankle joints. Overall, our findings show that supportive hand contacts are used efficiently for balance control during continuous postural perturbations and that utilization of a handle has significant immediate and transitionary effects on whole body posture.
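    For readers who want to reproduce this style of analysis, here is a minimal hedged sketch of the reported correlation and regression steps on synthetic stand-in data; the variable names and numbers are assumptions, not the study's dataset.

```python
# Illustrative sketch on synthetic data (not the study's data): correlate COP displacement
# with perturbation force and regress hand force on perturbation force, mirroring the
# analyses named in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
perturbation_force = rng.uniform(20, 120, 200)                         # N, waist pulls
cop_displacement = 0.4 * perturbation_force + rng.normal(0, 6, 200)    # mm, synthetic
hand_force = 0.6 * perturbation_force + rng.normal(0, 10, 200)         # N, synthetic

r, p = stats.pearsonr(perturbation_force, cop_displacement)
slope, intercept, r_hand, *_ = stats.linregress(perturbation_force, hand_force)

print(f"COP vs. pull force: r = {r:.2f} (p = {p:.1e})")
print(f"hand force regression: slope = {slope:.2f}, r = {r_hand:.2f}")
```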

    Towards ergonomic control of human-robot co-manipulation and handover

    Get PDF

    Towards multi-modal intention interfaces for human-robot co-manipulation

    No full text
    Peternel L, Tsagarakis N, Ajoudani A. Towards multi-modal intention interfaces for human-robot co-manipulation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2016: 2663-2669