
    A model-based residual approach for human-robot collaboration during manual polishing operations

    Fully robotized polishing of metallic surfaces may be insufficient for parts with complex geometric shapes, where manual intervention is still preferable. Within the EU SYMPLEXITY project, we consider tasks where manual polishing operations are performed in strict physical Human-Robot Collaboration (HRC) between a robot holding the part and a human operator equipped with an abrasive tool. During the polishing task, the robot should hold the workpiece firmly in a prescribed sequence of poses, monitoring and resisting the external forces applied by the operator. However, the user may also wish to change the orientation of the part mounted on the robot, simply by pushing or pulling the robot body and thus changing its configuration. We propose a control algorithm that separates the external torques acting at the robot joints into two components: one due to the polishing forces applied at the end-effector, the other due to the intentional physical interaction engaged by the human. The latter component is used to reconfigure the manipulator arm and, accordingly, its end-effector orientation; the workpiece position is instead kept fixed by exploiting the intrinsic redundancy of this subtask. The controller uses an F/T sensor mounted at the robot wrist, together with our recently developed model-based technique (the residual method), which estimates online the joint torques due to contact forces/torques applied anywhere along the robot structure. To obtain a reliable residual, which is necessary to implement the control algorithm, an accurate robot dynamic model (including joint friction effects and drive gains) needs to be identified first. The complete dynamic identification and the proposed control method for the human-robot collaborative polishing task are illustrated on a 6R UR10 lightweight manipulator equipped with an ATI 6D F/T sensor.
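    As a hedged illustration of the torque decomposition described above (not the authors' code), the following minimal Python sketch assumes a momentum-based residual is already available and that the wrist F/T reading has been transformed to the base frame; all names and numbers are illustrative.

        import numpy as np

        def decompose_external_torques(residual, J_ee, wrench_ee):
            """Split the estimated external joint torques (the residual) into a
            component explained by the wrench measured at the wrist F/T sensor
            (polishing contact) and a leftover component attributed to
            intentional human contact on the robot body."""
            tau_polish = J_ee.T @ wrench_ee      # torques induced by the tool-side wrench
            tau_human = residual - tau_polish    # remainder: human pushing/pulling the arm
            return tau_polish, tau_human

        # Illustrative numbers for a 6-dof arm
        residual = np.array([1.2, -0.4, 0.8, 0.1, -0.05, 0.02])        # estimated external torques
        J_ee = 0.1 * np.random.default_rng(0).standard_normal((6, 6))  # end-effector Jacobian
        wrench_ee = np.array([5.0, 0.0, -2.0, 0.0, 0.1, 0.0])          # measured wrench, base frame
        tau_polish, tau_human = decompose_external_torques(residual, J_ee, wrench_ee)

    The human-induced component tau_human would then drive the reconfiguration of the arm, while the workpiece position is held fixed.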

    Contact Estimation in Robot Interaction

    In this paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During task execution the human should be able to physically interact with the robot; for this case, an estimation algorithm for both the interaction forces and the contact point is proposed in order to guarantee safety. Starting from residual-based estimation of the external joint torques, the method allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all unintended contacts must be avoided, so a suitable post-collision strategy is considered to move the robot away from the collision area or to reduce impact effects. Experimental tests demonstrate the practical applicability of both the post-impact strategy and the estimation algorithms, and highlight the different behaviour obtained when the contact point is adapted rather than computed directly.
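    The direct computation mentioned above can be sketched as a least-squares problem: given the residual joint torques and the translational Jacobian of a candidate contact point, the equivalent contact force follows from residual = J_c^T F_c. The Python sketch below is illustrative only; the candidate points, Jacobians, and numbers are assumptions, not the paper's implementation.

        import numpy as np

        def estimate_contact_force(residual, J_contact):
            """Least-squares contact force from residual joint torques, given the
            (3 x n) translational Jacobian of a candidate contact point."""
            F_contact, *_ = np.linalg.lstsq(J_contact.T, residual, rcond=None)
            return F_contact

        def select_contact_point(residual, candidate_jacobians):
            """Direct search over candidate contact points: keep the point whose
            equivalent force best reproduces the observed residual torques."""
            best = None
            for point, J_c in candidate_jacobians:
                F = estimate_contact_force(residual, J_c)
                err = np.linalg.norm(J_c.T @ F - residual)
                if best is None or err < best[0]:
                    best = (err, point, F)
            return best

        # Illustrative usage with two candidate points on a 3-dof arm
        rng = np.random.default_rng(1)
        residual = np.array([0.9, -0.3, 0.2])
        candidates = [((0.2, 0.0, 0.5), rng.standard_normal((3, 3))),
                      ((0.4, 0.0, 0.6), rng.standard_normal((3, 3)))]
        err, point, force = select_contact_point(residual, candidates)

    An adaptive variant would instead update the candidate point online rather than searching over a fixed set, which is the distinction whose effect the experiments in the paper examine.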

    ์ง€๋„ ๋ฐ ๋น„์ง€๋„ ํ•™์Šต์„ ์ด์šฉํ•œ ๋กœ๋ด‡ ๋จธ๋‹ˆํ“ฐ๋ ˆ์ดํ„ฐ ์ถฉ๋Œ ๊ฐ์ง€

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ๊ธฐ๊ณ„ํ•ญ๊ณต๊ณตํ•™๋ถ€, 2022.2. ๋ฐ•์ข…์šฐ.์‚ฌ๋žŒ๊ณผ ๊ณต์œ ๋œ ๊ตฌ์กฐํ™”๋˜์ง€ ์•Š์€ ๋™์  ํ™˜๊ฒฝ์—์„œ ์ž‘๋™ํ•˜๋Š” ํ˜‘์—… ๋กœ๋ด‡ ๋จธ๋‹ˆํ“ฐ๋ ˆ์ดํ„ฐ๋Š” ๋‚ ์นด๋กœ์šด ์ถฉ๋Œ(๊ฒฝ์„ฑ ์ถฉ๋Œ)์—์„œ ๋” ๊ธด ์ง€์† ์‹œ๊ฐ„์˜ ๋ฐ€๊ณ  ๋‹น๊ธฐ๋Š” ๋™์ž‘(์—ฐ์„ฑ ์ถฉ๋Œ)์— ์ด๋ฅด๊ธฐ๊นŒ์ง€์˜ ๋‹ค์–‘ํ•œ ์ถฉ๋Œ์„ ๋น ๋ฅด๊ณ  ์ •ํ™•ํ•˜๊ฒŒ ๊ฐ์ง€ํ•ด์•ผ ํ•œ๋‹ค. ๋ชจํ„ฐ ์ „๋ฅ˜ ์ธก์ •๊ฐ’์„ ์ด์šฉํ•ด ์™ธ๋ถ€ ์กฐ์ธํŠธ ํ† ํฌ๋ฅผ ์ถ”์ •ํ•˜๋Š” ๋™์—ญํ•™ ๋ชจ๋ธ ๊ธฐ๋ฐ˜ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•  ๊ฒฝ์šฐ, ์ •ํ™•ํ•œ ๋งˆ์ฐฐ ํŒŒ๋ผ๋ฏธํ„ฐ ๋ชจ๋ธ๋ง ๋ฐ ์‹๋ณ„๊ณผ ๊ฐ™์€ ๋ชจํ„ฐ ๋งˆ์ฐฐ์— ๋Œ€ํ•œ ์ ์ ˆํ•œ ์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•˜๋‹ค. ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ ์šฉํ•˜๋ฉด ๋งค์šฐ ํšจ๊ณผ์ ์ด์ง€๋งŒ, ๋™์—ญํ•™๊ณผ ๋งˆ์ฐฐ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๋ชจ๋ธ๋ง ๋ฐ ์‹๋ณ„ํ•˜๊ณ  ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๊ฐ์ง€ ์ž„๊ณ„๊ฐ’์„ ์ˆ˜๋™์œผ๋กœ ์„ค์ •ํ•˜๋Š” ๋ฐ์—๋Š” ์ƒ๋‹นํ•œ ๋…ธ๋ ฅ์ด ํ•„์š”ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋Œ€๋Ÿ‰ ์ƒ์‚ฐ๋˜๋Š” ์‚ฐ์—…์šฉ ๋กœ๋ด‡์— ์ด๋ฅผ ์ ์šฉํ•˜๊ธฐ๋Š” ์–ด๋ ต๋‹ค. ๋˜ํ•œ ์ ์ ˆํ•œ ์‹๋ณ„ ํ›„์—๋„ ๋™์—ญํ•™์— ๋ฐฑ๋ž˜์‹œ, ํƒ„์„ฑ ๋“ฑ ๋ชจ๋ธ๋ง๋˜์ง€ ์•Š์€ ํšจ๊ณผ๋‚˜ ๋ถˆํ™•์‹ค์„ฑ์ด ์—ฌ์ „ํžˆ ์กด์žฌํ•  ์ˆ˜ ์žˆ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ˆœ์ˆ˜ ๋ชจ๋ธ ๊ธฐ๋ฐ˜ ๋ฐฉ๋ฒ•์˜ ๊ตฌํ˜„ ์–ด๋ ค์›€์„ ํ”ผํ•˜๊ณ  ๋ถˆํ™•์‹คํ•œ ๋™์—ญํ•™์  ํšจ๊ณผ๋ฅผ ๋ณด์ƒํ•˜๋Š” ์ˆ˜๋‹จ์œผ๋กœ ๋กœ๋ด‡ ๋จธ๋‹ˆํ“ฐ๋ ˆ์ดํ„ฐ๋ฅผ ์œ„ํ•œ ์ด ๋„ค ๊ฐ€์ง€์˜ ํ•™์Šต ๊ธฐ๋ฐ˜ ์ถฉ๋Œ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋‘ ๊ฐœ์˜ ๋ฐฉ๋ฒ•์€ ํ•™์Šต์„ ์œ„ํ•ด ์ถฉ๋Œ ๋ฐ ๋น„์ถฉ๋Œ ๋™์ž‘ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ชจ๋‘ ํ•„์š”ํ•œ ์ง€๋„ ํ•™์Šต ์•Œ๊ณ ๋ฆฌ์ฆ˜(์„œํฌํŠธ ๋ฒกํ„ฐ ๋จธ์‹  ํšŒ๊ท€, ์ผ์ฐจ์› ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ๊ธฐ๋ฐ˜)์„ ์‚ฌ์šฉํ•˜๋ฉฐ ๋‚˜๋จธ์ง€ ๋‘ ๊ฐœ์˜ ๋ฐฉ๋ฒ•์€ ํ•™์Šต์„ ์œ„ํ•ด ๋น„์ถฉ๋Œ ๋™์ž‘ ๋ฐ์ดํ„ฐ๋งŒ์„ ํ•„์š”๋กœ ํ•˜๋Š” ๋น„์ง€๋„ ์ด์ƒ์น˜ ํƒ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜(๋‹จ์ผ ํด๋ž˜์Šค ์„œํฌํŠธ ๋ฒกํ„ฐ ๋จธ์‹ , ์˜คํ† ์ธ์ฝ”๋” ๊ธฐ๋ฐ˜)์— ๊ธฐ๋ฐ˜ํ•œ๋‹ค. ๋กœ๋ด‡ ๋™์—ญํ•™ ๋ชจ๋ธ๊ณผ ๋ชจํ„ฐ ์ „๋ฅ˜ ์ธก์ •๊ฐ’๋งŒ์„ ํ•„์š”๋กœ ํ•˜๋ฉฐ ์ถ”๊ฐ€์ ์ธ ์™ธ๋ถ€ ์„ผ์„œ๋‚˜ ๋งˆ์ฐฐ ๋ชจ๋ธ๋ง, ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๊ฐ์ง€ ์ž„๊ณ„๊ฐ’์— ๋Œ€ํ•œ ์ˆ˜๋™ ์กฐ์ •์€ ํ•„์š”ํ•˜์ง€ ์•Š๋‹ค. ๋จผ์ € ์ง€๋„ ๋ฐ ๋น„์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์„ ํ•™์Šต์‹œํ‚ค๊ณ  ๊ฒ€์ฆํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š”, 6์ž์œ ๋„ ํ˜‘์—… ๋กœ๋ด‡ ๋จธ๋‹ˆํ“ฐ๋ ˆ์ดํ„ฐ๋ฅผ ์ด์šฉํ•ด ์ˆ˜์ง‘๋œ ๋กœ๋ด‡ ์ถฉ๋Œ ๋ฐ์ดํ„ฐ๋ฅผ ์„ค๋ช…ํ•œ๋‹ค. ์šฐ๋ฆฌ๊ฐ€ ๊ณ ๋ คํ•˜๋Š” ์ถฉ๋Œ ์‹œ๋‚˜๋ฆฌ์˜ค๋Š” ๊ฒฝ์„ฑ ์ถฉ๋Œ, ์—ฐ์„ฑ ์ถฉ๋Œ, ๋น„์ถฉ๋Œ ๋™์ž‘์œผ๋กœ, ๊ฒฝ์„ฑ ๋ฐ ์—ฐ์„ฑ ์ถฉ๋Œ์€ ๋ชจ๋‘ ๋™์ผํ•˜๊ฒŒ ์ถฉ๋Œ๋กœ ๊ฐ„์ฃผํ•œ๋‹ค. ๊ฐ์ง€ ์„ฑ๋Šฅ ๊ฒ€์ฆ์„ ์œ„ํ•œ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋Š” ์ด 787๊ฑด์˜ ์ถฉ๋Œ๊ณผ 62.4๋ถ„์˜ ๋น„์ถฉ๋Œ ๋™์ž‘์œผ๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์œผ๋ฉฐ, ์ด๋Š” ๋กœ๋ด‡์ด ๋žœ๋ค ์ ๋Œ€์  6๊ด€์ ˆ ๋™์ž‘์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋™์•ˆ ์ˆ˜์ง‘๋œ๋‹ค. ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ ์ค‘ ๋กœ๋ด‡์˜ ๋๋‹จ์—๋Š” ๋ฏธ๋ถ€์ฐฉ, 3.3 kg, 5.0 kg์˜ ์„ธ ๊ฐ€์ง€ ์œ ํ˜•์˜ ํŽ˜์ด๋กœ๋“œ๋ฅผ ๋ถ€์ฐฉํ•œ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ˆ˜์ง‘๋œ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•ด ์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์˜ ๊ฐ์ง€ ์„ฑ๋Šฅ์„ ์‹คํ—˜์ ์œผ๋กœ ๊ฒ€์ฆํ•œ๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ๋Š” ์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์ด ๊ฐ€๋ฒผ์šด ๋„คํŠธ์›Œํฌ๋ฅผ ์ด์šฉํ•ด ๊ด‘๋ฒ”์œ„ํ•œ ๊ฒฝ์„ฑ ๋ฐ ์—ฐ์„ฑ ์ถฉ๋Œ์„ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์ •ํ™•ํ•˜๊ฒŒ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ฃผ๋ฉฐ, ์ด๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ๋ถˆํ™•์‹ค์„ฑ๊ณผ ์ธก์ • ๋…ธ์ด์ฆˆ, ๋ฐฑ๋ž˜์‹œ, ๋ณ€ํ˜• ๋“ฑ ๋ชจ๋ธ๋ง๋˜์ง€ ์•Š์€ ํšจ๊ณผ๊นŒ์ง€ ๋ณด์ƒ๋จ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ๋˜ํ•œ ์„œํฌํŠธ ๋ฒกํ„ฐ ๋จธ์‹  ํšŒ๊ท€ ๊ธฐ๋ฐ˜ ๋ฐฉ๋ฒ•์€ ํ•˜๋‚˜์˜ ๊ฐ์ง€ ์ž„๊ณ„๊ฐ’์— ๋Œ€ํ•œ ์กฐ์ •๋งŒ ํ•„์š”ํ•˜๋ฉฐ ์ผ์ฐจ์› ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ๊ธฐ๋ฐ˜ ๋ฐฉ๋ฒ•์€ ํ•˜๋‚˜์˜ ์•„์›ƒํ’‹ ํ•„ํ„ฐ ํŒŒ๋ผ๋ฏธํ„ฐ์— ๋Œ€ํ•œ ์กฐ์ •๋งŒ ํ•„์š”ํ•œ๋ฐ, ๋‘ ๋ฐฉ๋ฒ• ๋ชจ๋‘ ์ง๊ด€์ ์ธ ๊ฐ๋„ ์กฐ์ •์ด ๊ฐ€๋Šฅํ•˜๋‹ค. 
๋‚˜์•„๊ฐ€ ์ผ๋ จ์˜ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ์‹คํ—˜์„ ํ†ตํ•ด ์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์˜ ์ผ๋ฐ˜ํ™” ์„ฑ๋Šฅ์„ ์‹คํ—˜์ ์œผ๋กœ ๊ฒ€์ฆํ•œ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ๋น„์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์˜ ๊ฐ์ง€ ์„ฑ๋Šฅ๊ณผ ์ผ๋ฐ˜ํ™” ์„ฑ๋Šฅ ๋˜ํ•œ ๊ฒ€์ฆํ•œ๋‹ค. ์‹คํ—˜ ๊ฒฐ๊ณผ๋Š” ๋น„์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ• ๋˜ํ•œ ๊ฐ€๋ฒผ์šด ๊ณ„์‚ฐ๊ณผ ํ•˜๋‚˜์˜ ๊ฐ์ง€ ์ž„๊ณ„๊ฐ’์— ๋Œ€ํ•œ ์กฐ์ •๋งŒ์œผ๋กœ ๋‹ค์–‘ํ•œ ๊ฒฝ์„ฑ ๋ฐ ์—ฐ์„ฑ ์ถฉ๋Œ์„ ์‹ค์‹œ๊ฐ„์œผ๋กœ ๊ฐ•์ธํ•˜๊ฒŒ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ฃผ๋ฉฐ, ์ด๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ๋ง๋˜์ง€ ์•Š์€ ๋งˆ์ฐฐ์„ ํฌํ•จํ•œ ๋ถˆํ™•์‹คํ•œ ๋™์—ญํ•™์  ํšจ๊ณผ๋ฅผ ๋น„์ง€๋„ ํ•™์Šต์œผ๋กœ๋„ ๋ณด์ƒํ•  ์ˆ˜ ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์ด ๋” ๋‚˜์€ ๊ฐ์ง€ ์„ฑ๋Šฅ์„ ๋ณด์ด์ง€๋งŒ, ๋น„์ง€๋„ ๊ฐ์ง€ ๋ฐฉ๋ฒ•์€ ํ•™์Šต์„ ์œ„ํ•ด ๋น„์ถฉ๋Œ ๋™์ž‘ ๋ฐ์ดํ„ฐ๋งŒ์„ ํ•„์š”๋กœ ํ•˜๋ฉฐ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ์ถฉ๋Œ์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ํ•„์š”๋กœ ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๋Œ€๋Ÿ‰ ์ƒ์‚ฐ๋˜๋Š” ์‚ฐ์—…์šฉ ๋กœ๋ด‡์— ๋” ์ ํ•ฉํ•˜๋‹ค.Collaborative robot manipulators operating in dynamic and unstructured environments shared with humans require fast and accurate detection of collisions, which can range from sharp impacts (hard collisions) to pulling and pushing motions of longer duration (soft collisions). When using dynamics model-based detection methods that estimate the external joint torque with motor current measurements, proper treatment for friction in the motors is required, such as accurate modeling and identification of friction parameters. Although highly effective when done correctly, modeling and identifying the dynamics and friction parameters, and manually setting multiple detection thresholds require considerable effort, making them difficult to be replicated for mass-produced industrial robots. There may also still exist unmodeled effects or uncertainties in the dynamics even after proper identification, e.g., backlash, elasticity. This dissertation presents a total of four learning-based collision detection methods for robot manipulators as a means of sidestepping some of the implementation difficulties of pure model-based methods and compensating for uncertain dynamic effects. Two methods use supervised learning algorithms โ€“ support vector machine regression and a one-dimensional convolutional neural network-based โ€“ that require both the collision and collision-free motion data for training. The other two methods are based on unsupervised anomaly detection algorithms โ€“ a one-class support vector machine and an autoencoder-based โ€“ that require only the collision-free motion data for training. Only the motor current measurements together with a robot dynamics model are required while no additional external sensors, friction modeling, or manual tuning of multiple detection thresholds are needed. We first describe the robot collision dataset collected with a six-dof collaborative robot manipulator, which is used for training and validating our supervised and unsupervised detection methods. The collision scenarios we consider are hard collisions, soft collisions, and collision-free, where both hard and soft collisions are treated in the same manner as just collisions. The test dataset for detection performance verification includes a total of 787 collisions and 62.4 minutes of collision-free motions, all collected while the robot is executing random point-to-point six-joint motions. During data collection, three types of payloads are attached to the end-effector: no payload, 3.3 kg payload, and 5.0 kg payload. 
    The detection performance of our supervised detection methods is then experimentally verified on the collected test dataset. The results demonstrate that our supervised detection methods can accurately detect a wide range of hard and soft collisions in real time using a lightweight network, compensating for uncertainties in the model parameters as well as unmodeled effects such as friction, measurement noise, backlash, and deformation. Moreover, the SVMR-based method requires only one constant detection threshold to be tuned, while the 1-D CNN-based method requires only one output filter parameter; both allow intuitive sensitivity tuning. Furthermore, the generalization capability of our supervised detection methods is verified with a set of simulation experiments. Finally, our unsupervised detection methods are validated on the same test dataset, verifying both their detection performance and their generalization capability. The experimental results show that our unsupervised detection methods are also able to robustly detect a variety of hard and soft collisions in real time with very light computation and only one constant detection threshold to tune, confirming that uncertain dynamic effects, including unmodeled friction, can also be compensated with unsupervised learning. Although our supervised detection methods show better detection performance, the unsupervised methods are more practical for mass-produced industrial robots, since they require only collision-free motion data for training and no knowledge of every possible type of collision that can occur.
    Table of contents:
    1 Introduction
      1.1 Model-Free Methods
      1.2 Model-Based Methods
      1.3 Learning-Based Methods
        1.3.1 Using Supervised Learning Algorithms
        1.3.2 Using Unsupervised Learning Algorithms
      1.4 Contributions of This Dissertation
        1.4.1 Supervised Learning-Based Model-Compensating Detection
        1.4.2 Unsupervised Learning-Based Model-Compensating Detection
        1.4.3 Comparison with Existing Detection Methods
      1.5 Organization of This Dissertation
    2 Preliminaries
      2.1 Introduction
      2.2 Robot Dynamics
      2.3 Momentum Observer-Based Collision Detection
      2.4 Supervised Learning Algorithms
        2.4.1 Support Vector Machine Regression
        2.4.2 One-Dimensional Convolutional Neural Network
      2.5 Unsupervised Anomaly Detection
      2.6 One-Class Support Vector Machine
      2.7 Autoencoder-Based Anomaly Detection
        2.7.1 Autoencoder Network Architecture and Training
        2.7.2 Anomaly Detection Using Autoencoders
    3 Robot Collision Data
      3.1 Introduction
      3.2 True Collision Index Labeling
      3.3 Collision Scenarios
      3.4 Monitoring Signal
      3.5 Signal Normalization and Sampling
      3.6 Test Data for Detection Performance Verification
    4 Supervised Learning-Based Model-Compensating Detection
      4.1 Introduction
      4.2 SVMR-Based Collision Detection
        4.2.1 Input Feature Vector Design
        4.2.2 SVMR Training
        4.2.3 Collision Detection Sensitivity Adjustment
      4.3 1-D CNN-Based Collision Detection
        4.3.1 Network Input Design
        4.3.2 Network Architecture and Training
        4.3.3 An Output Filtering Method to Reduce False Alarms
      4.4 Collision Detection Performance Criteria
        4.4.1 Area Under the Precision-Recall Curve (PRAUC)
        4.4.2 Detection Delay and Number of Detection Failures
      4.5 Collision Detection Performance Analysis
        4.5.1 Global Performance with Varying Thresholds
        4.5.2 Detection Delay and Number of Detection Failures
        4.5.3 Real-Time Inference
      4.6 Generalization Capability Analysis
        4.6.1 Generalization to Small Perturbations
        4.6.2 Generalization to an Unseen Payload
    5 Unsupervised Learning-Based Model-Compensating Detection
      5.1 Introduction
      5.2 OC-SVM-Based Collision Detection
        5.2.1 Input Feature Vector
        5.2.2 OC-SVM Training
        5.2.3 Collision Detection with the Trained OC-SVM
      5.3 Autoencoder-Based Collision Detection
        5.3.1 Network Input and Output
        5.3.2 Network Architecture and Training
        5.3.3 Collision Detection with the Trained Autoencoder
      5.4 Collision Detection Performance Analysis
        5.4.1 Global Performance with Varying Thresholds
        5.4.2 Detection Delay and Number of Detection Failures
        5.4.3 Comparison with Supervised Learning-Based Methods
        5.4.4 Real-Time Inference
      5.5 Generalization Capability Analysis
        5.5.1 Generalization to Small Perturbations
        5.5.2 Generalization to an Unseen Payload
    6 Conclusion
      6.1 Summary and Discussion
      6.2 Future Work
    A Appendix
      A.1 SVM-Based Classification of Detected Collisions
      A.2 Direct Estimation-Based Detection Methods
      A.3 Model-Independent Supervised Detection Methods
      A.4 Generalization to Large Changes in the Dynamics Model
    Bibliography
    Abstract
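    A minimal sketch of the unsupervised variant described above, assuming windows of a momentum-observer monitoring signal are stacked into feature vectors; the window length, hyperparameters, and synthetic training data are illustrative assumptions, not the dissertation's settings.

        import numpy as np
        from sklearn.svm import OneClassSVM

        def make_windows(signal, window=5):
            """Stack sliding windows of the per-joint monitoring signal:
            (T, n_joints) -> (T - window + 1, window * n_joints)."""
            T, _ = signal.shape
            return np.stack([signal[t:t + window].ravel()
                             for t in range(T - window + 1)])

        # Training uses collision-free motion only (here: synthetic placeholder data).
        rng = np.random.default_rng(0)
        collision_free = rng.normal(0.0, 0.05, size=(2000, 6))
        X_train = make_windows(collision_free)

        # nu bounds the fraction of training points treated as outliers.
        detector = OneClassSVM(kernel="rbf", nu=0.005, gamma="scale").fit(X_train)

        # At run time a single constant threshold on the decision function flags
        # collisions; lowering it makes detection more sensitive.
        THRESHOLD = 0.0
        def is_collision(window_features):
            score = detector.decision_function(window_features.reshape(1, -1))[0]
            return score < THRESHOLD

    The supervised SVMR and 1-D CNN variants follow the same monitoring-signal pipeline but are trained on labeled collision and collision-free data, again with a single threshold or output filter parameter for sensitivity tuning.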

    Collision Detection and Contact Point Estimation Using Virtual Joint Torque Sensing Applied to a Cobot

    In physical human-robot interaction (pHRI) it is essential to reliably estimate and localize contact forces between the robot and the environment. In this paper, a complete contact detection, isolation, and reaction scheme is presented and tested on a new 6-dof industrial collaborative robot. We combine two popular methods, based on monitoring energy and generalized momentum, to detect and isolate collisions on the whole robot body in a more robust way. The experimental results show the effectiveness of our implementation on the LARA5 cobot, which relies only on motor current and joint encoder measurements. For validation purposes, contact forces are also measured using an external GTE CoboSafe sensor. After a successful collision detection, the contact point location is isolated by combining the generalized-momentum residual method with a contact particle filter (CPF) scheme. We show, for the first time, a successful implementation of such a combination on a real robot without relying on joint torque sensor measurements.
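    For reference, a minimal discrete-time sketch of the generalized-momentum residual used for detection and isolation; the gain, dynamics terms, and interface are illustrative, and the energy-based observer and CPF combination reported above are not reproduced here.

        import numpy as np

        class MomentumResidual:
            """First-order generalized-momentum observer: the residual r converges
            to the external joint torques using only proprioceptive signals
            (motor currents and encoders), without joint torque sensors.

                r = K_O * ( p - p(0) - integral( tau_m + C(q, dq)^T dq - g(q) + r ) dt )
            """
            def __init__(self, n_joints, gain, dt):
                self.K = gain * np.eye(n_joints)
                self.dt = dt
                self.integral = np.zeros(n_joints)
                self.r = np.zeros(n_joints)
                self.p0 = None

            def update(self, p, tau_m, coriolis_T_dq, gravity):
                """p = M(q) @ dq is the generalized momentum; tau_m is the motor
                torque reconstructed from current measurements."""
                if self.p0 is None:
                    self.p0 = p.copy()
                self.integral += (tau_m + coriolis_T_dq - gravity + self.r) * self.dt
                self.r = self.K @ (p - self.p0 - self.integral)
                return self.r

    A collision is flagged when a component of r exceeds its threshold, after which the isolation stage (here, the residual combined with the contact particle filter) localizes the contact point.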

    NASA Center for Intelligent Robotic Systems for Space Exploration

    NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential to the success of these space exploration programs is Intelligent Robotic Systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five-year, $5.5 million NASA grant based on a proposal submitted by a team from the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the 1987 merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME, AE&M). This report examines the activities centered at CIRSSE.

    Motion Planning for the On-orbit Grasping of a Non-cooperative Target Satellite with Collision Avoidance

    A method for grasping a tumbling, non-cooperative target satellite is presented, based on nonlinear optimization with collision avoidance. Motion constraints on the robot joints as well as on the end-effector forces are considered. The cost functions of interest address both the robustness of the planned solutions during the tracking phase and the actuation energy. The method is applied in simulation to different operational scenarios.
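    A hedged sketch of the kind of nonlinear program involved, using scipy with a placeholder kinematic model, a spherical keep-out region, and a motion-smoothness term standing in for actuation energy; the end-effector force constraints and the actual arm and target models from the paper are omitted, and every function and number below is an assumption for illustration.

        import numpy as np
        from scipy.optimize import minimize

        n_joints, n_steps, dt = 6, 10, 0.2

        def forward_kinematics(q):
            # Placeholder FK; a real implementation would use the arm's kinematic model.
            return np.array([np.sum(np.cos(q)), np.sum(np.sin(q)), 0.1 * np.sum(q)])

        def target_position(t):
            # Grasp point on the tumbling target as a function of time (illustrative).
            return np.array([1.5 + 0.2 * np.cos(2 * t), 0.2 * np.sin(2 * t), 0.8])

        def clearance(q):
            # Signed distance from the end-effector to a spherical keep-out region.
            return np.linalg.norm(forward_kinematics(q) - np.array([1.0, 0.0, 0.8])) - 0.3

        def cost(x):
            Q = x.reshape(n_steps, n_joints)
            tracking = sum(np.sum((forward_kinematics(Q[k]) - target_position(k * dt)) ** 2)
                           for k in range(n_steps))
            effort = np.sum(np.diff(Q, axis=0) ** 2)   # smoothness proxy for actuation energy
            return tracking + 0.1 * effort

        constraints = [{"type": "ineq",
                        "fun": (lambda x, k=k: clearance(x.reshape(n_steps, n_joints)[k]))}
                       for k in range(n_steps)]
        bounds = [(-np.pi, np.pi)] * (n_steps * n_joints)   # joint motion constraints
        solution = minimize(cost, np.zeros(n_steps * n_joints), method="SLSQP",
                            bounds=bounds, constraints=constraints)

    In the paper the cost additionally rewards robustness of the planned solution during the tracking phase, and the constraints include bounds on the end-effector forces.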
    • โ€ฆ
    corecore