
    Towards Assistive Feeding with a General-Purpose Mobile Manipulator

    General-purpose mobile manipulators have the potential to serve as a versatile form of assistive technology. However, their complexity creates challenges, including the risk of being too difficult to use. We present a proof-of-concept robotic system for assistive feeding that consists of a Willow Garage PR2, a high-level web-based interface, and specialized autonomous behaviors for scooping and feeding yogurt. As a step towards use by people with disabilities, we evaluated our system with 5 able-bodied participants. All 5 successfully ate yogurt using the system and reported high rates of success for the system's autonomous behaviors. Also, Henry Evans, a person with severe quadriplegia, operated the system remotely to feed an able-bodied person. In general, people who operated the system, including Henry, reported that it was easy to use. The feeding system also incorporates corrective actions designed to be triggered either autonomously or by the user. In an offline evaluation using data collected with the feeding system, a new version of our multimodal anomaly detection system outperformed prior versions.
    Comment: This short 4-page paper was accepted and presented as a poster on May 16, 2016 at the ICRA 2016 workshop on 'Human-Robot Interfaces for Enhanced Physical Interactions' organized by Arash Ajoudani, Barkan Ugurlu, Panagiotis Artemiadis, and Jun Morimoto. It was peer reviewed by one reviewer.

    Investigating pre-touch sensing to predict grip success in compliant grippers using machine learning techniques

    This work explores the application of pre-touch sensing to a compliant gripper in order to navigate the last few centimeters while grasping fruit in an occluded, cluttered environment. Machine learning was used in conjunction with pre-touch sensors to provide qualitative feedback about the gripper's likelihood of picking the target fruit prior to contact. Three compliant grippers were each designed to pick a specific fruit (miracle berries, cherry tomatoes, and small figs) without damaging them. These grippers were designed to be mounted on the hybrid soft-rigid arm of a mobile field robot. IR reflectance, time-of-flight, and color sensors were used as pre-touch sensors and arranged on the gripper in various combinations to explore the contribution of each sensor. The gripper-sensor system was trained by positioning it relative to a dummy fruit using a 6-DOF arm and gripping the target. Using the training data, five machine learning methods were explored: nearest neighbor, decision trees, support vector machines, multi-layer perceptrons, and a naive Bayes classifier. The various sensor configuration-machine learning combinations were tested and evaluated based on their ability to predict grip success. Additional training was conducted to demonstrate the ability to differentiate fruit from foreign matter (e.g. leaves) in the gripper opening. Time-of-flight sensors using nearest neighbor and support vector machines, along with the set of all three sensors using support vector machines and multi-layer perceptrons, showed the highest prediction precision (= 90%), with the color sensor playing a key role in detecting foreign objects. The machine learning methods were similar in their ability to predict grip success, with nearest neighbor showing the best overall results, while sensor 'richness' played an important role in differentiating the sensors, with the three-sensor combination showing the best results.
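    To make the training-and-evaluation loop concrete, the sketch below is a minimal, hypothetical version of it in scikit-learn: an invented five-feature layout (IR reflectance, time-of-flight distance, and RGB color per grasp attempt) and random labels stand in for the paper's sensor data, and only two of the five classifier families are shown.

        # Minimal, hypothetical sketch of the grip-success prediction pipeline.
        # The feature layout and the random data stand in for the paper's real
        # sensor recordings; hyperparameters are illustrative.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Columns: IR reflectance, time-of-flight distance, color R, G, B.
        X = rng.uniform(size=(200, 5))            # one row per grasp attempt
        y = rng.integers(0, 2, size=200)          # 1 = grip succeeded, 0 = failed

        # Two of the five classifier families compared in the paper.
        models = {
            "nearest neighbor": make_pipeline(StandardScaler(),
                                              KNeighborsClassifier(n_neighbors=3)),
            "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5, scoring="precision")
            print(f"{name}: mean cross-validated precision = {scores.mean():.2f}")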

    ์ง€๋„ ๋ฐ ๋น„์ง€๋„ ํ•™์Šต์„ ์ด์šฉํ•œ ๋กœ๋ด‡ ๋จธ๋‹ˆํ“ฐ๋ ˆ์ดํ„ฐ ์ถฉ๋Œ ๊ฐ์ง€

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, February 2022. Advisor: Jongwoo Park.
    Collaborative robot manipulators operating in dynamic and unstructured environments shared with humans require fast and accurate detection of collisions, which can range from sharp impacts (hard collisions) to pulling and pushing motions of longer duration (soft collisions). When using dynamics model-based detection methods that estimate the external joint torque from motor current measurements, proper treatment of friction in the motors is required, such as accurate modeling and identification of friction parameters. Although highly effective when done correctly, modeling and identifying the dynamics and friction parameters, and manually setting multiple detection thresholds, require considerable effort, making them difficult to replicate for mass-produced industrial robots. Unmodeled effects or uncertainties, e.g., backlash and elasticity, may also remain in the dynamics even after proper identification. This dissertation presents a total of four learning-based collision detection methods for robot manipulators as a means of sidestepping some of the implementation difficulties of pure model-based methods and compensating for uncertain dynamic effects. Two methods use supervised learning algorithms – one based on support vector machine regression (SVMR) and one on a one-dimensional convolutional neural network (1-D CNN) – that require both collision and collision-free motion data for training. The other two methods are based on unsupervised anomaly detection algorithms – a one-class support vector machine (OC-SVM) and an autoencoder – that require only collision-free motion data for training. Only the motor current measurements together with a robot dynamics model are required; no additional external sensors, friction modeling, or manual tuning of multiple detection thresholds are needed. We first describe the robot collision dataset collected with a six-dof collaborative robot manipulator, which is used for training and validating our supervised and unsupervised detection methods. The collision scenarios we consider are hard collisions, soft collisions, and collision-free motion, where hard and soft collisions are both treated simply as collisions. The test dataset for detection performance verification includes a total of 787 collisions and 62.4 minutes of collision-free motions, all collected while the robot executes random point-to-point six-joint motions. During data collection, three types of payloads are attached to the end-effector: no payload, a 3.3 kg payload, and a 5.0 kg payload.
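    The model-based signal underlying all four learning-based methods is an external joint torque estimate computed from motor current and the dynamics model; the standard tool for this is the momentum observer (Section 2.3 in the outline below). The following is a minimal discrete-time sketch of such a residual, assuming M(q), C(q, qdot), and g(q) are provided by a dynamics library; the gain and names are illustrative, not taken from the dissertation.

        # Minimal sketch (not the dissertation's code): one discrete step of a
        # momentum-observer residual r, which approximates the external joint
        # torque. M, C, g are assumed to come from a robot dynamics library;
        # tau_m is the joint torque inferred from motor current measurements.
        import numpy as np

        def momentum_observer_step(M, C, g, tau_m, qdot, state, K_o, dt):
            """Return (r, new_state); state = (integral, r_prev)."""
            integral, r_prev = state
            p = M @ qdot                              # generalized momentum p = M(q) qdot
            integral = integral + (tau_m + C.T @ qdot - g + r_prev) * dt
            r = K_o @ (p - integral)                  # residual, approx. external torque
            return r, (integral, r)

        # Hypothetical use with a 6-joint robot at a 1 kHz control rate:
        n, dt = 6, 1e-3
        K_o = 25.0 * np.eye(n)                        # observer gain (illustrative)
        state = (np.zeros(n), np.zeros(n))            # assumes p(0) = 0 (robot at rest)
        # r, state = momentum_observer_step(M, C, g, tau_m, qdot, state, K_o, dt)

    In a pure model-based detector, a collision is flagged whenever this residual exceeds hand-set per-joint thresholds; the dissertation's learning-based methods instead feed such signals to trained models.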
    Then the detection performance of our supervised detection methods is experimentally verified on the collected test dataset. The results demonstrate that our supervised detection methods can accurately detect a wide range of hard and soft collisions in real time using a light network, compensating for uncertainties in the model parameters as well as unmodeled effects such as friction, measurement noise, backlash, and deformation. Moreover, the SVMR-based method requires only one constant detection threshold to be tuned, while the 1-D CNN-based method requires only one output filter parameter to be tuned; both allow intuitive sensitivity tuning. Furthermore, the generalization capability of our supervised detection methods is experimentally verified with a set of simulation experiments. Finally, our unsupervised detection methods are validated on the same test dataset; both their detection performance and their generalization capability are verified. The experimental results show that our unsupervised detection methods are also able to robustly detect a variety of hard and soft collisions in real time, with very light computation and only one constant detection threshold to be tuned, validating that uncertain dynamic effects, including unmodeled friction, can also be compensated with unsupervised learning. Although our supervised detection methods show better detection performance, our unsupervised detection methods are more practical for mass-produced industrial robots, since they require only collision-free motion data for training and do not require knowledge of every possible type of collision that can occur.

    Table of contents:
    1 Introduction
      1.1 Model-Free Methods
      1.2 Model-Based Methods
      1.3 Learning-Based Methods
        1.3.1 Using Supervised Learning Algorithms
        1.3.2 Using Unsupervised Learning Algorithms
      1.4 Contributions of This Dissertation
        1.4.1 Supervised Learning-Based Model-Compensating Detection
        1.4.2 Unsupervised Learning-Based Model-Compensating Detection
        1.4.3 Comparison with Existing Detection Methods
      1.5 Organization of This Dissertation
    2 Preliminaries
      2.1 Introduction
      2.2 Robot Dynamics
      2.3 Momentum Observer-Based Collision Detection
      2.4 Supervised Learning Algorithms
        2.4.1 Support Vector Machine Regression
        2.4.2 One-Dimensional Convolutional Neural Network
      2.5 Unsupervised Anomaly Detection
      2.6 One-Class Support Vector Machine
      2.7 Autoencoder-Based Anomaly Detection
        2.7.1 Autoencoder Network Architecture and Training
        2.7.2 Anomaly Detection Using Autoencoders
    3 Robot Collision Data
      3.1 Introduction
      3.2 True Collision Index Labeling
      3.3 Collision Scenarios
      3.4 Monitoring Signal
      3.5 Signal Normalization and Sampling
      3.6 Test Data for Detection Performance Verification
    4 Supervised Learning-Based Model-Compensating Detection
      4.1 Introduction
      4.2 SVMR-Based Collision Detection
        4.2.1 Input Feature Vector Design
        4.2.2 SVMR Training
        4.2.3 Collision Detection Sensitivity Adjustment
      4.3 1-D CNN-Based Collision Detection
        4.3.1 Network Input Design
        4.3.2 Network Architecture and Training
        4.3.3 An Output Filtering Method to Reduce False Alarms
      4.4 Collision Detection Performance Criteria
        4.4.1 Area Under the Precision-Recall Curve (PRAUC)
        4.4.2 Detection Delay and Number of Detection Failures
      4.5 Collision Detection Performance Analysis
        4.5.1 Global Performance with Varying Thresholds
        4.5.2 Detection Delay and Number of Detection Failures
        4.5.3 Real-Time Inference
      4.6 Generalization Capability Analysis
        4.6.1 Generalization to Small Perturbations
        4.6.2 Generalization to an Unseen Payload
    5 Unsupervised Learning-Based Model-Compensating Detection
      5.1 Introduction
      5.2 OC-SVM-Based Collision Detection
        5.2.1 Input Feature Vector
        5.2.2 OC-SVM Training
        5.2.3 Collision Detection with the Trained OC-SVM
      5.3 Autoencoder-Based Collision Detection
        5.3.1 Network Input and Output
        5.3.2 Network Architecture and Training
        5.3.3 Collision Detection with the Trained Autoencoder
      5.4 Collision Detection Performance Analysis
        5.4.1 Global Performance with Varying Thresholds
        5.4.2 Detection Delay and Number of Detection Failures
        5.4.3 Comparison with Supervised Learning-Based Methods
        5.4.4 Real-Time Inference
      5.5 Generalization Capability Analysis
        5.5.1 Generalization to Small Perturbations
        5.5.2 Generalization to an Unseen Payload
    6 Conclusion
      6.1 Summary and Discussion
      6.2 Future Work
    A Appendix
      A.1 SVM-Based Classification of Detected Collisions
      A.2 Direct Estimation-Based Detection Methods
      A.3 Model-Independent Supervised Detection Methods
      A.4 Generalization to Large Changes in the Dynamics Model
    Bibliography
    Abstract
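    As a purely illustrative sketch of what a light supervised detector like the 1-D CNN variant (Section 4.3 above) might look like, the following maps a short window of per-joint monitoring signals to a collision probability; the window length, layer sizes, and choice of PyTorch are assumptions, not the dissertation's architecture.

        # Illustrative sketch of a light 1-D CNN collision classifier over a
        # sliding window of per-joint monitoring signals (e.g., momentum-observer
        # residuals). Window length, channel counts, and framework are assumed.
        import torch
        import torch.nn as nn

        class CollisionNet(nn.Module):
            def __init__(self, n_joints: int = 6, window: int = 32):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_joints, 16, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.Conv1d(16, 16, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),          # pool over the time axis
                )
                self.head = nn.Linear(16, 1)          # single collision logit

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, n_joints, window) window of monitoring signals
                z = self.features(x).squeeze(-1)
                return self.head(z)

        net = CollisionNet()
        window = torch.randn(1, 6, 32)                # one window of 6-joint signals
        p_collision = torch.sigmoid(net(window))      # collision probability

    The unsupervised counterparts (Section 5) would instead be trained only on collision-free windows, for example an autoencoder whose reconstruction error is compared against a single tuned detection threshold.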

    Intelligent strategies for mobile robotics in laboratory automation

    In this thesis, a new intelligent framework is presented for mobile robots in laboratory automation. It includes: a new multi-floor indoor navigation method and an intelligent multi-floor path planner; a new signal filtering method that lets the robots forecast their indoor coordinates; a new human-feature-based strategy for smart robot-human collision avoidance; a new robot power forecasting method for deciding distributed transportation tasks; and a new blind approach to arm manipulation for the robots.

    FBG-Based Triaxial Force Sensor Integrated with an Eccentrically Configured Imaging Probe for Endoluminal Optical Biopsy

    Accurate force sensing is important for endoluminal intervention in terms of both safety and lesion targeting. This paper develops an FBG-based force sensor for robotic bronchoscopy by configuring three FBG sensors at the lateral side of a conical substrate. It allows a large and eccentric inner lumen for the interventional instrument, enabling a flexible imaging probe inside to perform optical biopsy. The force sensor is embodied with a laser-profiled continuum robot, and thermal drift is fully compensated by three temperature sensors integrated on the circumferential surface of the sensor substrate. Different decoupling approaches are investigated, and nonlinear decoupling is adopted based on a cross-validated SVM with a Gaussian kernel function, achieving an accuracy of 10.58 mN, 14.57 mN, and 26.32 mN along the X, Y, and Z axes, respectively. A tissue test is also conducted to further demonstrate the feasibility of the developed triaxial force sensor.
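    As an illustration of the decoupling step, the sketch below fits one Gaussian-kernel support vector regressor per force axis with cross-validated hyperparameters, mapping temperature-compensated wavelength shifts to triaxial force; the calibration data layout and hyperparameter grid are assumptions rather than the paper's actual procedure.

        # Illustrative sketch of nonlinear decoupling: one Gaussian (RBF) kernel
        # SVR per force axis, with cross-validated hyperparameters. The
        # calibration data layout and parameter grid are assumptions.
        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        dlam = rng.normal(size=(500, 3))      # temperature-compensated FBG wavelength shifts
        F = rng.normal(size=(500, 3))         # reference forces Fx, Fy, Fz (e.g., load cell)

        svr = GridSearchCV(SVR(kernel="rbf"),
                           {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=5)
        model = MultiOutputRegressor(svr).fit(dlam, F)   # one tuned SVR per axis
        F_hat = model.predict(dlam[:5])                  # decoupled force estimates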