Robot-aided cloth classification using depth information and CNNs
The final publication is available at link.springer.com.

We present a system to address the problem of classifying garments from a pile of clothes. The system uses a robot arm to extract a garment and show it to a depth camera. Using only depth images of a partial view of the garment as input, a deep convolutional neural network is trained to classify different types of garments. The robot can rotate the garment about the vertical axis to provide different views, increasing prediction confidence and avoiding confusion. In addition to achieving very high classification scores, our system provides a fast and occlusion-robust solution compared to previous cloth-classification approaches that match the sensed data against a database.
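The abstract describes rotating the garment to obtain further views and enlarge prediction confidence. A minimal sketch of one such fusion scheme, assuming the per-view softmax vectors are simply averaged and rotation stops once a confidence threshold is reached (both our assumptions; the paper does not specify its exact fusion rule):

```python
import numpy as np

def aggregate_views(view_probs, threshold=0.9):
    """Fuse per-view class probabilities by simple averaging and stop
    early once the fused prediction is confident enough.

    view_probs : iterable of 1-D softmax vectors, one per view.
    Returns (predicted_class, fused_confidence, views_used).
    """
    running = None
    for i, p in enumerate(view_probs, start=1):
        p = np.asarray(p, dtype=float)
        running = p if running is None else running + p
        fused = running / i                    # mean softmax over views so far
        c = int(np.argmax(fused))
        if fused[c] >= threshold:              # confident enough: stop rotating
            return c, float(fused[c]), i
    return c, float(fused[c]), i               # used all available views
```

In this toy scheme an ambiguous first view simply triggers another rotation, which mirrors the behaviour the abstract describes.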
Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting
This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features, BSP (B-Spline Patch) and TSD (Topology Spatial Distances), for this task. The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrated the category recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of the proposed method, we built a high-resolution RGBD clothing dataset of 50 clothing items in 5 categories sampled in random configurations (a total of 2,100 clothing samples). Experimental results show that our approach reaches 83.2% accuracy when classifying clothing items previously unseen during training, advancing beyond the previous state of the art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. The proposed sorting system achieves reasonable sorting success rates with single-shot perception.

Comment: 9 pages, accepted by IROS201
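The LLC step that encodes the local BSP features can be sketched in its common approximated form (the k-nearest-neighbour variant): each descriptor is reconstructed from its k nearest codebook atoms under a sum-to-one constraint. The codebook, k, and the regulariser beta below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Approximated LLC: reconstruct descriptor x from its k nearest
    codebook atoms under a sum-to-one constraint; all other
    coefficients stay exactly zero.

    x : (d,) local descriptor; codebook : (M, d) array of atoms.
    Returns an (M,) sparse code.
    """
    x = np.asarray(x, dtype=float)
    B = np.asarray(codebook, dtype=float)
    # indices of the k nearest atoms (Euclidean distance)
    idx = np.argsort(((B - x) ** 2).sum(axis=1))[:k]
    z = B[idx] - x                             # atoms shifted so x is the origin
    C = z @ z.T                                # local covariance
    C += beta * np.trace(C) * np.eye(k) + 1e-12 * np.eye(k)   # regularise
    w = np.linalg.solve(C, np.ones(k))         # solve the constrained LS system
    w /= w.sum()                               # enforce sum(w) == 1
    code = np.zeros(len(B))
    code[idx] = w
    return code
```

The resulting codes are sparse and local, which is what makes them suitable for pooling into the fused feature the abstract describes.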
Recognising the Clothing Categories from Free-Configuration Using Gaussian-Process-Based Interactive Perception
In this paper, we propose a Gaussian-Process-based interactive perception approach for recognising highly-wrinkled clothes. We have integrated this recognition method within a clothes sorting pipeline for the pre-washing stage of an autonomous laundering process. Our approach differs from reported clothing manipulation approaches by allowing the robot to update its perception confidence via numerous interactions with the garments. The classifiers predominantly reported in clothing perception studies (e.g. SVM, Random Forest) do not provide true classification probabilities, due to their inherent structure. In contrast, probabilistic classifiers (of which the Gaussian Process is a popular example) are able to provide predictive probabilities. In our approach, we employ multi-class Gaussian Process classification, using the Laplace approximation for posterior inference and optimising hyper-parameters via marginal likelihood maximisation. Our experimental results show that our approach is able to recognise unknown garments from highly-occluded and wrinkled configurations and demonstrates a substantial improvement over non-interactive perception approaches.
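The core ingredients named in the abstract, Gaussian Process classification via the Laplace approximation plus confidence updates over repeated interactions, can be sketched from scratch. This is a binary illustration using Rasmussen and Williams' Algorithm 3.1 for mode finding, with a naive Bayesian odds update across "interactions"; the toy 2-D features, the variance-free squashing at prediction time, and the fusion rule are our assumptions, not the authors' pipeline:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel with length scale ell."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def laplace_grad(K, t, iters=30):
    """Newton iteration for the posterior mode of a GP with a logistic
    likelihood (Rasmussen & Williams, Alg. 3.1); targets t in {0, 1}.
    Returns grad log p(y|f) at the mode, which defines the predictive mean."""
    n = len(t)
    f = np.zeros(n)
    for _ in range(iters):
        pi = 1.0 / (1.0 + np.exp(-f))
        W = pi * (1.0 - pi)
        sW = np.sqrt(W)
        L = np.linalg.cholesky(np.eye(n) + sW[:, None] * K * sW[None, :])
        b = W * f + (t - pi)
        a = b - sW * np.linalg.solve(L.T, np.linalg.solve(L, sW * (K @ b)))
        f = K @ a
    return t - 1.0 / (1.0 + np.exp(-f))

def predict_proba(Xtr, t, Xte, ell=1.0):
    K = rbf(Xtr, Xtr, ell) + 1e-8 * np.eye(len(Xtr))
    grad = laplace_grad(K, t)
    mean = rbf(Xte, Xtr, ell) @ grad        # latent predictive mean
    return 1.0 / (1.0 + np.exp(-mean))      # MAP squash (variance ignored)

rng = np.random.default_rng(1)
Xtr = np.concatenate([rng.normal(-2, 0.7, (25, 2)), rng.normal(2, 0.7, (25, 2))])
ttr = np.repeat([0.0, 1.0], 25)

# three "interactions" with the same class-1 garment, fused by Bayesian odds
p = 0.5
for _ in range(3):
    q = predict_proba(Xtr, ttr, rng.normal(2, 0.7, (1, 2)))[0]
    p = p * q / (p * q + (1.0 - p) * (1.0 - q))
print(round(float(p), 3))
```

Because the classifier outputs genuine probabilities, each interaction sharpens the fused belief, which is exactly the property the abstract contrasts against SVMs and Random Forests.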
Garment manipulation dataset for robot learning by demonstration through a virtual reality framework
Being able to teach complex capabilities, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled using learning-from-demonstration datasets. The few garment folding datasets available nowadays to the robotics research community are either gathered from human demonstrations or generated through simulation. The former have the huge problem of perceiving human action and transferring it to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, resulting in far-from-realistic movements. In this article, we present a reduced but very accurate dataset of human cloth folding demonstrations. The dataset is collected through a novel virtual reality (VR) framework we propose, based on Unity's 3D platform and the use of an HTC Vive Pro system. The framework is capable of simulating very realistic garments while allowing users to interact with them, in real time, through handheld controllers. By doing so, and thanks to the immersive experience, our framework closes the gap between the human and robot perception-action loop, while simplifying data capture and resulting in more realistic samples.

This work was developed in the context of the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 741930), and is also supported by the BURG project PCI2019-103447 funded by MCIN/AEI/10.13039/501100011033 and by the "European Union".
Research on Cloth-Covering Tasks by a Dual-Arm Robot

The aim of this research is to model the task of wrapping an object in cloth (the covering task) and to realise such covering with a robot. This thesis proposes modelling the covering task on the basis of the concept of a "goal line": a human first indicates a rough way of wrapping, the covering task is then planned from the shapes of the cloth and the object, and finally the robot's motions are generated, so that the robot carries out the covering task.

In recent years factories have been increasingly robotised, yet many tasks still cannot be robotised: tasks so dexterous and complex that only humans can perform them, or tasks that humans simply perform more efficiently than robots. Cloth handling is one such task, and many cloth-handling tasks are covering tasks that involve not only the cloth itself but also an object handled together with it. However, no effective task model has been established for instructing a robot to perform such covering tasks.

Previous work has described robotic cloth manipulation using points, fold lines, and hand paths. In computer graphics there is a description method called the goal line, which is used to represent covering. In robotising covering tasks, we first face the problem of how to describe, for real-world robots, the cloth-object relation and the task procedure required by a general-purpose covering model; a description model suited to covering tasks must be introduced with these points in mind. Next, there is the problem of how such a covering task description should be entered into an actual robot: rather than a cumbersome instruction method, it is desirable that a human can intuitively convey to the robot, in real space, the covering task they have in mind. Finally, there is the problem of how to generate the robot's actual motion from the task description: to accomplish a covering task, the robot must generate hand trajectories, and motions that avoid interference, according to the situation.

Based on the above, this research tackles the problem of robotic covering tasks, specifically the following issues: a description method that appropriately represents the relation between cloth and object; an intuitive way of instructing the covering procedure; and a method for generating the robot's motion trajectories.

First, we consider how to describe the cloth-object relation. We propose introducing the goal-line description used in computer graphics to robots in real space. A goal line can be specified not only on planar but also on curved surfaces, and it has the advantage of naturally expressing the essential information of a covering, namely where on the object the cloth wraps it. Covering must sometimes also be performed on objects with concave regions, and these must be handled appropriately in the task description. We therefore propose the concept of a "local cloth", and a method for generating local cloths, which distinguishes the concave parts of the object that should be wrapped from those that should not, so that appropriate goal lines can be assigned to them.

Next, we consider an intuitive way of instructing the covering procedure. Considering the relation between a human's rough wrapping gesture and the covering itself, we propose entering the human's covering intention, namely where the object and the cloth are to overlap, as a goal line. Rather than the precise three-dimensional trajectory of the instructing hand, we focus on the relation between the hand's trajectory and the object surface it passes over, and extract the human's covering intention with an instruction device that combines a touch sensor and a motion-capture sensor. To reduce the influence of hand tremor during instruction, we propose a process that prevents the goal line from running off the surface, together with a correction process that combines smoothing and decimation.

Finally, we consider how to generate the robot's motion trajectories. We propose a method for generating, from the goal line and the grasp points, hand paths that represent the motion of the cloth, and a method for generating the robot motions that execute those hand paths. Moving an actual robot requires not only the goal line but also hand paths and motion commands; taking the reachable workspace and interference with the object into account, the cloth must be regrasped and re-gripped with the right and left hands. Because the goal line retains the essential information of the covering, these hand paths and motion commands can be generated automatically. Within the motion generation method, we handle a certainty measure that accounts for the effect of each operation on the cloth's motion, the number of motion steps, and the positional relation between robot and cloth, and we propose a method that plans the optimal combination of regrasping and re-gripping operations using the action transition graph generated from this measure.

In summary, this research proposes a framework for robotising the covering task of wrapping an object in cloth. The methods proposed for each of the issues above were integrated and implemented as a complete covering-task system. With this system, from a human's rough instruction, the cloth-object relation is described using goal lines, hand paths representing the motion of the robot and the cloth are generated, and optimal robot motions suited to the situation are produced, realising covering tasks with a robot.

University of Electro-Communications, 201
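The thesis plans an optimal combination of regrasping and re-gripping operations over an action transition graph. A minimal sketch of such planning, assuming a hypothetical state graph and edge costs (lower cost standing in for higher certainty); the real states and cost model would come from the robot and cloth geometry, not from this toy example:

```python
import heapq

def cheapest_plan(graph, start, goal):
    """Dijkstra search over an action transition graph.
    graph: {state: [(next_state, action, cost), ...]}.
    Returns (total_cost, list_of_actions) or (inf, None)."""
    pq = [(0.0, start, [])]
    seen = set()
    while pq:
        cost, state, plan = heapq.heappop(pq)
        if state in seen:
            continue
        seen.add(state)
        if state == goal:
            return cost, plan
        for nxt, action, c in graph.get(state, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, plan + [action]))
    return float("inf"), None

# hypothetical states describing which hand grips the cloth at each stage
graph = {
    "both_hands_edge": [("right_only", "release_left", 1.0),
                        ("regrasped", "swap_hands", 2.5)],
    "right_only": [("regrasped", "left_regrasp", 1.0)],
    "regrasped": [("covered", "drape_over_object", 1.5)],
}
cost, plan = cheapest_plan(graph, "both_hands_edge", "covered")
print(cost, plan)
```

Here the planner prefers the two cheap regrasps over the single expensive hand swap, which is the kind of trade-off an action transition graph makes explicit.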
A New Approach to Clothing Classification using Mid-Level Layers
Abstract: We present a novel approach for classifying items from a pile of laundry. The classification procedure exploits color, texture, shape, and edge information, drawing on 2D and 3D local and global information for each article of clothing captured with a Kinect sensor. The key contribution of this paper is a novel method of classifying clothing which we term L-M-H (more specifically, L-C-S-H) using characteristics and selection masks. Essentially, the method decomposes the problem into high (H), low (L), and multiple mid-level (characteristics (C), selection masks (S)) layers and produces "local" solutions to solve the global classification problem. Experiments demonstrate the ability of the system to efficiently classify and label items into one of three categories (shirts, socks, or dresses). These results show that, on average, classification with this new mid-level-layer approach achieves a true positive rate of 90%.