
    Robot-aided cloth classification using depth information and CNNs

    The final publication is available at link.springer.com. We present a system to deal with the problem of classifying garments from a pile of clothes. The system uses a robot arm to extract a garment and show it to a depth camera. Using only depth images of a partial view of the garment as input, a deep convolutional neural network has been trained to classify different types of garments. The robot can rotate the garment along the vertical axis to provide different views of it, increasing prediction confidence and avoiding confusions. In addition to obtaining very high classification scores, our system provides a fast and occlusion-robust solution to the problem, compared with previous approaches to cloth classification that match the sensed data against a database. Peer reviewed. Postprint (author's final draft).
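The multi-view strategy described above can be sketched as a simple fusion rule; this is an illustrative choice (averaging per-view softmax outputs), not the authors' implementation, and the class counts and probabilities below are made up:

```python
import numpy as np

def aggregate_views(view_probs):
    """Average per-view class probabilities and pick the most likely class.

    view_probs: (n_views, n_classes) softmax outputs, one row per rotation
    of the garment in front of the depth camera (hypothetical values).
    Returns (predicted_class_index, confidence).
    """
    probs = np.mean(np.asarray(view_probs, dtype=float), axis=0)
    idx = int(np.argmax(probs))
    return idx, float(probs[idx])

# Three simulated views of the same garment; rotating the garment lets
# two confident views outweigh one ambiguous view.
views = [[0.2, 0.7, 0.1],
         [0.4, 0.5, 0.1],
         [0.3, 0.6, 0.1]]
cls, conf = aggregate_views(views)
```

Averaging is the simplest fusion rule; a running average also gives a natural stopping criterion (stop rotating once the top-class confidence exceeds a threshold).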

    Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting

    This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features, BSP (B-Spline Patch) and TSD (Topology Spatial Distances), for this task. The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrated the category-recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of the proposed method, we built a high-resolution RGBD clothing dataset of 50 clothing items in 5 categories, sampled in random configurations (2,100 clothing samples in total). Experimental results show that our approach reaches 83.2% accuracy when classifying clothing items previously unseen during training, advancing beyond the previous state of the art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. Our proposed sorting system achieves reasonable sorting success rates with single-shot perception. Comment: 9 pages, accepted by IROS201
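LLC, the encoding named above, has a simple closed form per descriptor. The sketch below follows the standard formulation (reconstruct each descriptor from its k nearest codebook atoms under a sum-to-one constraint); the codebook and descriptor values are invented for illustration, and this is not the paper's code:

```python
import numpy as np

def llc_code(x, codebook, k=2, reg=1e-4):
    """Locality-constrained Linear Coding, simplified.

    Approximates descriptor x as an affine combination of its k nearest
    codebook atoms; all other coefficients are zero, so the code is sparse.
    codebook: (n_atoms, dim); x: (dim,). Returns a code of length n_atoms.
    """
    x = np.asarray(x, dtype=float)
    B = np.asarray(codebook, dtype=float)
    # k nearest atoms by Euclidean distance
    d = np.linalg.norm(B - x, axis=1)
    nn = np.argsort(d)[:k]
    z = B[nn] - x                      # shift selected atoms to the descriptor
    C = z @ z.T + reg * np.eye(k)      # local covariance, regularised
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                       # enforce the sum-to-one constraint
    code = np.zeros(len(B))
    code[nn] = w
    return code

# A descriptor identical to atom 0 should receive nearly all its weight there.
code = llc_code([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]], k=2)
```

In the full pipeline such codes would then be pooled over an image before fusion with the global features.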

    Recognising the Clothing Categories from Free-Configuration Using Gaussian-Process-Based Interactive Perception

    In this paper, we propose a Gaussian-Process-based interactive perception approach for recognising highly wrinkled clothes. We have integrated this recognition method within a clothes-sorting pipeline for the pre-washing stage of an autonomous laundering process. Our approach differs from reported clothing manipulation approaches by allowing the robot to update its perception confidence via numerous interactions with the garments. The classifiers predominantly reported in clothing perception studies (e.g. SVM, Random Forest) do not provide true classification probabilities, due to their inherent structure. In contrast, probabilistic classifiers (of which the Gaussian Process is a popular example) are able to provide predictive probabilities. In our approach, we employ multi-class Gaussian Process classification, using the Laplace approximation for posterior inference and optimising hyper-parameters via marginal likelihood maximisation. Our experimental results show that our approach is able to recognise unknown garments from highly occluded and wrinkled configurations, and demonstrates a substantial improvement over non-interactive perception approaches.
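One simple way to realise the interactive confidence update the abstract describes is to fuse the GP's predictive probabilities across interactions. The product rule below assumes conditionally independent observations and a uniform class prior; that is an illustrative choice, not necessarily the paper's exact update, and the probability values are invented:

```python
import numpy as np

def fuse_interactions(prob_sequence):
    """Fuse per-interaction predictive class probabilities.

    Under conditional independence and a uniform prior, the posterior after
    each interaction is the normalised product of the predictive
    probabilities seen so far.
    prob_sequence: iterable of (n_classes,) probability vectors.
    """
    posterior = None
    for p in prob_sequence:
        p = np.asarray(p, dtype=float)
        posterior = p if posterior is None else posterior * p
        posterior = posterior / posterior.sum()
    return posterior

# Each flip/regrasp of the garment yields a new GP prediction; an ambiguous
# first view is resolved by two later views favouring class 0.
post = fuse_interactions([[0.5, 0.5], [0.7, 0.3], [0.8, 0.2]])
```

Because the fused posterior is a proper probability, the robot can keep interacting until the top class crosses a confidence threshold, which is exactly what non-probabilistic classifiers make awkward.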

    Garment manipulation dataset for robot learning by demonstration through a virtual reality framework

    Being able to teach complex capabilities, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled using learning-from-demonstration datasets. The few garment-folding datasets available nowadays to the robotics research community are either gathered from human demonstrations or generated through simulation. The former have the huge problem of perceiving human action and transferring it to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, resulting in far-from-realistic movements. In this article, we present a reduced but very accurate dataset of human cloth-folding demonstrations. The dataset is collected through a novel virtual reality (VR) framework we propose, based on Unity's 3D platform and the use of an HTC Vive Pro system. The framework is capable of simulating very realistic garments while allowing users to interact with them, in real time, through handheld controllers. By doing so, and thanks to the immersive experience, our framework closes the gap between the human and robot perception-action loops, while simplifying data capture and resulting in more realistic samples. This work was developed in the context of the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 741930), and is also supported by the BURG project PCI2019-103447, funded by MCIN/AEI/10.13039/501100011033 and by the "European Union". Peer reviewed. Postprint (published version).

    Regrasping and unfolding of garments using predictive thin shell modeling


    A Study on Cloth-Covering Tasks by a Dual-Arm Robot

    The goal of this research is to model the task of wrapping an object in cloth (the covering task) and to realise such covering tasks with a robot. This thesis proposes modelling the covering task based on the concept of "goal lines": a human first teaches a rough wrapping method, the covering task is then planned from the shapes of the cloth and the object, and finally robot motions are generated, so that the robot carries out the covering task.

    Factories have been increasingly robotised in recent years, but many tasks still cannot be robotised: tasks so dexterous and complex that only humans can perform them, or tasks that humans simply carry out more efficiently than robots. Cloth handling is one such task, and much of it consists of covering tasks that involve not only the cloth itself but also an object. However, no effective task model has been established for instructing a robot to perform covering tasks. Prior work describes robotic cloth manipulation using points, fold lines, and hand paths; in computer graphics, a description method called goal lines is used to represent covering.

    Robotising the covering task raises three problems. First, how should the relation between object and cloth, and the task procedure, be described as a general covering model for real-world robots? A description model suited to covering must be introduced. Second, how should such a task description be given to an actual robot? Ideally, the human should be able to instruct the intended covering task intuitively in real space, rather than through a cumbersome procedure. Third, how should actual robot motions be generated from the task description? To accomplish a covering task, the robot must generate hand trajectories and collision-avoiding motions suited to the situation. Accordingly, this research addressed: a description method that appropriately represents the relation between cloth and object; an intuitive method for instructing the covering procedure; and a method for generating robot motion trajectories.

    For the description method, this research proposes bringing the goal-line description from computer graphics to real-world robots. Goal lines are easy to specify on curved as well as planar surfaces, and they naturally express the essential information of covering: where the object is to be wrapped by the cloth. Since covering must sometimes be performed on objects with concavities, the concave parts that should be filled must be distinguished from those that should not; to give appropriate goal-line instructions on such surfaces, the concept of "local convexity" and a method for generating it are proposed.

    For intuitive instruction, this research considers the relation between a human's rough wrapping gesture and the covering, and proposes inputting the human's covering intent (which parts of the object and the cloth to overlap) as goal lines. Rather than the precise three-dimensional trajectory of the instructing hand, the focus is on the relation between the hand's trajectory and the object surface it passes over. A teaching device combining a depth sensor and a motion-capture sensor extracts the human's covering intent; to reduce the influence of hand tremor during instruction, a backtracking-prevention process for goal lines and a correction process combining smoothing and thinning are proposed.

    For motion generation, this research proposes generating, from the goal lines and grasp points, hand paths that represent the motion of the cloth, and then generating robot motions to execute those hand paths. To drive an actual robot, not only goal lines but also hand paths and motion commands are required, and regrasping and re-gripping of the cloth with the right and left hands must be performed while respecting the reachable workspace and collisions with the object. Because goal lines retain the essential information of the covering, these hand paths and motion commands can be generated automatically. Within motion generation, a certainty is computed for each operation from the effect of gravity on the cloth, the number of motion steps, and the positional relation between robot and cloth, and the optimal combination of regrasping and re-gripping operations is planned over a motion-transition graph built from these certainties.

    In summary, this research proposed a framework for robotising the task of wrapping an object in cloth, integrated the proposed methods into a complete covering system, and realised robotic covering: from a human's rough instruction, goal lines describe the relation between cloth and object, from which the hand paths representing the cloth's motion and the optimal robot motions for the situation are generated. The University of Electro-Communications, 201
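The certainty-weighted motion-transition graph can be illustrated with a toy planner. The states, transitions, and certainty values below are invented for illustration; the point is that maximising the product of per-operation certainties reduces to a shortest-path search under negative-log weights:

```python
import heapq
import math

def best_regrasp_plan(graph, start, goal):
    """Pick the regrasp/re-grip sequence with the highest overall certainty.

    graph: {state: [(next_state, certainty), ...]} with certainty in (0, 1],
    reflecting gravity effects, step count, and robot-cloth geometry
    (illustrative values, not the thesis's actual model).
    Returns (path, overall_certainty).
    """
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path, math.exp(-cost)   # undo the -log transform
        if node in seen:
            continue
        seen.add(node)
        for nxt, cert in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost - math.log(cert), nxt, path + [nxt]))
    return None, 0.0

# Two reliable regrasps (0.9 each, 0.81 overall) beat one risky direct
# move (0.45 overall), so the planner prefers the longer sequence.
graph = {"grasp":   [("regrasp", 0.9), ("direct", 0.5)],
         "regrasp": [("done", 0.9)],
         "direct":  [("done", 0.9)]}
path, cert = best_regrasp_plan(graph, "grasp", "done")
```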

    A New Approach to Clothing Classification using Mid-Level Layers

    Abstract — We present a novel approach for classifying items from a pile of laundry. The classification procedure exploits color, texture, shape, and edge information, from 2D and 3D local and global features for each article of clothing, captured with a Kinect sensor. The key contribution of this paper is a novel method of classifying clothing, which we term L-M-H (more specifically L-C-S-H), using characteristics and selection masks. Essentially, the method decomposes the problem into high (H), low (L), and multiple mid-level (characteristics (C), selection masks (S)) layers, and produces “local” solutions to solve the global classification problem. Experiments demonstrate the ability of the system to efficiently classify and label articles into one of three categories (shirts, socks, or dresses). These results show that, on average, the classification rates using this new approach with mid-level layers achieve a true positive rate of 90%.
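The layered decomposition the abstract describes can be caricatured as mid-level layers each producing a "local" decision that a high-level layer combines. The layer names and the majority-vote combination rule here are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter

def classify_lmh(mid_level_votes):
    """Combine mid-level 'local' decisions into a global category label.

    mid_level_votes: iterable of category labels, one per mid-level layer
    (e.g. a characteristic classifier paired with a selection mask).
    The high-level layer simply takes the majority.
    """
    counts = Counter(mid_level_votes)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical per-characteristic decisions for one article of clothing.
votes = {"colour": "shirt", "texture": "shirt", "shape": "dress", "edges": "shirt"}
label = classify_lmh(votes.values())
```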