Active Clothing Material Perception using Tactile Sensing and Deep Learning
Humans represent and discriminate objects within the same category by their properties, and an intelligent robot should be able to do the same. In this paper, we build a robot system that autonomously perceives object properties through touch. We work on the common object category of clothing. The robot moves under the guidance of an external Kinect sensor and squeezes the clothes with a GelSight tactile sensor; it then recognizes 11 properties of the clothing from the tactile data. These properties include physical properties, such as thickness, fuzziness, softness and durability, and semantic properties, such as wearing season and preferred washing methods. We collect a dataset of 153 varied items of clothing and conduct 6,616 robot exploration iterations on them. To extract useful information from the high-dimensional sensory output, we apply Convolutional Neural Networks (CNNs) to the tactile data for recognizing the clothing properties, and to the Kinect depth images for selecting exploration locations. Experiments show that, using the trained neural networks, the robot can autonomously explore unknown clothes and learn their properties. This work proposes a new framework for active tactile perception with a combined vision-touch system, and has the potential to enable robots to help humans with varied clothing-related housework.
Comment: ICRA 2018 accepted
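The property-recognition step described above, a CNN feature vector read out by one small classification head per property, can be sketched as follows. This is a hypothetical mock, not the authors' network: the CNN is replaced by a random feature vector, the per-head class counts are arbitrary, and the property names beyond those listed in the abstract are invented placeholders.

```python
import numpy as np

# Illustrative 11-property list: the first six come from the abstract, the
# rest are placeholder names, not the paper's actual label set.
PROPERTIES = ["thickness", "fuzziness", "softness", "durability",
              "season", "washing_method", "stretchiness", "smoothness",
              "textile_type", "wearing_position", "preference"]

def predict_properties(feature_vec, weights, biases):
    """Map one CNN feature vector to a discrete label per property head."""
    preds = {}
    for name, W, b in zip(PROPERTIES, weights, biases):
        logits = W @ feature_vec + b   # one small linear head per property
        preds[name] = int(np.argmax(logits))
    return preds

rng = np.random.default_rng(0)
feat = rng.standard_normal(64)                            # mock CNN feature
Ws = [rng.standard_normal((4, 64)) for _ in PROPERTIES]   # 4 classes per head
bs = [rng.standard_normal(4) for _ in PROPERTIES]
out = predict_properties(feat, Ws, bs)
```

One forward pass through the shared feature extractor thus yields all 11 property labels at once, which matches the multi-output setup the abstract describes.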
Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting
This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features for this task: BSP (B-Spline Patch) and TSD (Topology Spatial Distances). The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrate the category recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of the proposed method, we build a high-resolution RGBD clothing dataset of 50 clothing items across 5 categories, sampled in random configurations (a total of 2,100 clothing samples). Experimental results show that our approach reaches 83.2% accuracy when classifying clothing items that were unseen during training, advancing beyond the previous state-of-the-art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. Our proposed sorting system achieves reasonable sorting success rates with single-shot perception.
Comment: 9 pages, accepted by IROS201
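The LLC encoding step mentioned above can be sketched with the standard approximated LLC solution: pick the k nearest codebook atoms, solve a regularized local least-squares problem, and enforce a sum-to-one constraint. This is a generic numpy sketch of LLC, not the paper's implementation; the codebook and descriptor below are random placeholders.

```python
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Approximated LLC: code a descriptor x over its k nearest codebook atoms."""
    d = np.linalg.norm(codebook - x, axis=1)   # distance to every atom
    idx = np.argsort(d)[:k]                    # k nearest atoms
    B = codebook[idx]                          # local base, shape (k, dim)
    z = B - x                                  # shift atoms to the descriptor
    C = z @ z.T                                # local covariance
    C += beta * np.trace(C) * np.eye(k)        # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                               # sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w                              # sparse code over full codebook
    return code

rng = np.random.default_rng(1)
codebook = rng.standard_normal((32, 8))        # placeholder learned codebook
x = rng.standard_normal(8)                     # placeholder local BSP descriptor
code = llc_encode(x, codebook, k=5)
```

The resulting sparse codes can then be pooled over an image and fused with global features, as the pipeline above describes.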
Textile Taxonomy and Classification Using Pulling and Twisting
Identification of textile properties is an important milestone toward advanced robotic manipulation tasks that involve interaction with clothing items, such as assisted dressing, laundry folding, automated sewing, and textile recycling and reuse. Despite the abundance of work on this class of deformable objects, many open problems remain. These relate to the choice and modelling of sensory feedback as well as to the control and planning of interaction and manipulation strategies. Most importantly, there is no structured approach for studying and assessing methods that may bridge the gap between the robotics community and the textile production industry. To this end, we outline a textile taxonomy based on fiber types and production methods commonly used in the textile industry. We devise datasets according to the taxonomy, and study how robotic actions, such as pulling and twisting of the textile samples, can be used for classification. We also provide important insights from the perspective of visualization and interpretability of the gathered data.
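A minimal sketch of how a pulling action could feed a classifier: summarize the recorded force trace with a few hand-picked statistics and assign the nearest class centroid. The feature set, fabric names, and force traces below are invented for illustration, not the paper's method.

```python
import numpy as np

def trace_features(force):
    """Summary statistics of a force trace (illustrative feature choice)."""
    f = np.asarray(force, dtype=float)
    return np.array([f.mean(), f.std(), f.max() - f.min(),
                     np.abs(np.diff(f)).mean()])

def nearest_centroid(feat, centroids):
    """Assign a sample to the class whose mean feature vector is closest."""
    dists = {label: np.linalg.norm(feat - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Mock pulls: a stiff 'denim' trace rises steeply, a soft 'jersey' trace slowly.
t = np.linspace(0, 1, 100)
denim_trace = 10 * t**0.3
jersey_trace = 2 * t

centroids = {
    "denim": trace_features(denim_trace),
    "jersey": trace_features(jersey_trace),
}
label = nearest_centroid(trace_features(10 * t**0.3 + 0.01), centroids)
```

The same scheme extends to twisting by concatenating torque-trace features, which is one way the action-dependent information the abstract mentions could be combined.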
Learning to Singulate Layers of Cloth using Tactile Feedback
Robotic manipulation of cloth has applications ranging from fabric manufacturing to handling blankets and laundry. Cloth manipulation is challenging for robots largely due to cloth's high number of degrees of freedom, complex dynamics, and severe self-occlusions when in folded or crumpled configurations. Prior work on robotic manipulation of cloth relies primarily on vision sensors alone, which may pose challenges for fine-grained manipulation tasks such as grasping a desired number of cloth layers from a stack of cloth. In this paper, we propose to use tactile sensing for cloth manipulation; we attach a tactile sensor (ReSkin) to one of the two fingertips of a Franka robot and train a classifier to determine whether the robot is grasping a specific number of cloth layers. During test-time experiments, the robot uses this classifier as part of its policy to grasp one or two cloth layers, using tactile feedback to determine suitable grasping points. Experimental results over 180 physical trials suggest that the proposed method outperforms baselines that do not use tactile feedback, and generalizes better to unseen cloth than methods that use image classifiers. Code, data, and videos are available at https://sites.google.com/view/reskin-cloth.
Comment: IROS 2022. See https://sites.google.com/view/reskin-cloth for supplementary material
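The test-time policy described above, querying the tactile layer classifier and regrasping until the desired count is held, can be sketched as a simple feedback loop. The function names and the adjustment rule are hypothetical placeholders; the real system selects grasp points and motions from the tactile signal rather than a scalar offset.

```python
def grasp_n_layers(target, classify_layers, adjust_grasp, max_attempts=10):
    """Regrasp until the tactile classifier reports `target` layers held."""
    for attempt in range(1, max_attempts + 1):
        n = classify_layers()        # tactile-based layer count (mocked below)
        if n == target:
            return True, attempt
        adjust_grasp(n - target)     # e.g. insert gripper deeper if too few
    return False, max_attempts

# Mock run: the classifier reports 0, then 1, then the desired 2 layers.
readings = iter([0, 1, 2])
moves = []
ok, attempts = grasp_n_layers(2, lambda: next(readings), moves.append)
```

Closing the loop on the tactile classifier, rather than committing to a single vision-based grasp, is what the abstract credits for the improved success rate.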
Active Tactile Sensing for Texture Perception in Robotic Systems
This thesis presents a comprehensive study of tactile sensing, particularly on the problem of active texture perception. It includes a brief introduction to tactile sensing technology and the neural basis for tactile perception. It follows with a literature review of textural perception with tactile sensing. I propose a decoding and perception pipeline to tackle fine-texture classification/identification problems via active touching. Experiments are conducted using a 7DOF robotic arm with a finger-shaped tactile sensor mounted on the end-effector to perform sliding/rubbing movements on multiple fabrics. Low-dimensional frequency features are extracted from the raw signals to form a perceptive feature space, where tactile signals are mapped and segregated into fabric classes. Fabric classes can be parameterized and simplified in the feature space using elliptical equations. Results from experiments with varied control parameters are compared and visualized to show that different exploratory movements have an apparent impact on the perceived tactile information. This implies the possibility of optimising the robotic movements to improve textural classification/identification performance.
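The low-dimensional frequency features mentioned above can be illustrated as band energies of the sliding signal's power spectrum. The sampling rate, band edges, and the synthetic 50 Hz "texture vibration" below are invented for illustration and are not the thesis's actual parameters.

```python
import numpy as np

def band_energies(signal, fs, bands):
    """Power in each (lo, hi) Hz band of a tactile signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

fs = 1000.0                                   # placeholder sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
vibration = np.sin(2 * np.pi * 50 * t)        # mock texture-induced vibration
energies = band_energies(vibration, fs, bands=[(0, 25), (25, 75), (75, 200)])
```

Each rubbing pass thus collapses to a short feature vector, which is the kind of low-dimensional perceptive space in which the thesis separates fabric classes.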
Wearable performance
This is the post-print version of the article. The official published version can be accessed from the link below - Copyright @ 2009 Taylor & Francis
Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st Century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.
Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment Features
Humans make extensive use of vision and touch as complementary senses, with vision providing global information about the scene and touch measuring local information during manipulation without suffering from occlusions. While prior work demonstrates the efficacy of tactile sensing for precise manipulation of deformables, such methods typically rely on supervised, human-labeled datasets. We propose Self-Supervised Visuo-Tactile Pretraining (SSVTP), a framework for learning multi-task visuo-tactile representations in a self-supervised manner through cross-modal supervision. We design a mechanism that enables a robot to autonomously collect precisely spatially aligned visual and tactile image pairs, then train visual and tactile encoders to embed these pairs into a shared latent space using a cross-modal contrastive loss. We apply this latent space to downstream perception and control of deformable garments on flat surfaces, and evaluate the flexibility of the learned representations without fine-tuning on 5 tasks: feature classification, contact localization, anomaly detection, feature search from a visual query (e.g., garment feature localization under occlusion), and edge following along cloth edges. The pretrained representations achieve a 73-100% success rate on these 5 tasks.
Comment: RSS 2023, site: https://sites.google.com/berkeley.edu/ssvt
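The cross-modal contrastive loss can be sketched as a one-directional InfoNCE over normalized embeddings, where the i-th visual patch should match the i-th tactile image in the batch. This numpy sketch is illustrative only; the batch size, embedding dimension, and temperature are arbitrary, and the paper's exact loss formulation may differ.

```python
import numpy as np

def cross_modal_infonce(z_vis, z_tac, temperature=0.1):
    """InfoNCE treating matched (visual_i, tactile_i) pairs as positives."""
    zv = z_vis / np.linalg.norm(z_vis, axis=1, keepdims=True)
    zt = z_tac / np.linalg.norm(z_tac, axis=1, keepdims=True)
    logits = zv @ zt.T / temperature                   # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # -log p(correct pair)

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
loss_aligned = cross_modal_infonce(z, z)               # perfect pairing
loss_shuffled = cross_modal_infonce(z, z[::-1])        # mismatched pairing
```

Minimizing this loss pulls each visual embedding toward its spatially aligned tactile counterpart and away from the other samples in the batch, which is the cross-modal supervision the abstract describes.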
Visual Tactile Fusion Object Clustering
Object clustering, which aims at grouping similar objects into one cluster with an unsupervised strategy, has been extensively studied across various data-driven applications. However, most existing state-of-the-art object clustering methods (e.g., single-view or multi-view clustering methods) explore only visual information, while ignoring one of the most important sensing modalities, i.e., tactile information, which can help capture different object properties and further boost the performance of the object clustering task. To effectively exploit both visual and tactile modalities for object clustering, in this paper we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering. Specifically, deep matrix factorization constrained by an under-complete Auto-Encoder-like architecture is employed to jointly learn a hierarchical expression of the visual-tactile fusion data and preserve the local structure of the data-generating distribution of the visual and tactile modalities. Meanwhile, a graph regularizer is introduced to capture the intrinsic relations of data samples within each modality. Furthermore, we propose a modality-level consensus regularizer to effectively align the visual and tactile data in a common subspace in which the gap between visual and tactile data is mitigated. For model optimization, we present an efficient alternating minimization strategy to solve the proposed model. Finally, we conduct extensive experiments on public datasets to verify the effectiveness of our framework.
Comment: 8 pages, 5 figures
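As a building block, the graph regularizer on a plain single-layer NMF can be sketched with the classic multiplicative updates for graph-regularized NMF. This is not the authors' deeper Auto-Encoder-like model or their consensus regularizer; the toy data and ring-graph adjacency below are placeholders.

```python
import numpy as np

def graph_nmf(X, A, k, lam=0.1, iters=300, seed=0):
    """Multiplicative updates minimizing ||X - U V^T||_F^2 + lam*Tr(V^T L V),
    with U, V >= 0 and graph Laplacian L = D - A over the samples."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + 0.1
    V = rng.random((n, k)) + 0.1
    D = np.diag(A.sum(axis=1))
    eps = 1e-12
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# Toy data: nonnegative rank-3 features; ring-graph adjacency over 12 samples.
rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 12))
A = np.zeros((12, 12))
for i in range(12):
    A[i, (i + 1) % 12] = A[(i + 1) % 12, i] = 1.0
U, V = graph_nmf(X, A, k=3)
err = np.linalg.norm(X - U @ V.T)
```

The rows of V then serve as cluster-friendly sample embeddings in which graph-adjacent samples are encouraged to stay close, the role the graph regularizer plays in the fuller model above.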