"Touching to See" and "Seeing to Feel": Robotic Cross-modal Sensory Data Generation for Visual-Tactile Perception
Humans routinely integrate visual and tactile stimuli when performing daily
tasks; in contrast, unimodal visual or tactile perception limits the
perceivable dimensionality of a subject. However, it remains a challenge to
integrate visual and tactile perception to facilitate robotic tasks. In
this paper, we propose a novel framework for cross-modal sensory data
generation for visual and tactile perception. Taking texture perception as an
example, we apply conditional generative adversarial networks to generate
pseudo visual images or tactile outputs from data of the other modality.
Extensive experiments on the ViTac dataset of cloth textures show that the
proposed method can produce realistic outputs from other sensory inputs. We
adopt the Structural Similarity Index (SSIM) to evaluate the similarity
between the generated outputs and real data; the results show that realistic
data are generated.
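The SSIM comparison described above can be sketched in a few lines; the following is an illustrative pure-Python implementation of the global SSIM formula on small grayscale patches (flat lists of pixel intensities), not the paper's actual evaluation code, and the patch data is made up for the example:

```python
# Illustrative global SSIM between two grayscale images, each given as a
# flat list of pixel intensities in [0, 255]. Assumed sketch only.

def ssim(x, y, data_range=255.0):
    """Global Structural Similarity Index between two equal-length images."""
    assert len(x) == len(y)
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / n
    var_y = sum((b - mu_y) ** 2 for b in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1 = (0.01 * data_range) ** 2  # stabilising constants from the SSIM formula
    c2 = (0.03 * data_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Identical images score 1.0; structurally dissimilar images score lower.
real = [10, 50, 90, 130, 170, 210]
fake_good = [12, 48, 92, 128, 172, 208]   # close to the real patch
fake_bad = [210, 170, 130, 90, 50, 10]    # reversed intensities
```

In practice a windowed SSIM over image patches (as in standard libraries) is used rather than this single global score, but the ranking behaviour is the same: outputs closer to the real data score nearer to 1.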
Classification evaluation has also been performed to show that the inclusion of
generated data can improve the perception performance. The proposed framework
has potential to expand datasets for classification tasks, generate sensory
outputs that are not easy to access, and also advance integrated visual-tactile
perception.
Comment: 7 pages, IEEE International Conference on Robotics and Automation 201
ViTac: Integrating Vision and Touch for Multimodal and Cross-Modal Perception
Animals interact with the world through multimodal sensing inputs, especially vision and touch in the case of humans interacting with their physical surroundings. In contrast, artificial systems usually rely on a single sensing modality, with distinct hardware and algorithmic approaches developed for each modality. For example, computer vision and tactile robotics are usually treated as distinct disciplines, with specialist knowledge required to make progress in each research field. Future robots, as embodied agents interacting with complex environments, should make the best use of all available sensing modalities to perform their tasks.
Touching a NeRF: leveraging neural radiance fields for tactile sensory data generation
Tactile perception is key for robotics applications such as manipulation.
However, tactile data collection is time-consuming, especially when compared to
vision. This limits the use of the tactile modality in machine learning solutions
in robotics. In this paper, we propose a generative model to simulate realistic
tactile sensory data for use in downstream tasks. Starting with easily obtained
camera images, we train Neural Radiance Fields (NeRF) for objects of interest.
We then use NeRF-rendered RGB-D images as inputs to a conditional Generative
Adversarial Network model (cGAN) to generate tactile images from desired orientations. We evaluate the generated data quantitatively using the Structural Similarity Index and Mean Squared Error metrics, and also using a tactile classification
task both in simulation and in the real world. Results show that by augmenting
a manually collected dataset, the generated data is able to increase classification
accuracy by around 10%. In addition, we demonstrate that our model is able to
transfer from one tactile sensor to another with a small fine-tuning dataset.
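The Mean Squared Error metric used alongside SSIM above reduces to a simple pixel-wise comparison; the helper below is a hypothetical sketch for illustration, not the authors' evaluation pipeline, and the image values are invented:

```python
# Mean Squared Error between a generated tactile image and a real one,
# both given as equal-length flat lists of pixel intensities.
# Illustrative only; lower MSE means the generated image is closer to real.

def mse(generated, real):
    assert len(generated) == len(real)
    return sum((g - r) ** 2 for g, r in zip(generated, real)) / len(real)

real_img = [0, 64, 128, 192]
gen_img = [2, 62, 130, 190]   # small per-pixel errors -> low MSE
```

MSE penalises per-pixel intensity differences, while SSIM captures structural agreement; reporting both, as done here, guards against generators that score well on one metric only.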
Fine-grained Haptics: Sensing and Actuating Haptic Primary Colours (force, vibration, and temperature)
This thesis discusses the development of a multimodal, fine-grained visual-haptic system for teleoperation and robotic applications. This system is primarily composed of two complementary components: an input device known as the HaptiTemp sensor (combining "Haptics" and "Temperature"), which is a novel thermosensitive GelSight-like sensor, and an output device, an untethered multimodal fine-grained haptic glove.
The HaptiTemp sensor is a visuotactile sensor that can sense haptic primary colours known as force, vibration, and temperature. It has novel switchable UV markers that can be made visible using UV LEDs. The switchable markers feature
is a real novelty of the HaptiTemp because it allows the analysis of tactile information from gel deformation without impairing the ability to classify or recognise images. The use of switchable markers in the HaptiTemp sensor resolves the trade-off between marker density and capturing high-resolution images with a single sensor. The HaptiTemp sensor can measure vibrations by counting the number of blobs or pulses detected per unit time using a blob detection algorithm. For the first time, temperature detection was incorporated into a GelSight-like sensor, making the HaptiTemp a haptic primary colours sensor. The HaptiTemp sensor can also perform rapid temperature sensing, with a 643 ms response time over the 31°C to 50°C temperature range. This fast temperature response is comparable to the withdrawal reflex response in humans; this is the first time in the robotics community that a sensor can trigger a sensory impulse that mimics a human reflex. The HaptiTemp sensor can also perform simultaneous temperature sensing and image classification using a machine vision camera, the OpenMV Cam H7 Plus, a capability that has not been reported or demonstrated by any other tactile sensor. The HaptiTemp sensor can be used in teleoperation because it can transmit tactile analysis and image classification results over wireless communication. In tactile sensing, tactile pattern recognition, and rapid temperature response, the HaptiTemp sensor is the closest thing yet to human skin.
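The vibration-measurement idea above, counting blob pulses per unit time, can be sketched as follows; the frame rate, the per-frame blob counts, and the rising-edge pulse detection are illustrative assumptions, not details taken from the thesis:

```python
# Estimate vibration frequency from a per-frame blob count signal by
# counting rising edges (pulses) over the capture duration.
# Assumed sketch: a real pipeline would obtain blob counts by running a
# blob detection algorithm on each camera frame of the gel surface.

def vibration_frequency(blob_counts, fps):
    """Return pulses per second given blob counts per frame and frame rate."""
    pulses = 0
    for prev, cur in zip(blob_counts, blob_counts[1:]):
        if prev == 0 and cur > 0:  # transition from no blobs to blobs = one pulse
            pulses += 1
    duration_s = len(blob_counts) / fps
    return pulses / duration_s

# 10 frames captured at 100 fps (0.1 s) containing 3 pulses -> 30 Hz estimate.
counts = [0, 1, 0, 0, 2, 0, 0, 1, 0, 0]
```

The achievable frequency range of such an estimate is bounded by the camera frame rate (Nyquist limit), which is why high-speed capture matters for vibration sensing.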
In order to feel what the HaptiTemp sensor is touching from a distance, a corresponding output device, an untethered multimodal haptic hand wearable, was developed to actuate the haptic primary colours sensed by the HaptiTemp sensor. This wearable can communicate wirelessly and has fine-grained cutaneous feedback to feel the edges or surfaces of the tactile images captured by the HaptiTemp sensor. This untethered multimodal haptic hand wearable has gradient kinesthetic force feedback that can restrict finger movements based on the force estimated by the HaptiTemp sensor. A retractable string from an ID badge holder, equipped with mini-servos that control the stiffness of the wire, is attached to each fingertip to restrict finger movements. Vibrations detected by the HaptiTemp sensor can be actuated by the tapping motion of the tactile pins or by a buzzing mini-vibration motor. There is also a tiny annular Peltier device, or ThermoElectric Generator (TEG), with a
mini-vibration motor, forming thermo-vibro feedback in the palm area that can be activated by a "hot" or "cold" signal from the HaptiTemp sensor. The haptic primary colours can also be embedded in a VR environment and actuated by the multimodal hand wearable. A VR application was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel the contours of virtual objects. Collision detection scripts were embedded to activate the corresponding actuator in the multimodal haptic hand wearable whenever the tactile matrix simulator or hand avatar in VR collides with a virtual object. The TEG also gets warm or cold depending on the virtual object the participant has touched. Tests were conducted to explore virtual objects in 2D and 3D environments using a Leap Motion controller and a VR headset (Oculus Quest 2).
Moreover, fine-grained cutaneous feedback was developed to feel the edges or surfaces of a tactile image, such as the tactile images captured by the HaptiTemp sensor, or to actuate tactile patterns on 2D or 3D virtual objects. The prototype is like an exoskeleton glove with 16 tactile actuators (tactors) on each fingertip, 80 tactile pins in total, made from commercially available P20 Braille cells. Each tactor can be controlled individually to enable the user to feel the edges or surfaces of images, such as the high-resolution tactile images captured by the HaptiTemp sensor. This hand wearable can be used to enhance the immersive experience in
a virtual reality environment. The tactors can be actuated in a tapping manner, creating a distinct form of vibration feedback as compared to the buzzing vibration produced by a mini-vibration motor. The tactile pin height can also be varied, creating a gradient of pressure on the fingertip.
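The per-fingertip tactor grid and variable pin height described above can be sketched as a mapping from a tactile image patch to a 4 x 4 grid of pin heights; the 8 x 8 patch size, the four height levels, and the block-averaging scheme are illustrative assumptions, not specifications from the thesis:

```python
# Map an 8x8 grayscale tactile patch (values 0-255) onto a 4x4 grid of
# pin heights (0 = retracted .. 3 = fully raised), one value per tactor.
# Assumed sketch: resolution and height levels are hypothetical.

def patch_to_pin_heights(patch, levels=4):
    """Average 2x2 pixel blocks, then quantise each block to a height level."""
    assert len(patch) == 8 and all(len(row) == 8 for row in patch)
    heights = []
    for r in range(0, 8, 2):
        row = []
        for c in range(0, 8, 2):
            block = [patch[r + dr][c + dc] for dr in (0, 1) for dc in (0, 1)]
            mean = sum(block) / 4
            row.append(min(levels - 1, int(mean * levels / 256)))
        heights.append(row)
    return heights

# A patch that is dark on the left and bright on the right produces a
# left-to-right gradient of pin heights across the fingertip.
patch = [[c * 32 for c in range(8)] for _ in range(8)]
```

Varying the height per pin in this way is what creates the gradient of pressure on the fingertip, while driving the pins up and down over time produces the tapping form of vibration feedback.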
Finally, the integration of the high-resolution HaptiTemp sensor, and the untethered multimodal, fine-grained haptic hand wearable is presented, forming a visuotactile system for sensing and actuating haptic primary colours. Force,
vibration, and temperature sensing tests, with corresponding force, vibration, and temperature actuating tests, have demonstrated a unified visual-haptic system. Aside from sensing and actuating haptic primary colours, touching the edges or surfaces of the tactile images captured by the HaptiTemp sensor was demonstrated using the fine-grained cutaneous feedback of the haptic hand wearable.