Improved GelSight Tactile Sensor for Measuring Geometry and Slip
A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It captures 3D geometry and contact force information with high spatial resolution, and has been successfully applied to many challenging robot tasks. A previous sensor, based on a semi-specular membrane, produced high-resolution output but with limited geometric accuracy. In this paper, we describe a new GelSight design for robot grippers, using a Lambertian membrane and a new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for slip detection, combining information about relative motions on the membrane surface with the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases, and can be used to improve the stability of the grasp.
Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems
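The photometric stereo step mentioned above can be sketched as follows. This is a minimal illustration, assuming a calibrated Lambertian surface and known light directions (both assumptions here, not details from the paper): per-pixel normals are recovered by solving a least-squares system relating image intensities to light directions.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from images lit by known directions.

    images: (K, H, W) float array, one grayscale image per light source.
    light_dirs: (K, 3) array of unit light-direction vectors.
    Assumes a Lambertian surface: I = albedo * (N . L).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                  # (K, H*W) stacked intensities
    # Solve light_dirs @ G = I for G = albedo * N at every pixel at once.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)         # per-pixel albedo magnitude
    normals = G / np.maximum(albedo, 1e-8)     # unit surface normals
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```

A height map can then be obtained by integrating the recovered normal field; the paper's actual calibration and integration details are not reproduced here.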
Food waste as a raw material for biofuel production
With economic development, living standards and the demand for food keep rising, and with greater food consumption comes more food waste, which now makes up a growing proportion of urban garbage. Compared with other waste, food waste is harder to treat and more harmful to the environment; because treatment technology is still imperfect, the treatment process itself may cause secondary pollution. At the same time, food waste is rich in organic compounds and nutrients. This fraction can be used to produce biofuels, which both helps address the current energy crisis and offers a good way to keep food waste from polluting the environment.
Starting from food waste, this paper introduces its definition, sources, and treatment. Treatment methods mainly include mixed landfill, incineration, anaerobic fermentation, aerobic composting, and use as livestock feed. Among them, mixed landfill and incineration have been banned by many countries because of their disadvantages. The paper then introduces the definition and applications of biofuels and how food waste can be used to produce them, covering biogas production by anaerobic fermentation, biodiesel by transesterification, bioethanol, and butanol by ABE fermentation.
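As a rough illustration of the transesterification route to biodiesel mentioned above, the idealized mass balance can be computed directly. The molar masses below are typical values for a triolein-like oil and methyl oleate and are illustrative assumptions, not figures from the article.

```python
def biodiesel_yield_kg(oil_kg, oil_molar_mass=885.0, fame_molar_mass=296.5):
    """Idealized transesterification mass balance.

    One mole of triglyceride reacts with three moles of methanol to give
    three moles of fatty-acid methyl ester (FAME, i.e. biodiesel) plus one
    mole of glycerol. Molar masses are in g/mol (typical triolein-derived
    oil and methyl oleate; illustrative values only).
    """
    moles_oil = oil_kg * 1000.0 / oil_molar_mass   # kg -> g -> mol
    fame_kg = 3 * moles_oil * fame_molar_mass / 1000.0
    return fame_kg
```

Under these assumptions, 1 kg of oil yields roughly 1 kg of biodiesel; real yields depend on catalyst, feedstock purity, and reaction conditions.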
Design and characterisation of novel peptide-based hydrogel for controlled delivery of therapeutics
Connecting Look and Feel: Associating the visual and tactile properties of physical materials
For machines to interact with the physical world, they must understand the
physical properties of objects and materials they encounter. We use fabrics as
an example of a deformable material with a rich set of mechanical properties. A
thin flexible fabric, when draped, tends to look different from a heavy stiff
fabric. It also feels different when touched. Using a collection of 118 fabric
samples, we captured color and depth images of draped fabrics along with tactile
data from a high-resolution touch sensor. We then sought to associate the
information from vision and touch by jointly training CNNs across the three
modalities. Through the CNN, each input, regardless of the modality, generates
an embedding vector that records the fabric's physical property. By comparing
the embeddings, our system is able to look at a fabric image and predict how it
will feel, and vice versa. We also show that a system jointly trained on vision
and touch data can outperform a similar system trained only on visual data when
tested purely with visual inputs.
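The cross-modal prediction described above reduces, at query time, to nearest-neighbour search in the shared embedding space. The sketch below assumes the CNNs have already produced embedding vectors (the actual architecture, training loss, and embedding dimension are not given here):

```python
import numpy as np

def nearest_by_embedding(query_emb, candidate_embs):
    """Return the index of the candidate closest to the query embedding
    under cosine similarity.

    In the cross-modal setting, the query might be the embedding of a touch
    image and the candidates embeddings of draped-fabric photos (or vice
    versa); both live in the same learned embedding space.
    """
    q = query_emb / np.linalg.norm(query_emb)
    C = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    return int(np.argmax(C @ q))   # highest cosine similarity wins
```

Because every modality maps into the same space, the same retrieval function serves "look and predict feel" and "feel and predict look" without modification.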
GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger
This work describes the development of a high-resolution tactile-sensing
finger for robot grasping. This finger, inspired by previous GelSight sensing
techniques, features an integration that is slimmer, more robust, and with more
homogeneous output than previous vision-based tactile sensors. To achieve a
compact integration, we redesign the optical path from illumination source to
camera by combining light guides and an arrangement of mirror reflections. We
parameterize the optical path with geometric design variables and describe the
tradeoffs between the finger thickness, the depth of field of the camera, and
the size of the tactile sensing area. The sensor sustains the wear from
continuous use -- and abuse -- in grasping tasks by combining tougher materials
for the compliant soft gel, a textured fabric skin, a structurally rigid body,
and a calibration process that maintains homogeneous illumination and contrast
of the tactile images during use. Finally, we evaluate the sensor's durability
along four metrics that track the signal quality during more than 3000 grasping
experiments.
Comment: RA-L pre-print. 8 pages
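One of the design tradeoffs named above, the camera's depth of field against the finger geometry, can be illustrated with the standard thin-lens formulas. This is a generic optics sketch under textbook assumptions, not the paper's actual parameterization, and all numeric values are illustrative.

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable focus (thin-lens model).

    focal_mm: lens focal length; f_number: aperture; subject_mm: focus
    distance; coc_mm: acceptable circle of confusion. Illustrative values,
    not taken from the GelSlim design.
    """
    # Hyperfocal distance: focusing here makes everything beyond near/2 sharp.
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = (subject_mm * (H - focal_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far
```

A thinner finger forces a shorter camera-to-gel distance, shrinking the depth of field for a given aperture, which is the kind of coupling the paper's geometric design variables capture.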
Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning
Manipulation of deformable objects is a challenging task for a robot. Using a
single sensory input to track the behaviour of such objects is problematic:
vision can be subject to occlusions, whereas tactile inputs cannot capture the
global information that is useful for the task. In this paper, we study, for
the first time, the problem of using vision and tactile inputs together to
complete the task of following deformable linear objects. We create a
Reinforcement Learning agent using different sensing modalities and investigate
how its behaviour can be boosted using visual-tactile fusion, compared to using
a single sensing modality. To this end, we developed a benchmark in simulation
for manipulating the deformable linear objects using multimodal sensing inputs.
The policy of the agent uses distilled information, e.g., the pose of the
object in both visual and tactile perspectives, instead of the raw sensing
signals, so that it can be directly transferred to real environments. In this
way, we disentangle the perception system and the learned control policy. Our
extensive experiments show that the use of both vision and tactile inputs,
together with proprioception, allows the agent to complete the task in up to
92% of cases, compared to 77% when only one of the signals is given. Our
results can provide valuable insights for the future design of tactile sensors
and for deformable object manipulation.
Comment: 8 pages, 11 figures
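The distilled-observation idea described above, feeding the policy pose estimates rather than raw sensor streams, can be sketched as a simple fusion step. The feature names and dimensions below are assumptions for illustration; the paper's exact state representation is not given here.

```python
import numpy as np

def build_observation(visual_pose, tactile_pose, joint_positions):
    """Concatenate distilled features into a single policy observation.

    visual_pose: estimated object pose from the camera (e.g. x, y, theta).
    tactile_pose: estimated in-hand pose of the object from the touch sensor.
    joint_positions: robot proprioception.
    Because the policy consumes this compact vector instead of raw images or
    tactile maps, the same policy can run in simulation and on hardware.
    """
    return np.concatenate([np.asarray(visual_pose, dtype=float),
                           np.asarray(tactile_pose, dtype=float),
                           np.asarray(joint_positions, dtype=float)])
```

Swapping the perception front-end (simulated vs. real estimators) while keeping this observation format fixed is what disentangles perception from the learned control policy.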