Dynamic Decomposition of Spatiotemporal Neural Signals
Neural signals are characterized by rich temporal and spatiotemporal dynamics
that reflect the organization of cortical networks. Theoretical research has
shown how neural networks can operate at different dynamic ranges that
correspond to specific types of information processing. Here we present a data
analysis framework that uses a linearized model of these dynamic states in
order to decompose the measured neural signal into a series of components that
capture both rhythmic and non-rhythmic neural activity. The method is based on
stochastic differential equations and Gaussian process regression. Through
computer simulations and analysis of magnetoencephalographic data, we
demonstrate the efficacy of the method in identifying meaningful modulations of
oscillatory signals corrupted by structured temporal and spatiotemporal noise.
These results suggest that the method is particularly suitable for the analysis
and interpretation of complex temporal and spatiotemporal neural signals.
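The decomposition idea above can be sketched in miniature. The snippet below is illustrative only, not the paper's framework: a rhythmic component is modeled as a linearized stochastic oscillator (a discretized SDE), and a Kalman filter, a discrete-time relative of the Gaussian-process regression machinery the abstract mentions, separates it from measurement noise. All parameter values are assumptions.

```python
import math
import random

random.seed(0)
dt = 0.005                      # integration step (s), assumed
w = 2 * math.pi * 2.0           # oscillator frequency: 2 Hz, assumed
zeta = 0.2                      # damping ratio, assumed
q_std, r_std = 6.0, 0.4         # process / observation noise levels, assumed

# Euler-discretized damped oscillator, state = (position, velocity)
F = [[1.0, dt],
     [-dt * w * w, 1.0 - 2.0 * dt * zeta * w]]
Q11 = (q_std ** 2) * dt         # process noise enters the velocity only
R = r_std ** 2

# Simulate the "true" rhythmic signal and its noisy measurement
x, v = 1.0, 0.0
truth, obs = [], []
for _ in range(2000):
    x, v = (F[0][0] * x + F[0][1] * v,
            F[1][0] * x + F[1][1] * v
            + q_std * math.sqrt(dt) * random.gauss(0, 1))
    truth.append(x)
    obs.append(x + r_std * random.gauss(0, 1))

# A Kalman filter with the same linear model recovers the oscillation
m = [0.0, 0.0]
P = [[1.0, 0.0], [0.0, 1.0]]
est = []
for y in obs:
    # predict: m <- F m, P <- F P F^T + Q
    m = [F[0][0] * m[0] + F[0][1] * m[1], F[1][0] * m[0] + F[1][1] * m[1]]
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)]
          for i in range(2)]
    P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] for j in range(2)]
         for i in range(2)]
    P[1][1] += Q11
    # update with the scalar observation y = position + noise
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    innov = y - m[0]
    m = [m[0] + K[0] * innov, m[1] + K[1] * innov]
    P = [[P[0][0] - K[0] * P[0][0], P[0][1] - K[0] * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    est.append(m[0])

def rmse(a, b):
    return math.sqrt(sum((u - t) ** 2 for u, t in zip(a, b)) / len(a))

raw_rmse, filt_rmse = rmse(obs, truth), rmse(est, truth)
```

Because the filter exploits the oscillator dynamics, the filtered estimate should track the underlying rhythm more closely than the raw noisy observations.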
Tactile Mapping and Localization from High-Resolution Tactile Imprints
This work studies the problem of shape reconstruction and object localization
using a vision-based tactile sensor, GelSlim. The main contributions are the
recovery of local shapes from contact, an approach to reconstruct the tactile
shape of objects from tactile imprints, and an accurate method for object
localization of previously reconstructed objects. The algorithms can be applied
to a large variety of 3D objects and provide accurate tactile feedback for
in-hand manipulation. Results show that by exploiting the dense tactile
information we can reconstruct the shape of objects with high accuracy and do
on-line object identification and localization, opening the door to reactive
manipulation guided by tactile sensing. We provide videos and supplemental
information on the project's website:
http://web.mit.edu/mcube/research/tactile_localization.html
Comment: ICRA 2019, 7 pages, 7 figures. Website:
http://web.mit.edu/mcube/research/tactile_localization.html
Video: https://youtu.be/uMkspjmDbq
Learning Latent Space Dynamics for Tactile Servoing
To achieve dexterous robotic manipulation, we need to endow our robot with
tactile feedback capability, i.e. the ability to drive action based on tactile
sensing. In this paper, we specifically address the challenge of tactile
servoing, i.e. given the current tactile sensing and a target/goal tactile
sensing --memorized from a successful task execution in the past-- what is the
action that will bring the current tactile sensing to move closer towards the
target tactile sensing at the next time step. We develop a data-driven approach
to acquire a dynamics model for tactile servoing by learning from
demonstration. Moreover, our method represents the tactile sensing information
as lying on a surface --or a 2D manifold-- and performs manifold learning,
making it applicable to any tactile skin geometry. We evaluate our method on a
contact point tracking task using a robot equipped with a tactile finger. A
video demonstrating our approach can be seen at https://youtu.be/0QK0-Vx7WkI
Comment: Accepted to be published at the International Conference on Robotics
and Automation (ICRA) 2019. The final version for publication at ICRA 2019 is
7 pages (i.e. 6 pages of technical content (including text, figures, tables,
acknowledgement, etc.) and 1 page of the Bibliography/References), while this
arXiv version is 8 pages (added Appendix and some extra details).
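A heavily reduced sketch of the servoing idea follows. It is illustrative only: the paper learns a latent-space dynamics model from tactile images, while here the latent state is collapsed to a single scalar z with a scalar action u. The model z' = a*z + b*u is fit by least squares on demonstration data, then inverted to pick the action that drives the current state toward the goal. All values are assumptions.

```python
import random

random.seed(1)
a_true, b_true = 0.9, 0.5            # unknown ground-truth dynamics (assumed)

# Collect a demonstration rollout with random actions
demo, z = [], 0.0
for _ in range(200):
    u = random.uniform(-1, 1)
    z_next = a_true * z + b_true * u + random.gauss(0, 0.01)
    demo.append((z, u, z_next))
    z = z_next

# Least squares for (a, b): normal equations of sum (z' - a*z - b*u)^2
Szz = sum(z * z for z, u, zn in demo)
Szu = sum(z * u for z, u, zn in demo)
Suu = sum(u * u for z, u, zn in demo)
Szn = sum(z * zn for z, u, zn in demo)
Sun = sum(u * zn for z, u, zn in demo)
det = Szz * Suu - Szu * Szu
a_hat = (Szn * Suu - Sun * Szu) / det
b_hat = (Sun * Szz - Szn * Szu) / det

def servo_action(z_cur, z_goal):
    """Action whose predicted next state equals the goal state."""
    return (z_goal - a_hat * z_cur) / b_hat

# Closed-loop servoing on the true dynamics (noise-free for clarity)
z_cur, z_goal = 0.0, 1.0
for _ in range(5):
    z_cur = a_true * z_cur + b_true * servo_action(z_cur, z_goal)
```

After a few control steps the state should sit near the goal, since the learned model closely matches the true dynamics.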
Novel Tactile-SIFT Descriptor for Object Shape Recognition
Using a tactile array sensor to recognize an object often requires multiple touches at different positions. This process is prone to move or rotate the object, which inevitably increases the difficulty of object recognition. To cope with unknown object movement, this paper proposes a new tactile-SIFT descriptor that extracts features from gradients in the tactile image to represent objects, allowing the features to be invariant to object translation and rotation. Tactile-SIFT segments a tactile image into overlapping subpatches, each of which is represented by a dn-dimensional gradient vector, similar to the classic SIFT descriptor. Tactile-SIFT descriptors obtained from multiple touches form a dictionary of k words, and the bag-of-words method is then used to identify objects. The proposed method has been validated by classifying 18 real objects with data from an off-the-shelf tactile sensor. The parameters of the tactile-SIFT descriptor, including the dimension size dn and the number of subpatches sp, are studied. It is found that optimal performance is obtained using an 8-D descriptor with three subpatches, taking both classification accuracy and time efficiency into consideration. By employing tactile-SIFT, a recognition rate of 91.33% has been achieved with a dictionary size of 50 clusters using only 15 touches.
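The pipeline above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: each overlapping subpatch of a tactile image is summarized by an 8-bin gradient-orientation histogram, and the resulting descriptors are assigned to dictionary words for a bag-of-words representation. The image size, patch size, and the toy two-word dictionary are assumptions (the paper builds its dictionary by clustering descriptors from many touches).

```python
import math

def grad_orientation_hist(patch, bins=8):
    """8-D descriptor: magnitude-weighted histogram of gradient orientations."""
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = patch[i][j + 1] - patch[i][j - 1]
            gy = patch[i + 1][j] - patch[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            h[min(int(ang / (2 * math.pi / bins)), bins - 1)] += mag
    s = sum(h) or 1.0
    return [v / s for v in h]   # normalized -> insensitive to pressure scale

def subpatches(img, size=6, stride=3):
    """Overlapping subpatches (stride < size gives the overlap)."""
    for r in range(0, len(img) - size + 1, stride):
        for c in range(0, len(img[0]) - size + 1, stride):
            yield [row[c:c + size] for row in img[r:r + size]]

def bag_of_words(descriptors, dictionary):
    """Histogram of nearest-word assignments over the descriptor set."""
    hist = [0] * len(dictionary)
    for d in descriptors:
        dists = [sum((a - b) ** 2 for a, b in zip(d, w)) for w in dictionary]
        hist[dists.index(min(dists))] += 1
    return hist

# Toy 12x12 tactile imprint: a ridge running down the middle (assumed data)
img = [[1.0 if 4 <= c <= 6 else 0.0 for c in range(12)] for r in range(12)]
descs = [grad_orientation_hist(p) for p in subpatches(img)]
dictionary = [[1.0 / 8] * 8, [1, 0, 0, 0, 1, 0, 0, 0]]  # toy 2-word dictionary
hist = bag_of_words(descs, dictionary)
```

The bag-of-words histogram, pooled over multiple touches, is what a classifier would consume; because each descriptor is a local orientation histogram, the representation tolerates object translation between touches.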
Improved GelSight Tactile Sensor for Measuring Geometry and Slip
A GelSight sensor uses an elastomeric slab covered with a reflective membrane
to measure tactile signals. It measures the 3D geometry and contact force
information with high spatial resolution, and has successfully supported many
challenging robot tasks. A previous sensor, based on a semi-specular membrane,
produces high resolution but limited geometric accuracy. In this paper, we
describe a new design of GelSight for robot grippers, using a Lambertian
membrane and a new illumination system, which gives greatly improved geometric
accuracy while retaining the compact size. We demonstrate its use in measuring
surface normals and reconstructing height maps using photometric stereo. We
also use it for the task of slip detection, using a combination of information
about relative motions on the membrane surface and the shear distortions. Using
a robotic arm and a set of 37 everyday objects with varied properties, we find
that the sensor can detect translational and rotational slip in general cases,
and can be used to improve the stability of the grasp.
Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems
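The geometry measurement described above rests on Lambertian photometric stereo, which can be sketched per pixel. This is illustrative, not the sensor's calibrated pipeline: under a Lambertian membrane, pixel intensity is I = rho * (L . n), so three images under three known light directions give a 3x3 linear system whose solution recovers the albedo rho and the surface normal n. The light directions below are assumptions.

```python
import math

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def normal_from_intensities(lights, intensities):
    """Recover the unit normal and albedo from I_k = rho * (L_k . n)."""
    g = solve3(lights, intensities)      # g = rho * n
    rho = math.sqrt(sum(v * v for v in g))
    return [v / rho for v in g], rho

# Three calibrated light directions (assumed values)
lights = [[0.6, 0.0, 0.8], [-0.3, 0.52, 0.8], [-0.3, -0.52, 0.8]]
true_n, rho = [0.0, 0.0, 1.0], 0.9       # a flat patch of the membrane
I = [rho * sum(l * n for l, n in zip(L, true_n)) for L in lights]
n_est, rho_est = normal_from_intensities(lights, I)
```

Running this per pixel yields a normal map, which can then be integrated into the height map the abstract mentions.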
Surface patch reconstruction by touching
This thesis studies the reconstruction of unknown curved surfaces in 3D through contour tracking. The implementation involves a 2-axis joystick sensor and a 4-DOF Adept robot. The joystick's force sensing is combined with the Adept's high positional accuracy to yield precise contact measurements. A surface patch in 3D can be rebuilt by tracking along three concurrent curves on the surface. These data curves lie in different planes and are acquired via planar contour tracking. The Darboux frame at the curve intersection is first estimated to reflect the local geometry. Then polynomial fitting is carried out in this frame. Minimization of the total (absolute) Gaussian curvature of the surface fit effectively prevents unnecessary folding otherwise expected to result from the use of touching data. Experiments have demonstrated high accuracy of reconstruction.
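The patch-fitting step can be sketched in a simplified form. This is illustrative, not the thesis implementation: contact samples expressed in a local frame at the contact point (standing in for the estimated Darboux frame) are fit with a quadratic height function z = a*x^2 + b*x*y + c*y^2 by least squares. The sample grid and the true coefficients are assumptions, and the curvature-minimizing model selection is omitted.

```python
def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic contact samples from a known quadratic patch (assumed data)
a, b, c = 0.5, -0.2, 0.3
pts = [(x / 5.0, y / 5.0) for x in range(-5, 6) for y in range(-5, 6)]
zs = [a * x * x + b * x * y + c * y * y for x, y in pts]

# Normal equations over the monomial basis (x^2, x*y, y^2)
basis = [(x * x, x * y, y * y) for x, y in pts]
AtA = [[sum(p[i] * p[j] for p in basis) for j in range(3)] for i in range(3)]
Atz = [sum(p[i] * z for p, z in zip(basis, zs)) for i in range(3)]
coef = solve3(AtA, Atz)      # recovers (a, b, c)
```

Fitting in a frame aligned with the surface at the contact point keeps the height function single-valued over the patch, which is what makes a polynomial fit of this form sensible.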
Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting
This paper proposes a single-shot approach for recognising clothing
categories from 2.5D features. We propose two visual features, BSP (B-Spline
Patch) and TSD (Topology Spatial Distances) for this task. The local BSP
features are encoded by LLC (Locality-constrained Linear Coding) and fused with
three different global features. Our visual feature is robust to deformable
shapes and our approach is able to recognise the category of unknown clothing
in unconstrained and random configurations. We integrated the category
recognition pipeline with a stereo vision system, clothing instance detection,
and dual-arm manipulators to achieve an autonomous sorting system. To verify
the performance of our proposed method, we build a high-resolution RGBD
clothing dataset of 50 clothing items of 5 categories sampled in random
configurations (a total of 2,100 clothing samples). Experimental results show
that our approach is able to reach 83.2% accuracy while classifying clothing
items which were previously unseen during training. This advances beyond the
previous state-of-the-art by 36.2%. Finally, we evaluate the proposed approach
in an autonomous robot sorting system, in which the robot recognises a clothing
item from an unconstrained pile, grasps it, and sorts it into a box according
to its category. Our proposed sorting system achieves reasonable sorting
success rates with single-shot perception.
Comment: 9 pages, accepted by IROS201