
    Bridging neuroscience and robotics: spiking neural networks in action

    Robots are becoming increasingly sophisticated in the execution of complex tasks. However, an area that requires development is the ability to act in dynamically changing environments. To advance this, research has turned towards understanding the human brain and applying these insights to robotics. The present study used electroencephalogram (EEG) data recorded from 54 human participants whilst they performed a two-choice task. A build-up of motor activity starting around 400 ms before response onset, also known as the lateralized readiness potential (LRP), was observed. This indicates that actions are not simply binary processes; rather, response preparation is gradual and occurs in a temporal window that can interact with the environment. In parallel, a robot arm executing a pick-and-place task was developed. The understanding from the EEG data and the robot arm were integrated into the final system, which included cell assemblies (CAs), a simulated spiking neural network, to inform the robot whether to place the object left or right. Results showed that the neural data from the robot simulation were largely consistent with the human data. This neurorobotics study provides an example of how to integrate human brain recordings with simulated neural networks in order to drive a robot.
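    The gradual build-up of response preparation described above can be illustrated with a toy accumulator race. This is not the paper's cell-assembly model; it is a minimal sketch, assuming a drift-diffusion-style pair of evidence accumulators (all parameter values below are illustrative).

```python
import random

def race_to_threshold(p_left=0.6, threshold=10.0, drive=0.5,
                      noise=0.2, max_steps=10_000, seed=0):
    """Two accumulators race to threshold; evidence favours one side.

    Returns (choice, step), where `step` is the time of commitment.
    Activity rises gradually, mirroring an LRP-like build-up rather
    than an instantaneous binary switch.
    """
    rng = random.Random(seed)
    v_left = v_right = 0.0
    for step in range(1, max_steps + 1):
        # Evidence arrives stochastically; the favoured side is driven more often.
        if rng.random() < p_left:
            v_left += drive
        else:
            v_right += drive
        v_left += rng.gauss(0.0, noise)
        v_right += rng.gauss(0.0, noise)
        if v_left >= threshold:
            return "left", step
        if v_right >= threshold:
            return "right", step
    return "undecided", max_steps

choice, latency = race_to_threshold()
print(choice, latency)
```

    Because commitment takes many steps, the system remains open to environmental input during the build-up window, which is the property the study exploits.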

    Functional mimicry of Ruffini receptors with fibre Bragg gratings and deep neural networks enables a bio-inspired large-area tactile-sensitive skin

    Collaborative robots are expected to physically interact with humans in daily living and the workplace, including industrial and healthcare settings. A key related enabling technology is tactile sensing, which currently requires addressing the outstanding scientific challenge of simultaneously detecting contact location and intensity by means of soft conformable artificial skins that adapt over large areas to the complex curved geometries of robot embodiments. In this work, the development of a large-area sensitive soft skin with a curved geometry is presented, allowing for robot total-body coverage through modular patches. The biomimetic skin consists of a soft polymeric matrix, resembling a human forearm, embedded with photonic fibre Bragg grating transducers, which partially mimic Ruffini mechanoreceptor functionality with diffuse, overlapping receptive fields. A convolutional neural network deep learning algorithm and a multigrid neuron integration process were implemented to decode the fibre Bragg grating sensor outputs for inference of contact force magnitude and localization through the skin surface. Median errors of 35 mN (interquartile range 56 mN) for force and 3.2 mm (interquartile range 2.3 mm) for localization were achieved. Demonstrations with an anthropomorphic arm pave the way towards artificial-intelligence-based integrated skins enabling safe human-robot cooperation via machine intelligence.
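    The overlapping-receptive-field principle can be sketched numerically. The paper itself decodes with a CNN; the fragment below is a simplified stand-in that localizes a contact by weighted centroid over broad Gaussian receptive fields, with all geometry and parameters (`sensor_xy`, `sigma`) being illustrative assumptions.

```python
import numpy as np

# Hypothetical 5x5 grid of FBG-like transducers on a flat patch (mm).
sensor_xy = np.array([[x, y] for x in range(0, 50, 10) for y in range(0, 50, 10)],
                     dtype=float)

def skin_response(contact_xy, force, sigma=12.0):
    """Population response to one contact: each transducer responds through
    a broad Gaussian receptive field, so fields overlap across the patch."""
    d2 = np.sum((sensor_xy - contact_xy) ** 2, axis=1)
    return force * np.exp(-d2 / (2.0 * sigma ** 2))

def decode(response, sigma=12.0):
    """Infer contact location (weighted centroid of the population response)
    and force (least-squares inversion of the forward model at that point)."""
    w = response / response.sum()
    loc = w @ sensor_xy
    d2 = np.sum((sensor_xy - loc) ** 2, axis=1)
    gain = np.exp(-d2 / (2.0 * sigma ** 2))
    force = response @ gain / (gain @ gain)
    return loc, force
```

    The centroid decoder is biased near the patch edges, one reason a learned decoder such as the paper's CNN is attractive for curved, large-area geometries.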

    Hybrid Architectures for Object Pose and Velocity Tracking at the Intersection of Kalman Filtering and Machine Learning

    The study of object perception algorithms is fundamental for the development of robotic platforms capable of planning and executing actions involving objects with high precision, reliability and safety. Indeed, this topic has been vastly explored in both the robotic and computer vision research communities using diverse techniques, ranging from classical Bayesian filtering to more modern Machine Learning techniques, and complementary sensing modalities such as vision and touch. Recently, the ever-growing availability of tools for synthetic data generation has substantially increased the adoption of Deep Learning for both 2D tasks, such as object detection and segmentation, and 6D tasks, such as object pose estimation and tracking. The proposed methods exhibit interesting performance on computer vision benchmarks and robotic tasks, e.g. using object pose estimation for grasp planning purposes. Nonetheless, they generally do not consider useful information connected with the physics of the object motion and the peculiarities and requirements of robotic systems. Examples are the necessity to provide well-behaved output signals for robot motion control, the possibility to integrate modelling priors on the motion of the object, and algorithmic priors. These help exploit the temporal correlation of the object poses, handle the pose uncertainties and mitigate the effect of outliers. Most of these concepts are considered in classical approaches, e.g. from the Bayesian and Kalman filtering literature, which however are not as powerful as Deep Learning in handling visual data. As a consequence, the development of hybrid architectures that combine the best features from both worlds is particularly appealing in a robotic setting. Motivated by these considerations, in this Thesis, I aimed to devise hybrid architectures for object perception, focusing on the task of object pose and velocity tracking. The proposed architectures use Kalman filtering supported by state-of-the-art Deep Neural Networks to track the 6D pose and velocity of objects from images. The devised solutions exhibit state-of-the-art performance, increased modularity and do not require training to implement the actual tracking behaviors. Furthermore, they can track even fast object motions despite the possible non-negligible inference times of the adopted neural networks. Also, by relying on data-driven Kalman filtering, I explored a paradigm that enables tracking the state of systems that cannot be easily modeled analytically. Specifically, I used this approach to learn the measurement model of soft 3D tactile sensors and address the problem of tracking the sliding motion of hand-held objects.
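    The hybrid idea can be illustrated in its simplest form: a constant-velocity Kalman filter smoothing noisy single-axis position measurements, which stand in for the per-frame output of a pose-estimation network. This is a minimal sketch under simplifying assumptions; the noise levels and motion model below are illustrative, not the thesis's tuning.

```python
import numpy as np

def kalman_cv(measurements, dt=1.0, q=1e-3, r=0.25):
    """Track [position, velocity] from noisy position-only measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # process noise (white accel.)
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                        # measurement noise variance
    x = np.zeros((2, 1))
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y                          # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x.ravel().copy())
    return np.array(out)                       # rows: [position, velocity]
```

    Note the two properties the thesis emphasizes: the filter output is a well-behaved signal suitable for motion control, and a velocity estimate is obtained even though the (network-like) measurement provides only pose.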

    Development of a Sensitivity-Adjustable Three-Axis Multimodal Skin Sensor Module

    Waseda University degree number: Shin 8538. Waseda University

    Automatic Fracture Characterization Using Tactile and Proximity Optical Sensing

    This paper demonstrates how tactile and proximity sensing can be used to perform automatic detection of mechanical fractures (surface cracks). For this purpose, a custom-designed integrated tactile and proximity sensor has been implemented. With the help of fiber optics, the sensor measures the deformation of its body, when interacting with the physical environment, and the distance to the environment's objects. This sensor slides across different surfaces and records data which are then analyzed to detect and classify fractures and other mechanical features. The proposed method implements machine learning techniques (handcrafted features and state-of-the-art classification algorithms). An average crack detection accuracy of ~94% and width classification accuracy of ~80% is achieved. Kruskal-Wallis results (p < 0.001) indicate statistically significant differences among results obtained when analysing only integrated deformation measurements, only proximity measurements, and both deformation and proximity data. A real-time classification method has been implemented for online classification of explored surfaces. In contrast to previous techniques, which mainly rely on visual modality, the proposed approach based on optical fibers might be more suitable for operation in extreme environments (such as nuclear facilities) where radiation may damage electronic components of commonly employed sensing devices, such as standard force sensors based on strain gauges and video cameras.
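    The handcrafted-features-plus-classifier pipeline can be sketched as follows. This is not the paper's implementation: the synthetic crack signature (a brief dip in a sliding deformation trace) and the nearest-centroid classifier are assumptions chosen for a self-contained example.

```python
import numpy as np

def features(trace):
    """Simple handcrafted features of one deformation trace."""
    return np.array([
        trace.std(),                     # overall roughness
        trace.max() - trace.min(),       # peak-to-peak excursion (a crack dips)
        np.abs(np.diff(trace)).max(),    # sharpest sample-to-sample step
    ])

class NearestCentroid:
    """Minimal nearest-centroid classifier over feature vectors."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

    With richer feature sets and stronger classifiers, the same structure extends naturally to the paper's multi-class width classification.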

    An Embedded, Multi-Modal Sensor System for Scalable Robotic and Prosthetic Hand Fingers

    Grasping and manipulation with anthropomorphic robotic and prosthetic hands presents a scientific challenge regarding mechanical design, sensor system, and control. Apart from the mechanical design of such hands, embedding sensors needed for closed-loop control of grasping tasks remains a hard problem due to limited space and required high level of integration of different components. In this paper we present a scalable design model of artificial fingers, which combines mechanical design and embedded electronics with a sophisticated multi-modal sensor system consisting of sensors for sensing normal and shear force, distance, acceleration, temperature, and joint angles. The design is fully parametric, allowing automated scaling of the fingers to arbitrary dimensions in the human hand spectrum. To this end, the electronic parts are composed of interchangeable modules that facilitate the mechanical scaling of the fingers and are fully enclosed by the mechanical parts of the finger. The resulting design model allows deriving freely scalable and multimodally sensorised fingers for robotic and prosthetic hands. Four physical demonstrators are assembled and tested to evaluate the approach.
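    The fully parametric scaling idea can be sketched as follows. The names and reference proportions below are hypothetical illustrations, not the paper's CAD model: segment dimensions are expressed as fractions of one overall finger length, so a single parameter regenerates a consistently proportioned design.

```python
from dataclasses import dataclass

# Assumed phalanx proportions (fractions of overall finger length).
REF_PROPORTIONS = {"proximal": 0.45, "middle": 0.30, "distal": 0.25}

@dataclass
class FingerDesign:
    length_mm: float            # overall finger length to scale to
    width_ratio: float = 0.18   # segment width as a fraction of length

    def segments(self):
        """Return (name, length_mm, width_mm) for each phalanx,
        derived entirely from the single length parameter."""
        width = self.width_ratio * self.length_mm
        return [(name, frac * self.length_mm, width)
                for name, frac in REF_PROPORTIONS.items()]

for name, length, width in FingerDesign(length_mm=90.0).segments():
    print(f"{name}: {length:.1f} mm long, {width:.1f} mm wide")
```

    In the paper's approach, interchangeable electronic modules are sized to fit the smallest segments, so a design regenerated at any length in the human-hand range can still enclose the same sensor electronics.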
