3 research outputs found

    Autonomous decision making in a bioinspired adaptive robotic anchoring module

    This paper proposes a bioinspired adaptive anchoring module that can be integrated into robots to enhance their mobility and manipulation abilities. The design of the module is inspired by the structure of the mouth of the Chilean lamprey (Mordacia lapicida), which combines suction with several arrays of teeth of different sizes around the mouth opening to catch prey and anchor onto them. The module can deploy a suitable mode of attachment, via teeth or vacuum suction, to different contact surfaces in response to the textural properties of those surfaces. To decide on the suitable mode of attachment, an original dataset of 500 images of outdoor and indoor surfaces was used to train a visual surface examination model with YOLOv3, a near-real-time object detection algorithm. The mean average precision of the trained model was 91%. We have conducted a series of pull-out tests to characterize the module's attachment strength. The results indicate that the anchoring module can withstand an applied detachment force of up to 70 N and 30 N when attached using teeth and vacuum suction, respectively.
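
    As a rough illustration of the decision step described in this abstract, the sketch below assumes the trained YOLOv3 surface model has been exported in Darknet format and is loaded through OpenCV's DNN module; the file names, the two surface classes, and the rule mapping surface type to teeth or suction are illustrative assumptions, not the authors' released code.

import cv2
import numpy as np

# Illustrative class labels for the surface examination model (assumption).
CLASSES = ["rough_surface", "smooth_surface"]

# Assumed file names for a YOLOv3 network exported in Darknet cfg/weights format.
net = cv2.dnn.readNetFromDarknet("surface_yolov3.cfg", "surface_yolov3.weights")

def classify_surface(image_bgr, conf_threshold=0.5):
    """Run YOLOv3 on one image and return the most confident surface class."""
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    best_class, best_conf = None, conf_threshold
    for layer_out in outputs:
        for detection in layer_out:      # [cx, cy, w, h, objectness, class scores...]
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > best_conf:
                best_class, best_conf = CLASSES[class_id], conf
    return best_class

def choose_attachment(image_bgr):
    """Map the detected surface type to an attachment mode (assumed rule)."""
    surface = classify_surface(image_bgr)
    if surface == "rough_surface":
        return "teeth"            # mechanical interlocking on rough textures
    if surface == "smooth_surface":
        return "vacuum_suction"   # suction seals well on smooth surfaces
    return "none"                 # no confident detection: do not anchor

frame = cv2.imread("contact_surface.jpg")   # example input image (assumed path)
print(choose_attachment(frame))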

    Faster R-CNN-based Decision Making in a Novel Adaptive Dual-Mode Robotic Anchoring System

    This paper proposes a novel adaptive anchoring module that can be integrated into robots to enhance their mobility and manipulation abilities. The module can deploy a suitable mode of attachment, via spines or vacuum suction, to different contact surfaces in response to the textural properties of the surfaces. To decide on the suitable mode of attachment, an original dataset of 100 images of outdoor and indoor surfaces was augmented using a WGAN-GP, which generated 200 additional synthetic images. The augmented dataset was then used to train a visual surface examination model using Faster R-CNN. The addition of synthetic images increased the mean average precision of the Faster R-CNN model from 81.6% to 93.9%. We have also conducted a series of load tests to characterize the module's attachment strength. The results indicate that the anchoring module can withstand an applied detachment force of around 22 N and 20 N when attached using spines and vacuum suction on their respective ideal surfaces, respectively.
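
    The distinctive ingredient of the augmentation step described above is the WGAN-GP gradient penalty. The PyTorch sketch below shows that standard penalty term only; the critic network and the real/synthetic image batches are assumed placeholders, and this is not the authors' training code.

import torch

def gradient_penalty(critic, real_images, fake_images, device="cpu"):
    """Standard WGAN-GP penalty: push the critic's gradient norm towards 1
    on random interpolations between real and synthetic images."""
    batch_size = real_images.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = eps * real_images + (1 - eps) * fake_images
    interpolated.requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(outputs=scores, inputs=interpolated,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.reshape(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic loss in WGAN-GP (lambda = 10 is the value from the original WGAN-GP paper):
#   loss_critic = critic(fake).mean() - critic(real).mean()
#                 + 10.0 * gradient_penalty(critic, real, fake)
# The augmented dataset can then be used to fine-tune a detector such as
# torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=...).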

    Vision-Based Soft Mobile Robot Inspired by Silkworm Body and Movement Behavior

    Designing an inexpensive, low-noise mobile robot that is safe for people and has an efficient vision system is a challenge. This paper proposes a soft mobile robot inspired by the silkworm's body structure and movement behavior. Two identical pneumatic artificial muscles (PAMs), sewn together longitudinally, form the body of the robot. The robot moves forward, left, and right in steps depending on the relative contraction ratio of the actuators; the connection between the two artificial muscles provides steering when different air pressures are applied to each PAM. A camera (eye) integrated into the soft robot allows it to control its motion and direction, so that the silkworm soft robot detects a specific object and tracks it continuously. The vision system performs automatic tracking using a deep-learning object detector fed by a live IR camera in real time. The object detection platform YOLOv3 is used to address the challenge of detecting small, fast-moving objects such as tennis balls, and the model is trained on a dataset of tennis-ball images. The work was first simulated in Google Colab and then tested in real time on an embedded device with a fast GPU, the Jetson Nano development kit. The presented object-follower robot is cheap, tracks quickly, and is friendly to the environment. The system reaches a 99% accuracy rate during training and testing, and validation results are obtained and recorded to demonstrate the effectiveness of this novel silkworm soft robot. The research contribution is the design and implementation of a soft mobile robot with an effective vision system.
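
    As a minimal sketch of the tracking behaviour described above, the function below maps the horizontal position of a detected ball to a steering command for the two-PAM body; the bounding box is assumed to come from the YOLOv3 detector, and the dead-band width and the command-to-actuation mapping are illustrative assumptions rather than the paper's controller.

# Illustrative mapping from a detected ball's horizontal position in the camera
# frame to a steering command for the two-PAM silkworm robot (assumed rule).
DEAD_BAND = 0.15  # fraction of the frame width treated as "centred" (assumption)

def steer_from_detection(frame_width, box):
    """box = (x, y, w, h) in pixels; returns 'left', 'right', or 'forward'."""
    x, _, w, _ = box
    centre_x = x + w / 2.0
    offset = (centre_x - frame_width / 2.0) / frame_width  # range -0.5 .. 0.5
    if offset < -DEAD_BAND:
        return "left"     # contract the left PAM more than the right
    if offset > DEAD_BAND:
        return "right"    # contract the right PAM more than the left
    return "forward"      # equal contraction of both PAMs

# Example: a 640-pixel-wide frame with the ball detected near the right edge.
print(steer_from_detection(640, (500, 200, 60, 60)))  # -> 'right'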