Alpha-N: Shortest Path Finder Automated Delivery Robot with Obstacle Detection and Avoiding System
A self-powered, wheel-driven Automated Delivery Robot (ADR), Alpha-N, is presented in
this paper. The ADR is capable of navigating autonomously by detecting and
avoiding objects or obstacles in its path. It uses a vector map of the path and
calculates the shortest path by the Grid Count Method (GCM) of Dijkstra's algorithm.
For landmark determination, Radio Frequency Identification (RFID) tags are placed
along the path for identification and verification of the source and destination,
and for recalibration of the current position. In addition, an
Object Detection Module (ODM) built on Faster R-CNN with the VGGNet16 architecture
supports path planning by detecting and recognizing obstacles. The Path
Planning System (PPS) combines the output of the GCM, the RFID Reading System (RRS),
and the binary results of the ODM. The PPS requires a minimum speed of 200
RPM and a duration of 75 seconds for the robot to successfully relocate its position
by reading an RFID tag. In the result analysis phase, the ODM exhibits an
accuracy of 83.75 percent, the RRS shows 92.3 percent accuracy, and the PPS
maintains an accuracy of 85.3 percent. Stacking these three modules, the ADR is
built, tested, and validated, showing significant improvement in
performance and usability compared with other service robots.
Comment: 12 pages, 7 figures. To appear in the proceedings of the 12th Asian
Conference on Intelligent Information and Database Systems, 23-26 March 2020,
Phuket, Thailand.
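The abstract does not spell out the Grid Count Method, but the underlying idea of running Dijkstra's algorithm over an occupancy grid of the path can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 4-connected grid with uniform step cost and binary obstacle cells.

```python
import heapq

def grid_shortest_path(grid, start, goal):
    """Dijkstra over a 4-connected occupancy grid.

    grid: list of lists, 0 = free cell, 1 = obstacle.
    start, goal: (row, col) tuples.
    Returns the list of cells on a shortest path, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # uniform cost: each grid step counts as 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None
```

With uniform step costs this reduces to breadth-first search; a real grid-count variant could weight cells by clearance or by the ODM's obstacle output.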
Multi-Channel Convolutional Neural Network Based 3D Object Detection for Indoor Robot Environmental Perception
Environmental perception is a vital feature for service robots working in an indoor environment for a long time. General 3D reconstruction is a low-level geometric description that cannot convey semantics. In contrast, higher-level perception similar to humans' requires more abstract concepts, such as objects and scenes. Moreover, 2D object detection based on images fails to provide the actual position and size of an object, which is quite important for a robot's operation. In this paper, we focus on 3D object detection, regressing an object's category, 3D size, and spatial position through a convolutional neural network (CNN). We propose a multi-channel CNN for 3D object detection that fuses three input channels: RGB, depth, and bird's eye view (BEV) images. We also propose a method to generate 3D proposals based on 2D ones in the RGB image and a semantic prior. Training and testing are conducted on the modified NYU V2 dataset and the SUN RGB-D dataset to verify the effectiveness of the algorithm. We also carry out experiments on an actual service robot, using the proposed 3D object detection method to enhance the robot's environmental perception.
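The abstract mentions lifting 2D proposals to 3D using depth and a semantic prior. One common way to do this, sketched below as an illustration rather than the paper's actual method, is to back-project the 2D box centre through a pinhole camera model at a depth sampled from the depth channel, and take the object's 3D extent from a class-conditioned size prior. The function name, the `size_prior` argument, and the intrinsics layout are all assumptions for this sketch.

```python
def box2d_to_3d_proposal(box2d, depth_m, size_prior, intrinsics):
    """Lift a 2D detection to a coarse 3D proposal via pinhole back-projection.

    box2d: (u1, v1, u2, v2) pixel corners of the 2D box.
    depth_m: representative depth (metres) sampled inside the box,
             e.g. the median of the depth channel within it.
    size_prior: (w, h, l) class-conditioned size prior in metres.
    intrinsics: (fx, fy, cx, cy) camera intrinsics in pixels.
    Returns (center_xyz, size_whl) in the camera frame.
    """
    u1, v1, u2, v2 = box2d
    fx, fy, cx, cy = intrinsics
    u = (u1 + u2) / 2.0  # 2D box centre in pixels
    v = (v1 + v2) / 2.0
    # Back-project the centre ray to the sampled depth.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m), size_prior
```

A detection network would then refine this coarse proposal's centre and size from the fused RGB/depth/BEV features.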