    Improvement of Auto-Tracking Mobile Robot based on HSI Color Model

    An auto-tracking mobile robot is a device that is able to detect and track a target. For an auto-tracking device, the most crucial part of the system is the identification and tracking of moving targets. To improve the accuracy of object identification under different illumination and background conditions, the image-processing algorithm is built on the HSI color model. In this project, an HSI-based color enhancement algorithm was used for object identification, because the HSI parameters are more stable under varying light and background conditions and were therefore selected as the main parameters of the system. A Pixy CMUcam5 is used as the vision sensor, while an Arduino Uno serves as the main microcontroller that handles all inputs and outputs of the device. Two servo motors control the pan-tilt movement of the vision sensor. Experimental results demonstrate that applying the HSI color-based filtering algorithm to visual tracking improves the accuracy and stability of tracking under varying brightness, even in low-light environments. The algorithm also prevents tracking loss when the object's color appears in the background.
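    A minimal sketch of the filtering step in Python with NumPy: convert each frame from RGB to HSI, then keep only pixels whose hue and saturation fall inside the target's color band. The threshold values below are hypothetical and would be tuned to the tracked object's color; this is an illustration, not the authors' implementation.

```python
# Sketch of HSI color filtering for target segmentation.
# H_MIN / H_MAX / S_MIN are assumed values, to be tuned per target.
import numpy as np

H_MIN, H_MAX = 100.0, 140.0   # target hue band in degrees (assumed)
S_MIN = 0.3                   # reject near-gray pixels (assumed)

def bgr_to_hsi(frame):
    """Convert a BGR uint8 frame to H (degrees), S, and I channels."""
    f = frame.astype(np.float64) / 255.0
    b, g, r = f[..., 0], f[..., 1], f[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - theta, theta)
    return h, s, i

def target_mask(frame):
    """Binary mask of pixels inside the target's hue/saturation band."""
    h, s, _ = bgr_to_hsi(frame)
    return ((h >= H_MIN) & (h <= H_MAX) & (s >= S_MIN)).astype(np.uint8) * 255
```

    Because hue and saturation vary far less than intensity when lighting changes, thresholding on H and S rather than on raw RGB is what gives the method its robustness to brightness variation.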

    Efficient and secure real-time mobile robots cooperation using visual servoing

    This paper deals with the challenging problem of navigation in formation for a fleet of mobile robots. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the multiple robots. To construct the system, we develop the interaction matrix, which relates image moments to robot velocities, and we estimate the depth between each robot and the targeted object. This is done without any communication between the robots, which eliminates the influence of each robot's errors on the whole fleet. For successful visual servoing, we propose a mechanism to execute the robots' navigation safely, exploiting a robot accident reporting system built on a Raspberry Pi 3. In case of a problem, a robot accident detection and reporting testbed sends an accident notification in the form of a specific message. Experimental results obtained with nonholonomic mobile robots carrying on-board real-time cameras show the effectiveness of the proposed method.
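    The control law behind this kind of visual servoing can be illustrated with the classic image-based scheme: stack an interaction matrix L relating feature motion to the camera twist, then command v = -lambda * L^+ (s - s*). The sketch below uses the standard point-feature interaction matrix as a stand-in for the paper's moment-based one, with assumed gain and depth values.

```python
# Sketch of one image-based visual servoing (IBVS) step. The point-feature
# interaction matrix stands in for the paper's moment-based matrix.
import numpy as np

LAMBDA = 0.5  # control gain (assumed)

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y) at
    depth Z, mapping the camera twist (vx, vy, vz, wx, wy, wz) to the
    point's image velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(points, desired, depths):
    """Camera twist driving observed features toward the desired ones:
    v = -lambda * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -LAMBDA * np.linalg.pinv(L) @ error
```

    For a nonholonomic robot only two twist components are actuated, so the computed camera velocity would be projected onto the robot's linear and angular velocities before being applied.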

    Robot Vision Pattern Recognition of the Eye and Nose Using the Local Binary Pattern Histogram Method

    The local binary pattern histogram (LBPH) algorithm is a computer vision technique that can recognize a person's face based on information stored in a database (trained model). In this research, the LBPH approach is applied to face recognition combined with an embedded platform for the actuator system. The application is incorporated into the robot's control and processing center, which consists of a Raspberry Pi and an Arduino board. The robot is equipped with a program that can identify and recognize a human face based on information from the person's eyes and nose. In facial feature identification tests, the eyes were recognized 131 times (87.33%) and the nose 133 times (88.67%) out of 150 image data samples. From the test results, an accuracy of 88%, a precision of 95.23%, a recall of 30%, a specificity of 99%, and an F1-score of 57.5% were obtained.
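    OpenCV's contrib package ships an LBPH recognizer, so the recognition step described above can be sketched in a few lines. The file names, labels, and parameter values below are illustrative, not the authors' setup.

```python
# Sketch of LBPH training and prediction with OpenCV's cv2.face module
# (requires opencv-contrib-python). Paths and labels are illustrative.
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1, neighbors=8, grid_x=8, grid_y=8)  # default LBPH parameters

# Train on grayscale face crops; each label identifies one enrolled person.
faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
         for p in ("person0_a.png", "person0_b.png", "person1_a.png")]
labels = np.array([0, 0, 1])
recognizer.train(faces, labels)

# predict() returns the nearest label and a distance; smaller distances
# mean a closer histogram match, and a threshold can reject unknown faces.
probe = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(probe)
print(f"label={label}, distance={distance:.1f}")
```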

    A neural network-based exploratory learning and motor planning system for co-robots

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system, we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
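    Exploratory learning of this kind can be illustrated by motor babbling: issue random commands, record the sensory change each one produces, then invert the mapping. The nearest-neighbor lookup below is a simple stand-in for the paper's neural network, and the robot interface (read_sensors / send_command, exchanging NumPy vectors) is hypothetical.

```python
# Sketch of motor babbling plus a nearest-neighbor inverse model.
# `robot.read_sensors()` and `robot.send_command(u)` are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def babble(robot, n_samples=500, n_motors=4):
    """Collect (command, sensory change) pairs by random exploration."""
    commands, outcomes = [], []
    for _ in range(n_samples):
        u = rng.uniform(-1.0, 1.0, size=n_motors)  # random motor command
        before = robot.read_sensors()
        robot.send_command(u)
        after = robot.read_sensors()
        commands.append(u)
        outcomes.append(after - before)
    return np.array(commands), np.array(outcomes)

def inverse_model(commands, outcomes, desired_change):
    """Return the babbled command whose recorded outcome is closest to the
    desired sensory change (a stand-in for a learned inverse network)."""
    idx = np.argmin(np.linalg.norm(outcomes - desired_change, axis=1))
    return commands[idx]
```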