5 research outputs found

    Active Collision Avoidance for Human-Robot Interaction With UKF, Expert System, and Artificial Potential Field Method

    With the development of Industry 4.0, cooperation between robots and people is increasing; human-machine safety is therefore the first problem that must be solved. In this paper, we propose a novel active collision avoidance methodology to safeguard a human who enters the robot's workspace. With conventional obstacle avoidance approaches, it is not easy for robots and humans to work safely in a shared, unstructured environment because the robots lack intelligence. In this system, a Kinect monitors the robot's workspace and detects anyone who enters it. Once someone enters the workspace, the Kinect detects the human and computes the human's skeleton in real time. Measurement errors grow over time owing to tracking error and device noise, so we use an Unscented Kalman Filter (UKF) to estimate the positions of the skeleton points. We employ an expert system to estimate the human's behavior, and the robot then avoids the human by taking different measures, such as stopping, bypassing the human, or moving away. Finally, when the robot must bypass the human in real time, we adopt the artificial potential field method to generate a new path for the robot. With this active collision avoidance, the system ensures that the robot cannot touch the human. The advantage of the proposed system is that it first detects the human, then analyzes the human's motion, and finally safeguards the human. We experimentally tested the active collision avoidance system in real-world applications; the results indicate that it can effectively ensure human safety.
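    The abstract does not give implementation details, but the artificial potential field step it describes can be illustrated with a minimal 2-D sketch: the goal exerts an attractive force, each tracked obstacle (e.g. a UKF-filtered skeleton point projected to the plane) exerts a repulsive force inside an influence radius, and the robot takes a fixed-length step along the combined force. All gains, radii, and the 2-D simplification below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def apf_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.6, step=0.05):
    """One step of an artificial potential field planner.

    robot, goal: (2,) positions; obstacles: list of (2,) positions.
    k_att, k_rep: attractive/repulsive gains; rho0: influence radius (m).
    """
    # Attractive force pulls the robot straight toward the goal.
    force = k_att * (goal - robot)
    for obs in obstacles:
        diff = robot - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:
            # Repulsive force (gradient of 0.5*k_rep*(1/rho - 1/rho0)^2)
            # grows sharply as the robot nears the obstacle.
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    # Take a fixed-length step along the combined force direction.
    return robot + step * force / max(np.linalg.norm(force), 1e-6)

# Tiny usage example: replan around a single obstacle between start and goal.
pos, goal = np.array([0.0, 0.0]), np.array([2.0, 0.0])
human = [np.array([1.0, 0.05])]
for _ in range(200):
    pos = apf_step(pos, goal, human)
print(pos)  # ends near the goal after skirting around the human
```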

    Gesture Controlled Collaborative Robot Arm and Lab Kit

    In this paper, a mechatronics system was designed and implemented to cover the subjects of artificial intelligence, control algorithms, robot servo motor control, and human-machine interface (HMI) design. The goal was to create an inexpensive, multi-functional robotics lab kit to promote students' interest in STEM fields, including computing and mechatronics. Industrial robotic systems have become vastly popular in manufacturing and other industries, and the demand for individuals with related skills is rapidly increasing. Robots can complete jobs that are dangerous, dull, or dirty for humans to perform. Recently, more and more collaborative robotic systems have been developed and deployed in industry. Collaborative robots utilize artificial intelligence to become aware of, and capable of interacting with, a human operator in increasingly natural ways. This work created a computer vision-based collaborative robotic system that can be controlled via several different methods, including a touch-screen HMI, hand gestures, and hard coding via the microcontroller integrated development environment (IDE). The flexibility of the framework resulted in an educational lab kit with varying levels of difficulty across several topics, such as C and Python programming, machine learning, HMI design, and robotics. The hardware used in this project includes a Raspberry Pi 4, an Arduino Due, a Braccio Robotics Kit, a Raspberry Pi 4-compatible vision module, and a 5-inch touchscreen display. We anticipate this educational lab kit will improve the effectiveness of student learning in the field of mechatronics.
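    The paper does not publish its code; the sketch below only illustrates the communication pattern such a kit could use, with the Raspberry Pi translating a recognized hand gesture into joint angles and sending them to the Arduino Due over a serial link. The port name, baud rate, line-based message format, and the gesture-to-pose table are all hypothetical placeholders, not the authors' protocol.

```python
import serial  # pyserial

# Hypothetical mapping from a recognized hand gesture to six Braccio joint
# angles (base, shoulder, elbow, wrist tilt, wrist rotate, gripper), in degrees.
GESTURE_POSES = {
    "open_hand":  [90, 90, 90, 90, 90, 10],   # relaxed pose, gripper open
    "fist":       [90, 90, 90, 90, 90, 73],   # same pose, gripper closed
    "point_left": [45, 90, 90, 90, 90, 73],   # rotate base toward the left
}

def send_pose(port: serial.Serial, gesture: str) -> None:
    """Encode the pose as one comma-separated line for the Arduino to parse."""
    angles = GESTURE_POSES[gesture]
    line = ",".join(str(a) for a in angles) + "\n"
    port.write(line.encode("ascii"))

if __name__ == "__main__":
    # /dev/ttyACM0 is a typical Arduino Due enumeration on Raspberry Pi OS.
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as arduino:
        send_pose(arduino, "point_left")
```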

    Vision-based human presence detection pipeline by means of transfer learning approach

    Over the last century, industrial robots have gained immense popularity in replacing human workers in highly repetitive tasks. The innovation of cooperative robots, known as cobots, was a turning point for industry, and sharing space between cobots and human workers is considered the most effective way of utilizing them. Because the safety of human workers is always the top priority in industrial cobot applications, much time and effort has been invested in improving the safety of cobot deployments. Yet deep learning technologies, especially the transfer learning approach, are rarely applied to human detection in this field of research, even though visual perception has been shown to be a unique sense that still cannot be replaced by others. Hence, this thesis aimed to leverage the transfer learning approach to fine-tune deep learning-based object detection models for the human detection task. In relation to this main goal, the objectives of the study were as follows: establish an image dataset of the cobot environment from the surveillance cameras at TT Vision Holdings Berhad, formulate deep learning-based object detection models using the transfer learning approach, and evaluate the performance of various transfer learning models in detecting the presence of human workers with relevant evaluation metrics. The image dataset was acquired from the surveillance system of TT Vision Holdings Berhad and annotated accordingly. Variations in the dataset were considered thoroughly to ensure the models could be trained well on the distinct features of the human workers. The TensorFlow Object Detection API was used to fine-tune one-stage object detectors. Among all transfer learning strategies, fine-tuning was chosen because it suits the study well after interpretation of the size-similarity matrix. A total of four EfficientDet models, two SSD models, three RetinaNet models, and four CenterNet models were deployed in the present work. As a result, the SSD-MobileNetV2-FPN model achieved 81.1% AP at 32.82 FPS and is proposed as the fine-tuned model with the best balance between accuracy and speed. Where accuracy or inference speed alone is the consideration, the SSD-MobileNetV1-FPN model attained 87.2% AP at 28.28 FPS and the CenterNet-ResNet50-V1-FPN model achieved 78.0% AP at 46.52 FPS, making them the models with the best accuracy and the best inference speed, respectively. As a whole, it can be deduced that transfer learning models handle the human detection task well when fine-tuned from COCO-pretrained weights.
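    As a rough illustration of the fine-tuning workflow the abstract describes, the TensorFlow Object Detection API lets you retarget a COCO-pretrained pipeline config at a custom single-class (person) dataset before launching training. This is a minimal sketch, not the thesis code: all file paths, the batch size, and the choice of checkpoint are placeholder assumptions; only the `config_util` helpers and pipeline fields shown are part of the API.

```python
from object_detection.utils import config_util

# Load a COCO-pretrained SSD-MobileNetV2-FPN pipeline config (placeholder path).
configs = config_util.get_configs_from_pipeline_file(
    "ssd_mobilenet_v2_fpn/pipeline.config")

# Retarget the model to a single "person" class, starting from the COCO checkpoint.
configs["model"].ssd.num_classes = 1
configs["train_config"].fine_tune_checkpoint = "ssd_mobilenet_v2_fpn/checkpoint/ckpt-0"
configs["train_config"].fine_tune_checkpoint_type = "detection"
configs["train_config"].batch_size = 8

# Point the train input reader at the custom TFRecords and label map (placeholders).
configs["train_input_config"].label_map_path = "annotations/label_map.pbtxt"
configs["train_input_config"].tf_record_input_reader.input_path[:] = [
    "annotations/train.record"]

# Write the modified config back out for the API's training script to consume.
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "training/")
```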

    Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera

    In recent years, robots that recognize people around them and provide guidance, information, and monitoring have been attracting attention. Conventional human recognition technology mainly uses a camera or a laser range finder. However, recognition with a camera is difficult under fluctuating lighting [1], and a laser range finder is often affected by the recognition environment, for example misrecognizing a chair's leg as a person's leg [2]. Therefore, we propose a human recognition method using a thermal camera, which can visualize human body heat. This study aims to realize human-following autonomous movement based on human recognition. In addition, the distance from the robot to the person is measured with a stereo thermal camera composed of two thermal cameras. A coaxial two-wheeled robot, which is compact and capable of turning on the spot, is used as the mobile robot. Finally, combining these elements, we conduct an autonomous movement experiment with the coaxial mobile robot based on human recognition. We performed human-following experiments on the coaxial two-wheeled robot using the stereo thermal camera and confirmed that it moves appropriately to the location of the recognized person in multiple use cases (scenarios). However, the accuracy of distance measurement by stereo vision is inferior to that of laser measurement, and it must be improved for movements that require higher accuracy.
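    The stereo thermal ranging described above reduces to standard stereo triangulation: with two parallel, rectified thermal cameras, the depth to the person follows from the horizontal disparity between the person's image positions in the two views, Z = fB/d. The focal length, baseline, and pixel coordinates below are made-up values for illustration, not the paper's calibration.

```python
def stereo_depth(x_left_px: float, x_right_px: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d from horizontal disparity d (rectified stereo pair)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must have positive disparity (in front of the rig)")
    return focal_px * baseline_m / disparity

# Example: assumed 400 px focal length, 10 cm baseline, and a 16 px disparity
# between the person's centroid in the left and right thermal images.
print(stereo_depth(x_left_px=168, x_right_px=152, focal_px=400, baseline_m=0.10))
# -> 2.5 metres to the person
```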