Image Segmentation with Human-in-the-loop in Automated De-caking Process for Powder Bed Additive Manufacturing

Abstract

Additive manufacturing (AM) has become a critical technology that increases the speed and flexibility of production and reduces lead times for high-mix, low-volume manufacturing. One of the major bottlenecks in further increasing its productivity lies in its post-processing procedures. This work focuses on a critical and inevitable step in powder-bed additive manufacturing processes, namely powder cleaning or de-caking. Performing this task manually raises pressing safety concerns for operators. A robot-driven automatic powder cleaning system is therefore an attractive alternative that reduces time consumption and increases safety for AM operators. However, because the powder residuals and the sintered parts have similar color and surface texture from a computer vision perspective, it can be challenging for robots to plan their cleaning path. This study proposes a machine learning framework that incorporates image segmentation and eye tracking to de-cake parts printed by a powder bed additive manufacturing process. The framework partially incorporates human biological behavior to improve the performance of an image segmentation algorithm and thereby assist path planning for the robotic de-caking system. The proposed framework is verified and evaluated by comparing it with state-of-the-art image segmentation algorithms, and case studies were used to validate the proposed human-in-the-loop (HITL) algorithms. With a mean accuracy, F1-score, precision, and IoU of 81.2%, 82.3%, 85.8%, and 66.9%, respectively, the proposed HITL eye tracking plus segmentation framework produced the best performance of all the algorithms evaluated and compared. Regarding computational time, the proposed HITL framework matches the running times of the other existing models tested, with a mean time of 0.510655 seconds and a standard deviation of 0.008387 seconds. Finally, future work and directions are presented and discussed. A significant portion of this work can be found in (Asare-Manu et al., 2023).
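
The reported figures are standard pixel-wise segmentation metrics. As a rough illustration only (not the authors' evaluation code, which is not specified in the abstract), the sketch below shows how accuracy, precision, F1-score, and IoU can be computed from a predicted binary mask and a ground-truth mask; the mask values and the function name are hypothetical.

```python
import numpy as np

def segmentation_metrics(pred_mask: np.ndarray, gt_mask: np.ndarray) -> dict:
    """Pixel-wise accuracy, precision, F1-score, and IoU for binary masks.

    Illustrative sketch only; assumes 1 = sintered part, 0 = powder.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)

    tp = np.logical_and(pred, gt).sum()      # part pixels correctly labeled as part
    fp = np.logical_and(pred, ~gt).sum()     # powder pixels mislabeled as part
    fn = np.logical_and(~pred, gt).sum()     # part pixels mislabeled as powder
    tn = np.logical_and(~pred, ~gt).sum()    # powder pixels correctly labeled

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "f1": f1, "iou": iou}

# Hypothetical 2x2 masks, just to show the call pattern
pred = np.array([[1, 0], [1, 1]])
gt = np.array([[1, 0], [0, 1]])
print(segmentation_metrics(pred, gt))
```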
