The manual extraction of Agarwood resinous compounds is not only labour-intensive and reliant on skilled workers but also prone to wastage caused by human error. The commercial Agarwood industry has increasingly explored the use of Computer Numerical Control (CNC) machines. These machines interpret G-code scripts derived from binary images, where regions of wood marked for chiselling are represented by an RGB value of (0, 0, 0). Instead of relying on manual marking, we propose a deep learning-based approach. Our proposed setup involves capturing cross-sectional images using a stationary camera. These images are then transmitted to a computer for the image-based segmentation task. The resulting segmented area is translated into a G-code script for the CNC machine. In this article, we present the preliminary results of a prototype of our proposed setup. Additionally, we discuss potential enhancements aimed at refining segmentation accuracy.
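The abstract does not detail how the binary segmentation output is converted into machine instructions. As a rough illustration of the kind of translation step involved, the following Python sketch turns a binary mask (0 = chisel, 255 = keep, matching the (0, 0, 0) convention above) into a simple per-pixel plunge routine. The function name `mask_to_gcode` and the parameters `mm_per_px`, `cut_depth_mm`, `safe_z_mm`, and `feed_rate` are illustrative assumptions, not part of the authors' described pipeline.

```python
import numpy as np

def mask_to_gcode(mask, mm_per_px=0.5, cut_depth_mm=2.0,
                  safe_z_mm=5.0, feed_rate=300):
    """Translate a binary chisel mask into a minimal G-code script.

    Hypothetical sketch: `mask` is an HxW array where 0 marks a pixel
    to be chiselled and 255 marks wood to be preserved. Each marked
    pixel is mapped to machine coordinates and drilled with a plunge
    move; a real toolpath generator would instead trace contours or
    raster the region.
    """
    lines = [
        "G21 ; units in millimetres",
        "G90 ; absolute positioning",
        f"G0 Z{safe_z_mm:.2f} ; retract to safe height",
    ]
    for row, col in zip(*np.where(mask == 0)):
        x = col * mm_per_px
        y = row * mm_per_px
        lines.append(f"G0 X{x:.2f} Y{y:.2f}")                   # rapid move above the pixel
        lines.append(f"G1 Z{-cut_depth_mm:.2f} F{feed_rate}")   # plunge to cutting depth
        lines.append(f"G0 Z{safe_z_mm:.2f}")                    # retract before the next move
    lines.append("M2 ; end of program")
    return "\n".join(lines)

if __name__ == "__main__":
    # Toy 4x4 mask: the four centre pixels are marked for chiselling.
    demo = np.full((4, 4), 255, dtype=np.uint8)
    demo[1:3, 1:3] = 0
    print(mask_to_gcode(demo))
```

In practice the segmentation network's output would be thresholded to this binary form, and the pixel-to-millimetre scale would come from calibrating the stationary camera against the CNC bed.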