Automatic defect detection for 3D printing processes, which shares many
characteristics with change detection problems, is a vital step for quality
control of 3D printed products. However, the current state of practice faces
several critical challenges. First, existing methods for computer
vision-based process monitoring typically work well only under specific camera
viewpoints and lighting conditions, and they require expensive pre-processing,
alignment, and camera setups. Second, many defect detection techniques are
specific to pre-defined defect patterns and/or print schematics. In this work,
we approach the defect detection problem using a novel Semi-Siamese deep
learning model that directly compares a reference schematic of the desired
print with a camera image of the achieved print. The model then solves an image
segmentation problem, precisely identifying the locations of defects of
different types with respect to the reference schematic. Our model is designed
to enable comparison of heterogeneous images from different domains while being
robust against perturbations in the imaging setup such as different camera
angles and illumination. Crucially, we show that our simple architecture, which
is easy to pre-train for enhanced performance on new datasets, outperforms more
complex state-of-the-art approaches based on generative adversarial networks
and transformers. Using our model, defect localization predictions can be made
in less than half a second per layer on a standard MacBook Pro while achieving
an F1-score above 0.9, demonstrating the efficacy of our method for in-situ
defect detection in 3D printing.