Accurately annotated ultrasonic images are vital components of a high-quality
medical report. Hospitals often have strict guidelines on the types of
annotations that should appear on imaging results. However, manually inspecting
these images can be a cumbersome task. While a neural network could potentially
automate the process, training such a model typically requires a dataset of
paired input and target images, which in turn involves significant human
labour. This study introduces an automated approach for detecting annotations
in images. This is achieved by treating the annotations as noise, creating a
self-supervised pretext task and using a model trained under the Noise2Noise
clean state. We tested a variety of model structures on the denoising task
against different types of annotation, including body marker annotations and
radial line annotations. Our results demonstrate that most models trained under
the Noise2Noise scheme outperformed their counterparts trained with noisy-clean
data pairs. The customized U-Net achieved the best results on the body marker
annotation dataset, with high scores in segmentation precision and
reconstruction similarity. We
released our code at https://github.com/GrandArth/UltrasonicImage-N2N-Approach.

Comment: 10 pages, 7 figures
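The self-supervised pretext task described above can be sketched as follows: instead of pairing an annotated image with its clean counterpart, two independently annotated copies of the same underlying image are paired, as in the Noise2Noise scheme. This is a minimal NumPy illustration, not the paper's actual pipeline; the `overlay_annotation` renderer is a hypothetical stand-in for the real body marker or radial line overlays.

```python
import numpy as np

def overlay_annotation(img, rng):
    """Overlay a synthetic 'body marker'-style annotation (a saturated
    8x8 square) at a random position. A stand-in for the real renderer."""
    out = img.copy()
    h, w = img.shape
    y = rng.integers(0, h - 8)
    x = rng.integers(0, w - 8)
    out[y:y + 8, x:x + 8] = 1.0  # annotation pixels treated as noise
    return out

def make_n2n_pair(clean, rng):
    """Noise2Noise pretext pair: two independently annotated copies of
    the same image. Neither input nor target is the clean image, so no
    clean ground truth is needed during training."""
    return overlay_annotation(clean, rng), overlay_annotation(clean, rng)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))          # placeholder for an ultrasound frame
src, tgt = make_n2n_pair(clean, rng)  # (input, target) for the denoiser
```

A denoiser trained to map `src` to `tgt` over many such pairs learns to remove the annotation overlays, since the annotation positions are independent between the two copies while the underlying anatomy is shared.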