The conversion of raw images into quantifiable data can be a major hurdle in
experimental research, and typically involves identifying region(s) of
interest, a process known as segmentation. Machine learning tools for image
segmentation are often specific to a set of tasks, such as tracking cells, or
require substantial compute or coding knowledge to train and use. Here we
introduce an easy-to-use (no coding required) image segmentation method using
a 15-layer convolutional neural network that can be trained on a laptop:
Bellybutton. The algorithm trains on user-provided segmentation of example
images, but, as we show, just one or even a portion of one training image can
be sufficient in some cases. We detail the machine learning method and give
three use cases where Bellybutton correctly segments images despite substantial
lighting, shape, size, focus, and/or structure variation across the region(s)
of interest. Instructions for easy download and use, along with further details
and the datasets used in this paper, are available at
pypi.org/project/Bellybuttonseg.
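
As a rough illustration of the kind of supervised, per-pixel training described
above (not Bellybutton's actual implementation or API, which requires no
coding), the following minimal PyTorch sketch trains a small fully
convolutional network on a single hypothetical image/mask pair:

```python
# Illustrative sketch only: NOT the Bellybutton package or its architecture.
# It mimics the idea of learning a segmentation from one user-provided example.
import torch
import torch.nn as nn

# Hypothetical single training pair: one grayscale image and its binary mask.
image = torch.rand(1, 1, 128, 128)                  # example image
mask = (torch.rand(1, 1, 128, 128) > 0.5).float()   # user-drawn segmentation

# A small fully convolutional network (far shallower than the 15-layer CNN
# described in the paper), producing one logit per pixel.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):                  # short loop; runs quickly on a laptop CPU
    optimizer.zero_grad()
    loss = loss_fn(model(image), mask)
    loss.backward()
    optimizer.step()

# Predicted segmentation: threshold the per-pixel probabilities.
prediction = torch.sigmoid(model(image)) > 0.5
```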