
    A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    <div><p>Developers of image processing routines rely on benchmark data sets for qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, i.e. the quality of an algorithm as a function of the amount of distortion, is often important. However, with the available benchmark data sets an evaluation of illumination robustness is difficult or even impossible, because ground truth data about object margins and classes, as well as information about the distortion, are missing. We present a new framework for robustness evaluation. Its key component is an image benchmark containing 9 object classes together with the ground truth required for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, for segmentation and classification success, and for robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects.</p></div>
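The abstract's exact definitions of the success and robustness measures are not given in this excerpt; as a hedged sketch, segmentation success could be scored as the Jaccard index against the ground truth mask, and robustness summarized as the mean success across distortion levels (function names and the choice of measure here are illustrative assumptions, not the authors' definitions):

```python
# Hedged sketch of plausible success/robustness measures; the paper's
# actual measures may differ.
import numpy as np

def jaccard(pred, truth):
    # Segmentation success as intersection over union of the predicted
    # and ground-truth foreground masks.
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, truth).sum() / union

def robustness(scores):
    # One simple robustness summary: mean success across increasing
    # distortion levels (e.g. shading and background-noise strength).
    return float(np.mean(scores))
```

A method whose success scores stay high as distortion increases then yields a robustness value close to 1.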

    Representation of ground truth objects using color shading and numbers as labels.

    <p>Left: Brightfield image <b>X</b> of a benchmark data set with marked object edges, right: Ground truth image <b>X</b><sub>truth</sub>, with object types in numbers. Gray scales denote the value of <i>x</i><sub><i>ij</i>,<i>truth</i></sub> (0: black (background), 5: white)</p>

    (Case 1) Effect of varying <i>δ</i> on image segmentation parameter adaptation for benchmark data set <i>r</i> = 1.

    <p><i>δ</i> vs. <i>R</i>. <i>R</i> values against increasing <i>δ</i> for AutoOtsu (red) and AutoEdge (blue).</p>

    Segmentation results under high artifact levels (i.e. presence of both shading and background noise) using manual selection of p, with the parameters for Fig 3(b), 3(c) and 3(d) kept the same as in Fig 2(b), 2(c) and 2(d), respectively.

    <p>Segmentation results under high artifact levels (i.e. presence of both shading and background noise) using manual selection of p, with the parameters for Fig 3(b), 3(c) and 3(d) kept the same as in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0165180#pone.0165180.g002" target="_blank">Fig 2(b), 2(c) and 2(d)</a>, respectively.</p>

    Exemplary feedforward pipeline vs. modified pipeline for the parameter adaptation of segmentation methods for benchmark images.

    <p>An input grayscale image is first pre-processed to remove noise and shading (parameter <i>w</i> is used to affect the pre-processing outcome). The pre-processed image is then used for image segmentation using either edge detection or intensity thresholding (thresholding parameter <i>t</i> is used in this step). The segmented image is post-processed using morphological operators to remove objects that are too large or too small (parameter <i>s</i> defines a structuring element for image opening). Features are then extracted from the remaining objects and fed into a classification routine. This pipeline can be modified using structural changes or parameter adaptation, where evaluation measures are used for segmentation evaluation in order to calculate the optimal parameter set <b>p</b><sub>opt</sub>. Using <b>p</b><sub>opt</sub>, an optimal image segmentation is obtained.</p>
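The feedforward stages described in the caption can be sketched in code. This is a minimal illustration, not the authors' implementation: the parameter names <i>w</i>, <i>t</i>, <i>s</i> follow the caption, while the concrete choices (Gaussian smoothing for pre-processing, an Otsu-style threshold shifted by an offset, opening with an <i>s</i>×<i>s</i> square element) are assumptions made for the sketch:

```python
# Minimal sketch of the feedforward pipeline (pre-process -> segment ->
# post-process -> count objects); the authors' implementation may differ.
import numpy as np
from scipy import ndimage

def preprocess(img, w=1.0):
    # Gaussian smoothing to suppress noise; w controls the smoothing strength.
    return ndimage.gaussian_filter(np.asarray(img, dtype=float), sigma=w)

def otsu_threshold(img, nbins=256):
    # Classic Otsu: pick the threshold maximizing between-class variance.
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w_a = np.cumsum(hist)[:-1]              # weight of class below each split
    w_b = hist.sum() - w_a                  # weight of class above each split
    sum_a = np.cumsum(hist * centers)[:-1]
    sum_b = (hist * centers).sum() - sum_a
    mu_a = sum_a / np.maximum(w_a, 1e-12)
    mu_b = sum_b / np.maximum(w_b, 1e-12)
    var_between = w_a * w_b * (mu_a - mu_b) ** 2
    return centers[np.argmax(var_between)]

def segment(img, t_offset=0.0):
    # Intensity thresholding; t_offset shifts the automatic threshold.
    return img > (otsu_threshold(img) + t_offset)

def postprocess(mask, s=3):
    # Morphological opening with an s x s structuring element removes
    # objects that are too small.
    return ndimage.binary_opening(mask, structure=np.ones((s, s)))

def pipeline(img, w=1.0, t_offset=0.0, s=3):
    mask = postprocess(segment(preprocess(img, w), t_offset), s)
    _, n_objects = ndimage.label(mask)
    return mask, n_objects
```

In the feedback variant described in the caption, an outer loop would evaluate the segmentation of candidate parameter sets (<i>w</i>, <i>t</i>, <i>s</i>) against an evaluation measure and keep the best-scoring set as <b>p</b><sub>opt</sub>.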

    (Case 2) Object type to be found in the data set.

    <p>Set screws encircled in green are to be found in <i>r</i> = 1.</p>

    Robustness values of segmentation methods for different implementations of Case 1 (using explicit ground truth).


    Robustness values of standard segmentation methods (StdOtsu, StdEdge) vs. feedback adaptation with one parameter, i.e. <i>t</i><sub>otsu</sub> for AutoOtsu and <i>t</i><sub>edge</sub> for AutoEdge (for abstract ground truth).
