
    Model-based learning of local image features for unsupervised texture segmentation

    Features that capture the textural patterns of a certain class of images well are crucial for the performance of texture segmentation methods. Manually selecting features or designing new ones can be tedious, so it is desirable to adapt the features automatically to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground-truth segmentations. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
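    A minimal sketch of the underlying idea, not the authors' actual algorithm: a feature whose response is approximately piecewise constant with a small jump set has small (smoothed) total variation, so one can learn a convolution kernel by minimizing that quantity and then cluster the response into segments. The single unit-norm kernel, the numerical gradient, and the k-means clustering step below are illustrative assumptions standing in for the paper's two-stage method.

```python
# Hypothetical sketch: learn one convolutional feature whose response is
# approximately piecewise constant (small smoothed total variation), then
# segment by clustering the locally averaged response. Not the paper's
# actual cost function or optimization scheme.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import uniform_filter
from scipy.cluster.vq import kmeans2

def smoothed_tv(feat, eps=1e-3):
    """Smoothed total variation: a proxy for a 'small jump set'."""
    dx = np.diff(feat, axis=0)[:, :-1]
    dy = np.diff(feat, axis=1)[:-1, :]
    return np.sum(np.sqrt(dx**2 + dy**2 + eps))

def learn_kernel(img, ksize=5, steps=100, lr=0.05):
    """Projected gradient descent on a zero-mean, unit-norm kernel w,
    minimizing the smoothed TV of the filter response (finite-difference
    gradients; fine for a 5x5 kernel in a sketch)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((ksize, ksize))
    w -= w.mean()
    w /= np.linalg.norm(w)
    h = 1e-4
    for _ in range(steps):
        base = smoothed_tv(convolve2d(img, w, mode="valid"))
        grad = np.zeros_like(w)
        for idx in np.ndindex(w.shape):
            wp = w.copy()
            wp[idx] += h
            grad[idx] = (smoothed_tv(convolve2d(img, wp, mode="valid")) - base) / h
        w -= lr * grad
        w -= w.mean()              # keep the DC component out of the feature
        w /= np.linalg.norm(w)     # project back onto the unit sphere
    return w

def segment(img, w, k=2, win=15):
    """Stage 2 stand-in: local energy of the learned response, clustered
    into k labels with k-means."""
    feat = uniform_filter(np.abs(convolve2d(img, w, mode="same")), size=win)
    _, labels = kmeans2(feat.reshape(-1, 1), k, minit="++")
    return labels.reshape(feat.shape)
```

    In this toy version the whole pipeline runs on a single grayscale image, which mirrors the observation above that the features can be learned from one image or even from patches.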

    Impact of object extraction methods on classification performance in surface inspection systems

    In surface inspection applications, the main goal is to detect all areas that might contain defects or unacceptable imperfections, and to classify either every single 'suspicious' region or the investigated part as a whole. After an image is acquired by the machine vision hardware, all pixels that deviate from a pre-defined 'ideal' master image are set to a non-zero value, depending on the magnitude of the deviation. This procedure leads to so-called "contrast images", in which accumulations of bright pixels may appear, representing potentially defective areas. In this paper, various methods are presented for grouping these bright pixels together into meaningful objects, ranging from classical image processing techniques to machine-learning-based clustering approaches. One important issue here is to find reasonable groupings even for non-connected and widespread objects. In general, these objects correspond either to real faults or to pseudo-errors that do not affect the surface quality at all. The impact of different extraction methods on the accuracy of image classifiers is studied. The classifiers are trained on feature vectors computed for the objects extracted from user-labeled images showing the surfaces of production items. Our investigation considers both artificially created contrast images and real ones recorded on-line at a CD imprint production and at an egg inspection system. © Springer-Verlag 2009
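    An illustrative sketch of the two kinds of grouping the abstract contrasts, not the paper's exact pipeline: a classical connected-component approach and a clustering approach that can merge nearby but non-connected fragments, followed by a few simple per-object features of the sort a defect/pseudo-error classifier might consume. The threshold, the DBSCAN parameters, and the choice of scipy/scikit-learn tools are assumptions made for this example.

```python
# Hypothetical sketch: group bright pixels of a contrast image into candidate
# objects (a) by connected components and (b) by DBSCAN clustering, then
# compute simple per-object features. Parameter values are illustrative.
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

def objects_connected(contrast, thresh=10):
    """Classical grouping: threshold, then 8-connected component labeling."""
    mask = contrast > thresh
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    return labels, n

def objects_dbscan(contrast, thresh=10, eps=5.0, min_samples=5):
    """Clustering-based grouping: DBSCAN on bright-pixel coordinates, which
    can merge nearby but non-connected fragments into one widespread object."""
    ys, xs = np.nonzero(contrast > thresh)
    if len(ys) == 0:
        return np.zeros(contrast.shape, dtype=int), 0
    coords = np.column_stack([ys, xs])
    cl = DBSCAN(eps=eps, min_samples=min_samples).fit(coords)
    labels = np.zeros(contrast.shape, dtype=int)
    labels[ys, xs] = cl.labels_ + 1      # DBSCAN noise (-1) maps to background 0
    return labels, int(cl.labels_.max()) + 1

def object_features(contrast, labels):
    """Per-object feature vector: area, mean/max contrast, bounding-box size."""
    feats = []
    for obj_id, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is None:
            continue
        region = labels[sl] == obj_id
        vals = contrast[sl][region]
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        feats.append([region.sum(), vals.mean(), vals.max(), h, w])
    return np.asarray(feats)
```

    The feature vectors returned by object_features would then be the input to whatever image classifier is trained on the user-labeled objects, which is where the choice of extraction method influences classification accuracy.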