
    Model-based learning of local image features for unsupervised texture segmentation

    Features that capture the textural patterns of a certain class of images well are crucial for the performance of texture segmentation methods. The manual selection of features or the design of new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
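The piecewise constant Mumford-Shah model mentioned above scores a segmentation by how constant the feature image is within each region plus a penalty on the length of the jump set (the region boundaries). A minimal numpy sketch of that energy is given below; the function name, the 4-neighbour boundary-length approximation, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mumford_shah_energy(features, labels, lam=1.0):
    """Piecewise-constant Mumford-Shah energy of a labelling.

    Data term: squared deviation of each pixel's feature vector from
    its region mean. Regulariser: length of the jump set, approximated
    by counting label changes between 4-connected neighbours.
    """
    data = 0.0
    for k in np.unique(labels):
        region = features[labels == k]          # (n_pixels, n_features)
        data += np.sum((region - region.mean(axis=0)) ** 2)
    # jump-set length: vertical + horizontal label discontinuities
    jumps = np.count_nonzero(np.diff(labels, axis=0)) \
          + np.count_nonzero(np.diff(labels, axis=1))
    return data + lam * jumps

# toy example: two constant feature patches with a matching segmentation,
# so the data term vanishes and only the boundary is penalised
f = np.zeros((4, 4, 1)); f[:, 2:] = 1.0
seg = np.zeros((4, 4), dtype=int); seg[:, 2:] = 1
energy = mumford_shah_energy(f, seg, lam=0.5)
```

Learning features that drive this energy down is exactly what makes the resulting feature image "approximately piecewise constant with a small jump set".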

    A comparative study of image processing thresholding algorithms on residual oxide scale detection in stainless steel production lines

    The present work is intended for residual oxide scale detection and classification through the application of image processing techniques. This is a defect that can remain on the surface of stainless steel coils after an incomplete pickling process in a production line. Building on a previous detailed study of the reflectance of the residual oxide defect, we present a comparative study of algorithms for image segmentation based on thresholding methods. In particular, two computational models based on multi-linear regression and neural networks will be proposed. A system based on a conventional area camera with special lighting was installed and fully integrated in an annealing and pickling line for model testing purposes. Finally, the model approaches will be compared and their performance evaluated.
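A classical baseline among the thresholding methods such a comparison would include is Otsu's method, which picks the grey level that maximises the between-class variance of the binary split. A self-contained numpy sketch follows; the synthetic "defect" image and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximising the
    between-class variance of the resulting two-class split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[np.isnan(sigma_b)] = 0.0        # degenerate one-class splits
    return int(np.argmax(sigma_b))

# synthetic bimodal frame: dark strip surface, brighter oxide patch
img = np.full((32, 32), 40, dtype=np.uint8)
img[8:16, 8:16] = 200
t = otsu_threshold(img)
mask = img > t   # candidate residual-scale pixels
```

In practice, global thresholds like this serve as reference points against which learned models (such as the regression and neural-network models the paper proposes) are compared.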

    Assessment of leaf cover and crop soil cover in weed harrowing research using digital images

    Objective assessment of crop soil cover, defined as the percentage of leaf cover that has been buried in soil due to weed harrowing, is crucial to further progress in post-emergence weed harrowing research. Up to now, crop soil cover has been assessed by visual scores, which are biased and context dependent. The aim of this study was to investigate whether digital image analysis is a feasible method to estimate crop soil cover in the early growth stages of cereals. Two main questions were examined: (1) how to capture suitable digital images under field conditions with a standard high-resolution digital camera and (2) how to analyse the images with an automated digital image analysis procedure. The importance of light conditions, camera angle, size of recorded area, growth stage and direction of harrowing were investigated in order to establish a standard for image capture, and an automated image analysis procedure based on the excess green colour index was developed. The study shows that the automated digital image analysis procedure provided reliable estimations of leaf cover, defined as the proportion of pixels in digital images determined to be green, which were used to estimate crop soil cover. A standard for image capture is suggested and it is recommended to use digital image analysis to estimate crop soil cover in future research. The prospects of using digital image analysis in future weed harrowing research are discussed.
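The excess green colour index used here is the standard ExG = 2g − r − b computed on chromaticity-normalised channels; thresholding it separates green vegetation from soil. A minimal sketch is below; the threshold value, function name and toy colours are illustrative assumptions, not the paper's calibrated procedure.

```python
import numpy as np

def leaf_cover(rgb, thresh=0.1):
    """Fraction of pixels classified as green vegetation via the
    excess green index ExG = 2g - r - b, computed on
    chromaticity-normalised channels (r + g + b = 1)."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                      # guard against black pixels
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    exg = 2.0 * g - r - b
    return float(np.mean(exg > thresh))

# toy frame: left half soil-coloured, right half green leaves
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = (120, 90, 60)    # soil
img[:, 5:] = (40, 160, 40)    # vegetation
cover = leaf_cover(img)       # proportion of "green" pixels
```

Crop soil cover is then derived from the drop in this leaf-cover estimate between images taken before and after harrowing.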

    Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral image

    Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with their corresponding proportions (or abundances). Endmember extraction algorithms can be employed for recovering the spectral signatures while abundances are estimated using an inversion step. Recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances. This paper proposes to define the MRF sites using similarity regions. These regions are built using a self-complementary area filter that stems from mathematical morphology. This kind of filter divides the original image into flat zones where the underlying pixels have the same spectral values. Once the MRF has been established, a hierarchical Bayesian algorithm is proposed to estimate the abundances, the class labels, the noise variance, and the corresponding hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the posterior distribution of the unknown parameters and hyperparameters. Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of the algorithm.
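The spatial prior behind such an MRF is typically a Potts-type energy: every pair of neighbouring sites with different class labels pays a fixed cost, so smoother class maps have lower energy. A minimal 4-connected sketch is given below; the function name, grid neighbourhood and toy label maps are illustrative assumptions, not the paper's region-based site definition.

```python
import numpy as np

def potts_energy(labels, beta=1.0):
    """Potts MRF energy over a 4-connected grid: each neighbouring
    pair of sites with different class labels contributes beta.
    Lower energy corresponds to a spatially smoother class map."""
    v = np.count_nonzero(np.diff(labels, axis=0))  # vertical pairs
    h = np.count_nonzero(np.diff(labels, axis=1))  # horizontal pairs
    return beta * (v + h)

# a smooth two-class map versus a striped, spatially incoherent one
smooth = np.zeros((4, 4), dtype=int); smooth[:, 2:] = 1
striped = np.arange(16).reshape(4, 4) % 2
e_smooth, e_striped = potts_energy(smooth), potts_energy(striped)
```

Inside a Gibbs sampler, this energy enters the conditional distribution of each label, biasing neighbouring pixels (or, in the paper, neighbouring flat zones) toward the same abundance class.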

    Fundamental remote sensing science research program. Part 1: Status report of the mathematical pattern recognition and image analysis project

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth.

    Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery

    This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by an additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with unknown numbers of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
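The positivity and additivity constraints mean the abundance vector lives on the probability simplex. A quick non-Bayesian sanity check of the same linear mixing model is sketched below: simulate y = Ma + n, solve unconstrained least squares, then project the estimate onto the simplex. The endmember matrix, noise level and projection-based estimator are illustrative assumptions, not the paper's Gibbs sampler.

```python
import numpy as np

def project_simplex(a):
    """Euclidean projection onto the probability simplex, enforcing
    the positivity and sum-to-one abundance constraints."""
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(a) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(a + theta, 0.0)

rng = np.random.default_rng(0)
M = rng.random((50, 3))                # endmember spectra (bands x R)
a_true = np.array([0.6, 0.3, 0.1])     # true abundances on the simplex
y = M @ a_true + 0.01 * rng.standard_normal(50)   # additive Gaussian noise

a_ls, *_ = np.linalg.lstsq(M, y, rcond=None)      # unconstrained inversion
a_hat = project_simplex(a_ls)                     # constrained estimate
```

The Bayesian formulation replaces this two-step heuristic with a prior supported on the simplex, so every posterior sample of the abundances satisfies the constraints by construction.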

    A 3D descriptor to detect task-oriented grasping points in clothing

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
    Manipulating textile objects with a robot is a challenging task, especially because garment perception is difficult due to the endless configurations a garment can adopt, coupled with a large variety of colors and designs. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of them yields a recognizable configuration. In this work we propose a method that combines 3D and appearance information to directly select a suitable grasping point for the task at hand, which in our case consists of hanging a shirt or a polo shirt from a hook. Our method follows a coarse-to-fine approach in which, first, the collar of the garment is detected and, next, a grasping point on the lapel is chosen using a novel 3D descriptor. In contrast to current 3D descriptors, ours can run in real time, even when it needs to be densely computed over the input image. Our central idea is to take advantage of the structured nature of the range images that most depth sensors provide and, by exploiting integral imaging, achieve speed-ups of two orders of magnitude with respect to competing approaches, while maintaining performance. This makes it especially adequate for robotic applications, as we thoroughly demonstrate in the experimental section.
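The integral-imaging trick the abstract relies on is the summed-area table: after one cumulative-sum pass, the sum over any rectangular window of the range image costs four lookups, independent of window size, which is what enables dense, real-time descriptor computation. A minimal sketch follows; the function names and toy depth map are illustrative assumptions, not the paper's descriptor.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] holds the sum
    of img[:y, :x], so any rectangular sum needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1], recovered in O(1) from the table."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

depth = np.arange(36).reshape(6, 6)      # toy range image
ii = integral_image(depth)
window = rect_sum(ii, 1, 1, 4, 4)        # equals depth[1:4, 1:4].sum()
```

Because every window sum is constant-time, a descriptor built from such sums can be evaluated at every pixel of the range image without the cost growing with descriptor support size.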