
    Integration of feature distributions for colour texture segmentation

    This paper proposes a new framework for colour texture segmentation and determines the contribution of colour and texture. The distributions of colour and texture features provide the discrimination between different colour textured regions in an image. The proposed method was tested on a range of mosaic and natural images. The results show that incorporating colour information enhances colour texture segmentation and that the developed framework is effective.
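    To make the idea concrete, the sketch below clusters locally averaged colour and texture features to produce a segmentation. It is a generic illustration under stated assumptions, not the framework proposed in the paper; the Lab colour space, the LBP texture descriptor, the window size, and the number of regions are all illustrative choices.

        # Minimal sketch: segment a colour-texture image by clustering locally
        # averaged colour (Lab) and texture (uniform LBP) features.
        # Assumptions: `image` is an H x W x 3 RGB array in [0, 1]; the window
        # size, LBP parameters, and number of regions are illustrative only.
        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.color import rgb2lab
        from skimage.feature import local_binary_pattern
        from sklearn.cluster import KMeans

        def segment_colour_texture(image, n_regions=4, window=15):
            lab = rgb2lab(image)                        # perceptual colour space
            lbp = local_binary_pattern(lab[..., 0], P=8, R=1, method="uniform")
            # Local averages act as a coarse summary of the feature
            # distribution inside each window.
            feats = [uniform_filter(lab[..., c], size=window) for c in range(3)]
            feats += [uniform_filter((lbp == v).astype(float), size=window)
                      for v in range(10)]               # 10 uniform-LBP bins
            X = np.stack(feats, axis=-1).reshape(-1, len(feats))
            labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(X)
            return labels.reshape(image.shape[:2])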

    Markov mezők a képmodellezésben, alkalmazásuk az automatikus képszegmentálás területén = Markovian Image Models: Applications in Unsupervised Image Segmentation

    1) We have proposed a monogrid MRF model which is able to combine color and texture features in order to improve the quality of segmentation results. We have also solved the estimation of the model parameters. This work has been published in the Image and Vision Computing journal. 2) We have proposed an RJMCMC sampling method which is able to identify multi-dimensional Gaussian mixtures. Using this technique, we have developed a fully automatic color image segmentation algorithm. Our results have been published at the BMVC 2004 international conference and in the Image and Vision Computing journal. 3) A new multilayer MRF model has been proposed which is able to segment an image based on multiple cues (such as color, texture, or motion). This work has been published at the HACIPPR 2005 and ACCV 2006 international conferences. The work on optic flow computation and on color-, texture-, and motion-based GVF active contours, done with my student, Mr. Peter Horvath, won first prize at the local Student Research Competition in 2004; these results were presented at the KEPAF 2004 conference. 4) A new shape prior, called 'gas of circles', has been introduced using active contour models. This work was done in collaboration with the Ariana group of INRIA, France, and my PhD student, Mr. Peter Horvath. The results were published at the ICPR 2006 and ICCVGIP 2006 conferences. A preliminary study on active contour models using shape moments has also been carried out; those results were published at HACIPPR 2005.
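    As a rough illustration of the kind of colour-based MRF segmentation described in item 1 (not the published monogrid MRF or RJMCMC algorithms), the sketch below combines a Gaussian-mixture data term with a Potts smoothness prior and relaxes the labelling with a few ICM-style update sweeps. The class count, smoothness weight, and number of sweeps are illustrative assumptions.

        # Minimal sketch, not the published model: colour segmentation with a
        # Gaussian-mixture likelihood (data term) and a Potts prior, relaxed
        # with parallel ICM-style label updates. Parameters are illustrative.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def mrf_segment(image, n_classes=4, beta=2.0, n_sweeps=5):
            h, w, _ = image.shape
            X = image.reshape(-1, 3).astype(float)
            gmm = GaussianMixture(n_components=n_classes).fit(X)
            # Data term: negative log posterior of each pixel under each class.
            unary = -np.log(gmm.predict_proba(X) + 1e-12).reshape(h, w, n_classes)
            labels = unary.argmin(axis=-1)
            for _ in range(n_sweeps):
                energy = unary.copy()
                for k in range(n_classes):
                    # Potts prior: penalise 4-neighbours that disagree with k.
                    # (np.roll wraps at the image borders; fine for a sketch.)
                    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
                        neighbour = np.roll(labels, shift, axis=axis)
                        energy[..., k] += beta * (neighbour != k)
                labels = energy.argmin(axis=-1)
            return labels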

    Model-based learning of local image features for unsupervised texture segmentation

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images
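    For reference, the piecewise constant Mumford-Shah (Potts) model mentioned above is commonly written, for a feature image F, as a minimisation over a partition of the image domain into regions with one constant per region:

        E(\{\Omega_i\}, \{c_i\}) = \sum_i \int_{\Omega_i} \lVert F(x) - c_i \rVert^2 \, dx \; + \; \gamma \, \mathrm{length}(\partial \Omega)

    where \gamma weights the total length of the region boundaries (the jump set). Learning convolutional features so that F is approximately piecewise constant with a small jump set keeps both terms of this energy small.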

    Perceptual-based textures for scene labeling: a bottom-up and a top-down approach

    Due to the semantic gap, the automatic interpretation of digital images is a very challenging task. Both segmentation and classification are difficult because of the high variation in the data, so the choice of appropriate features is of the utmost importance. This paper presents biologically inspired texture features for material classification and for interpreting outdoor scenery images. Experiments show that the presented texture features obtain the best classification results for material recognition compared to other well-known texture features, with an average classification rate of 93.0%. For scene analysis, both a bottom-up and a top-down strategy are employed to bridge the semantic gap: first, images are segmented into regions based on their perceptual texture, and a semantic label is then computed for each region. Since this emerging interpretation is still error-prone, domain knowledge is incorporated to achieve a more accurate description of the depicted scene. By applying both strategies, 91.9% of the pixels in outdoor scenery images received a correct label.
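    The sketch below illustrates the bottom-up/top-down combination in the simplest possible terms, using a generic region classifier and one hypothetical domain rule; it does not reproduce the paper's biologically inspired texture features, and every function name, parameter, and rule shown is an assumption.

        # Minimal sketch of the bottom-up / top-down labelling idea.
        # Assumptions: `region_features` are texture descriptors per region,
        # `region_rows` are the mean image rows of the regions, and the
        # classifier has been trained elsewhere; the 'no sky below the image
        # midline' rule is a hypothetical example of domain knowledge.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def label_scene(region_features, region_rows, image_height, clf):
            proba = clf.predict_proba(region_features)   # bottom-up step
            order = np.argsort(-proba, axis=1)           # classes ranked by score
            labels = clf.classes_[order[:, 0]].copy()
            # Top-down step: demote 'sky' labels appearing in the lower half of
            # the image to the classifier's second choice.
            for i, row in enumerate(region_rows):
                if labels[i] == "sky" and row > image_height / 2:
                    labels[i] = clf.classes_[order[i, 1]]
            return labels

        # Usage (illustrative): clf = RandomForestClassifier().fit(train_X, train_y)
        #                       labels = label_scene(feats, rows, h, clf)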