    Texture fusion for batik motif retrieval system

    This paper systematically investigates the effect of image texture features on batik motif retrieval performance. The retrieval process uses a query motif image to find matching motif images in a database. In this study, fusion of various image texture features, such as Gabor, Log-Gabor, Grey Level Co-Occurrence Matrix (GLCM), and Local Binary Pattern (LBP) features, is attempted for motif image retrieval. For performance evaluation, both individual features and fused feature sets are applied. Experimental results show that optimal feature fusion outperforms individual features in batik motif retrieval. Among the individual features tested, Log-Gabor features provide the best result. The proposed approach is best used in a scenario where a query image containing multiple basic motif objects is applied to a dataset in which the retrieved images also contain multiple motif objects. The retrieval rate reaches 84.54% rank-3 precision when the feature space fuses Gabor, GLCM and Log-Gabor features. The investigation also shows that the proposed method does not work well in a retrieval scenario where a query image containing multiple basic motif objects is applied to a dataset in which the retrieved images contain only one basic motif object.
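
    As a rough illustration of the feature-fusion idea (a sketch assuming scikit-image and NumPy, not the paper's exact features, parameters or distance measure), the snippet below builds GLCM, LBP and Gabor descriptors, normalizes each block, concatenates them, and ranks database images by distance to the query:

```python
# Hypothetical sketch of texture-feature fusion for retrieval; the filter-bank
# parameters and distance measure are assumptions, not the paper's settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.filters import gabor

def glcm_features(img):
    """GLCM statistics from a uint8 grey-scale image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def lbp_features(img, points=8, radius=1):
    """Histogram of uniform local binary patterns."""
    lbp = local_binary_pattern(img, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def gabor_features(img, freqs=(0.1, 0.25), thetas=(0, np.pi / 4, np.pi / 2)):
    """Mean and standard deviation of responses over a small Gabor filter bank."""
    feats = []
    for f in freqs:
        for t in thetas:
            real, _ = gabor(img, frequency=f, theta=t)
            feats += [real.mean(), real.std()]
    return np.array(feats)

def fused_descriptor(img):
    """Normalize each feature block, then concatenate (simple feature fusion)."""
    blocks = [glcm_features(img), lbp_features(img), gabor_features(img)]
    return np.hstack([b / (np.linalg.norm(b) + 1e-8) for b in blocks])

def retrieve(query_img, database_imgs, top_k=3):
    """Rank database images by Euclidean distance in the fused feature space."""
    q = fused_descriptor(query_img)
    dists = [np.linalg.norm(q - fused_descriptor(d)) for d in database_imgs]
    return np.argsort(dists)[:top_k]
```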

    Modeling of evolving textures using granulometries

    This chapter describes a statistical approach to classification of dynamic texture images, called parallel evolution functions (PEFs). Traditional classification methods predict texture class membership using comparisons with a finite set of predefined texture classes and identify the closest class. However, where texture images arise from a dynamic texture evolving over time, estimation of a time state in a continuous evolutionary process is required instead. The PEF approach does this using regression modeling techniques to predict time state. It is a flexible approach which may be based on any suitable image features. Many textures are well suited to a morphological analysis and the PEF approach uses image texture features derived from a granulometric analysis of the image. The method is illustrated using both simulated images of Boolean processes and real images of corrosion. The PEF approach has particular advantages for training sets containing limited numbers of observations, which is the case in many real world industrial inspection scenarios and for which other methods can fail or perform badly.
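
    The granulometric feature idea behind the PEF approach can be sketched roughly as follows (a minimal sketch assuming scikit-image and scikit-learn; the chapter's actual features and evolution functions are not reproduced): a pattern spectrum is computed from morphological openings of increasing size, and a regression model maps it to a time state.

```python
# Rough sketch of granulometric features feeding a regression model for
# time-state prediction; structuring-element sizes and the linear model
# are assumptions for illustration only.
import numpy as np
from skimage.morphology import opening, disk
from sklearn.linear_model import LinearRegression

def pattern_spectrum(binary_img, max_radius=10):
    """Granulometry: area removed by morphological openings of increasing size."""
    areas = [binary_img.sum()]
    for r in range(1, max_radius + 1):
        areas.append(opening(binary_img, disk(r)).sum())
    areas = np.asarray(areas, dtype=float)
    # Normalized size distribution (pattern spectrum).
    return -np.diff(areas) / max(areas[0], 1.0)

# Training: images observed at known time states (supplied by the user).
# train_imgs, train_times = ..., ...
# X = np.vstack([pattern_spectrum(im) for im in train_imgs])
# pef = LinearRegression().fit(X, train_times)   # evolution function as regression
# t_hat = pef.predict(pattern_spectrum(new_img)[None, :])  # estimated time state
```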

    Machine learning methods for discriminating natural targets in seabed imagery

    The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to facilitate a means of assisting human analysts by expediting the tedious interpretative tasks using machine methods. Some novel approaches are devised and investigated for solving the application problems. These investigations are compartmentalised into four coherent case studies linked by common underlying technical themes and methods. The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds, a task that would take days for a human analyst to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation. Results from a comparison of several textural feature creation methods on sonar waterfall imagery show that Gabor filter banks yield some of the best results. A further empirical investigation into the filter bank features created on sonar mosaic imagery leads to the identification of a useful configuration and filter parameter ranges for discriminating the target textures in the imagery. Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world sonar mosaic imagery. A number of technical challenges arose and were surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach in an application to the problem of image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should be generalisable to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation of pockmark and Sabellaria discrimination is feasible within this framework.
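
    The consensus-ranking idea from the third case study can only be illustrated generically here: several saliency measures each rank the candidate features, and the rankings are aggregated, for example by mean rank. The sketch below assumes NumPy/SciPy and placeholder scores; it is not the thesis's specific consensus methods or distance measures.

```python
# Generic illustration of consensus feature ranking by mean-rank aggregation;
# the saliency scores here are placeholders, not the thesis's measures.
import numpy as np
from scipy.stats import rankdata

def consensus_ranking(score_matrix):
    """score_matrix: (n_measures, n_features) saliency scores, higher = better.
    Returns feature indices ordered from most to least salient by mean rank."""
    # Rank features within each measure (rank 1 = most salient).
    ranks = np.vstack([rankdata(-scores) for scores in score_matrix])
    mean_rank = ranks.mean(axis=0)
    return np.argsort(mean_rank)

# Example: three measures scoring five candidate features.
scores = np.array([[0.9, 0.2, 0.5, 0.7, 0.1],
                   [0.8, 0.3, 0.6, 0.9, 0.2],
                   [0.7, 0.1, 0.4, 0.8, 0.3]])
print(consensus_ranking(scores))  # -> [3 0 2 1 4]
```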

    Automatic texture classification in manufactured paper


    A hybrid deep learning approach for texture analysis

    Texture classification is a problem with various applications, such as remote sensing and forest species recognition. Solutions tend to be custom-fit to the dataset used but fail to generalize. A Convolutional Neural Network (CNN) in combination with a Support Vector Machine (SVM) forms a robust pairing of a powerful invariant feature extractor and an accurate classifier. The fusion of classifiers shows stable classification across different datasets and a slight improvement over state-of-the-art methods. The classifiers are fused using a confusion matrix after each is trained independently on the same training set and then put to the test. Statistical information about each classifier is fed into a confusion matrix that generates two confidence measures, which are used to build two binary classifiers. Each binary classifier can activate or deactivate a classifier at testing time based on a confidence measure obtained from the confusion matrix. The method obtained results approaching the state of the art, with a difference of less than 1% in classification success rates. Moreover, the method maintained this success rate across different datasets, while other methods failed to achieve similar stability. Two datasets were used in this research, Brodatz and Kylberg, on which the results were 98.17% and 99.70%; in comparison, conventional methods in the literature achieved 98.9% and 99.64% respectively.
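
    The CNN-plus-SVM pairing can be sketched loosely as a fixed, pretrained feature extractor feeding an SVM (assuming PyTorch/torchvision and scikit-learn; the paper's confusion-matrix fusion and confidence measures are not reproduced here):

```python
# Minimal sketch: pretrained CNN as feature extractor, SVM as classifier.
# The backbone choice and preprocessing are assumptions for illustration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Truncate a pretrained ResNet just before its final classification layer.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224)),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def cnn_features(pil_images):
    """Return one fixed-length CNN feature vector per input image."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()

# train_images, train_labels = ..., ...   # texture patches and class labels
# svm = SVC(kernel="rbf").fit(cnn_features(train_images), train_labels)
# pred = svm.predict(cnn_features(test_images))
```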

    Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification

    Designing powerful, discriminative texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late-fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Our final combination outperforms the state of the art without employing fine-tuning or an ensemble of RGB network architectures. Comment: To appear in ISPRS Journal of Photogrammetry and Remote Sensing.
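
    Loosely in the spirit of the early-fusion variant, a texture-coded input can be approximated by stacking a Local Binary Pattern map alongside the RGB channels before feeding a CNN (assuming scikit-image and PyTorch; the exact TEX-Net coding and fusion architectures are not reproduced here):

```python
# Rough approximation of feeding an LBP-coded texture map to a CNN alongside
# RGB; the mapping and fusion used by TEX-Nets are not reproduced here.
import numpy as np
import torch
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def lbp_channel(rgb_image, points=8, radius=1):
    """LBP map of the grey-scale image, scaled to [0, 1], as one extra channel."""
    grey = rgb2gray(rgb_image)
    lbp = local_binary_pattern(grey, points, radius, method="uniform")
    return lbp / max(lbp.max(), 1.0)

def to_texture_tensor(rgb_image):
    """Stack RGB and the LBP map into a 4-channel tensor for a CNN."""
    rgb = np.transpose(rgb_image.astype(np.float32) / 255.0, (2, 0, 1))  # (3, H, W)
    lbp = lbp_channel(rgb_image)[None, ...].astype(np.float32)           # (1, H, W)
    return torch.from_numpy(np.concatenate([rgb, lbp], axis=0))          # (4, H, W)

# A CNN consuming this tensor would declare in_channels=4 in its first
# convolution, e.g. torch.nn.Conv2d(4, 64, kernel_size=3, padding=1).
```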

    Fast vision through frameless event-based sensing and convolutional processing: Application to texture recognition

    Address-event representation (AER) is an emergent hardware technology which shows high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted. Ministerio de Educación y Ciencia TEC-2006-11730-C03-01; Junta de Andalucía P06-TIC-01417; European Union IST-2001-34124, 21677.
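
    The frameless, event-driven processing described above can be caricatured with a small behavioural toy (pure NumPy; not the AER chips' design or the paper's simulator): each incoming address event stamps the convolution kernel onto a map of neuron states, and an output event is emitted wherever a state crosses threshold.

```python
# Toy behavioural model of event-driven (frameless) convolution: each input
# address event adds the kernel around its pixel, and output events are fired
# where the accumulated state crosses a threshold. Illustration only.
import numpy as np

def event_driven_convolution(events, kernel, shape, threshold=1.0):
    """events: iterable of (x, y, t) address events, ordered in time;
    kernel: 2-D array with odd side lengths; shape: (H, W) of the neuron array.
    Yields output events (x, y, t)."""
    state = np.zeros(shape)
    kh, kw = kernel.shape
    ry, rx = kh // 2, kw // 2
    for x, y, t in events:
        # Clip the kernel footprint to the array borders.
        y0, y1 = max(y - ry, 0), min(y + ry + 1, shape[0])
        x0, x1 = max(x - rx, 0), min(x + rx + 1, shape[1])
        ky0, kx0 = y0 - (y - ry), x0 - (x - rx)
        state[y0:y1, x0:x1] += kernel[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]
        # Emit and reset every neuron whose state crossed the threshold.
        for fy, fx in np.argwhere(state >= threshold):
            yield fx, fy, t
            state[fy, fx] = 0.0

# Example: a 3x3 kernel on a 64x64 neuron array and three input events.
kernel = np.full((3, 3), 0.4)
outputs = list(event_driven_convolution([(10, 10, 0.1), (10, 11, 0.2), (11, 10, 0.3)],
                                         kernel, (64, 64)))
```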