52 research outputs found

    Plant image retrieval using color, shape and texture features

    We present a content-based image retrieval system for plant images, intended especially for the house-plant identification problem. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We studied the suitability of various well-known color, shape and texture features for this problem, and introduce some new texture-matching techniques and shape features. Feature extraction is applied after segmenting the plant region from the background using the max-flow min-cut technique. Results on a database of 380 plant images belonging to 78 different types of plants show the promise of the proposed new techniques and of the overall system: in 55% of the queries, the correct plant image is retrieved among the top-15 results. Furthermore, the accuracy rises to 73% when a 132-image subset of well-segmented plant images is considered.
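As a rough illustration of the retrieval side of such a system (not the authors' actual features or segmentation), the sketch below ranks a database by a quantized joint RGB histogram; the bin count and the histogram-intersection similarity are illustrative choices, not taken from the paper:

```python
import numpy as np

def color_histogram(image, bins=8):
    # Quantize each RGB channel into `bins` levels and build a joint,
    # normalized histogram -- one simple global color feature.
    quantized = (image.astype(np.int64) * bins) // 256
    idx = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def top_k(query_hist, db_hists, k=15):
    # Histogram-intersection similarity; larger means more similar.
    sims = [np.minimum(query_hist, h).sum() for h in db_hists]
    return np.argsort(sims)[::-1][:k]
```

A query whose image is present in the database should then rank itself first, with the "top-15" evaluation of the abstract corresponding to `k=15`.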

    Fuzzy Based Texton Binary Shape Matrix (FTBSM) for Texture Classification

    The texton is an extensively applied approach to texture analysis. The technique depends strongly on a number of parameters, and each variation in the value of any parameter may affect texture-characterization performance. Moreover, micro-structure textons are unable to extract texture features, which also has a negative effect on the classification task. This paper derives a new descriptor, the Fuzzy Based Texton Binary Shape Matrix (FTBSM), which avoids the drawbacks mentioned above and is robust to variation of any feature/parameter. The proposed FTBSM is defined based on the similarity of neighboring edges in a 3×3 neighborhood. With micro-structures serving as a bridge for extracting shape features, it effectively integrates color, texture and shape information as a whole for texture classification, and it exhibits low dimensionality. The proposed FTBSM method is tested on the Vistex and Akarmarble texture datasets of natural images. The results demonstrate that it is much more efficient and effective for texture classification than representative feature descriptors such as logical operators, GLCM and LBP.
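The 3×3 neighborhood comparisons underlying texton- and LBP-style descriptors can be sketched as follows; this is a generic binary-pattern descriptor for illustration only, not the paper's fuzzy FTBSM:

```python
import numpy as np

def binary_shape_codes(gray):
    # For every interior pixel, compare the 8 neighbours in its 3x3
    # neighbourhood to the centre and pack the results into an 8-bit code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def texture_descriptor(gray):
    # 256-bin normalized histogram of the binary codes.
    h = np.bincount(binary_shape_codes(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

The histogram of codes is then the texture feature compared between images; fuzzifying the `n >= c` test is where a fuzzy variant such as FTBSM would differ.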

    Content-Dependent Image Search System for Aggregation of Color, Shape and Texture Features

    Existing image search systems often have difficulty finding an appropriate retrieved image for an image query, commonly because the user's intention in searching differs from the dominant information extracted from the image. In this paper we present a new approach to a content-dependent image search system. The system uses the color distribution inside an image and detects a cloud of clustered colors as a candidate object. We apply image segmentation as a content-dependent step before feature extraction, in order to determine whether there is an object inside the image. The system extracts three features (color, shape and texture) and aggregates them for similarity measurement between an image query and the image database. An HSV color histogram is used to extract the color feature. The shape feature extraction uses Connected Component Labeling (CCL), computing the area, equivalent diameter, extent, convex hull, solidity, eccentricity, and perimeter of each object. The texture feature extraction uses the Leung-Malik (LM) approach with 15 kernels. To demonstrate the applicability of the proposed system, we applied it to the benchmark 1000-image SIMPLIcity dataset, consisting of 10 categories: Africans, beaches, historic buildings, buses, dinosaurs, elephants, roses, horses, mountains, and food. The experiments achieved an object-detection accuracy of 62% with the color feature alone, 71% with texture, 60% with shape, 72% with combined color-texture, 67% with combined color-shape, 72% with combined texture-shape, and 73% with all features combined.
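A minimal sketch of the color part of such a pipeline, assuming hypothetical bin counts and a simple weighted L1 aggregation rather than the authors' exact similarity measure:

```python
import colorsys
import numpy as np

def hsv_histogram(rgb, h_bins=8, s_bins=4, v_bins=4):
    # Convert each RGB pixel (0-255) to HSV and accumulate a joint
    # (h, s, v) histogram -- the color feature used for matching.
    hist = np.zeros((h_bins, s_bins, v_bins))
    for r, g, b in rgb.reshape(-1, 3) / 255.0:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hist[min(int(h * h_bins), h_bins - 1),
             min(int(s * s_bins), s_bins - 1),
             min(int(v * v_bins), v_bins - 1)] += 1
    return hist.ravel() / hist.sum()

def aggregate_distance(feats_a, feats_b, weights):
    # Weighted sum of per-feature L1 distances; equal weights would be
    # one simple way to combine color, shape and texture features.
    return sum(w * np.abs(a - b).sum()
               for w, (a, b) in zip(weights, zip(feats_a, feats_b)))
```

Shape (CCL properties) and texture (LM filter responses) vectors would be appended to `feats_a`/`feats_b` with their own weights in the same aggregation.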

    Recognition of the Fabric Composition of Clothing Items in E-Commerce Images Using Neural Networks

    Objectives. To develop a new approach for recognizing the fabric composition of clothing in e-commerce images: a generative adversarial network (GAN) generates synthetic clothing images with known fabric composition, which are then used to train a convolutional neural network (CNN) to classify the fabric composition of real clothing images. Instead of a classic clothing image, a copy is generated in which the material is magnified down to the fibers and the fabric structure. Methods. The main methods are the creation and annotation of a dataset for neural-network training, synthesis of clothing-fabric images, the choice and modification of the network architecture, validation and testing, and interpretation of the results. Results. Experiments with the proposed method show that it accurately recognizes the fabric composition of e-commerce clothing and can be used to improve search and browsing on websites. Conclusion. Using a generative adversarial network, a dataset of e-commerce products was synthesized and annotated, neural networks were built to recognize the fabric composition of clothing items, and the results were compared. The study showed that the new approach achieves higher accuracy than already known methods; in addition, using an attention model further improves the metrics.

    Content Based Image Retrieval (CBIR) in Remote Clinical Diagnosis and Healthcare

    Content-Based Image Retrieval (CBIR) locates, retrieves and displays images similar to one given as a query, using a set of features. It demands accessible data in medical archives and from medical equipment, to infer meaning after some processing. Cases similar in some sense to the target image can aid clinicians. CBIR complements text-based retrieval and improves evidence-based diagnosis, administration, teaching, and research in healthcare. It facilitates visual/automatic diagnosis and decision-making in real-time remote consultation/screening, store-and-forward tests, home-care assistance and overall patient surveillance. Metrics help compare visual data and improve diagnostics. Specially designed architectures can benefit from the application scenario. CBIR use calls for file-storage standardization, querying procedures, efficient image transmission, realistic databases, global availability, access simplicity, and Internet-based structures. This chapter recommends important and complex aspects required to handle visual content in healthcare. (28 pages, 6 figures; book chapter from the Encyclopedia of E-Health and Telemedicine.)

    Human-Centered Content-Based Image Retrieval

    Retrieval of images that lack (suitable) annotations cannot be achieved through (traditional) Information Retrieval (IR) techniques. Access to such collections can be achieved by applying computer-vision techniques to the IR problem, an approach baptized Content-Based Image Retrieval (CBIR). In contrast with most purely technological approaches, the thesis Human-Centered Content-Based Image Retrieval approaches the problem from a human/user-centered perspective. Psychophysical experiments were conducted in which people were asked to categorize colors. The data gathered from these experiments were fed to a Fast Exact Euclidean Distance (FEED) transform (Schouten & Van den Broek, 2004), which enabled the segmentation of color space based on human perception (Van den Broek et al., 2008). This unique color-space segmentation was exploited for texture analysis and image segmentation, and subsequently for full-featured CBIR. In addition, a unique CBIR benchmark was developed (Van den Broek et al., 2004, 2005). This benchmark was used to explore what and how several parameters (e.g., color and distance measures) of the CBIR process influence retrieval results. In contrast with other research, users' judgements were used as the metric. The online IR and CBIR system Multimedia for Art Retrieval (M4ART) (URL: http://www.m4art.org) has been (partly) founded on the techniques discussed in this thesis. References: - Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2004). The utilization of human color categorization for content-based image retrieval. Proceedings of SPIE (Human Vision and Electronic Imaging), 5292, 351-362. [see also Chapter 7] - Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2005). Content-Based Image Retrieval Benchmarking: Utilizing Color Categories and Color Distributions. Journal of Imaging Science and Technology, 49(3), 293-301. [see also Chapter 8] - Broek, E.L. van den, Schouten, Th.E., and Kisters, P.M.F. (2008). Modeling Human Color Categorization. Pattern Recognition Letters, 29(8), 1136-1144. [see also Chapter 5] - Schouten, Th.E. and Broek, E.L. van den (2004). Fast Exact Euclidean Distance (FEED) transformation. In J. Kittler, M. Petrou, and M. Nixon (Eds.), Proceedings of the 17th IEEE International Conference on Pattern Recognition (ICPR 2004), Vol. 3, pp. 594-597. August 23-26, Cambridge, United Kingdom. [see also Appendix C]
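FEED computes the exact Euclidean distance transform (EDT) of a binary image efficiently. Purely as a reference for what an exact EDT contains (not a sketch of FEED's fast scanning algorithm), a brute-force version can be written as:

```python
import numpy as np

def exact_edt(mask):
    # Brute-force exact Euclidean distance transform: for every foreground
    # pixel, the distance to the nearest background (0) pixel. FEED computes
    # the same map far more efficiently; this is only a reference sketch.
    bg = np.argwhere(mask == 0)
    out = np.zeros(mask.shape)
    for y, x in np.argwhere(mask != 0):
        out[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return out
```

Any fast EDT implementation should reproduce this map exactly, which makes the brute-force version useful as a correctness check.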

    Image Segmentation for a Wheeled Soccer Robot Using a Multilayer Neural Network and HSV Color Features

    A wheeled soccer robot is a wheeled robot developed to play soccer fully autonomously. Robots compete as a team against another team on an indoor field of standardized size, and no human intervention is allowed during a match. A camera is mounted on the robot to locate the important objects. One of the stages before object detection is segmentation. Image segmentation aims to separate objects from the background, or to divide the image into regions such that each region has similar attributes. One way to perform image segmentation is to classify each pixel in the image as a particular object or as background. In this study, each pixel in the HSV color space is classified into 6 classes: teammate (cyan), opponent (magenta), field (green), field lines (white), ball (orange), and other objects (black). Classification is performed with a multilayer neural network model. The classification results are then used to build a lookup table that allows each color pixel to be classified quickly on the robot's computer. From experiments and fine-tuning of the hyperparameters and architecture of the multilayer neural network, the smallest average error obtained was 0.16%. Evaluation of the segmentation results gave an average error of 19.37%.
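The lookup-table step described above — classify each quantized HSV cell once offline, then segment frames with plain table lookups — can be sketched generically; the bin counts and the toy classifier below are illustrative stand-ins for the paper's trained network:

```python
import numpy as np

def build_lut(classify, h_bins=32, s_bins=32, v_bins=32):
    # Precompute the class of every quantized (H, S, V) cell once, so the
    # robot classifies pixels at runtime with a single table lookup
    # instead of a neural-network forward pass per pixel.
    lut = np.empty((h_bins, s_bins, v_bins), dtype=np.uint8)
    for h in range(h_bins):
        for s in range(s_bins):
            for v in range(v_bins):
                lut[h, s, v] = classify(h / h_bins, s / s_bins, v / v_bins)
    return lut

def segment(hsv, lut):
    # hsv: float array with channels in [0, 1); returns a class-id map.
    idx = (hsv * np.array(lut.shape)).astype(int)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```

In the paper's setting, `classify` would be the trained multilayer network and the table would hold the 6 class ids listed in the abstract.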

    Texture Structure Analysis

    Texture analysis plays an important role in applications such as automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Like the HVS, image-processing and computer-vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity of an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture-regularity metric based on visual saliency. Other key contributions include an adaptive texture-synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the most popular visual attention models are evaluated on their ability to predict visual saliency on textures. Since there is no publicly available database with ground-truth saliency maps for images with exclusively texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture-regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and it combines two texture-regularity scores: a textural similarity score and a spatial distribution score.
    In order to evaluate the performance of the proposed regularity metric, a texture-regularity database called RegTEX is built as part of this work. Subjective testing shows that the proposed metric correlates strongly with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture-regularity metrics in predicting perceived regularity. The impact of the proposed metric on the performance of many image-processing applications is also presented. The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated by building a synthesized-textures database named SynTEX. Subjective testing shows that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture-synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture-synthesis method based on the perceived regularity of the original texture. A reduced-reference texture-quality metric for texture synthesis is also proposed; it is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures, where perceived granularity is quantified through a new granularity metric proposed in this work. Subjective testing shows that the proposed quality metric, using just 2 parameters, correlates strongly with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on 3 different texture databases.
    Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established. Dissertation/Thesis, Ph.D. Electrical Engineering, 201

    A generic framework for colour texture segmentation

    This thesis proposes a novel method to combine colour and texture for colour texture segmentation. The objective of this research is to derive a framework for colour texture segmentation and to determine the contribution of colour in colour texture analysis. The colour texture processing is based on feature extraction from colour-textured images: texture features are obtained from the luminance plane, along with colour features from the chrominance planes. Based on this approach, a method was developed for colour texture segmentation that unifies colour and texture features. Two grey-scale texture-analysis techniques, the Local Binary Pattern (LBP) and a Discrete Cosine Transform (DCT) based filter approach, were extended to colour images. Unsupervised k-means clustering was used to cluster pixels in the chrominance planes, and a non-parametric test was used to test the similarity between colour texture regions. An unsupervised texture-segmentation method was then followed to obtain the segmented image. The segmentation was evaluated using ROC curves, and a quantitative estimation of the contributions of colour and texture to segmentation performance is presented. The use of different colour spaces was also investigated. The proposed method was tested on mosaic and natural images from VisTex and other image databases widely used in computer vision. Applications of the proposed colour texture segmentation method include Irish Script On Screen (ISOS) images, to segment the colour-textured regions in a document; skin-cancer images, to identify the diseased area; and Sediment Profile Imagery (SPI), to segment underwater images. Modelling colour and texture as distributions over regions provided good discrimination of colour and texture. The results indicated that incorporating colour information enhances the texture-analysis techniques, and the methodology proved effective and efficient.
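The chrominance-plane clustering step can be illustrated with a plain k-means sketch; this is a generic implementation, not the thesis code, and the optional `init` argument is added here only for reproducibility:

```python
import numpy as np

def kmeans(points, k, iters=20, init=None):
    # Plain k-means on feature vectors (e.g. the per-pixel chrominance
    # values of an image flattened to an (n, 2) array).
    centres = (points[:k].astype(float).copy() if init is None
               else np.asarray(init, dtype=float).copy())
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels, centres
```

The resulting per-pixel labels give the colour part of the segmentation, which the thesis then combines with LBP/DCT texture evidence.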

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations and reviews