52 research outputs found
Plant image retrieval using color, shape and texture features
We present a content-based image retrieval system for plant images, intended especially for the house-plant identification problem. A plant image consists of a collection of overlapping leaves and possibly flowers, which makes the problem challenging. We study the suitability of various well-known color, shape, and texture features for this task, and introduce several new texture-matching techniques and shape features. Feature extraction is applied after segmenting the plant region from the background using the max-flow min-cut technique. Results on a database of 380 plant images belonging to 78 different types of plants show the promise of the proposed techniques
and of the overall system: in 55% of the queries, the correct plant image is retrieved among the top-15 results. Furthermore, accuracy rises to 73% when a 132-image subset of well-segmented plant images is considered.
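The abstract names max-flow min-cut segmentation but gives no code. As a toy sketch of the underlying idea only (not the authors' implementation; the graph construction, capacities, and the tiny 1-D "image" below are all invented for illustration), one can pose foreground/background separation as a minimum cut with networkx:

```python
import networkx as nx
import numpy as np

# Toy 1-D "image": two bright (plant) and two dark (background) pixels.
pixels = np.array([0.9, 0.8, 0.2, 0.1])

G = nx.DiGraph()
for i, v in enumerate(pixels):
    # Terminal links: how well each pixel matches foreground vs. background.
    G.add_edge("src", i, capacity=v)         # foreground affinity
    G.add_edge(i, "sink", capacity=1.0 - v)  # background affinity
for i in range(len(pixels) - 1):
    # Neighbour links encourage smooth, contiguous segmentations.
    G.add_edge(i, i + 1, capacity=0.3)
    G.add_edge(i + 1, i, capacity=0.3)

# The minimum cut separates the source side (foreground) from the sink side.
cut_value, (fg_side, bg_side) = nx.minimum_cut(G, "src", "sink")
foreground = sorted(p for p in fg_side if p != "src")
print(foreground)  # [0, 1] -- the two bright pixels end up on the plant side
```

Real systems build this graph over all pixels with data-driven terminal capacities; the principle is the same.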
Fuzzy Based Texton Binary Shape Matrix (FTBSM) for Texture Classification
The texton is an extensively applied approach for texture analysis, but it depends strongly on a number of parameters, and varying the value of any parameter may affect texture-characterization performance. Moreover, micro-structure textons fail to extract some texture features, which also harms classification. This paper presents a new descriptor that avoids these drawbacks: the Fuzzy Based Texton Binary Shape Matrix (FTBSM), designed to be robust to variation of any feature/parameter. The proposed FTBSM is defined from the similarity of neighboring edges in a 3x3 neighborhood. With micro-structures serving as a bridge for extracting shape features, it integrates color, texture, and shape information as a whole for texture classification, while exhibiting low dimensionality. The proposed FTBSM method is tested on the VisTex and Akarmarble datasets of natural texture images. The results demonstrate that it is much more efficient and effective for texture classification than representative feature descriptors such as logical operators, GLCM, and LBP.
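Since the abstract benchmarks against LBP, a minimal self-contained version of that baseline (the standard 3x3 Local Binary Pattern histogram, not the proposed FTBSM, whose exact definition is not reproduced here) can be sketched as:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 Local Binary Pattern histogram (256 bins, normalised)."""
    # Offsets of the 8 neighbours, in a fixed clockwise order.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offs):
        # Shifted view of the image gives neighbour `bit` for every pixel at once.
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()  # normalised 256-D texture descriptor

# On a flat patch every neighbour equals its centre, so every code is 255.
flat = np.full((5, 5), 7, dtype=np.uint8)
print(lbp_histogram(flat)[255])  # 1.0
```

Two textures are then compared by a histogram distance (e.g., chi-squared) between their descriptors.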
Content-Dependent Image Search System for Aggregation of Color, Shape and Texture Features
Existing image search systems often struggle to retrieve images appropriate to a query, commonly because the user's search intention differs from the dominant information extracted from the image. In this paper we present a new approach to a content-dependent image search system. The system uses the color distribution inside an image and detects a cloud of clustered colors as a putative object. We apply image segmentation as a content-dependent step before feature extraction in order to determine whether an image contains an object. The system extracts three features (color, shape, and texture) and aggregates them for similarity measurement between a query image and the image database. An HSV color histogram is used as the color feature. The shape feature uses Connected Component Labeling (CCL), from which the area, equivalent diameter, extent, convex hull, solidity, eccentricity, and perimeter of each object are calculated. The texture feature uses the Leung-Malik (LM) filter bank with 15 kernels. To demonstrate applicability, we applied the system to the benchmark 1000-image SIMPLIcity dataset consisting of 10 categories: Africans, beaches, historical buildings, buses, dinosaurs, elephants, roses, horses, mountains, and food. The experiments yielded object-detection accuracy rates of 62% with the color feature alone, 71% with texture, 60% with shape, 72% with combined color-texture, 67% with combined color-shape, 72% with combined texture-shape, and 73% with all three features combined.
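The HSV color-histogram feature mentioned above can be sketched as follows; the quantization (8x4x4 bins) is an assumption for illustration, not the paper's configuration, and the stdlib `colorsys` converter stands in for whatever conversion the authors used:

```python
import colorsys
import numpy as np

def hsv_histogram(rgb_img, bins=(8, 4, 4)):
    """Quantised joint HSV colour histogram of an RGB uint8 image."""
    # Convert every pixel to HSV in [0, 1] (per-pixel stdlib conversion;
    # fine for a sketch, vectorised converters are used in practice).
    hsv = np.array([colorsys.rgb_to_hsv(*px)
                    for px in rgb_img.reshape(-1, 3) / 255.0])
    # Joint 3-D histogram over (H, S, V), flattened to a feature vector.
    hist, _ = np.histogramdd(hsv, bins=bins, range=((0, 1), (0, 1), (0, 1)))
    return (hist / hist.sum()).ravel()  # 8*4*4 = 128-D descriptor

red = np.zeros((4, 4, 3), dtype=np.uint8)
red[..., 0] = 255
feat = hsv_histogram(red)
print(len(feat), feat.max())  # 128 1.0 -- a uniform image puts all mass in one bin
```

Query and database histograms are then compared with a standard distance (e.g., L1 or histogram intersection) and fused with the shape and texture scores.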
Recognizing the fabric composition of clothing items in e-commerce images using neural networks
Objectives. To develop a new approach to recognizing the fabric composition of clothing in e-commerce images, using a generative adversarial network (GAN) to generate synthetic images of clothing with known fabric composition, which are then used to train a convolutional neural network (CNN) to classify the fabric composition of real clothing images. Instead of a classic clothing image, a copy is generated in which the material is zoomed in to the level of fibers and fabric structure. Methods. The main methods are the creation and annotation of a dataset for neural-network training, synthesis of clothing-fabric images, the choice of an architecture and its modification, validation and testing, and interpretation of the results. Results. Experiments with the proposed method show that it accurately recognizes the fabric composition of e-commerce clothing and can be used to improve search and browsing on websites. Conclusion. Using a generative adversarial network, a dataset of e-commerce products was synthesized and annotated, neural networks were built to recognize the fabric composition of clothing items, and the results were compared. The study showed that the new approach to recognizing clothing fabric provides higher accuracy than previously known methods; in addition, the use of an attention model also gives good results, which is reflected in improved metrics.
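The "zoomed copy" described in the abstract (a clothing image with the material enlarged to fiber level) is produced by a GAN in the paper; purely as a toy stand-in for the zooming idea, one can crop the center patch and upscale it by nearest-neighbor index repetition:

```python
import numpy as np

def zoomed_copy(img, zoom=4):
    """Toy version of a 'zoomed copy': crop the centre patch and upscale it
    (nearest neighbour) so fine weave structure fills the frame.
    Illustrative only -- the paper generates this copy with a GAN."""
    h, w = img.shape[:2]
    ph, pw = h // zoom, w // zoom
    y0, x0 = (h - ph) // 2, (w - pw) // 2
    patch = img[y0:y0 + ph, x0:x0 + pw]
    # Nearest-neighbour upsampling via index repetition along each axis.
    return patch.repeat(zoom, axis=0).repeat(zoom, axis=1)

fabric = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(zoomed_copy(fabric).shape)  # (8, 8) -- same frame, centre patch only
```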
Content Based Image Retrieval (CBIR) in Remote Clinical Diagnosis and Healthcare
Content-Based Image Retrieval (CBIR) locates, retrieves, and displays images similar to one given as a query, using a set of features. It demands accessible data in medical archives and from medical equipment, from which meaning is inferred after some processing. A case similar in some sense to the target image can aid clinicians. CBIR complements text-based retrieval and improves evidence-based diagnosis, administration, teaching, and research in healthcare. It facilitates visual/automatic diagnosis and decision-making in real-time remote consultation/screening, store-and-forward tests, home-care assistance, and overall patient surveillance. Metrics help compare visual data and improve diagnosis. Specially designed architectures can benefit from the application scenario. The use of CBIR calls for file-storage standardization, querying procedures, efficient image transmission, realistic databases, global availability, access simplicity, and Internet-based structures. This chapter covers the important and complex aspects required to handle visual content in healthcare.
Comment: 28 pages, 6 figures; book chapter from the "Encyclopedia of E-Health and Telemedicine".
Human-Centered Content-Based Image Retrieval
Retrieval of images that lack (suitable) annotations cannot be achieved through traditional Information Retrieval (IR) techniques. Access to such collections can instead be achieved by applying computer vision techniques to the IR problem, an approach baptized Content-Based Image Retrieval (CBIR). In contrast with most purely technological approaches, the thesis Human-Centered Content-Based Image Retrieval approaches the problem from a human/user-centered perspective. Psychophysical experiments were conducted in which people were asked to categorize colors. The data gathered from these experiments were fed to a Fast Exact Euclidean Distance (FEED) transform (Schouten & Van den Broek, 2004), which enabled the segmentation of color space based on human perception (Van den Broek et al., 2008). This unique color-space segmentation was exploited for texture analysis and image segmentation, and subsequently for full-featured CBIR. In addition, a unique CBIR benchmark was developed (Van den Broek et al., 2004, 2005). This benchmark was used to explore which parameters of the CBIR process (e.g., color and distance measures) influence retrieval results, and how. In contrast with other research, users' judgements were adopted as the metric. The online IR and CBIR system Multimedia for Art Retrieval (M4ART) (URL: http://www.m4art.org) has been (partly) founded on the techniques discussed in this thesis.
References:
- Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2004). The utilization of human color categorization for content-based image retrieval. Proceedings of SPIE (Human Vision and Electronic Imaging), 5292, 351-362. [see also Chapter 7]
- Broek, E.L. van den, Kisters, P.M.F., and Vuurpijl, L.G. (2005). Content-Based Image Retrieval Benchmarking: Utilizing Color Categories and Color Distributions. Journal of Imaging Science and Technology, 49(3), 293-301. [see also Chapter 8]
- Broek, E.L. van den, Schouten, Th.E., and Kisters, P.M.F. (2008). Modeling Human Color Categorization. Pattern Recognition Letters, 29(8), 1136-1144. [see also Chapter 5]
- Schouten, Th.E. and Broek, E.L. van den (2004). Fast Exact Euclidean Distance (FEED) transformation. In J. Kittler, M. Petrou, and M. Nixon (Eds.), Proceedings of the 17th IEEE International Conference on Pattern Recognition (ICPR 2004), Vol. 3, pp. 594-597. August 23-26, Cambridge, United Kingdom. [see also Appendix C]
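The human color categorization exploited in the thesis can be caricatured as nearest-focal-color assignment. The focal colors below are hypothetical placeholders, and plain per-pixel Euclidean distance stands in for the FEED transform, which computes exact distance maps over the whole color space far more efficiently:

```python
import numpy as np

# Hypothetical focal colours for a few colour categories (RGB, for illustration).
CATEGORIES = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
}

def categorize(pixel):
    """Assign a pixel to the category with the nearest focal colour."""
    px = np.asarray(pixel, dtype=float)
    dists = {name: np.linalg.norm(px - np.asarray(c, dtype=float))
             for name, c in CATEGORIES.items()}
    return min(dists, key=dists.get)

print(categorize((200, 180, 30)))  # yellow
```

Segmenting the full color space this way (one category label per RGB cell) yields the kind of perception-based partition the thesis builds on, though the thesis derives its categories from psychophysical data rather than fixed focal points.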
Image Segmentation on a Wheeled Soccer Robot Using a Multilayer Neural Network and HSV Color Features
The wheeled soccer robot is a wheeled robot developed to play soccer fully autonomously. Robots compete as a team against other teams on an appropriately sized indoor field, and no human intervention is allowed during a match. A camera is mounted on the robot to locate the important objects. One stage that precedes object detection is segmentation. Image segmentation aims to separate objects from the background, or to divide the image into regions such that each region has similar attributes. One way to perform image segmentation is to classify each pixel of the image as a particular object or as background. In this study, each pixel in the HSV color space is classified into one of six classes: teammate (cyan), opponent (magenta), field (green), field line (white), ball (orange), and other objects (black). Classification is performed with a multilayer neural network model. The classification results are then used to build a lookup table for fast per-pixel color classification on the robot's computer. After experiments and fine-tuning of the hyperparameters and architecture of the multilayer neural network, the smallest average error obtained was 0.16%. Evaluation of the segmentation results yielded an average error of 19.37%.
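The lookup-table step described above (classify every quantized HSV cell once offline, then index at runtime) can be sketched as follows. The rule-based `classify_hsv` is a hypothetical stand-in for the trained multilayer network, and the 32-level quantization and thresholds are assumptions for illustration:

```python
import numpy as np

# Class labels used by the robot (order is illustrative).
CLASSES = ["field", "line", "ball", "teammate", "opponent", "other"]

def classify_hsv(h, s, v):
    """Stand-in for the trained network: hand-written rules mapping an
    HSV pixel (all channels in [0, 1]) to a class. Thresholds are invented."""
    if v > 0.9 and s < 0.2:
        return CLASSES.index("line")   # bright, unsaturated -> white line
    if 0.20 < h < 0.45 and s > 0.4:
        return CLASSES.index("field")  # green hues -> field
    if h < 0.10 and s > 0.5:
        return CLASSES.index("ball")   # orange hues -> ball
    return CLASSES.index("other")

# Precompute a 32x32x32 table so runtime per-pixel work is a single index.
N = 32
lut = np.empty((N, N, N), dtype=np.uint8)
for hi in range(N):
    for si in range(N):
        for vi in range(N):
            # Classify the centre of each quantisation cell.
            lut[hi, si, vi] = classify_hsv((hi + .5) / N, (si + .5) / N, (vi + .5) / N)

def label_pixel(h, s, v):
    """Runtime path: quantise and look up -- no network evaluation needed."""
    q = lambda x: min(int(x * N), N - 1)
    return CLASSES[lut[q(h), q(s), q(v)]]

print(label_pixel(0.33, 0.8, 0.6))  # field
```

The same pattern works when `classify_hsv` is replaced by a trained network's forward pass: the (possibly slow) model is evaluated only 32,768 times offline.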
Texture Structure Analysis
Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures. This structure can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Similar to the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the performance of the most popular visual attention models in predicting visual saliency on textures is evaluated. Since there is no publicly available database with ground-truth saliency maps on images with exclusive texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The proposed metric is based on the observation that VSM characteristics differ between textures of differing regularity. The proposed texture regularity metric is based on two texture regularity scores, namely a textural similarity score and a spatial distribution score.
In order to evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX, is built as a part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations and outperforms some of the popular texture regularity metrics in predicting the perceived regularity. The impact of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of the perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through building a synthesized textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularities exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed as part of this work. The metric is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures. The perceived granularity is quantified through a new granularity metric that is proposed in this work. It is shown through subjective testing that the proposed quality metric, using just 2 parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms the state-of-the-art full-reference quality metrics on 3 different texture databases. 
Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established. Dissertation/Thesis, Ph.D. Electrical Engineering, 201
A generic framework for colour texture segmentation
This thesis proposes a novel method to combine colour and texture for colour texture segmentation. The objective of this research is to derive a framework for colour texture segmentation and to determine the contribution of colour in colour texture analysis. The colour texture processing is based on feature extraction from colour-textured images: texture features are obtained from the luminance plane, and colour features from the chrominance planes. Based on this approach, a method was developed for colour texture segmentation. The proposed method unifies colour and texture features to solve the colour texture segmentation problem. Two grey-scale texture analysis techniques, the Local Binary Pattern (LBP) and a Discrete Cosine Transform (DCT) based filter approach, were extended to colour images. Unsupervised k-means clustering was used to cluster pixels in the chrominance planes. A non-parametric test was used to test the similarity between colour texture regions. An unsupervised texture segmentation method was followed to obtain the segmented image. The evaluation of the segmentation was based on ROC curves. A quantitative estimation of the performance of colour and texture in segmentation was presented. The use of different colour spaces was also investigated in this study. The proposed method was tested using different mosaic and natural images obtained from VisTex and other prominent image databases used in computer vision. The applications for the proposed colour texture segmentation method are: Irish Script On Screen (ISOS) images, for segmenting the colour-textured regions in documents; skin cancer images, to identify the diseased area; and Sediment Profile Imagery (SPI), to segment underwater images. The inclusion of colour and texture as distributions of regions provided a good discrimination of the colour and the texture.
The results indicated that incorporating colour information enhanced the texture analysis techniques, and the methodology proved effective and efficient.
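The unsupervised k-means clustering of chrominance-plane pixels can be sketched with a minimal implementation. This is a generic Lloyd iteration, not the thesis's code; the deterministic spread initialization is chosen here for reproducibility, where production k-means would use random or k-means++ seeding:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Minimal k-means (Lloyd's algorithm) for clustering 2-D chrominance pixels."""
    # Deterministic initialisation: k points spread across the input order.
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centre (Euclidean distance).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated blobs standing in for two colour regions' (Cb, Cr) pixels.
blob_a = np.random.default_rng(1).normal((50, 50), 2, (100, 2))
blob_b = np.random.default_rng(2).normal((200, 120), 2, (100, 2))
labels, _ = kmeans(np.vstack([blob_a, blob_b]), k=2)
print(len(set(labels[:100])), len(set(labels[100:])))  # 1 1 -- one cluster per blob
```

In the segmentation pipeline, the resulting per-pixel cluster labels define candidate colour regions that the texture features and similarity test then refine.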
Texture and Colour in Image Analysis
Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. The volume also features benchmarks, comparative evaluations, and reviews.