
    An Enhanced Texture-Based Feature Extraction Approach for Classification of Biomedical Images of CT-Scan of Lungs

    Content-Based Image Retrieval (CBIR) techniques based on texture have gained considerable popularity in recent times. In the proposed work, a feature vector is obtained by concatenating the features extracted with the local mesh peak valley edge pattern (LMePVEP) technique, a dynamic-threshold-based local mesh ternary pattern technique, and the texture of the image in five different directions. The concatenated feature vector is then used to classify images from two datasets, viz. the Emphysema dataset and the Early Lung Cancer Action Program (ELCAP) lung database. The proposed framework improves accuracy by 12.56%, 9.71% and 7.01% on average for dataset 1 and by 9.37%, 8.99% and 7.63% on average for dataset 2 over three popular image retrieval algorithms.
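
    The LMePVEP and local mesh ternary pattern descriptors are not available in standard libraries, but the overall pipeline the abstract describes (concatenating several texture descriptors into one vector and classifying with it) can be sketched with off-the-shelf tools. In the sketch below, multi-scale uniform LBP histograms and directional GLCM statistics stand in for the paper's descriptors, and the k-NN classifier is an assumed choice, not the authors' exact method.

    # Hypothetical sketch: concatenated texture descriptors for CT-slice classification.
    # Uniform LBP and directional GLCM statistics are stand-ins for LMePVEP and the
    # local mesh ternary pattern, which standard libraries do not provide.
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
    from sklearn.neighbors import KNeighborsClassifier

    def texture_vector(img):                      # img: 2-D uint8 grayscale slice
        feats = []
        for radius in (1, 2, 3):                  # multi-scale LBP histograms
            P = 8 * radius
            lbp = local_binary_pattern(img, P, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(hist)
        # GLCM statistics in four directions (0, 45, 90, 135 degrees)
        glcm = graycomatrix(img, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.append(graycoprops(glcm, prop).ravel())
        return np.concatenate(feats)              # single concatenated feature vector

    def classify(train_feats, train_labels, test_feats, k=5):
        # Simple k-NN over the concatenated texture vectors (assumed classifier).
        clf = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
        return clf.predict(test_feats)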

    Overview of the 2005 cross-language image retrieval track (ImageCLEF)

    The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text- and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from an historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task that is explained in the iCLEF summary. 24 research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. In this paper we describe the ImageCLEF tasks, the submissions from participating groups, and summarise the main findings.

    BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis

    Emergency events involving fire are potentially harmful and demand fast and precise decision making. The use of crowdsourced images and videos in crisis management systems can aid in these situations by providing more information than verbal/textual descriptions. Due to the usually high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several methods for fire detection in video using color-based models. However, they are not adequate for still image processing, because they can suffer from high false-positive rates. These methods also depend on parameters with little physical meaning, which makes fine tuning a difficult task. In this context, we propose a novel fire detection method for still images that combines classification based on color features with texture classification on superpixel regions. Our method uses a reduced number of parameters compared to previous works, easing the process of fine tuning the method. Results show the effectiveness of our method in reducing false positives while its precision remains comparable with state-of-the-art methods. Comment: 8 pages, Proceedings of the 28th SIBGRAPI Conference on Graphics, Patterns and Images, IEEE Press
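
    The described pipeline, per-superpixel color analysis combined with texture classification, can be sketched roughly with standard tools. The SLIC segmentation, the simple red-dominance color rule, and the LBP-based texture check below are illustrative assumptions and not the authors' BoWFire implementation.

    # Rough sketch of a color+texture fire detector on superpixels (not the BoWFire code).
    # Assumes an 8-bit RGB input image and, optionally, a pre-trained sklearn-style
    # texture classifier over 10-bin uniform-LBP histograms (label 1 = fire-like).
    import numpy as np
    from skimage.segmentation import slic
    from skimage.color import rgb2gray
    from skimage.feature import local_binary_pattern

    def candidate_fire_superpixels(img_rgb, texture_clf=None, n_segments=300):
        """Return a boolean mask of superpixels that look fire-like."""
        segments = slic(img_rgb, n_segments=n_segments, compactness=10, start_label=0)
        gray = (rgb2gray(img_rgb) * 255).astype(np.uint8)
        mask = np.zeros(segments.shape, dtype=bool)
        for label in np.unique(segments):
            region = segments == label
            r, g, b = img_rgb[region].mean(axis=0)
            # Illustrative color rule: fire regions tend to be bright and red-dominant.
            if not (r > 120 and r > g > b):
                continue
            if texture_clf is not None:
                lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")[region]
                hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                if texture_clf.predict(hist.reshape(1, -1))[0] != 1:
                    continue
            mask |= region
        return mask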

    aZIBO Shape Descriptor for Monitoring Tool Wear in Milling

    The aim of this work is to efficiently estimate tool wear in metal machining and to improve tool replacement operations. Image processing and classification are used to automate decision making about the right time to replace the tool. Specifically, the aZIBO shape descriptor (absolute Zernike moments with invariant boundary orientation) has been used to characterize insert wear and guarantee its optimal use. A dataset composed of 577 regions with different wear levels has been created. Two different classification processes have been carried out: the first with three classes (low, medium and high wear: L, M and H, respectively) and the second with only two classes, Low (L) and High (H). Classification was performed using, on the one hand, kNN with five different distances and five values of k and, on the other hand, a support vector machine (SVM). The performance of aZIBO has been compared with classical shape descriptors such as the Hu and Flusser moments. It outperforms them, obtaining success rates of up to 91.33% for the L-H classification and 90.12% for the L-M-H classification.
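
    aZIBO itself is not part of common libraries, but the evaluation protocol the abstract describes (kNN with several distance metrics and values of k versus an SVM, over precomputed descriptors) can be sketched with scikit-learn. The descriptor matrix X, the labels y, the particular distance metrics and the RBF kernel below are placeholder assumptions.

    # Illustrative comparison of kNN (several metrics and k) against an SVM on
    # precomputed shape descriptors. X would hold the 577 aZIBO vectors, y the
    # wear labels (L/M/H or L/H); cross-validated accuracy is used as the score.
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    def compare_classifiers(X, y):
        results = {}
        for metric in ("euclidean", "manhattan", "chebyshev", "cosine", "canberra"):
            for k in (1, 3, 5, 7, 9):
                knn = KNeighborsClassifier(n_neighbors=k, metric=metric)
                results[f"kNN {metric} k={k}"] = cross_val_score(knn, X, y, cv=5).mean()
        svm = SVC(kernel="rbf", C=1.0)
        results["SVM (RBF)"] = cross_val_score(svm, X, y, cv=5).mean()
        return results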

    Indian Monuments Classification using Support Vector Machine

    Content-Based Image Retrieval has recently become a widely popular and efficient searching and indexing approach used by knowledge seekers. The use of images by e-commerce sites and by product and service industries is now commonplace. Travel and tourism are the largest service industries in India. Every year people visit tourist places and upload pictures of their visit to social networking sites or share them via mobile devices with friends and relatives. Classification of monuments is helpful to hoteliers for the development of new hotels with state-of-the-art amenities, to travel service providers, to restaurant owners, to government agencies for security, etc. The proposed system extracts features from images of Indian monuments visited by tourists and classifies them using a linear Support Vector Machine (SVM). The system is divided into three main phases: preprocessing, feature vector creation and classification. The extracted features are based on Local Binary Pattern, Histogram, Co-occurrence Matrix and Canny Edge Detection methods. Once the feature vector has been constructed, classification is performed using a linear SVM. A database of 10 popular Indian monuments was generated with 50 images for each class. The proposed system is implemented in MATLAB, achieves very high accuracy, and was also tested on other popular benchmark databases.
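
    The original system is implemented in MATLAB; a rough Python equivalent of the described feature pipeline (LBP histogram, intensity histogram, GLCM statistics and Canny edge density feeding a linear SVM) might look like the sketch below. The bin counts, GLCM properties and edge-density summary are assumptions, and dataset loading is omitted.

    # Python sketch of the described feature pipeline (the paper's system is in MATLAB).
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops, canny
    from sklearn.svm import LinearSVC

    def monument_features(gray_u8):                       # 2-D uint8 grayscale image
        lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        int_hist, _ = np.histogram(gray_u8, bins=32, range=(0, 256), density=True)
        glcm = graycomatrix(gray_u8, [1], [0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        glcm_feats = np.concatenate([graycoprops(glcm, p).ravel()
                                     for p in ("contrast", "energy", "homogeneity")])
        edge_density = canny(gray_u8, sigma=1.0).mean()   # fraction of edge pixels
        return np.concatenate([lbp_hist, int_hist, glcm_feats, [edge_density]])

    def train_classifier(train_images, train_labels):
        X = np.vstack([monument_features(im) for im in train_images])
        return LinearSVC().fit(X, train_labels)           # linear SVM as in the paper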

    Texture features extraction based on GLCM for face retrieval system

    Texture features play an important role in most image retrieval techniques for obtaining results of high accuracy. In this work, a face image retrieval method based on texture analysis and statistical features is proposed, with the texture features extracted using the GLCM tool. The GLCM-based method involves two phases. In the first phase, several standard image processing techniques work together to locate the main object of the face image (the center of the face image); the gray level co-occurrence matrix (GLCM) is then computed for the gray face image and some second-order statistical texture features are extracted. In the second phase, face images are retrieved by finding the minimum distance between the texture features of an unknown face image and the texture features of the face images stored in the database system. The experimental results show that the proposed method is capable of achieving a high degree of accuracy in face image retrieval.
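
    The retrieval step described here, second-order GLCM statistics matched by minimum distance against stored gallery features, can be sketched as follows. The Euclidean distance and the specific GLCM properties are assumptions, since the abstract does not name them, and the face-centering preprocessing of the first phase is omitted.

    # Sketch of GLCM-based face retrieval by minimum feature distance (details assumed).
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    PROPS = ("contrast", "correlation", "energy", "homogeneity")   # assumed feature set

    def glcm_features(gray_u8):                  # gray_u8: cropped 2-D uint8 face image
        glcm = graycomatrix(gray_u8, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

    def retrieve(query_img, gallery):
        """gallery: list of (identity, feature_vector) pairs built with glcm_features."""
        q = glcm_features(query_img)
        ids, feats = zip(*gallery)
        dists = np.linalg.norm(np.asarray(feats) - q, axis=1)      # Euclidean distance
        return ids[int(np.argmin(dists))]                          # closest stored face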