
    Effectiveness of MPEG-7 Color Features in Clothing Retrieval

    Clothing is a material humans use to cover the body; it includes dresses, pants, skirts, and other garments. Clothing usually consists of one colour or a combination of several colours, and colour is one of the most important references humans use when looking for clothing that matches their wishes. Colour is also a feature well suited to human vision. Content-Based Image Retrieval (CBIR) is an image-retrieval technique that indexes an image based on the characteristics it contains, such as colour, shape, and texture. CBIR makes searching easier because it supports grouping images by their characteristics. In this work, CBIR is used to search for Muslim fashion based on colour features. The colour features used in this research are the MPEG-7 colour descriptors: the Scalable Color Descriptor (SCD) and the Dominant Color Descriptor (DCD). The SCD captures the overall colour proportions of the image, while the DCD captures the most dominant colours in the image. For each image of Muslim women's clothing, the extraction process uses SCD and DCD. This study used a dataset of 150 images of Muslim women's clothing, divided into red, blue, yellow, green, and brown classes of 30 images each. The similarity between image features is measured using the Euclidean distance. The study used human perception of clothing colour as ground truth: the effectiveness of the SCD and DCD colour features is computed against human subjective similarity. Based on the effectiveness simulation, the DCD-based system gives higher values than the SCD-based one.
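The retrieval step the abstract describes — comparing colour feature vectors with the Euclidean distance and ranking images by similarity — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 4-bin histograms standing in for SCD features are hypothetical toy values.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    if len(a) != len(b):
        raise ValueError("feature vectors must have the same length")
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_by_similarity(query, dataset):
    """Return dataset indices sorted from most to least similar to the query."""
    distances = [(i, euclidean_distance(query, feat)) for i, feat in enumerate(dataset)]
    return [i for i, _ in sorted(distances, key=lambda t: t[1])]

# Toy 4-bin colour histograms standing in for SCD features (hypothetical values).
query = [0.7, 0.1, 0.1, 0.1]     # a red-dominant query image
dataset = [
    [0.6, 0.2, 0.1, 0.1],        # another red-dominant image
    [0.1, 0.7, 0.1, 0.1],        # blue-dominant image
    [0.65, 0.15, 0.1, 0.1],      # red-dominant image, closest to the query
]
ranking = rank_by_similarity(query, dataset)  # red-dominant images rank first
```

A real SCD feature is a Haar-transformed HSV histogram rather than raw bins, but the ranking mechanics are the same.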


    Identifying person re-occurrences for personal photo management applications

    Automatic identification of "who" is present in individual digital images within a photo management system using only content-based analysis is an extremely difficult problem. The authors present a system that enables identification of person re-occurrences within a personal photo management application by combining image content-based analysis tools with context data from image capture. This combined system employs automatic face detection and body-patch matching techniques, which collectively facilitate identifying person re-occurrences within images grouped into events based on context data. The authors introduce a face detection approach combining a histogram-based skin detection model and a modified BDF face detection method to detect multiple frontal faces in colour images. Corresponding body patches are then automatically segmented relative to the size, location and orientation of the detected faces in the image. The authors investigate the suitability of different colour descriptors, including MPEG-7 colour descriptors, colour coherence vectors (CCV) and colour correlograms, for effective body-patch matching. The system has been successfully integrated into the MediAssist platform, a prototype Web-based system for personal photo management, and runs on over 13,000 personal photos.
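The body-patch idea — derive a torso region from a detected face box, then match patches by colour — can be sketched like this. The geometric proportions and the histogram-intersection similarity are illustrative assumptions, not the paper's actual parameters.

```python
def body_patch_box(face_box, img_w, img_h):
    """Estimate a body-patch bounding box below a detected face.

    face_box is (x, y, w, h). The proportions used here (shoulder width
    about 2x the face width, torso about 3x the face height) are
    hypothetical, not the authors' geometry.
    """
    x, y, w, h = face_box
    bx = max(0, x - w // 2)           # widen to roughly shoulder width
    by = min(img_h, y + h)            # start just below the chin
    bw = min(img_w - bx, w * 2)
    bh = min(img_h - by, h * 3)       # extend down the torso
    return (bx, by, bw, bh)

def histogram_intersection(h1, h2):
    """Similarity of two normalised colour histograms (1.0 = identical)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# A face detected at (100, 50) of size 40x40 in a 640x480 photo
patch = body_patch_box((100, 50, 40, 40), 640, 480)
```

In the real system the patch descriptor would be one of the evaluated colour features (MPEG-7 descriptors, CCV, or correlograms) rather than a plain histogram.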

    Techniques for effective and efficient fire detection from social media images

    Social media can provide valuable information to support decision making in crisis management, such as in accidents, explosions and fires. However, much of the data from social media are images, which are uploaded at a rate that makes it impossible for human beings to analyse them all. Despite the many works on image analysis, there are no fire detection studies on social media. To fill this gap, we propose the use and evaluation of a broad set of content-based image retrieval and classification techniques for fire detection. Our main contributions are: (i) the development of the Fast-Fire Detection method (FFDnR), which combines feature extractors and evaluation functions to support instance-based learning; (ii) the construction of an annotated set of images with ground truth depicting fire occurrences, the FlickrFire dataset; and (iii) the evaluation of 36 efficient image descriptors for fire detection. Using real data from Flickr, our results showed that FFDnR was able to achieve a precision for fire detection comparable to that of human annotators. Therefore, our work shall provide a solid basis for further developments on monitoring images from social media. Comment: 12 pages, Proceedings of the International Conference on Enterprise Information Systems: Marcos Bedo, Gustavo Blanco, Willian Oliveira, Mirela Cazzolato, Alceu Costa, Jose Rodrigues, Agma Traina, Caetano Traina, 2015, Techniques for effective and efficient fire detection from social media images, ICEIS, 34-4
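The instance-based learning the abstract mentions — classifying a query image by the labels of its nearest annotated examples in feature space — can be sketched with a k-nearest-neighbours vote. This is a generic k-NN stand-in, not the FFDnR method itself; the mean-RGB features and labels below are invented toy values.

```python
import math

def knn_predict(query, examples, k=3):
    """Instance-based classification: label a query feature vector by a
    majority vote among its k nearest labelled examples.

    examples is a list of (feature_vector, label) pairs."""
    nearest = sorted(examples, key=lambda ex: math.dist(query, ex[0]))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

# Toy training set: mean-RGB features (hypothetical) with fire annotations.
examples = [
    ([0.90, 0.30, 0.10], "fire"),
    ([0.80, 0.40, 0.20], "fire"),
    ([0.20, 0.40, 0.80], "no_fire"),
    ([0.30, 0.60, 0.30], "no_fire"),
    ([0.85, 0.20, 0.10], "fire"),
]
label = knn_predict([0.88, 0.35, 0.15], examples)  # a red-hot query patch
```

The paper's contribution lies in which of the 36 descriptors and evaluation functions feed this kind of learner, not in the vote itself.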

    The aceToolbox: low-level audiovisual feature extraction for retrieval and classification

    In this paper we present an overview of a software platform developed within the aceMedia project, termed the aceToolbox, that provides global and local low-level feature extraction from audio-visual content. The toolbox is based on the MPEG-7 eXperimental Model (XM), with extensions to provide descriptor extraction from arbitrarily shaped image segments, thereby supporting local descriptors that reflect real image content. We describe the architecture of the toolbox and provide an overview of the descriptors supported to date. We also briefly describe the segmentation algorithm provided. We then demonstrate the usefulness of the toolbox in the context of two different content processing scenarios: similarity-based retrieval in large collections and scene-level classification of still images.
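The key extension described — computing a descriptor over an arbitrarily shaped segment rather than the whole image — amounts to restricting the feature computation to a pixel mask. The toy "mean colour" descriptor below is an illustrative stand-in for the MPEG-7 descriptors the toolbox actually supports.

```python
def masked_mean_color(pixels, mask):
    """Compute a simple local descriptor (mean colour) over an arbitrarily
    shaped segment, given the image as a flat list of (r, g, b) pixels and
    the segment as a parallel boolean mask.

    A toy analogue of extending a global descriptor to image segments;
    not the aceToolbox/XM implementation."""
    selected = [p for p, m in zip(pixels, mask) if m]
    if not selected:
        raise ValueError("empty segment")
    n = len(selected)
    return tuple(sum(channel) / n for channel in zip(*selected))

# Two red pixels belong to the segment; the blue pixel is masked out.
descriptor = masked_mean_color(
    [(255, 0, 0), (0, 0, 255), (255, 0, 0)],
    [True, False, True],
)
```

Real MPEG-7 descriptors (e.g. SCD) need more than a masked mean, but the mask-then-extract pattern is the same.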

    Combining textual and visual information processing for interactive video retrieval: SCHEMA's participation in TRECVID 2004

    In this paper, the two different applications based on the Schema Reference System that were developed by the SCHEMA NoE for participation in the search task of TRECVID 2004 are illustrated. The first application, named "Schema-Text", is an interactive retrieval application that employs only textual information, while the second, named "Schema-XM", is an extension of the former, employing algorithms and methods for combining textual, visual and higher-level information. Two runs for each application were submitted: I A 2 SCHEMA-Text 3 and I A 2 SCHEMA-Text 4 for Schema-Text, and I A 2 SCHEMA-XM 1 and I A 2 SCHEMA-XM 2 for Schema-XM. The comparison of these two applications in terms of retrieval efficiency revealed that combining information from different data sources can provide higher efficiency for retrieval systems. Experimental testing additionally revealed that initially performing a text-based query and subsequently proceeding with a visual similarity search, using one of the returned relevant keyframes as an example image, is a good scheme for combining visual and textual information.
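The two-stage scheme the abstract ends on — a text query first, then visual similarity search seeded with a returned keyframe — can be sketched as follows. The shot records, transcripts and 2-D "visual features" are invented toy data, not the SCHEMA system's actual representation.

```python
import math

def text_search(shots, term):
    """Stage 1: text-based query over shot transcripts (returns shot indices)."""
    return [i for i, s in enumerate(shots) if term.lower() in s["text"].lower()]

def visual_rerank(shots, example_idx):
    """Stage 2: rank all shots by visual-feature distance to a chosen
    example keyframe, most similar first."""
    q = shots[example_idx]["feat"]
    return sorted(range(len(shots)), key=lambda i: math.dist(q, shots[i]["feat"]))

# Toy shot collection; "feat" stands in for a low-level visual descriptor.
shots = [
    {"text": "reporter near a beach",      "feat": [0.20, 0.80]},
    {"text": "basketball game highlights", "feat": [0.90, 0.10]},
    {"text": "crowd on a sunny beach",     "feat": [0.25, 0.75]},
    {"text": "studio news anchor",         "feat": [0.50, 0.50]},
]
hits = text_search(shots, "beach")      # text stage narrows the collection
ranked = visual_rerank(shots, hits[0])  # visual stage re-ranks from an example
```

The text stage recovers relevant shots even when they look different; the visual stage then surfaces shots that look like the chosen example but whose transcripts never mention the query term.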