12 research outputs found

    An Integrated Content and Metadata based Retrieval System for Art

    In this paper we describe aspects of the Artiste project to develop a distributed content- and metadata-based analysis, retrieval and navigation system for a number of major European museums. In particular, after a brief overview of the complete system, we describe the design and evaluation of some of the image analysis algorithms developed to meet the specific requirements of the users from the museums. These include a method for retrieval based on sub-images, retrieval based on very low quality images, and retrieval using craquelure type…

    Colour cluster analysis for pigment identification

    This paper presents image processing algorithms designed to analyse the colour CIE Lab histogram of high-resolution images of paintings. Three algorithms are illustrated, which attempt to identify colour clusters, to characterise cluster shapes due to shading, and finally to identify pigments. Using the image collection and pigment list of the National Gallery, London, large numbers of images within a restricted period have been classified with a variety of algorithms. The image descriptors produced were also used with suitable comparison metrics to obtain content-based retrieval of the images.
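The cluster-identification idea above can be sketched in a few lines: quantize Lab pixels into a coarse 3-D histogram and treat local maxima as candidate colour clusters. This is a minimal illustration, not the paper's actual algorithms; the bin size and peak criterion are assumptions.

```python
from collections import Counter

def lab_histogram(pixels, bin_size=10):
    """Quantize CIE Lab pixels into a coarse 3-D histogram."""
    hist = Counter()
    for L, a, b in pixels:
        key = (int(L // bin_size), int(a // bin_size), int(b // bin_size))
        hist[key] += 1
    return hist

def colour_clusters(hist, min_count=2):
    """Return histogram bins that are local maxima: candidate colour clusters."""
    clusters = []
    for bin_, count in hist.items():
        if count < min_count:
            continue
        neighbours = [
            (bin_[0] + dl, bin_[1] + da, bin_[2] + db)
            for dl in (-1, 0, 1) for da in (-1, 0, 1) for db in (-1, 0, 1)
            if (dl, da, db) != (0, 0, 0)
        ]
        # a cluster centre dominates all 26 neighbouring bins
        if all(hist.get(n, 0) <= count for n in neighbours):
            clusters.append((bin_, count))
    return sorted(clusters, key=lambda c: -c[1])
```

In a pigment-identification setting, each surviving cluster centre would then be compared against a reference list of pigment colours.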

    Colour appearance descriptors for image browsing and retrieval

    In this paper, we focus on the development of whole-scene colour appearance descriptors for classification, to be used in browsing applications. The descriptors can classify a whole-scene image into various categories of semantically-based colour appearance. Colour appearance is an important feature and has been extensively used in image analysis, retrieval and classification. Using pre-existing global CIELAB colour histograms, we first develop metrics for whole-scene colour appearance: “colour strength”, “high/low lightness” and “multicoloured”. Secondly, we propose methods using these metrics, either alone or combined, to classify whole-scene images into five categories of appearance: strong, pastel, dark, pale and multicoloured. Experiments show positive results, and that the global colour histogram is actually useful and can be used for whole-scene colour appearance classification. We have also conducted a small-scale human evaluation test on whole-scene colour appearance. The results show that, with suitable threshold settings, the proposed methods can describe the whole-scene colour appearance of images close to human classification. The descriptors were tested on thousands of images from various scenes: paintings, natural scenes, objects, photographs and documents. The colour appearance classifications are being integrated into an image browsing system, which allows them also to be used to refine browsing.
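A classification scheme of this kind can be sketched from simple global Lab statistics: mean chroma as a "colour strength" proxy, mean lightness for dark/pale, and a count of distinct hue sectors for "multicoloured". The threshold values and hue-sector scheme below are illustrative assumptions, not the paper's tuned metrics.

```python
import math

def appearance_category(pixels, strong_chroma=40.0, dark_l=35.0,
                        pale_l=75.0, multi_hues=4):
    """Classify a whole scene from (L, a, b) pixels into one of five
    appearance categories. Thresholds are illustrative only."""
    n = len(pixels)
    mean_l = sum(p[0] for p in pixels) / n
    mean_chroma = sum(math.hypot(p[1], p[2]) for p in pixels) / n
    # count distinct 45-degree hue sectors among sufficiently chromatic pixels
    hues = {int(math.degrees(math.atan2(b, a)) // 45)
            for _, a, b in pixels if math.hypot(a, b) > 20}
    if len(hues) >= multi_hues:
        return "multicoloured"
    if mean_chroma >= strong_chroma:
        return "strong"
    if mean_l <= dark_l:
        return "dark"
    if mean_l >= pale_l:
        return "pale"
    return "pastel"
```

A browsing system could then bucket images by the returned label and let users refine a search within one appearance category.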

    A new approach for content-based image retrieval for medical applications using low-level image descriptors

    Content-based image retrieval (CBIR) has become an important factor in medical imaging research and is achieving great success. More applications still need to be developed to build more powerful systems for better image similarity matching and, as a result, better image retrieval. This research focuses on implementing low-level descriptors to maximize the quality of the retrieval of medical images, and is expected to produce better results in terms of image similarity matching. A system that uses low-level descriptors is introduced: three descriptors have been developed and applied in an attempt to increase the accuracy of image matching. The final results showed a capable system for medical image retrieval, especially since low-level image descriptors had not previously been used for image similarity matching in the medical field.
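The low-level-descriptor retrieval pipeline described above follows a standard pattern: extract a descriptor per image, then rank the collection by descriptor distance to the query. The sketch below uses a normalised grey-level histogram with an L1 distance purely as a stand-in; the paper's three descriptors are not specified here.

```python
def grey_histogram(image, bins=8):
    """Low-level descriptor: normalised grey-level histogram of a 2-D image
    whose pixel values lie in [0, 256)."""
    hist = [0.0] * bins
    n = 0
    for row in image:
        for v in row:
            hist[v * bins // 256] += 1
            n += 1
    return [h / n for h in hist]

def l1_distance(h1, h2):
    """City-block distance between two descriptors."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def retrieve(query, collection, k=3):
    """Rank (name, image) pairs by descriptor similarity to the query."""
    qh = grey_histogram(query)
    ranked = sorted(collection,
                    key=lambda item: l1_distance(qh, grey_histogram(item[1])))
    return [name for name, _ in ranked[:k]]
```

Swapping in a different descriptor only requires replacing `grey_histogram`; the ranking machinery stays the same.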

    Data mining and fusion


    Retrieving Ancient Maya Glyphs with Shape Context

    We introduce an interdisciplinary project for archaeological and computer vision research teams on the analysis of the ancient Maya writing system. Our first task is the automatic retrieval of Maya syllabic glyphs using the Shape Context descriptor. We investigated the effect of several parameters to adapt the shape descriptor, given the high complexity of the shapes and their diversity in our data. We propose an improvement in the cost function used to compute similarity between shapes, making it more restrictive and precise. Our results are promising; they are analyzed via standard image retrieval measurements.
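The Shape Context descriptor referenced above records, for each contour point, a log-polar histogram of where all the other points lie relative to it; histograms are then compared with a chi-squared cost. The following is a simplified sketch (fixed bin counts, no scale normalisation by mean distance), not the paper's adapted version.

```python
import math

def shape_context(points, idx, r_bins=3, theta_bins=4, r_max=2.0):
    """Shape Context at points[idx]: a log-polar histogram of the positions
    of all other contour points relative to it (simplified sketch)."""
    px, py = points[idx]
    hist = [0] * (r_bins * theta_bins)
    for j, (x, y) in enumerate(points):
        if j == idx:
            continue
        dx, dy = x - px, y - py
        r = math.hypot(dx, dy)
        theta = math.atan2(dy, dx) % (2 * math.pi)
        # logarithmic radial bin, uniform angular bin
        rb = min(r_bins - 1, int(math.log1p(r) / math.log1p(r_max) * r_bins))
        tb = int(theta / (2 * math.pi) * theta_bins) % theta_bins
        hist[rb * theta_bins + tb] += 1
    return hist

def chi2_cost(h1, h2):
    """Chi-squared matching cost between two shape-context histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

The "more restrictive and precise" cost function the authors propose would replace `chi2_cost` here.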

    Analyzing ancient Maya glyph collections with Contextual Shape Descriptors

    This paper presents an original approach for shape-based analysis of ancient Maya hieroglyphs, based on an interdisciplinary collaboration between computer vision and archaeology. Our work is guided by the realistic needs of archaeologists and scholars, who critically need support for search and retrieval tasks in large Maya imagery collections. Our paper has three main contributions. First, we introduce an overview of our interdisciplinary approach towards the improvement of the documentation, analysis, and preservation of Maya pictographic data. Second, we present an objective evaluation of the performance of two state-of-the-art shape-based contextual descriptors (Shape Context and Generalized Shape Context) in retrieval tasks, using two datasets of syllabic Maya glyphs. Based on the identification of their limitations, we propose a new shape descriptor named HOOSC, which is more robust and suitable for the description of Maya hieroglyphs. Third, we present what to our knowledge constitutes the first automatic analysis of visual variability of syllabic glyphs along historical periods and across geographic regions of the ancient Maya world via the HOOSC descriptor. Overall, our approach is promising, as it improves performance on the retrieval task, is successfully validated from an epigraphic viewpoint, and has the potential to offer both novel insights in archaeology and practical solutions for real, daily scholarly needs.
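Objective retrieval evaluations like the one described above are usually reported with mean average precision: for each query, average the precision at every rank where a correct match appears, then average across queries. A minimal sketch (standard metric, not code from the paper):

```python
def average_precision(ranked_relevance):
    """Average precision for one query: ranked_relevance is a list of
    booleans, True where the retrieved item is a correct match."""
    total = sum(ranked_relevance)
    if total == 0:
        return 0.0
    hits, ap = 0, 0.0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            ap += hits / i          # precision at this relevant rank
    return ap / total

def mean_average_precision(all_queries):
    """Mean of per-query average precisions."""
    return sum(average_precision(q) for q in all_queries) / len(all_queries)
```

Comparing descriptors such as Shape Context, Generalized Shape Context and HOOSC then reduces to comparing their mAP on the same glyph datasets.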

    Content-based image retrieval of museum images

    Content-based image retrieval (CBIR) is becoming more and more important with the advance of multimedia and imaging technology. Among the many retrieval features associated with CBIR, texture retrieval is one of the most difficult. This is mainly because no satisfactory quantitative definition of texture exists at this time, and also because of the complex nature of texture itself. Another difficult problem in CBIR is query by low-quality images, which means attempting to retrieve images using a poor quality image as a query. Not many content-based retrieval systems have addressed the problem of query by low-quality images. Wavelet analysis is a relatively new and promising tool for signal and image analysis. Its time-scale representation provides both spatial and frequency information, thus giving extra information compared to other image representation schemes. This research aims to address some of the problems of query by texture and query by low-quality images by exploiting all the advantages that wavelet analysis has to offer, particularly in the context of museum image collections. A novel query by low-quality images algorithm is presented as a solution to the problem of poor retrieval performance using conventional methods. For the query by texture problem, this thesis provides a comprehensive evaluation of the wavelet-based texture method as well as a comparison with other techniques. A novel automatic texture segmentation algorithm and an improved block-oriented decomposition are proposed for use in query by texture. Finally, all the proposed techniques are integrated in a content-based image retrieval application for museum image collections.
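The wavelet-based texture idea above can be illustrated with a single level of the 2-D Haar transform: the three detail subbands capture horizontal, vertical and diagonal variation, and their energies form a small texture descriptor. This is a generic sketch of the technique, not the thesis's actual method; even image dimensions are assumed.

```python
def haar_level(image):
    """One level of a 2-D Haar transform, returning the (LL, LH, HL, HH)
    subbands of a 2-D image with even dimensions."""
    rows, cols = len(image), len(image[0])
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, rows, 2):
        ll_r, lh_r, hl_r, hh_r = [], [], [], []
        for j in range(0, cols, 2):
            a, b = image[i][j], image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            ll_r.append((a + b + c + d) / 4)   # local average
            lh_r.append((a + b - c - d) / 4)   # horizontal detail
            hl_r.append((a - b + c - d) / 4)   # vertical detail
            hh_r.append((a - b - c + d) / 4)   # diagonal detail
        ll.append(ll_r); lh.append(lh_r); hl.append(hl_r); hh.append(hh_r)
    return ll, lh, hl, hh

def texture_signature(image):
    """Texture descriptor: mean absolute energy of each detail subband."""
    _, lh, hl, hh = haar_level(image)
    def energy(band):
        n = len(band) * len(band[0])
        return sum(abs(v) for row in band for v in row) / n
    return [energy(lh), energy(hl), energy(hh)]
```

Recursing on the LL subband gives a multi-level signature; textures are then compared by the distance between signatures.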