
    Autoencoding the Retrieval Relevance of Medical Images

    Content-based image retrieval (CBIR) of medical images is a crucial task that can contribute to more reliable diagnosis if applied to big data. Recent advances in feature extraction and classification have enormously improved CBIR results for digital images. However, considering the increasing accessibility of big data in medical imaging, we still need to reduce both the memory requirements and the computational expense of image retrieval systems. This work proposes to exclude the features of image blocks that exhibit a low encoding error when learned by an n/p/n autoencoder (p < n). We examine the histogram of autoencoding errors of image blocks for each image class to help decide which image regions, or roughly what percentage of an image, shall be declared relevant for the retrieval task. This reduces feature dimensionality and speeds up the retrieval process. To validate the proposed scheme, we employ local binary patterns (LBP) and support vector machines (SVM), both well-established approaches in the CBIR research community, and use the IRMA dataset with 14,410 x-ray images as test data. The results show that the dimensionality of annotated feature vectors can be reduced by up to 50%, resulting in speedups greater than 27% at the expense of less than a 1% decrease in retrieval accuracy when validating the precision and recall of the top 20 hits.
    Comment: To appear in proceedings of The 5th International Conference on Image Processing Theory, Tools and Applications (IPTA'15), Nov 10-13, 2015, Orléans, France.
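    As a rough illustration of the block-pruning idea, the Python sketch below trains a simple n/p/n autoencoder on image blocks, discards blocks whose reconstruction error falls below a threshold, and summarises the surviving blocks with one averaged LBP histogram. The block size, bottleneck width p, error percentile, and the aggregation into a single histogram are illustrative simplifications, not the paper's exact configuration.

        # Sketch: prune low-error blocks found by an n/p/n autoencoder, describe the rest with LBP.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from skimage.feature import local_binary_pattern

        def image_to_blocks(img, block=16):
            """Split a greyscale image into non-overlapping block x block patches."""
            h, w = img.shape
            patches = [img[r:r + block, c:c + block].ravel()
                       for r in range(0, h - block + 1, block)
                       for c in range(0, w - block + 1, block)]
            return np.asarray(patches, dtype=np.float64) / 255.0

        def relevant_blocks(blocks, p=32, keep_percentile=50):
            """Fit an n/p/n autoencoder (p < n) and keep the blocks with high encoding error."""
            ae = MLPRegressor(hidden_layer_sizes=(p,), max_iter=500, random_state=0)
            ae.fit(blocks, blocks)                       # learn the identity through a bottleneck
            err = np.mean((ae.predict(blocks) - blocks) ** 2, axis=1)
            return blocks[err >= np.percentile(err, keep_percentile)]   # low-error blocks are excluded

        def lbp_descriptor(blocks, block=16, n_points=8, radius=1):
            """Average the uniform-LBP histograms of the retained blocks into one feature vector."""
            hists = []
            for b in blocks:
                codes = local_binary_pattern(b.reshape(block, block), n_points, radius, method="uniform")
                hist, _ = np.histogram(codes, bins=n_points + 2, range=(0, n_points + 2), density=True)
                hists.append(hist)
            return np.mean(hists, axis=0)                # fixed-length input for an SVM

    Keeping only high-error blocks mirrors the paper's premise that blocks the autoencoder reconstructs easily carry little retrieval-relevant information.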

    Hybrid image representation methods for automatic image annotation: a survey

    In most automatic image annotation systems, images are represented with low-level features using either global methods or local methods. In global methods, the entire image is used as a unit. Local methods divide images into blocks, where fixed-size sub-image blocks are adopted as sub-units, or into regions, using segmented regions as sub-units. In contrast to typical automatic image annotation methods that use either global or local features exclusively, several recent methods incorporate both kinds of information, on the premise that combining the two levels of features is beneficial for annotating images. In this paper, we provide a survey of automatic image annotation techniques from the perspective of feature extraction and, in order to complement existing surveys in the literature, focus on the emerging hybrid methods that combine global and local features for image representation.
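    A minimal Python sketch of such a hybrid representation, combining one global intensity histogram with histograms of fixed-size blocks; the grid size and bin counts are arbitrary illustrative choices rather than settings from any surveyed method.

        # Sketch: hybrid global + local (block-based) feature vector for a greyscale image.
        import numpy as np

        def global_histogram(img, bins=32):
            """Global feature: intensity histogram over the whole image."""
            hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
            return hist

        def local_block_histograms(img, grid=4, bins=16):
            """Local features: one histogram per cell of a grid x grid partition."""
            h, w = img.shape
            bh, bw = h // grid, w // grid
            feats = []
            for r in range(grid):
                for c in range(grid):
                    block = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                    hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
                    feats.append(hist)
            return np.concatenate(feats)

        def hybrid_features(img):
            """Concatenate global and local descriptors into a single annotation feature."""
            return np.concatenate([global_histogram(img), local_block_histograms(img)])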

    Vision systems with the human in the loop

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically grounded usability experiments is stressed.

    An adaptive technique for content-based image retrieval

    We discuss an adaptive approach towards Content-Based Image Retrieval. It is based on the Ostensive Model of developing information needs, a special kind of relevance feedback model that learns from implicit user feedback and adds a temporal notion to relevance. The ostensive approach supports content-assisted browsing through visualising the interaction by adding user-selected images to a browsing path, which ends with a set of system recommendations. The suggestions are based on an adaptive query learning scheme, in which the query is learnt from previously selected images. Our approach adapts the original Ostensive Model, which was based on textual features only, to include content-based features that characterise images. In the proposed scheme, textual and colour features are combined using the Dempster-Shafer theory of evidence combination. Results from a user-centred, work-task oriented evaluation show that the ostensive interface is preferred over a traditional interface with manual query facilities. This is due to its ability to adapt to the user's needs, its intuitiveness, and the fluid way in which it operates. Studying and comparing the nature of the underlying information need, it emerges that our approach elicits changes in the user's need based on the interaction, and is successful in adapting the retrieval to match the changes. In addition, a preliminary study of the retrieval performance of the ostensive relevance feedback scheme shows that it can outperform a standard relevance feedback strategy in terms of image recall in category search.
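    The textual/colour fusion can be illustrated with Dempster's rule of combination. The sketch below implements the generic rule for two mass functions over a small frame of candidate images; the particular frame, the mass values, and the choice to leave residual mass on the whole frame are illustrative assumptions, not the paper's exact evidence model.

        # Sketch: Dempster's rule of combination for two sources of evidence.
        from itertools import product

        def combine_dempster(m1, m2):
            """Combine two mass functions given as {frozenset_of_hypotheses: mass}."""
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb                  # mass assigned to contradictory hypotheses
            if conflict >= 1.0:
                raise ValueError("Total conflict: evidence cannot be combined")
            return {h: m / (1.0 - conflict) for h, m in combined.items()}

        # Usage: textual and colour evidence over two candidate images, with some mass
        # kept on the whole frame Theta = {img1, img2} to express ignorance.
        theta = frozenset({"img1", "img2"})
        m_text   = {frozenset({"img1"}): 0.6, frozenset({"img2"}): 0.1, theta: 0.3}
        m_colour = {frozenset({"img1"}): 0.4, frozenset({"img2"}): 0.3, theta: 0.3}
        print(combine_dempster(m_text, m_colour))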

    Efficient Image Retrieval Based on Texture Features Using Concept of Histogram

    Image retrieval is a fast-growing research area nowadays. As information retrieval plays a major role in transmitting knowledge in the form of both text and images, image retrieval has received major attention. In this paper we integrate the histogram intersection measure to compare the query image with database images. This approach measures overall similarity between images by incorporating the local properties of their texture histograms, and our experiments show that it retrieves images accurately.
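    The histogram intersection measure itself is simple to state: the similarity between two histograms is the sum of their bin-wise minima, optionally normalised. A brief Python sketch, with the normalisation convention and the top-k ranking chosen purely for illustration:

        # Sketch: histogram intersection similarity and a simple ranking over a database.
        import numpy as np

        def histogram_intersection(h_query, h_db):
            """Sum of bin-wise minima, normalised so identical histograms score 1.0."""
            h_query = np.asarray(h_query, dtype=float)
            h_db = np.asarray(h_db, dtype=float)
            return np.minimum(h_query, h_db).sum() / h_db.sum()

        def rank_database(h_query, db_histograms, top_k=20):
            """Return indices of the top_k database images by intersection similarity."""
            scores = [histogram_intersection(h_query, h) for h in db_histograms]
            return np.argsort(scores)[::-1][:top_k]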

    Content Based Image Retrieval Using SVM Algorithm

    Conventional content-based image retrieval (CBIR) schemes employing relevance feedback may suffer from several problems in practical applications. First, most ordinary users would like to complete their search in a single interaction, especially on the web. Second, it is time-consuming and difficult to label a large number of negative examples with sufficient variety. Third, ordinary users may introduce some noisy examples into the query. This correspondence explores solutions to a new issue: image retrieval using unclean positive examples. In the proposed scheme, multiple feature distances are combined to obtain image similarity using classification technology. To handle the noisy positive examples, a new two-step strategy is proposed, incorporating data cleaning and a noise-tolerant classifier. Extensive experiments carried out on two different real image collections validate the effectiveness of the proposed scheme.
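    A loose Python sketch of the two-step idea, under stated assumptions: noisy positive examples are cleaned by discarding those whose average distance to the other positives is an outlier, and several feature distances are then folded into one similarity score with illustrative weights. This is a simplified stand-in for the paper's data cleaning and noise-tolerant classifier, not its actual procedure.

        # Sketch: clean noisy positives, then combine multiple feature distances into a similarity.
        import numpy as np

        def clean_positives(feature_sets, z_cut=2.0):
            """Drop positives whose mean distance to the other positives is an outlier.

            feature_sets: list of (n_examples, dim_k) arrays, one array per feature type."""
            n = feature_sets[0].shape[0]
            mean_dist = np.zeros(n)
            for feats in feature_sets:
                d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
                mean_dist += d.sum(axis=1) / (n - 1)
            mean_dist /= len(feature_sets)
            z = (mean_dist - mean_dist.mean()) / (mean_dist.std() + 1e-12)
            keep = z < z_cut                             # keep examples that are not outliers
            return [feats[keep] for feats in feature_sets]

        def combined_similarity(query_feats, db_feats, weights):
            """Weighted sum of per-feature-type distances, converted into a similarity."""
            dist = sum(w * np.linalg.norm(q - d)
                       for w, q, d in zip(weights, query_feats, db_feats))
            return 1.0 / (1.0 + dist)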

    A Query-by-Example Content-Based Image Retrieval System of Non-melanoma Skin Lesions

    This paper proposes a content-based image retrieval system for skin lesion images as a diagnostic aid. The aim is to support decision making by retrieving and displaying relevant past cases visually similar to the one under examination. Skin lesions of five common classes, including two non-melanoma cancer types, are used. Colour and texture features are extracted from lesions, and feature selection is achieved by optimising a similarity matching function. Experiments on our database of 208 images are performed and the results evaluated.
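    One way to read "feature selection by optimising a similarity matching function" is as a search over feature weights that maximises retrieval precision. The sketch below uses leave-one-out precision at k and a plain random search purely for illustration, since the paper does not prescribe this optimiser; a weight of zero effectively drops a feature.

        # Sketch: weighted similarity matching with weights chosen by a simple random search.
        import numpy as np

        def precision_at_k(features, labels, weights, k=5):
            """Leave-one-out: fraction of the k nearest neighbours sharing the query's class."""
            features, labels = np.asarray(features, float), np.asarray(labels)
            w, hits = np.asarray(weights), 0.0
            for i in range(len(features)):
                d = np.sqrt(((features - features[i]) ** 2 * w).sum(axis=1))
                d[i] = np.inf                            # exclude the query itself
                nn = np.argsort(d)[:k]
                hits += np.mean(labels[nn] == labels[i])
            return hits / len(features)

        def select_weights(features, labels, trials=200, seed=0):
            """Random search over non-negative feature weights maximising precision at k."""
            rng = np.random.default_rng(seed)
            best_w, best_p = np.ones(np.asarray(features).shape[1]), -1.0
            for _ in range(trials):
                w = rng.random(np.asarray(features).shape[1])
                p = precision_at_k(features, labels, w)
                if p > best_p:
                    best_w, best_p = w, p
            return best_w, best_p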

    Enhancing Automatic Annotation for Optimal Image Retrieval

    Image search and retrieval based on content is a cumbersome task, particularly when the image database is large. The accuracy of retrieval and the processing speed are two important measures used for assessing and comparing the effectiveness of various systems. Text retrieval is more mature and advanced than image content retrieval. In this dissertation, the focus is on converting image content into text tags that can be easily searched using standard search engines, where the size and speed issues of the database have already been dealt with. Image tagging therefore becomes an essential tool for image retrieval from large image databases. Automation of image tagging has received considerable attention from many researchers in recent years. The goal of image description is to automatically annotate images with tags that semantically represent the image content. The speed and accuracy of image retrieval from large databases are among the areas that can benefit from automatic tagging. In this work, several state-of-the-art image classification and image tagging techniques are reviewed. We propose a new self-learning multilayered tagging framework that addresses the limitations of current approaches and provides mutual accuracy improvement between the recognition layer and the annotation layer. Our results indicate that the proposed framework can improve the overall accuracy of information retrieval in a variety of image databases.

    A Compact Representation of Histopathology Images using Digital Stain Separation & Frequency-Based Encoded Local Projections

    In recent years, histopathology images have been increasingly used as a diagnostic tool in the medical field. The process of accurately diagnosing a biopsy sample requires significant expertise in the field, and as such can be time-consuming and is prone to uncertainty and error. With the advent of digital pathology, using image recognition systems to highlight problem areas or locate similar images can aid pathologists in making quick and accurate diagnoses. In this paper, we specifically consider the encoded local projections (ELP) algorithm, which has previously shown some success as a tool for classification and recognition of histopathology images. We build on this success by proposing a modified algorithm which captures the local frequency information of the image. The proposed algorithm estimates local frequencies by quantifying the changes in multiple projections in local windows of greyscale images. By doing so we remove the need to store the full projections, thus significantly reducing the histogram size and decreasing computation time for image retrieval and classification tasks. Furthermore, we investigate the effectiveness of applying our method to histopathology images which have been digitally separated into their hematoxylin and eosin stain components. The proposed algorithm is tested on the publicly available invasive ductal carcinoma (IDC) data set, and the resulting histograms are used to train an SVM to classify the data. The experiments show that the proposed method outperforms the original ELP algorithm on image retrieval tasks. On classification tasks, the results are comparable to state-of-the-art deep learning methods and better than many handcrafted features from the literature.
    Comment: Accepted for publication in the International Conference on Image Analysis and Recognition (ICIAR 2019).
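    The local-frequency idea can be sketched as follows: inside each local window, project the patch along a few directions and record only how often each projection changes direction, rather than storing the projections themselves. The window size, stride, projection angles, and histogram binning below are illustrative assumptions, not the published ELP parameters.

        # Sketch: a compact descriptor from direction changes of local projections.
        import numpy as np
        from scipy.ndimage import rotate

        def projection_changes(patch, angle):
            """Project a rotated patch onto one axis and count sign changes of its derivative."""
            proj = rotate(patch, angle, reshape=False, order=1).sum(axis=0)
            diffs = np.sign(np.diff(proj))
            return int(np.count_nonzero(np.diff(diffs)))   # crude local frequency estimate

        def frequency_histogram(img, window=16, stride=8, angles=(0, 45, 90, 135), bins=16):
            """Histogram of per-window, per-angle change counts over a greyscale image."""
            h, w = img.shape
            counts = []
            for r in range(0, h - window + 1, stride):
                for c in range(0, w - window + 1, stride):
                    patch = img[r:r + window, c:c + window].astype(float)
                    counts.extend(projection_changes(patch, a) for a in angles)
            hist, _ = np.histogram(counts, bins=bins, range=(0, window), density=True)
            return hist                                    # fixed-length descriptor, e.g. for an SVM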