    Ligand-Based Virtual Screening Using Bayesian Inference Network and Reweighted Fragments

    Many similarity-based virtual screening approaches assume that molecular fragments unrelated to the biological activity carry the same weight as the important ones. This motivated the use of Bayesian networks as an alternative to existing tools for similarity-based virtual screening. In our recent work, the retrieval performance of the Bayesian inference network (BIN) was observed to improve significantly when molecular fragments were reweighted using relevance feedback information. In this paper, a set of active reference structures is used to reweight the fragments in the reference structure: higher weights are assigned to fragments that occur more frequently in the set of active reference structures, while the remaining fragments are penalized. Simulated virtual screening experiments with MDL Drug Data Report datasets showed that the proposed approach significantly improved the retrieval effectiveness of ligand-based virtual screening, especially when the active molecules being sought had a high degree of structural heterogeneity.
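    The reweighting idea described above can be sketched as follows. This is an illustrative Python sketch, not the paper's exact scheme: fragments are represented as strings, each fragment is weighted by the fraction of active reference structures it occurs in, and fragments below a frequency cutoff are penalized by down-scaling their weight.

```python
from collections import Counter

def reweight_fragments(active_sets, penalty=0.5):
    """Weight each fragment by its frequency across the active reference
    structures; fragments occurring in fewer than `penalty` of the actives
    are down-weighted. Illustrative sketch only -- the fragment encoding
    and exact penalty scheme are assumptions, not the paper's method."""
    # Count in how many active structures each fragment appears
    counts = Counter(frag for frags in active_sets for frag in set(frags))
    n = len(active_sets)
    return {frag: (c / n) if c / n >= penalty else (c / n) * penalty
            for frag, c in counts.items()}
```

    A fragment present in every active keeps full weight, while a fragment seen in only one of three actives ends up with a much smaller weight, so activity-irrelevant fragments contribute less to the BIN similarity score.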

    CONTENT BASED IMAGE RETRIEVAL (CBIR) SYSTEM

    Advancement in hardware and telecommunication technology has boosted the creation and distribution of digital visual content. However, this rapid growth of visual content creation has not been matched by the simultaneous emergence of technologies to support efficient image analysis and retrieval. Although there have been attempts to solve this problem using meta-data text annotation, that approach is not practical for large data collections. This system uses 7 different feature vectors covering 3 main low-level feature groups (color, shape and texture). The system takes an image supplied by the user and searches the database for images with similar features by comparing them against a threshold value. One of the most important aspects of CBIR is determining the correct threshold value: setting it too low will retrieve fewer images and may exclude relevant data, while setting it too high may retrieve irrelevant data and increase the search time for image retrieval. Results show that this project is able to increase retrieval accuracy to an average of 70% by combining 7 different feature vectors at the correct threshold value.
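    The threshold-based matching described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical database that maps image names to pre-extracted feature vectors and using Euclidean distance as the similarity measure; the abstract does not specify the actual distance function or data layout.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, threshold):
    """Return names of images whose feature vector lies within `threshold`
    distance of the query's feature vector. `database` is a hypothetical
    {name: feature_vector} mapping; real systems would combine several
    color/shape/texture vectors rather than a single one."""
    return [name for name, vec in database.items()
            if euclidean(query, vec) <= threshold]
```

    With this structure, the trade-off in the abstract is visible directly: a small `threshold` shrinks the returned list (risking missed relevant images), while a large one admits distant, likely irrelevant matches.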

    Interactive content-based image retrieval using relevance feedback

    Image retrieval using automatic region tagging

    The task of tagging, annotating or labelling image content automatically with semantic keywords is a challenging problem. To automatically tag images semantically based on the objects that they contain is essential for image retrieval. In addressing these problems, we explore the techniques developed to combine textual description of images with visual features, automatic region tagging and region-based ontology image retrieval. To evaluate the techniques, we use three corpora comprising: Lonely Planet travel guide articles with images, Wikipedia articles with images and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of using different retrieval similarity measures for the textual component. We also analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination of using textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better in capturing the semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine learning algorithm. We focus on the synergy between ontology and image annotations with the aim of reducing the gap between image features and high-level semantics. 
Ontologies ease information retrieval. They are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process; conversely, keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that acts as a surrogate for concepts derived from image feature descriptors. We test the usability of the constructed ontology by querying it via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontology and image annotations is possible, and that this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing text accompanying the images with visual features, automatic region tagging, and using an ontology to enrich the semantic meaning of the tagged image regions.
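The two fusion ideas in this abstract, an equal-weight linear combination of visual feature scores and a weighted combination of textual and visual similarity, can be sketched together. This is a hedged illustration: the function name and the `alpha` trade-off parameter are assumptions, not the thesis's actual interface, and the thesis determined the best text/visual weights experimentally on INEX 2005/2006 queries.

```python
def fused_score(text_sim, visual_sims, alpha=0.5):
    """Fuse one textual similarity score with several visual feature
    similarity scores. The visual scores are combined with equal weights,
    mirroring the equal-weight linear combination described above; `alpha`
    (the text/visual trade-off) is an illustrative parameter that would be
    tuned experimentally in practice."""
    # Equal-weight linear combination of the visual feature scores
    visual = sum(visual_sims) / len(visual_sims)
    # Weighted fusion of the textual and combined visual components
    return alpha * text_sim + (1 - alpha) * visual
```

Setting `alpha=1.0` falls back to purely textual retrieval and `alpha=0.0` to purely visual retrieval, so sweeping `alpha` over held-out queries is one simple way to find the best weight combination the abstract refers to.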