4 research outputs found

    Using Text Surrounding Method to Enhance Retrieval of Online Images by Google Search Engine

    Purpose: the research aimed to compare the effectiveness of various tags and codes for retrieving images through the Google search engine. Design/methodology: selected images with different characteristics were hosted on a registered domain and carefully studied; distinct conceptual features were assigned to each group of images separately, so that the text surrounding each image group was dissimilar. Images were allocated captions (in Farsi and English), alt text, image titles, file names, free and controlled languages, and text appropriate to the image properties. Findings: allocating text to images on a website causes Google to retrieve more of them. A chi-square test for differences among retrieved images across the five codes revealed that the numbers of images retrieved under different codes differed significantly. Allocating captions in English had the strongest effect on image retrieval in the study sample, whereas the file name had the least effect on the image-retrieval ranking. A Kruskal-Wallis test assessing group differences across the five codes likewise showed significant differences. Originality/Value: this paper recalls the importance of some of the elements that a search engine such as Google may consider when indexing and retrieving images. Widespread use of image tagging on the web enables Google and other search engines to retrieve images successfully.
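    The analysis described in this abstract rests on a chi-square test and a Kruskal-Wallis test over counts and rankings of images retrieved under five tagging codes. The sketch below shows how such a comparison could be run with scipy.stats; the code names, counts, and ranks are placeholder values for illustration, not the study's data or code.

```python
# Illustrative sketch (not the study's actual data or code): comparing image
# retrieval outcomes across five tagging "codes" with the tests named in the
# abstract, using scipy.stats.
from scipy.stats import chi2_contingency, kruskal

# Hypothetical counts of images retrieved vs. not retrieved by Google for
# five tagging codes (caption, alt text, image title, file name, free text).
codes = ["caption_en", "alt_text", "image_title", "file_name", "free_text"]
retrieved     = [42, 35, 30, 12, 25]   # placeholder values
not_retrieved = [ 8, 15, 20, 38, 25]   # placeholder values

# Chi-square test: are retrieval proportions independent of the tagging code?
chi2, p_chi2, dof, _ = chi2_contingency([retrieved, not_retrieved])
print(f"chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.4f}")

# Kruskal-Wallis test: do result ranks differ between the code groups?
# Each list holds hypothetical Google result ranks for images in one group.
ranks_by_code = [
    [1, 2, 3, 5, 8],       # caption_en
    [2, 4, 6, 9, 12],      # alt_text
    [3, 7, 10, 14, 18],    # image_title
    [15, 22, 30, 41, 55],  # file_name
    [5, 9, 13, 20, 27],    # free_text
]
h_stat, p_kw = kruskal(*ranks_by_code)
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
```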

    Context-based multimedia semantics modelling and representation

    The evolution of the World Wide Web, increases in processing power, and greater network bandwidth have contributed to the proliferation of digital multimedia data. Since multimedia data has become a critical resource in many organisations, there is an increasing need for efficient access to data in order to share it, extract knowledge from it, and ultimately use that knowledge to inform business decisions. Existing methods for multimedia semantic understanding are limited to computable low-level features, which raises the question of how to identify and represent the high-level semantic knowledge in multimedia resources. In order to bridge the semantic gap between multimedia low-level features and high-level human perception, this thesis seeks to identify the possible contextual dimensions in multimedia resources that help in semantic understanding and organisation. It investigates the use of contextual knowledge to organise and represent the semantics of multimedia data, aimed at efficient and effective multimedia content-based semantic retrieval. A mixed-methods research approach incorporating both Design Science Research and Formal Methods for investigation and evaluation was adopted. A critical review of current approaches for multimedia semantic retrieval was undertaken and various shortcomings identified. The objectives for a solution were defined, which led to the design, development, and formalisation of a context-based model for multimedia semantic understanding and organisation. The model relies on the identification of different contextual dimensions in multimedia resources to aggregate meaning and facilitate semantic representation, knowledge sharing, and reuse. A prototype system for multimedia annotation, CONMAN, was built to demonstrate aspects of the model and validate the research hypothesis, H₁. Towards providing a richer and clearer semantic representation of multimedia content, the original contributions of this thesis to Information Science include: (a) a novel framework and formalised model for organising and representing the semantics of heterogeneous visual data; and (b) a novel S-Space model that is aimed at visual information semantic organisation and discovery, and forms the foundation for automatic video semantic understanding.
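    As a rough illustration of the general idea of attaching contextual dimensions to a multimedia resource and aggregating them into a higher-level description, a minimal Python sketch follows. The class names, dimension labels, fields, and aggregation step are assumptions for illustration only; they do not reproduce the thesis's CONMAN prototype or S-Space model.

```python
# Hypothetical sketch of context-based annotation of a multimedia resource.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContextAnnotation:
    dimension: str           # e.g. "spatial", "temporal", "event", "provenance"
    value: str               # human-readable contextual value
    confidence: float = 1.0  # how certain the annotator or extractor is


@dataclass
class MultimediaResource:
    uri: str
    media_type: str                        # "image", "video", "audio", ...
    low_level_features: Dict[str, float] = field(default_factory=dict)
    annotations: List[ContextAnnotation] = field(default_factory=list)

    def add_context(self, dimension: str, value: str, confidence: float = 1.0) -> None:
        # Record one contextual annotation for this resource.
        self.annotations.append(ContextAnnotation(dimension, value, confidence))

    def semantic_summary(self) -> Dict[str, List[str]]:
        # Aggregate contextual annotations by dimension into a simple
        # high-level description that a retrieval layer could index.
        summary: Dict[str, List[str]] = {}
        for ann in self.annotations:
            summary.setdefault(ann.dimension, []).append(ann.value)
        return summary


# Usage: annotate a video with a few contextual dimensions and aggregate them.
clip = MultimediaResource(uri="http://example.org/media/clip42.mp4", media_type="video")
clip.add_context("temporal", "2016-05-12T14:00")
clip.add_context("spatial", "Lecture theatre B, campus east")
clip.add_context("event", "guest lecture on semantic retrieval", confidence=0.8)
print(clip.semantic_summary())
```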