56,767 research outputs found

    Constructing an Interaction Behavior Model for Web Image Search

    User interaction behavior is a valuable source of implicit relevance feedback. Web image search uses a different type of search result presentation than general Web search, which leads to different interaction mechanisms and user behavior. For example, image search results are self-contained, so users do not need to click a result to view a landing page as in general Web search, which makes click data sparse. Also, the two-dimensional result placement, instead of a linear result list, makes browsing behavior more complex. It is therefore hard to apply standard user behavior models (e.g., click models) developed for general Web search to Web image search. In this paper, we conduct a comprehensive analysis of image search user behavior using data from a lab-based user study as well as from a commercial search log. We then propose a novel interaction behavior model, called the grid-based user browsing model (GUBM), whose design is motivated by observations from our data analysis. GUBM both captures users' interaction behavior, including cursor hovering, and alleviates position bias. The advantages of GUBM are two-fold: (1) it is based on an unsupervised learning method and does not need manually annotated data for training; (2) it is based on user interaction features on search engine result pages (SERPs) and is easily transferable to other scenarios with a grid-based interface, such as video search engines. We conduct extensive experiments to test the performance of our model using a large-scale commercial image search log. Experimental results show that in terms of behavior prediction (perplexity) as well as topical relevance and image quality (normalized discounted cumulative gain, NDCG), GUBM outperforms state-of-the-art baseline models as well as the original ranking. We make the implementation of GUBM and the related datasets publicly available for future studies.
    Comment: 10 pages
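The core idea behind click models like the one described above is the examination hypothesis: the probability of a click is the probability the user examined the position times the item's relevance, so relevance can be recovered by dividing out position bias. Below is a minimal, hypothetical sketch of that idea, using hover rate as a stand-in for the examination probability of a grid cell; the data, names, and simplifications are illustrative and not the authors' GUBM implementation.

```python
# Illustrative examination-hypothesis estimator for a grid layout:
# P(click | row r, col c) = gamma[r][c] * relevance, where gamma is
# approximated from hover logs. Not the actual GUBM model.

def estimate_relevance(clicks, hovers, impressions):
    """clicks/hovers/impressions: dicts keyed by (row, col, image_id)."""
    relevance = {}
    for key, n_impr in impressions.items():
        if n_impr == 0:
            continue
        # hover rate approximates the examination probability gamma
        gamma = hovers.get(key, 0) / n_impr
        ctr = clicks.get(key, 0) / n_impr
        # divide out position bias: relevance = CTR / P(examined)
        relevance[key] = ctr / gamma if gamma > 0 else 0.0
    return relevance

impressions = {(0, 0, "img_a"): 100, (3, 1, "img_b"): 100}
hovers = {(0, 0, "img_a"): 80, (3, 1, "img_b"): 20}
clicks = {(0, 0, "img_a"): 40, (3, 1, "img_b"): 10}
rel = estimate_relevance(clicks, hovers, impressions)
# img_b in row 3 gets the same debiased relevance as img_a (0.5)
# despite a much lower raw CTR, because few users examine row 3.
```

The point of the sketch is only the debiasing step; a real model would learn examination probabilities jointly with relevance (e.g., via EM) rather than read them directly off hover counts.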

    CSISE: cloud-based semantic image search engine

    Title from PDF of title page, viewed on March 27, 2014. Thesis advisor: Yugyung Lee. Includes bibliographical references (pages 53-56). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2013.
    Due to the rapid exponential growth of data, two challenges we face today are how to handle big data and how to analyze large data sets. An IBM study showed that 90% of the data in the world today was created in the last two years alone. We have especially seen the exponential growth of images on the Web, e.g., more than 6 billion in Flickr, 1.5 billion in the Google image engine, and more than 1 billion in Instagram [1]. Since big data is not only a matter of size but also of heterogeneous types and sources, image search over big data may not be scalable in practical settings. We envision Cloud computing as a new way to transform the big data challenge into a great opportunity. In this thesis, we perform an efficient and accurate classification of a large collection of images using Cloud computing, which in turn supports semantic image search. A novel approach with enhanced accuracy is proposed that uses semantic technology to classify images by analyzing both metadata and image data. A two-level classification model was designed: (i) semantic classification is performed on image metadata using TF-IDF, and (ii) image classification is performed using a hybrid image processing model combining Euclidean distance and SURF FLANN measurements. A Cloud-based Semantic Image Search Engine (CSISE) is also developed to search for an image using the proposed semantic model over a dynamic image repository, by connecting to online image search engines including Google Image Search, Flickr, and Picasa. A series of experiments was performed in a large-scale Hadoop environment on IBM's cloud, over half a million logo images of 76 types. The experimental results show that the performance of the CSISE engine (based on the proposed method) is comparable to that of popular online image search engines, and more accurate (average precision of 71%) than existing approaches.
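The metadata-level step above relies on standard TF-IDF term weighting. As a minimal sketch of how that weighting behaves, the toy documents and category terms below are invented for illustration and are not taken from the CSISE thesis:

```python
import math
from collections import Counter

# Plain TF-IDF over tokenized metadata documents; weights are
# tf(t, d) * log(N / df(t)), the textbook formulation.

def tfidf(docs):
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

docs = [["logo", "coffee", "brand"],
        ["logo", "car", "brand"],
        ["sunset", "beach", "travel"]]
vecs = tfidf(docs)
# terms appearing in several documents ("logo", in 2 of 3) are
# weighted lower than distinctive terms like "coffee" or "sunset".
```

Classification then reduces to comparing these vectors (e.g., by cosine similarity) against per-category keyword profiles.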

    Personal rights management (PRM) : enabling privacy rights in digital online media content

    With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation, in public and private spaces. Fast distribution via Web-based photo albums, online communities, and Web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. In practice, however, they lack efficiency, as they are hard to enforce. In this paper, we discuss a supportive infrastructure aimed at the distribution channel; as soon as a picture is publicly available, the exposed individual has a chance to find it and take proper action. We present a system for exercising the right to one's own image when digital photos, for example from camera phones, are published on the Internet. To detect publication, we propose a watermarking scheme that enables the persons potentially depicted to find the images, without restricting the rights of the photographer.

    Complete Semantics to empower Touristic Service Providers

    The tourism industry has a significant impact on the world's economy, contributing 10.2% of the world's gross domestic product in 2016. It has become a very competitive industry, in which a strong online presence is essential for business success. To achieve this goal, the proper use of the latest Web technologies, particularly schema.org annotations, is crucial. In this paper, we present our effort to improve the online visibility of touristic service providers in the region of Tyrol, Austria, by creating and deploying a substantial amount of semantic annotations according to schema.org, a widely used vocabulary for structured data on the Web. We started our work with the Tourismusverband (TVB) Mayrhofen-Hippach and all touristic service providers in the Mayrhofen-Hippach region, and applied the same approach to other TVBs and regions, as well as to other use cases. The rationale for doing this is straightforward: schema.org annotations enable search engines to understand the content better and provide better results for end users, and enable various intelligent applications to utilize them. As a direct consequence, the region of Tyrol and its touristic services increase their online visibility and decrease their dependency on intermediaries, i.e., Online Travel Agencies (OTAs).
    Comment: 18 pages, 6 figures
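A schema.org annotation of the kind described above is typically embedded in a page as JSON-LD. The sketch below builds one in Python; the provider name, address, and offer are invented examples, not actual TVB Mayrhofen-Hippach data, though the `@type` and property names are standard schema.org vocabulary.

```python
import json

# Hypothetical schema.org annotation for a touristic service provider,
# serialized as JSON-LD. The hotel and its offer are made up.
annotation = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Alpenhof",           # invented provider name
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Mayrhofen",
        "addressRegion": "Tyrol",
        "addressCountry": "AT",
    },
    "makesOffer": {
        "@type": "Offer",
        "name": "Double room, summer season",
        "priceCurrency": "EUR",
        "price": "120.00",
    },
}

# On a real page this JSON would sit inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(annotation, indent=2))
```

Search engines parse such blocks to understand the page's content, which is exactly the visibility mechanism the paper exploits.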

    CAMEL: Concept Annotated iMagE Libraries

    Copyright 2001 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic electronic or print reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. http://dx.doi.org/10.1117/12.410975
    The problem of content-based image search has received considerable attention in the last few years. Thousands of images are now available on the Internet, and many important applications require searching images in domains such as e-commerce, medical imaging, weather prediction, satellite imagery, and so on. Yet content-based image querying is still largely unestablished as a mainstream field, and it is not widely used by search engines. We believe that two of the major hurdles behind this poor acceptance are poor retrieval quality and poor usability. In this paper, we introduce the CAMEL system -- an acronym for Concept Annotated iMagE Libraries -- as an effort to address both of the above problems. The CAMEL system provides an easy-to-use, yet powerful, text-only query interface, which allows users to search for images based on visual concepts identified by specifying relevant keywords. Conceptually, CAMEL annotates images with the visual concepts that are relevant to them. In practice, CAMEL defines visual concepts by looking at sample images off-line and extracting their relevant visual features. Once defined, such visual concepts can be used to search for relevant images on the fly, using content-based search methods. The visual concepts are stored in a Concept Library and are represented by an associated set of wavelet features, which in our implementation are extracted by the WALRUS image querying system. Even though the CAMEL framework applies independently of the underlying query engine, for our prototype we have chosen WALRUS as a back-end, due to its ability to extract and query with image region features. CAMEL improves retrieval quality because it allows experts to build very accurate representations of visual concepts that can then be used even by novice users. At the same time, CAMEL improves usability by supporting the familiar text-only interface currently used by most search engines on the Web. Both improvements represent a departure from traditional approaches to improving image query systems -- instead of focusing on query execution, we emphasize query specification by allowing simpler and yet more precise queries.
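The two-stage flow described above (keywords resolve to concept feature vectors from a library built offline, then content-based search ranks images by feature distance) can be sketched as follows. The toy 2-D vectors stand in for the wavelet features WALRUS would extract; all names and data here are invented for illustration.

```python
# Hypothetical sketch of CAMEL-style concept search: text query ->
# concept vectors -> distance-ranked images. Not the real system.

def search(query_terms, concept_library, image_index, top_k=2):
    # 1) resolve keywords to visual-concept feature vectors
    concepts = [concept_library[t] for t in query_terms
                if t in concept_library]
    if not concepts:
        return []

    # 2) rank images by minimum Euclidean distance to any queried concept
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    scored = [(min(dist(vec, c) for c in concepts), img)
              for img, vec in image_index.items()]
    return [img for _, img in sorted(scored)[:top_k]]

# toy concept library (built offline from sample images) and index
concept_library = {"sunset": (0.9, 0.1), "forest": (0.1, 0.8)}
image_index = {"img1": (0.85, 0.15), "img2": (0.2, 0.7), "img3": (0.5, 0.5)}
print(search(["sunset"], concept_library, image_index, top_k=1))
```

The design point carried over from the abstract is that the user never touches the feature space: experts curate the concept vectors once, and novices query with plain keywords.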

    Accessibility-based reranking in multimedia search engines

    Traditional multimedia search engines retrieve results based mostly on the query submitted by the user, or use a log of previous searches to provide personalized results, without considering the accessibility of the results for users with vision or other impairments. In this paper, a novel approach is presented that incorporates the accessibility of images for users with various vision impairments, such as color blindness, cataract, and glaucoma, in order to rerank the results of an image search engine. The accessibility of individual images is measured through the use of vision simulation filters. Multi-objective optimization techniques utilizing the image accessibility scores are used to handle users with multiple vision impairments, while the impairment profile of a specific user is used to select one of the Pareto-optimal solutions. The proposed approach has been tested with two image datasets, using both simulated and real impaired users, and the results verify its applicability. Although the proposed method has been used for vision accessibility-based reranking, it can also be extended to other types of personalization contexts.
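The multi-objective step above hinges on Pareto dominance: a result dominates another if it is at least as good on every objective (relevance plus one accessibility score per impairment) and strictly better on at least one, and the Pareto-optimal set keeps the non-dominated results. A minimal sketch, with invented scores and a simplified two-impairment profile:

```python
# Toy Pareto-front computation over (relevance, accessibility...) tuples.
# The scores are illustrative, not output of real vision-simulation filters.

def dominates(a, b):
    """True if tuple a is >= b everywhere and > b somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(items):
    # items: {name: (relevance, acc_colorblind, acc_cataract)}
    return {n for n, s in items.items()
            if not any(dominates(t, s) for m, t in items.items() if m != n)}

items = {
    "img1": (0.9, 0.3, 0.4),   # highly relevant, poorly accessible
    "img2": (0.7, 0.8, 0.7),   # good trade-off
    "img3": (0.6, 0.7, 0.6),   # dominated by img2 on every objective
}
print(sorted(pareto_front(items)))
```

Selecting one result from the front according to a user's impairment profile (e.g., weighting the color-blindness objective more heavily) would then be a simple scalarization over the surviving tuples.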