13 research outputs found

    PWIS: Personalized Web Image Search using One-Click Method

    Get PDF
    Personalized Web image search is the task of retrieving the particular images a user intends to find on the Web. To search for images, a user may provide query terms such as keywords, an image file, or a click on a few images, and the system determines images similar to the query. The similarity criteria used for search may include meta tags, color distribution, region/shape attributes, etc. Web-scale image search engines such as Google and Bing rely on surrounding text features. It is highly cumbersome for such engines to interpret a user's search intention from keyword queries alone, which introduces noise and ambiguity into the search results and makes them a poor fit for the user's context. For example, Google's search box auto-completes with related keywords while the user is typing, which may diverge from the user's actual intention. To avoid such faults, it is important to use visual information to resolve the uncertainty of text-based image retrieval. To retrieve exact matches and capture the user's intention, we can allow a text query extended with related images as suggestions. We propose an innovative Web image search approach: with minimal effort, the user only needs to click on one query image, and images from a pool fetched by text-based search are re-ranked based on both visual and textual content.
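The re-ranking step described above can be sketched as follows. The abstract does not specify the visual features or the fusion rule, so this is a minimal sketch assuming precomputed visual feature vectors for the pool and textual relevance scores from the initial search; `rerank_by_click` and `alpha` are illustrative names, not the paper's.

```python
import numpy as np

def rerank_by_click(pool_features, clicked_index, text_scores, alpha=0.5):
    """Re-rank a text-retrieved image pool by visual similarity to the
    single clicked query image, blended with the original text scores."""
    feats = np.asarray(pool_features, dtype=float)
    # L2-normalize rows so the dot product below is cosine similarity
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    visual = feats @ feats[clicked_index]       # similarity to the clicked image
    combined = alpha * visual + (1 - alpha) * np.asarray(text_scores, dtype=float)
    return np.argsort(-combined)                # indices, best match first
```

A larger `alpha` trusts the click (visual evidence) more; `alpha=0` reduces to the original text-based ranking.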

    Bayesian learning of inverted Dirichlet mixtures for SVM kernels generation

    Get PDF
    We describe approaches for positive data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). Inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel generation approaches we consider, grounded in ideas from information theory, allow the incorporation of the data's structure and its structural constraints. Inverted Dirichlet mixture models are learned within a principled Bayesian framework using both the Gibbs sampler and Metropolis-Hastings for parameter estimation, and the Bayes factor for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach uses priors over the model parameters, which we derive by showing that the inverted Dirichlet distribution belongs to the family of exponential distributions, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method with two challenging real-world applications, namely object detection and visual scene analysis and classification.
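The abstract does not spell out its information-theoretic kernels, but one common way to turn a fitted mixture into an SVM kernel is to map each point to its vector of posterior responsibilities under the mixture and take inner products. The sketch below does this for an inverted Dirichlet mixture with fixed, illustrative parameters (the paper instead learns them via MCMC); `mixture_kernel` and the parameter values are assumptions for illustration only.

```python
import math

def inv_dirichlet_logpdf(x, alpha):
    """Log-density of the inverted Dirichlet: x is a positive D-vector,
    alpha has D+1 positive parameters."""
    a_sum = sum(alpha)
    log_c = math.lgamma(a_sum) - sum(math.lgamma(a) for a in alpha)
    log_kern = (sum((a - 1.0) * math.log(xi) for xi, a in zip(x, alpha))
                - a_sum * math.log(1.0 + sum(x)))
    return log_c + log_kern

def responsibilities(x, weights, alphas):
    """Posterior probability of each mixture component given x
    (log-sum-exp for numerical stability)."""
    logs = [math.log(w) + inv_dirichlet_logpdf(x, a)
            for w, a in zip(weights, alphas)]
    m = max(logs)
    exps = [math.exp(l - m) for l in logs]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_kernel(x, y, weights, alphas):
    """Kernel value: inner product of the responsibility vectors of x and y."""
    rx = responsibilities(x, weights, alphas)
    ry = responsibilities(y, weights, alphas)
    return sum(a * b for a, b in zip(rx, ry))
```

The resulting Gram matrix can be fed to an SVM with a precomputed kernel; points assigned to the same mixture component get kernel values near 1, points from different components near 0.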

    Heterogeneous Feature Selection With Multi-Modal Deep Neural Networks and Sparse Group LASSO

    Full text link

    IntentSearch: capturing user intention for internet image search.

    Get PDF
    Liu, Ke. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (leaves 41-46). Abstracts in English and Chinese.
    Chapter 1: Introduction
    Chapter 2: Related Work
        2.1 Keyword Expansion
        2.2 Content-based Image Search and Visual Expansion
    Chapter 3: Algorithm
        3.1 Overview
        3.2 Visual Distance Calculation
            3.2.1 Visual Features
            3.2.2 Adaptive Weight Schema
        3.3 Keyword Expansion
        3.4 Visual Query Expansion
        3.5 Image Pool Expansion
        3.6 Textual Feature Combination
    Chapter 4: Experimental Evaluation
        4.1 Dataset
        4.2 Experiment One: Evaluation with Ground Truth
            4.2.1 Precisions on Different Steps
            4.2.2 Accuracy of Keyword Expansion
        4.3 Experiment Two: User Study
    Chapter 5: Conclusion

    Computer vision applied to underwater robotics

    Get PDF