Next Generation of Product Search and Discovery
Online shopping has become an important part of daily life with the rapid development of e-commerce. In some domains, such as books, electronics, and CDs/DVDs, online shopping has surpassed or even replaced traditional shopping. Compared with traditional retailing, e-commerce is information intensive, and one of the key factors for success in e-business is how effectively consumers can discover products. Conventionally, a product search engine based on keyword search or category browsing is provided to help users find the product information they need. The general goal of a product search system is to enable users to quickly locate information of interest and to minimize the effort spent on search and navigation. Human factors play a significant role in this process: finding product information can be a tricky task that requires intelligent use of search engines and non-trivial navigation of multilayer categories. Searching for useful product information can therefore be frustrating for many users, especially inexperienced ones.
This dissertation focuses on developing a new visual product search system that effectively extracts the properties of unstructured products and presents the most likely items of interest so that users can quickly locate them. We designed and developed a feature extraction algorithm that retains product color and local pattern features; experimental evaluation on a benchmark dataset demonstrated that it is robust against common geometric and photometric visual distortions. In addition, rather than ignoring product text information, we investigated and developed a ranking model learned via a unified probabilistic hypergraph that captures correlations between product visual content and textual content. Moreover, we proposed and designed a fuzzy hierarchical co-clustering algorithm for collaborative filtering product recommendation. With this method, users are automatically grouped into different interest communities based on their behaviors, and a customized recommendation can then be performed according to these implicitly detected relations. In summary, the developed search system performs much better in visual unstructured product search than state-of-the-art approaches. With the comprehensive ranking scheme and the collaborative filtering recommendation module, the user's overhead in locating information of value is reduced, and the experience of seeking useful product information is improved.
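The abstract does not specify the fuzzy hierarchical co-clustering algorithm itself. As a greatly simplified stand-in for the idea of grouping users into interest communities from their behavior and recommending within a community, the sketch below uses greedy grouping by cosine similarity of interaction vectors; the similarity threshold, function names, and data layout are illustrative assumptions, not the dissertation's method.

```python
import math
from collections import defaultdict

def cosine(u, v):
    """Cosine similarity between two user interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def group_users(profiles, threshold=0.7):
    """Greedily group users whose behavior vectors are similar.

    profiles: {user_id: [interaction counts per item]}
    Returns a list of communities (lists of user ids).
    """
    communities = []
    for uid, vec in profiles.items():
        for community in communities:
            # Compare against the community's first member as a representative.
            if cosine(vec, profiles[community[0]]) >= threshold:
                community.append(uid)
                break
        else:
            communities.append([uid])
    return communities

def recommend(profiles, communities, uid, top_n=2):
    """Recommend items the user's community interacts with but the user has not."""
    community = next(c for c in communities if uid in c)
    scores = defaultdict(float)
    for member in community:
        if member == uid:
            continue
        for item, count in enumerate(profiles[member]):
            if profiles[uid][item] == 0 and count > 0:
                scores[item] += count
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]
```

For example, with `profiles = {"u1": [5, 3, 0, 0], "u2": [4, 2, 1, 0], "u3": [0, 0, 4, 5]}`, `group_users` places u1 and u2 in one community and u3 in another, and `recommend(profiles, communities, "u1")` suggests item 2, which u2 has interacted with but u1 has not.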
Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics
The purpose of this study is to provide a detailed performance comparison of
feature detector/descriptor methods, particularly when their various
combinations are used for image-matching. The localization experiments of a
mobile robot in an indoor environment are presented as a case study. In these
experiments, 3090 query images and 127 dataset images were used. This study
includes five methods for feature detectors (features from accelerated segment
test (FAST), oriented FAST and rotated binary robust independent elementary
features (BRIEF) (ORB), speeded-up robust features (SURF), scale invariant
feature transform (SIFT), and binary robust invariant scalable keypoints
(BRISK)) and five other methods for feature descriptors (BRIEF, BRISK, SIFT,
SURF, and ORB). These methods were used in 23 different combinations and it was
possible to obtain meaningful and consistent comparison results using the
performance criteria defined in this study. All of these methods were used
independently and separately from each other as either feature detector or
descriptor. The performance analysis shows the discriminative power of various
combinations of detector and descriptor methods. The analysis is completed
using five parameters: (i) accuracy, (ii) time, (iii) angle difference between
keypoints, (iv) number of correct matches, and (v) distance between correctly
matched keypoints. In a range of 60°, covering five rotational pose points
for our system, the FAST-SURF combination had the lowest distance and angle
difference values and the highest number of matched keypoints. SIFT-SURF was
the most accurate combination with a 98.41% correct classification rate. The
fastest algorithm was ORB-BRIEF, with a total running time of 21,303.30 s to
match 560 images captured during motion with 127 dataset images.Comment: 11 pages, 3 figures, 1 tabl
Real-time near replica detection over massive streams of shared photos
This work addresses the real-time detection of near-replica images in distributed environments, based on the indexing of local feature vectors.