The Flexible Group Spatial Keyword Query
We present a new class of service for location based social networks, called
the Flexible Group Spatial Keyword Query, which enables a group of users to
collectively find a point of interest (POI) that optimizes an aggregate cost
function combining both spatial distances and keyword similarities. In
addition, our query service allows users to consider the tradeoffs between
obtaining a sub-optimal solution for the entire group and obtaining an
optimized solution for only a subgroup.
We propose algorithms to process three variants of the query: (i) the group
nearest neighbor with keywords query, which finds a POI that optimizes the
aggregate cost function for the whole group of size n, (ii) the subgroup
nearest neighbor with keywords query, which finds the optimal subgroup and a
POI that optimizes the aggregate cost function for a given subgroup size m (m
<= n), and (iii) the multiple subgroup nearest neighbor with keywords query,
which finds optimal subgroups and corresponding POIs for each of the subgroup
sizes in the range [m, n]. We design query processing algorithms based on
branch-and-bound and best-first paradigms. Finally, we provide theoretical
bounds and conduct extensive experiments with two real datasets which verify
the effectiveness and efficiency of the proposed algorithms.
Comment: 12 pages
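To make the query concrete, a cost function of this form can be sketched as below. The field names, the Jaccard keyword measure, and the alpha weighting are our own illustrative assumptions, not the paper's notation; the paper supports general aggregate functions over spatial distance and keyword similarity.

```python
import math

def group_cost(poi, users, alpha=0.5, agg=max):
    """Aggregate cost of a POI for a group: a weighted mix of an
    aggregate (e.g. max or sum) of the users' spatial distances to the
    POI and a keyword-dissimilarity term. alpha balances the two."""
    spatial = agg(math.dist(poi["loc"], u["loc"]) for u in users)
    # Jaccard similarity between POI keywords and the group's keyword union
    group_kw = set().union(*(u["kw"] for u in users))
    overlap = len(poi["kw"] & group_kw) / len(poi["kw"] | group_kw)
    return alpha * spatial + (1 - alpha) * (1 - overlap)

users = [{"loc": (0, 0), "kw": {"pizza"}},
         {"loc": (4, 0), "kw": {"cafe"}}]
poi = {"loc": (2, 0), "kw": {"pizza", "cafe"}}
print(group_cost(poi, users))  # 1.0: max distance 2, perfect keyword match
```

The subgroup variants then minimize this same cost over the best m of the n users rather than the whole group.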
Design and analysis of algorithms for similarity search based on intrinsic dimension
One of the most fundamental operations employed in data mining tasks such as classification, cluster analysis, and anomaly detection, is that of similarity search. It has been used in numerous fields of application such as multimedia, information retrieval, recommender systems and pattern recognition. Specifically, a similarity query aims to retrieve from the database the most similar objects to a query object, where the underlying similarity measure is usually expressed as a distance function.
The cost of processing similarity queries has typically been assessed in terms of the representational dimension of the data involved, that is, the number of features used to represent individual data objects. High representational dimension generally results in a significant increase in the processing cost of similarity queries. This relation is often attributed to an amalgamation of phenomena collectively referred to as the curse of dimensionality. However, the observed effects of dimensionality in practice may not be as severe as expected. This has led to the development of models quantifying the complexity of data in terms of some measure of its intrinsic dimensionality.
The generalized expansion dimension (GED) is one such model: it estimates the intrinsic dimension in the vicinity of a query point q from the ranks and distances of pairs of neighbors with respect to q. This dissertation is mainly concerned with the design and analysis of search algorithms based on the GED model. In particular, three variants of the similarity search problem are considered: adaptive similarity search, flexible aggregate similarity search, and subspace similarity search. The good practical performance of the proposed algorithms demonstrates the effectiveness of dimensionality-driven design of search algorithms.
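The rank-and-distance estimate behind GED admits a compact sketch. The function name and the two-observation form below are our own simplification of the model: in an intrinsically d-dimensional neighborhood, growing the query ball's radius from d1 to d2 should multiply the enclosed point count (the rank) by (d2/d1)**d.

```python
import math

def ged_estimate(k1, d1, k2, d2):
    """Expansion-dimension estimate near a query point q from two
    neighbor observations: rank k1 at distance d1 and rank k2 at
    distance d2 (k1 < k2, d1 < d2).
    Solving k2/k1 = (d2/d1)**dim gives dim = ln(k2/k1) / ln(d2/d1)."""
    return math.log(k2 / k1) / math.log(d2 / d1)

# Points uniform on a line: rank doubles when distance doubles -> dim ~ 1
print(ged_estimate(10, 1.0, 20, 2.0))  # 1.0
# Rank quadruples when distance doubles -> dim ~ 2
print(ged_estimate(10, 1.0, 40, 2.0))  # 2.0
```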
Measuring and Managing Answer Quality for Online Data-Intensive Services
Online data-intensive services parallelize query execution across distributed
software components. Interactive response time is a priority, so online query
executions return answers without waiting for slow running components to
finish. However, data from these slow components could lead to better answers.
We propose Ubora, an approach to measure the effect of slow running components
on the quality of answers. Ubora randomly samples online queries and executes
them twice. The first execution elides data from slow components and provides
fast online answers; the second execution waits for all components to complete.
Ubora uses memoization to speed up mature executions by replaying network
messages exchanged between components. Our systems-level implementation works
for a wide range of platforms, including Hadoop/Yarn, Apache Lucene, the
EasyRec Recommendation Engine, and the OpenEphyra question answering system.
Ubora computes answer quality much faster than competing approaches that do not
use memoization. With Ubora, we show that answer quality can and should be used
to guide online admission control. Our adaptive controller processed 37% more
queries than a competing controller guided by the rate of timeouts.
Comment: Technical Report
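The dual-execution measurement can be illustrated with a toy single-process sketch. The component names, latencies, and the overlap-based quality metric below are our own illustrative assumptions; Ubora's actual quality metrics are application-specific, and its mature executions are accelerated by replaying recorded network messages rather than recomputing.

```python
def fast_answer(components, deadline):
    """Online execution: keep only the results of components that
    finish within the deadline (slow components are elided)."""
    return {name: res for name, (latency, res) in components.items()
            if latency <= deadline}

def mature_answer(components):
    """Offline re-execution of a sampled query: wait for every
    component to complete, yielding the full answer."""
    return {name: res for name, (latency, res) in components.items()}

def answer_quality(fast, mature):
    """Fraction of the complete answer already present online."""
    return len(fast) / len(mature) if mature else 1.0

# name -> (simulated latency in seconds, partial result)
components = {"index-a": (0.05, [1, 2]), "index-b": (0.90, [3])}
fast = fast_answer(components, deadline=0.1)
print(answer_quality(fast, mature_answer(components)))  # 0.5
```

A controller can then admit or shed load based on this measured quality rather than on timeout counts alone.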
Distributed top-k aggregation queries at large
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings, where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
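The TPUT baseline that these optimizations build on can be sketched as a simplified single-machine simulation of its three phases; this is our own condensed rendering of the well-known protocol, not the paper's optimized variant.

```python
def tput_topk(node_lists, k):
    """Simplified sketch of the TPUT three-phase protocol for
    distributed top-k queries with sum aggregation.
    node_lists: one dict per node mapping item -> local score."""
    m = len(node_lists)
    # Phase 1: each node ships its k highest-scoring entries;
    # the coordinator forms partial sums over what it received.
    partial = {}
    for scores in node_lists:
        for item, s in sorted(scores.items(), key=lambda x: -x[1])[:k]:
            partial[item] = partial.get(item, 0) + s
    # The k-th partial sum lower-bounds the true k-th aggregate score.
    tau = sorted(partial.values(), reverse=True)[k - 1]
    # Phase 2: any item with total >= tau must score >= tau/m somewhere,
    # so fetch every entry whose local score clears that threshold.
    candidates = set(partial)
    for scores in node_lists:
        candidates |= {i for i, s in scores.items() if s >= tau / m}
    # Phase 3: resolve exact aggregate scores for the candidates only.
    exact = {i: sum(scores.get(i, 0) for scores in node_lists)
             for i in candidates}
    return sorted(exact, key=lambda i: -exact[i])[:k]

nodes = [{"a": 5, "b": 3, "c": 1}, {"a": 2, "c": 4, "d": 6}]
print(tput_topk(nodes, 2))  # ['a', 'd'] (totals: a=7, d=6)
```

The paper's contributions tighten exactly the expensive parts of this baseline: how deep each node scans, which nodes are contacted at all, and how their lists are grouped.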
Ranking Large Temporal Data
Ranking temporal data has not been studied until recently, even though
ranking is an important operator (being promoted as a first-class citizen) in
database systems. However, only instant top-k queries on temporal data have
been studied previously, where objects with the k highest scores at a query time instance t
are to be retrieved. The instant top-k definition clearly comes with
limitations (sensitive to outliers, difficult to choose a meaningful query time
t). A more flexible and general ranking operation is to rank objects based on
the aggregation of their scores in a query interval, which we dub the aggregate
top-k query on temporal data. For example, return the top-10 weather stations
having the highest average temperature from 10/01/2010 to 10/07/2010; find the
top-20 stocks having the largest total transaction volumes from 02/05/2011 to
02/07/2011. This work presents a comprehensive study of this problem by
designing both exact and approximate methods (with approximation quality
guarantees). We also provide theoretical analysis on the construction cost, the
index size, the update and the query costs of each approach. Extensive
experiments on large real datasets clearly demonstrate the efficiency, the
effectiveness, and the scalability of our methods compared to the baseline
methods.
Comment: VLDB201
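A naive exact baseline for the aggregate top-k query can be sketched with per-object prefix sums, which make any interval aggregate O(1) to evaluate; the paper's exact and approximate index structures are considerably more involved, and the names below are our own.

```python
import itertools

def build_prefix(scores):
    """Prefix sums over an object's per-instant score sequence, so the
    aggregate on any interval [i, j] is a constant-time difference."""
    return [0] + list(itertools.accumulate(scores))

def interval_avg(prefix, i, j):
    """Average score over time instants i..j (inclusive)."""
    return (prefix[j + 1] - prefix[i]) / (j - i + 1)

def aggregate_topk(objects, i, j, k):
    """Exact aggregate top-k: rank objects by average score on [i, j].
    objects: name -> list of per-instant scores."""
    prefixes = {name: build_prefix(s) for name, s in objects.items()}
    return sorted(objects, key=lambda n: -interval_avg(prefixes[n], i, j))[:k]

# Toy weather stations with three daily temperature readings each
stations = {"s1": [10, 20, 30], "s2": [25, 25, 5]}
print(aggregate_topk(stations, 0, 1, 1))  # ['s2'] (avg 25 vs 15)
```

This baseline still scans every object per query; the paper's contribution is indexing that avoids exactly that linear scan while bounding approximation error.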
Toward Entity-Aware Search
As the Web has evolved into a data-rich repository, current search engines, built around the standard "page view," are becoming increasingly inadequate for a wide range of query tasks. While we often search for various data "entities" (e.g., a phone number, a paper PDF, a date), today's engines only take us indirectly to pages. In my Ph.D. study, we focus on a novel type of Web search that is aware of data entities inside pages, a significant departure from traditional document retrieval. We study the various essential aspects of supporting entity-aware Web search. To begin with, we tackle the core challenge of ranking entities by distilling its underlying conceptual model, the Impression Model, and developing a probabilistic ranking framework, EntityRank, that is able to seamlessly integrate both local and global information in ranking. We also report a prototype system built to show the initial promise of the proposal. Then, we aim at distilling and abstracting the essential computation requirements of entity search. From the dual views of reasoning (entity as input and entity as output), we propose a dual-inversion framework, with two indexing and partition schemes, towards efficient and scalable query processing. Further, to recognize more entity instances, we study the problem of entity synonym discovery through mining query log data. The results obtained so far show clear promise of entity-aware search in its usefulness, effectiveness, efficiency, and scalability.
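The shift from page retrieval to entity retrieval can be illustrated with a toy entity-keyword index. Everything below is our own simplification: EntityRank's actual scoring is probabilistic and models positional proximity and global corpus statistics, not just page-level co-occurrence.

```python
from collections import defaultdict

def build_entity_index(docs):
    """Toy entity index: map each entity occurrence to the keyword set
    of the page it appears on, standing in for local context evidence."""
    index = defaultdict(list)
    for doc in docs:
        for entity in doc["entities"]:
            index[entity].append(set(doc["keywords"]))
    return index

def rank_entities(index, query):
    """Score each entity by the fraction of its occurrences whose page
    context contains all query keywords, then rank descending."""
    q = set(query)
    scores = {e: sum(q <= ctx for ctx in ctxs) / len(ctxs)
              for e, ctxs in index.items()}
    return sorted(scores, key=lambda e: -scores[e])

docs = [{"entities": ["555-1234"], "keywords": ["amazon", "phone", "support"]},
        {"entities": ["555-9999"], "keywords": ["pizza", "phone"]}]
idx = build_entity_index(docs)
print(rank_entities(idx, ["amazon", "phone"]))  # ['555-1234', '555-9999']
```

The answer to the query is the entity itself (a phone number), not a list of pages, which is the departure from document retrieval the abstract describes.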