How to Price Shared Optimizations in the Cloud
Data-management-as-a-service systems are increasingly being used in
collaborative settings, where multiple users access common datasets. Cloud
providers have the choice to implement various optimizations, such as indexing
or materialized views, to accelerate queries over these datasets. Each
optimization carries a cost and may benefit multiple users. This creates a
major challenge: how to select which optimizations to perform and how to share
their cost among users. The problem is especially challenging when users are
selfish and will only report their true values for different optimizations if
doing so maximizes their utility. In this paper, we present a new approach for
selecting and pricing shared optimizations by using Mechanism Design. We first
show how to apply the Shapley Value Mechanism to the simple case of selecting
and pricing additive optimizations, assuming an offline game where all users
access the service for the same time-period. Second, we extend the approach to
online scenarios where users come and go. Finally, we consider the case of
substitutive optimizations. We show analytically that our mechanisms induce
truthfulness and recover the optimization costs. We also show experimentally
that our mechanisms yield higher utility than the state-of-the-art approach
based on regret accumulation.
Comment: VLDB201
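The core idea of selecting and pricing a single additive optimization with the Shapley Value Mechanism can be sketched as an iterative equal-split cost-sharing auction (a Moulin-style mechanism): offer every remaining user an equal share of the optimization's cost, drop users whose reported value is below their share, and repeat until all remaining users accept. The function name and interface below are illustrative assumptions, not the paper's actual implementation.

```python
def shapley_value_mechanism(cost, values):
    """Decide whether to build one shared optimization and who pays.

    cost:   total cost of the optimization.
    values: dict mapping user -> reported value for the optimization.
    Returns (served_users, price_per_user); an empty set means the
    optimization is not built.
    """
    users = set(values)
    while users:
        share = cost / len(users)          # equal Shapley cost share
        accept = {u for u in users if values[u] >= share}
        if accept == users:                # everyone remaining accepts
            return users, share
        users = accept                     # drop decliners and re-offer
    return set(), 0.0
```

For example, with cost 9 and reported values {a: 10, b: 4, c: 1}, the share starts at 3 (c declines), rises to 4.5 (b declines), and ends at 9, which a alone accepts. This iterative structure is what makes truthful reporting a dominant strategy: no user can lower their share by misreporting.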
Mechanism Design Approach for Energy Efficiency
In this work we deploy a mechanism design approach for allocating a divisible
commodity (electricity, in our example) among consumers. Each consumer has an
associated personal valuation function for the energy resource over a certain
time interval. We aim to select the optimal consumption profile for every
user, avoiding consumption peaks during which the total energy demand could
exceed production. The mechanism is able to drive users to shift their energy
consumption to different hours of the day. We start by presenting a very basic
Vickrey-Clarke-Groves mechanism, discuss its weaknesses, and propose several
more complex variants.
Comment: Technical report
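A basic VCG mechanism for this setting can be sketched by discretizing the commodity into identical units: allocate units to the highest reported marginal values, then charge each consumer the externality they impose on the others (the welfare others would get without them, minus the welfare others actually get). The discretization and all names below are assumptions for illustration, not the report's actual formulation.

```python
def vcg_allocate(supply, marginal_values):
    """Allocate `supply` identical units (e.g. kWh in a time slot) and
    compute VCG payments.

    marginal_values: dict consumer -> list of decreasing marginal values.
    Returns (allocation, payments).
    """
    def greedy(users):
        # Welfare-maximizing allocation: serve the highest marginal bids.
        bids = sorted(
            ((mv, u) for u in users for mv in marginal_values[u]),
            reverse=True,
        )
        alloc = {u: 0 for u in users}
        for mv, u in bids[:supply]:
            alloc[u] += 1
        welfare = sum(
            sum(sorted(marginal_values[u], reverse=True)[: alloc[u]])
            for u in users
        )
        return welfare, alloc

    users = list(marginal_values)
    total, alloc = greedy(users)
    payments = {}
    for u in users:
        without_u, _ = greedy([v for v in users if v != u])
        value_u = sum(sorted(marginal_values[u], reverse=True)[: alloc[u]])
        # VCG payment: the harm u's presence imposes on everyone else.
        payments[u] = without_u - (total - value_u)
    return alloc, payments
```

Because each consumer pays only their externality, reporting true marginal values is a dominant strategy; the weakness discussed in the abstract typically concerns budget balance and computational cost, which the more complex variants address.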
Recommender systems and market approaches for industrial data management
Industrial companies are dealing with an increasing data overload problem in all
aspects of their business: vast amounts of data are generated in and outside each
company. Determining which data is relevant and how to get it to the right users is
becoming increasingly difficult. There are a large number of datasets to be
considered, and an even higher number of combinations of datasets that each user
could be using.
Current techniques to address this data overload problem necessitate detailed
analysis. These techniques have limited scalability due to their manual effort and
their complexity, which makes them impractical for a large number of datasets.
Search, the alternative used by many users, is limited by the user’s knowledge
about the available data and does not consider the relevance or costs of providing
these datasets.
Recommender systems and so-called market approaches have previously been
used to solve this type of resource allocation problem, as shown for example in
allocation of equipment for production processes in manufacturing or for spare part
supplier selection. They can therefore also be seen as potential solutions to
the problem of data overload.
This thesis introduces the so-called RecorDa approach: an architecture using
market approaches and recommender systems on their own or by combining them
into one system. Its purpose is to identify which data is more relevant for a user’s
decision and improve allocation of relevant data to users.
Using a combination of case studies and experiments, this thesis develops and
tests the approach. It further compares RecorDa to search and other mechanisms.
The results indicate that RecorDa can provide significant benefits to users,
with easier and more flexible access to relevant datasets compared to other
techniques, such as search in these databases. It provides a fast increase
in precision and recall of relevant datasets while still maintaining high
novelty and coverage of a large variety of datasets.
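The precision and recall figures used to compare such systems can be computed over the sets of recommended and truly relevant datasets. This is a generic evaluation sketch; the abstract does not describe RecorDa's actual evaluation pipeline, and the function name is an assumption.

```python
def precision_recall(recommended, relevant):
    """Precision: fraction of recommended datasets that are relevant.
    Recall: fraction of relevant datasets that were recommended."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)                      # correctly recommended
    precision = hits / len(rec) if rec else 0.0
    recall = hits / len(rel) if rel else 0.0
    return precision, recall
```

For instance, recommending four datasets of which two are among three relevant ones yields precision 0.5 and recall 2/3; novelty and coverage, also mentioned above, require additional measures over the catalog as a whole.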