Online pricing for multi-type of Items
LNCS v. 7285 entitled: Frontiers in algorithmics and algorithmic aspects in information and management: joint international conference, FAW-AAIM 2012 ... proceedings.
In this paper, we study the problem of online pricing for bundles of items. Given a seller with k types of items, m of each, a sequence of users {u_1, u_2, ...} arrives one by one. Each user is single-minded, i.e., each user is interested only in a particular bundle of items. The seller must set the price and assign some amount of bundles to each user upon his/her arrival. Bundles can be sold fractionally. Each u_i has his/her value function v_i(·) such that v_i(x) is the highest unit price u_i is willing to pay for x bundles. The objective is to maximize the revenue of the seller by setting the price and amount of bundles for each user. In this paper, we first show that the lower bound on the competitive ratio for this problem is Ω(log h + log k), where h is the highest unit price to be paid among all users. We then give a deterministic online algorithm, Pricing, whose competitive ratio is O(√k · log h · log k). When k = 1, the lower and upper bounds asymptotically match the optimal result O(log h). © 2012 Springer-Verlag.
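The deterministic Pricing algorithm itself is not reproduced in this abstract, but the logarithmic dependence on h for the k = 1 case can be illustrated by the classic randomized posted-price baseline: quote a uniformly random power-of-two price, which earns at least a 1/O(log h) fraction of the best fixed-price revenue. A minimal sketch for unit-demand buyers with values in [1, h] (function names are my own, not the paper's):

```python
import math

def revenue_at_price(values, price):
    # Unit-demand buyers: every buyer whose value meets the price buys one unit.
    return price * sum(1 for v in values if v >= price)

def expected_power_of_two_revenue(values, h):
    """Expected revenue of posting a uniformly random price 2^j, 0 <= j <= log2(h).

    Each buyer's value lands in some dyadic interval [2^j, 2^{j+1}), so with
    probability 1/(log2(h)+1) the quoted price extracts at least half that
    buyer's contribution to the best fixed-price revenue.
    """
    prices = [2 ** j for j in range(int(math.log2(h)) + 1)]
    return sum(revenue_at_price(values, p) for p in prices) / len(prices)
```

For example, with values [1, 2, 4, 8] and h = 8, the best fixed price earns 8, and the random power-of-two price earns 6.5 in expectation, well within the log h = 4 factor.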
On Revenue Maximization with Sharp Multi-Unit Demands
We consider markets consisting of a set of indivisible items, and buyers that
have sharp multi-unit demand. This means that each buyer i wants a
specific number d_i of items; a bundle of size less than d_i has no value,
while a bundle of size greater than d_i is worth no more than the d_i most valued
items (valuations being additive). We consider the objective of setting
prices and allocations in order to maximize the total revenue of the market
maker. The pricing problem with sharp multi-unit demand buyers has a number of
properties that the unit-demand model does not possess, and is an important
question in algorithmic pricing. We consider the problem of computing a revenue
maximizing solution for two solution concepts: competitive equilibrium and
envy-free pricing.
For unrestricted valuations, these problems are NP-complete; we focus on a
realistic special case of "correlated values" where each buyer i has a
valuation v_i·q_j for item j, where v_i and q_j are positive
quantities associated with buyer i and item j respectively. We present a
polynomial time algorithm to solve the revenue-maximizing competitive
equilibrium problem. For envy-free pricing, if the demand of each buyer is
bounded by a constant, a revenue maximizing solution can be found efficiently;
the general demand case is shown to be NP-hard.
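The envy-freeness constraint for sharp-demand buyers with correlated values is easy to state operationally: at the posted prices, no buyer should prefer another affordable bundle of her demanded size over what she was assigned. A small checker along those lines (a sketch under the additive v_i·q_j model; the names and the all-or-nothing convention for alloc are my own):

```python
def bundle_utility(vi, bundle, q, p):
    # Additive valuation vi * q[j] per item j, minus the posted prices.
    return sum(vi * q[j] - p[j] for j in bundle)

def is_envy_free(v, d, q, p, alloc):
    """Check envy-freeness for sharp-demand buyers with correlated values.

    alloc[i] is the tuple of items assigned to buyer i: either empty or
    exactly d[i] items.  With additive values, buyer i's best alternative
    is either buying nothing (utility 0) or the d[i] items maximizing
    v[i]*q[j] - p[j], so only that one bundle needs to be compared against.
    """
    items = range(len(q))
    for i, bundle in enumerate(alloc):
        if bundle and len(bundle) != d[i]:
            return False  # sharp demand is all-or-nothing at size d[i]
        ranked = sorted(items, key=lambda j: v[i] * q[j] - p[j], reverse=True)
        best = max(0.0, bundle_utility(v[i], ranked[:d[i]], q, p))
        if bundle_utility(v[i], bundle, q, p) < best - 1e-9:
            return False
    return True
```

For instance, with v = [2, 1], q = [2, 1], prices p = [2, 1] and unit demands, assigning item 0 to buyer 0 and item 1 to buyer 1 is envy-free, while swapping the assignment leaves buyer 0 envious of the bundle {item 0}.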
On the theory of truthful and fair pricing for banner advertisements
We consider the revenue maximization problem in banner advertisements under two fundamental concepts: envy-freeness and truthfulness. Envy-freeness captures a fairness requirement among buyers, while truthfulness gives buyers the incentive to announce their true private bids. An extension of envy-freeness called competitive equilibrium, which requires both envy-freeness and market clearance, is also investigated. For truthfulness (also called incentive compatibility), we adopt the Bayesian setting, where each buyer's private value is drawn independently from a publicly known distribution; accordingly, the notion of truthfulness we adopt is that of Bayesian incentive compatible mechanisms. Most of our results are positive. We study various settings of the revenue maximization problem: competitive equilibrium and envy-free solutions in the relaxed demand, sharp demand and consecutive demand cases, and Bayesian incentive compatible mechanisms in the relaxed demand, sharp demand, budget-constrained and consecutive demand cases. Our approach allows us to argue that these simple mechanisms give optimal or approximately optimal revenue guarantees in a very robust manner.
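As a toy illustration of the Bayesian setting (not one of the mechanisms studied above): when a single buyer's value is drawn from a known distribution F, the simplest truthful mechanism is a posted price, and the revenue-maximizing price maximizes p·(1 − F(p)). A grid-search sketch (function name is my own):

```python
def best_posted_price(cdf, grid):
    """Revenue-maximizing take-it-or-leave-it price for one buyer with value ~ F.

    Expected revenue at price p is p * (1 - F(p)), the price times the
    probability the buyer accepts; we search over a finite price grid.
    """
    return max(grid, key=lambda p: p * (1 - cdf(p)))
```

For values uniform on [0, 1] (F(p) = p), the revenue curve p(1 − p) peaks at p = 1/2, yielding expected revenue 1/4.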
Assignment problems and their application in economics
Four assignment problems are introduced in this thesis, and they are approached
based on the context they are presented in. The underlying graphs of
the assignment problems in this thesis are in most cases bipartite graphs with two
sets of vertices corresponding to the agents and the resources. An edge might show
the interest of an agent in a resource or willingness of a manufacturer to produce
the corresponding product of a market, to name a few examples.
The first problem studied in this thesis is a two-stage stochastic matching
problem in both online and offline versions. In this work, which is presented in
Chapter 2 of this thesis, a coordinator tries to benefit by having access to the
statistics of the future price discounts, which can be completely unpredictable for
individual customers. In our model, individual risk-averse customers want to book
hotel rooms for their future vacation; however, they are unwilling to leave booking to
the last minute, which might result in huge savings for them, since they would have to take
the risk of all the hotel rooms being sold out. Instead of taking this risk, individual
customers make contracts with a coordinator who can spread the risk over many
such cases and also has more information on the probability distribution of the future
prices. In the first stage, the coordinator agrees to serve some buyers, and then in
the second stage, once the final prices have been revealed, he books rooms for them
just as he promised. An agreement between the coordinator and each buyer consists
of a set of acceptable hotels for the customer and a single price. Two models for
this problem are investigated. In the first model, the details of the agreements are
proposed by the buyer, and we propose a bicriteria-style approximation algorithm
that gives a constant-factor approximation to the objective function by allowing a
bounded fraction of our hotel bookings to overlap. In the second model, the details
of the agreements are proposed by the coordinator, and we show that the prices yielding
the optimal profit up to a small additive loss can be found by a polynomial-time
algorithm.
In the third chapter of this thesis, two versions of the online matching problem
are analyzed with a similar technique. Online matching problems have been studied
by many researchers recently due to their direct application in online advertisement
systems such as Google AdWords. In the online bipartite matching problem, the
vertices of one side are known in advance; however, the vertices of the other side
arrive one by one, and reveal their adjacent vertices on the offline side only upon
arrival. Each online vertex can be matched to an unmatched offline vertex only upon
its arrival; it cannot be matched or rematched later. In the online matching
problem with free disposal, we have the option to rematch an already matched offline
vertex only if we eliminate its previous online match from the graph. The goal is to
maximize the expected size of the matching. We propose a randomized algorithm
that achieves a ratio greater than 0.5 if the online nodes have bounded degree. The
other problem studied in the third chapter is the edge-weighted oblivious matching, in
which the weights of all the edges in the underlying graph are known but the existence
of each edge is only revealed upon probing that edge. The weighted version of
the problem has applications in pay-per-click online advertisements, in which the
revenue for a click on a particular ad is known, but it is unknown whether the user
will actually click on that ad. Using a similar technique, we develop an algorithm
with an approximation factor greater than 0.5 for this problem too.
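The randomized algorithms of this chapter are not spelled out in the abstract; the baseline they improve upon is plain greedy, which matches each arriving online vertex to an arbitrary free neighbor and is the classic 1/2-competitive algorithm. A minimal sketch (representation of arrivals is my own):

```python
def greedy_online_matching(arrivals):
    """Match each arriving online vertex to an arbitrary free offline neighbor.

    arrivals[t] lists the offline neighbors revealed by online vertex t.
    Greedy produces a maximal matching, hence at least half the size of the
    maximum matching, and decisions are never revoked.
    """
    match = {}                      # offline vertex -> index of its online match
    for t, neighbors in enumerate(arrivals):
        for u in neighbors:
            if u not in match:
                match[u] = t
                break               # irrevocable decision for online vertex t
    return match
```

An adversarial order shows the 1/2 bound is tight: with offline vertices {a, b}, if vertex 0 (neighbors a, b) arrives first and greedy takes a, then vertex 1 (only neighbor a) goes unmatched, while the optimum matches both.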
In Chapter 4, a generalized version of the Cournot competition (a foundational
model in economics) is studied. In the traditional version, firms compete in
a single market with one homogeneous good, and their strategy is the quantity
of the good they produce. The price of the good is an inverse function of the total
quantity produced in the market, and the cost of production for each firm in each
market increases with the quantity it produces. We study Cournot competition on
a bipartite network of firms and markets. The edges in this network indicate the
access of a firm to a market. The price of the good in each market is again an
inverse function of the quantity of the good produced by the firms, and the cost of
production for each firm is a function of its production in different markets. Our
goal is to give polynomial-time algorithms to find the quantities produced by the firms
in each market at equilibrium for generalized cost and price functions.
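The thesis targets generalized cost and price functions on networks; as a point of reference, the traditional single-market case with linear inverse demand P(Q) = a − bQ and constant unit cost c has the textbook duopoly equilibrium q1 = q2 = (a − c)/(3b), which best-response dynamics recover. A sketch of that classical special case only:

```python
def cournot_best_response_duopoly(a, b, c, rounds=100):
    """Best-response dynamics for two symmetric firms in one market.

    Inverse demand P(Q) = a - b*Q with linear cost c per unit, so firm i's
    best response to the rival quantity q_other is (a - c - b*q_other)/(2b).
    Alternating updates contract toward the Nash equilibrium
    q1 = q2 = (a - c) / (3b).
    """
    q1 = q2 = 0.0
    for _ in range(rounds):
        q1 = max(0.0, (a - c - b * q2) / (2 * b))
        q2 = max(0.0, (a - c - b * q1) / (2 * b))
    return q1, q2
```

With a = 10, b = 1, c = 1 the dynamics converge to q1 = q2 = 3, total quantity 6 and market price 4.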
The final chapter of this thesis analyzes a problem faced by online
marketplaces such as Amazon and eBay, which deal with huge datasets recording
transactions of merchandise between many buyers and sellers. As the size of datasets
grows, it is important that the algorithms become more selective in the amount of
data they store. Our goal is to develop pricing algorithms for social welfare (or
revenue) maximization that are appropriate for use with the massive datasets in
these networks. We especially focus on the streaming setting, the common model
for big data analysis. Furthermore, we include hardness results (lower bounds)
on the minimum amount of memory needed to calculate the exact prices, and also
present algorithms that use less space than these lower bounds require while
approximating the optimum prices for the goods as well as the revenue or the social
welfare of the mechanism.
ALLOCATIONS IN LARGE MARKETS
Rapid growth and popularity of internet-based services such as online markets and online advertisement systems pose many new algorithmic challenges. One of the main challenges is limited access to the input. There are two main reasons that algorithms have limited access to their data:
1) The input is extremely large, and hence having access to the whole data at once is not practical.
2) The nature of the system forces us to make decisions before observing the whole input.
Internet-enabled marketplaces such as Amazon and eBay deal with huge datasets recording transactions of merchandise between many buyers and sellers. It is important that algorithms become more time and space efficient as the size of datasets increases. An algorithm that runs in polynomial time may still not have a reasonable running time on such large datasets. In the first part of this dissertation, we study the development of allocation algorithms that are appropriate for use with massive datasets. We especially focus on the streaming setting, which is a common model for big data analysis. In graph streaming, the algorithm has access to a sequence of edges, called a stream, and reads the edges in the order in which they appear in the stream. The goal is to design an algorithm that maintains a large allocation using as little space as possible. We achieve our results by developing powerful sampling techniques. Indeed, one can implement our sampling techniques in the streaming setting as well as in other distributed settings such as MapReduce.
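The sampling techniques of the dissertation are not given here; the simplest one-pass point of comparison for streaming allocation is the greedy maximal matching, which keeps an edge exactly when both endpoints are still free. A minimal sketch:

```python
def streaming_maximal_matching(edge_stream):
    """One pass over the edge stream; keep an edge iff both endpoints are free.

    Only the matching itself is stored (O(n) space rather than O(m) for the
    whole graph), and the result is a maximal matching, hence at least half
    the size of a maximum matching.
    """
    used = set()
    matching = []
    for u, v in edge_stream:
        if u not in used and v not in used:
            used.update((u, v))
            matching.append((u, v))
    return matching
```

On the path stream (1,2), (2,3), (3,4) this keeps (1,2) and (3,4), skipping (2,3) because vertex 2 is already matched.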
Giant online advertisement markets such as Google, Bing and Facebook have given rise to several interesting allocation problems. Usually, in these applications, we need to make decisions before obtaining full information about the input graph. This imposes uncertainty on our belief about the input, and thus makes classical algorithms inapplicable. To address this shortcoming, online algorithms have been developed. In online algorithms, the input is again a sequence of items; here the algorithm needs to make an irrevocable decision upon the arrival of each item. In the second part of this dissertation, we aim to achieve two main goals for each allocation problem in the market. Our first goal is to design models that capture the uncertainty of the input, based on the properties of the problems and the accessible data in real applications. Our second goal is to design algorithms and develop new techniques for these market allocation problems.