Feedback Allocation For OFDMA Systems With Slow Frequency-domain Scheduling
We study the problem of allocating limited feedback resources across multiple
users in an orthogonal-frequency-division-multiple-access downlink system with
slow frequency-domain scheduling. Many flavors of slow frequency-domain
scheduling (e.g., persistent scheduling, semi-persistent scheduling), that
adapt user-sub-band assignments on a slower time-scale, are being considered in
standards such as 3GPP Long-Term Evolution. In this paper, we develop a
feedback allocation algorithm that operates in conjunction with any arbitrary
slow frequency-domain scheduler with the goal of improving the throughput of
the system. Given a user-sub-band assignment chosen by the scheduler, the
feedback allocation algorithm involves solving a weighted sum-rate maximization
at each (slow) scheduling instant. We first develop an optimal
dynamic-programming-based algorithm to solve the feedback allocation problem
with pseudo-polynomial complexity in the number of users and in the total
feedback bit budget. We then propose two approximation algorithms with
complexity further reduced, for scenarios where the problem exhibits additional
structure.
Comment: Accepted to IEEE Transactions on Signal Processing
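As a rough illustration of the pseudo-polynomial dynamic program described above, the sketch below splits a total feedback-bit budget across users so as to maximize a weighted sum rate for a fixed user-sub-band assignment. The rate table, weights, and function names are illustrative assumptions, not the paper's actual algorithm or notation.

# Knapsack-style dynamic program: split a budget of B feedback bits across
# users to maximize a weighted sum rate. rate[u][b] is a *hypothetical* table
# giving the expected rate of user u when granted b feedback bits; weights[u]
# is that user's scheduler weight. Complexity is O(n * B^2), i.e.
# pseudo-polynomial in the number of users and the bit budget.

def allocate_feedback(rate, weights, B):
    n = len(rate)
    best = [0.0] * (B + 1)                 # best value using b bits so far
    choice = [[0] * (B + 1) for _ in range(n)]
    for u in range(n):
        new_best = [float("-inf")] * (B + 1)
        for b in range(B + 1):
            for k in range(min(b, len(rate[u]) - 1) + 1):
                val = best[b - k] + weights[u] * rate[u][k]
                if val > new_best[b]:
                    new_best[b] = val
                    choice[u][b] = k
        best = new_best
    # Backtrack to recover the per-user bit allocation.
    alloc, b = [0] * n, B
    for u in range(n - 1, -1, -1):
        alloc[u] = choice[u][b]
        b -= alloc[u]
    return alloc, best[B]

if __name__ == "__main__":
    rate = [[0.0, 1.0, 1.5, 1.8], [0.0, 0.8, 1.4, 1.7]]   # toy rate tables
    print(allocate_feedback(rate, weights=[1.0, 1.2], B=4))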
Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
In this paper, we study a class of stochastic optimization problems, referred
to as the \emph{Conditional Stochastic Optimization} (CSO), in the form of
$\min_{x \in \mathcal{X}} \mathbb{E}_{\xi} f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big)$,
which finds a wide spectrum of applications including portfolio selection,
reinforcement learning, robust learning, causal inference, and so on. Assuming
availability of samples from the distribution $\mathbb{P}(\xi)$ and samples
from the conditional distribution $\mathbb{P}(\eta|\xi)$, we establish the
sample complexity of the sample average approximation (SAA) for CSO, under a
variety of structural assumptions, such as Lipschitz continuity, smoothness,
and error bound conditions. We show that the total sample complexity improves
from $\mathcal{O}(d/\epsilon^4)$ to $\mathcal{O}(d/\epsilon^3)$ when assuming
smoothness of the outer function, and further to $\mathcal{O}(1/\epsilon^2)$ when
the empirical function satisfies the quadratic growth condition. We also
establish the sample complexity of a modified SAA, when $\xi$ and $\eta$ are
independent. Several numerical experiments further support our theoretical
findings.
Keywords: stochastic optimization, sample average approximation, large
deviations theory
Comment: Typo corrected. Reference added. Revision comments handled
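To make the estimator concrete, the sketch below forms the nested sample average approximation of the CSO objective: for each outer sample of $\xi$, the inner conditional expectation is replaced by an empirical mean over m samples of $\eta$, and the outer expectation by an average over the outer samples. The toy functions, sampler, and grid search are placeholders, not the paper's experiments.

import numpy as np

# Nested sample average approximation of
#   min_x E_xi[ f_xi( E_{eta|xi}[ g_eta(x, xi) ] ) ].
# sample generators and the toy f, g below are illustrative assumptions.

def saa_objective(x, xi_samples, eta_sampler, f, g, m):
    total = 0.0
    for xi in xi_samples:
        etas = eta_sampler(xi, m)                         # eta ~ P(eta | xi)
        inner = np.mean([g(eta, x, xi) for eta in etas])  # inner empirical mean
        total += f(inner, xi)                             # outer composition
    return total / len(xi_samples)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xi_samples = rng.normal(size=20)
    eta_sampler = lambda xi, m: xi + rng.normal(size=m)   # toy conditional law
    g = lambda eta, x, xi: (x - eta) ** 2                 # toy inner function
    f = lambda v, xi: np.log(1.0 + v)                     # toy smooth outer function
    # Crude grid search over x to minimize the empirical CSO objective.
    grid = np.linspace(-2, 2, 41)
    best_x = min(grid, key=lambda x: saa_objective(x, xi_samples, eta_sampler, f, g, m=50))
    print(best_x)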
On the Shapley value and its application to the Italian VQR research assessment exercise
Research assessment exercises have now become common evaluation tools in a number of countries. These exercises have the goal of guiding merit-based public funds allocation, stimulating improvement of research productivity through competition and assessing the impact of adopted research support policies. One case in point is Italy's most recent research assessment effort, VQR 2011–2014 (Research Quality Evaluation), which, in addition to research institutions, also evaluated university departments, and individuals in some cases (i.e., recently hired research staff and members of PhD committees). However, the way an institution's score was divided, according to VQR rules, between its constituent departments or its staff members does not enjoy many desirable properties well known from coalitional game theory (e.g., budget balance, fairness, marginality). We propose, instead, an alternative score division rule that is based on the notion of Shapley value, a well-known solution concept in coalitional game theory, which enjoys the desirable properties mentioned above. For a significant test case (namely, Sapienza University of Rome, the largest university in Italy), we present a detailed comparison of the scores obtained, for substructures and individuals, by applying the official VQR rules, with those resulting from Shapley value computations. We show that there are significant differences in the resulting scores, making room for improvements in the allocation rules used in research assessment exercises.
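For readers unfamiliar with the Shapley value, the sketch below computes it by exact enumeration over player orderings for a toy two-department example. The characteristic function v is a hypothetical stand-in for the score a coalition of departments would obtain; the O(n!) enumeration is only practical for very small coalitions and is not the computation method used in the paper.

import math
from itertools import permutations

# Shapley value by exact enumeration: each player's share is its average
# marginal contribution over all orderings of the players.

def shapley_values(players, v):
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):                  # O(n!) orderings
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / math.factorial(n) for p in players}

if __name__ == "__main__":
    # Hypothetical coalition scores for two departments A and B.
    scores = {frozenset(): 0.0, frozenset({"A"}): 4.0, frozenset({"B"}): 2.0,
              frozenset({"A", "B"}): 10.0}
    # Shares sum to the grand-coalition score (budget balance): {'A': 6.0, 'B': 4.0}.
    print(shapley_values(["A", "B"], lambda c: scores[frozenset(c)]))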
The Core of the Participatory Budgeting Problem
In participatory budgeting, communities collectively decide on the allocation
of public tax dollars for local public projects. In this work, we consider the
question of fairly aggregating the preferences of community members to
determine an allocation of funds to projects. This problem is different from
standard fair resource allocation because of public goods: The allocated goods
benefit all users simultaneously. Fairness is crucial in participatory decision
making, since generating equitable outcomes is an important goal of these
processes. We argue that the classic game theoretic notion of core captures
fairness in the setting. To compute the core, we first develop a novel
characterization of a public goods market equilibrium called the Lindahl
equilibrium, which is always a core solution. We then provide the first (to our
knowledge) polynomial time algorithm for computing such an equilibrium for a
broad set of utility functions; our algorithm also generalizes (in a
non-trivial way) the well-known concept of proportional fairness. We use our
theoretical insights to perform experiments on real participatory budgeting
voting data. We empirically show that the core can be efficiently computed for
utility functions that naturally model our practical setting, and examine the
relation of the core with the familiar welfare objective. Finally, we address
concerns of incentives and mechanism design by developing a randomized
approximately dominant-strategy truthful mechanism building on the exponential
mechanism from differential privacy.
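The paper's polynomial-time Lindahl-equilibrium algorithm is not reproduced here; since it generalizes proportional fairness, the sketch below instead shows the related Nash-welfare (proportional-fairness) relaxation for dividing a fixed budget across public projects, assuming linear voter utilities. The utility matrix, budget, and solver choice are illustrative assumptions, not the authors' method.

import numpy as np
from scipy.optimize import minimize

# Nash-welfare relaxation: choose project funding levels x >= 0 summing to the
# budget so as to maximize sum_i log(u_i(x)), with u_i(x) = A[i] @ x assumed linear.

def nash_welfare_allocation(A, budget):
    n_voters, n_projects = A.shape

    def neg_log_nash(x):
        util = A @ x
        return -np.sum(np.log(util + 1e-9))   # maximize sum of log-utilities

    cons = [{"type": "eq", "fun": lambda x: np.sum(x) - budget}]
    bounds = [(0.0, budget)] * n_projects
    x0 = np.full(n_projects, budget / n_projects)
    res = minimize(neg_log_nash, x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, 0.5]])   # toy voter-project utilities
    print(nash_welfare_allocation(A, budget=100.0))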