## Learning Policies for Contextual Submodular Prediction- Supplementary Material

### Abstract

This appendix contains the proofs of the various theoretical results presented in the paper.

#### A.1. Preliminaries

We begin by proving a number of lemmas about monotone submodular functions, which will be useful for proving our main results.

**Lemma 1.** Let $S$ be a set and $f$ be a monotone submodular function defined on lists of items from $S$. For any lists $A$, $B$, we have:

$$f(A \oplus B) - f(A) \le |B| \left( \mathbb{E}_{s \sim U(B)}[f(A \oplus s)] - f(A) \right)$$

where $U(B)$ denotes the uniform distribution on the items in $B$.

*Proof.* For any lists $A$ and $B$, let $B_i$ denote the list of the first $i$ items in $B$, and $b_i$ the $i$th item in $B$. We have:

$$f(A \oplus B) - f(A) = \sum_{i=1}^{|B|} \left[ f(A \oplus B_i) - f(A \oplus B_{i-1}) \right] \le \sum_{i=1}^{|B|} \left[ f(A \oplus b_i) - f(A) \right] = |B| \left( \mathbb{E}_{b \sim U(B)}[f(A \oplus b)] - f(A) \right)$$

where the inequality follows from the submodularity of $f$. □

**Lemma 2.** Let $S$ be a set, and $f$ a monotone submodular function defined on lists of items in $S$. Let $A$, $B$ be any lists of items from $S$. Denote by $A_j$ the list of the first $j$ items in $A$, and by $U(B)$ the uniform distribution on items in $B$, and define

$$\epsilon_j = \mathbb{E}_{s \sim U(B)}[f(A_{j-1} \oplus s)] - f(A_j),$$

the additive error term in competing with the average marginal benefits of the items in $B$ when picking the $j$th item in $A$ (which could be positive or negative).
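As a quick sanity check (not part of the original proof), Lemma 1 can be verified numerically on a coverage function, a standard example of a monotone submodular function. The specific lists `A` and `B` below are made-up illustrative data:

```python
from itertools import chain

def f(items):
    # Coverage function: f(L) = size of the union of the sets in list L.
    # Coverage is monotone submodular, so Lemma 1 should hold for it.
    return len(set(chain.from_iterable(items)))

# Hypothetical lists of items; each item covers a few ground elements.
A = [{1, 2}, {2, 3}]
B = [{3, 4}, {4, 5}, {1, 5}]

# Left-hand side of Lemma 1: total gain from appending all of B to A.
lhs = f(A + B) - f(A)

# Right-hand side: |B| times the average single-item marginal gain.
avg_gain = sum(f(A + [b]) - f(A) for b in B) / len(B)
rhs = len(B) * avg_gain

assert lhs <= rhs  # the inequality of Lemma 1
```

Here appending each item of `B` individually to `A` overestimates the per-item gain of appending them in sequence, which is exactly the diminishing-returns behavior the lemma exploits.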

Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.9674
Provided by: CiteSeerX