Coactive learning is an online problem-solving setting in which the solutions
provided by a solver are interactively improved by a domain expert, and these
improvements in turn drive learning. In this paper we extend the study of coactive learning to
problems where obtaining a globally optimal or near-optimal solution may be
intractable or where an expert can only be expected to make small, local
improvements to a candidate solution. The goal of learning in this new setting
is to minimize the cost as measured by the expert effort over time. We first
establish theoretical bounds on the average cost of the existing coactive
Perceptron algorithm. In addition, we consider new online algorithms that use
cost-sensitive and Passive-Aggressive (PA) updates, showing similar or improved
theoretical bounds. We provide an empirical evaluation of the learners in
various domains, which shows that the Perceptron-based algorithms are quite
effective and that, unlike the case for online classification, the PA algorithms
do not yield significant performance gains.
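To make the update rules discussed above concrete, the following is a minimal sketch of a coactive Perceptron step and a Passive-Aggressive-style step, not the paper's exact implementation. It assumes a joint feature map phi(x, y) and that the expert returns a (possibly only locally) improved solution; the function names, the unit-margin hinge, and the aggressiveness parameter C are illustrative assumptions.

```python
import numpy as np

def perceptron_update(w, phi_proposed, phi_improved):
    """Coactive Perceptron step: shift the weight vector toward the
    expert's improved solution and away from the solver's proposal."""
    return w + (phi_improved - phi_proposed)

def pa_update(w, phi_proposed, phi_improved, C=1.0):
    """Passive-Aggressive-style step (illustrative): take the smallest
    step that makes the improved solution score higher than the proposed
    one by a unit margin, with the step size capped by C."""
    diff = phi_improved - phi_proposed
    loss = max(0.0, 1.0 - float(np.dot(w, diff)))  # hinge on the margin
    if loss == 0.0 or not np.any(diff):
        return w  # passive: margin already satisfied or no change
    tau = min(C, loss / float(np.dot(diff, diff)))  # PA-I style step size
    return w + tau * diff
```

In a coactive round, the learner solves the current instance with w, the expert returns an improved solution, and one of these updates is applied before the next round; the cumulative expert effort over rounds is the cost the learner aims to keep small.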