We consider the problem of fitting a linear model to data held by individuals
who are concerned about their privacy. Incentivizing most players to truthfully
report their data to the analyst constrains our design to mechanisms that
provide a privacy guarantee to the participants; we use differential privacy to
model individuals' privacy losses. This immediately poses a problem, as
differentially private computation of a linear model necessarily produces a
biased estimate, and existing approaches to designing mechanisms that elicit data
from privacy-sensitive individuals do not generalize well to biased estimators.
We overcome this challenge through an appropriate design of the computation and
payment scheme.
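
To see concretely why such bias arises, consider the following sketch of one common way to privatize a linear model: perturbing the sufficient statistics X^T X and X^T y with zero-mean noise before solving the normal equations. This is an illustrative example only, not necessarily the paper's mechanism, and the `noise_scale` parameter is a placeholder that is not calibrated to any formal privacy guarantee.

```python
import numpy as np

# Illustrative sketch: sufficient-statistic perturbation for linear regression.
# Zero-mean Laplace noise is added to X^T X and X^T y; the noise scale is a
# placeholder and is not calibrated to a formal differential privacy guarantee
# (that would require bounding each individual's contribution to the statistics).
rng = np.random.default_rng(0)
n, d = 500, 3
beta_true = np.array([1.0, -2.0, 0.5])
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = X @ beta_true + rng.normal(scale=0.1, size=n)

noise_scale = 5.0  # hypothetical stand-in for sensitivity / epsilon
N = rng.laplace(scale=noise_scale, size=(d, d))
XtX_priv = X.T @ X + (N + N.T) / 2                 # symmetrized noise on X^T X
Xty_priv = X.T @ y + rng.laplace(scale=noise_scale, size=d)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)       # non-private least squares
beta_priv = np.linalg.solve(XtX_priv, Xty_priv)    # noisy, privacy-preserving estimate

# The solve step is a nonlinear function of the noisy statistics, so even
# though the added noise has mean zero, the private estimator is biased in
# expectation; an elicitation/payment scheme must account for this.
print("non-private OLS estimate:", beta_ols)
print("private estimate:        ", beta_priv)
```
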
Comment: To appear in Proceedings of the 28th Annual Conference on Learning Theory (COLT 2015).