Recently, numerous studies have demonstrated the presence of bias in machine
learning-powered decision-making systems. Although most definitions of
algorithmic bias have solid mathematical foundations, the corresponding bias
detection techniques often lack statistical rigor, especially for non-iid data.
We fill this gap in the literature by presenting a rigorous non-parametric
testing procedure for bias according to Predictive Rate Parity, a commonly
considered notion of algorithmic bias. We adapt traditional asymptotic results
for non-parametric estimators to test for bias in the presence of dependence
commonly seen in user-level data generated by technology industry applications
and illustrate how these approaches can be leveraged for mitigation. We further
propose modifications of this methodology to address bias measured through
marginal outcome disparities in classification settings and extend notions of
predictive rate parity to multi-objective models. Experimental results on real
data show the efficacy of the proposed detection and mitigation methods.
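As a concrete illustration of the detection problem, the following minimal sketch tests predictive rate parity, i.e., equality of positive predictive value (PPV) across groups, while accounting for user-level dependence via a cluster bootstrap that resamples whole users. This is a generic illustration under assumed inputs, not the procedure developed in the paper; the names (e.g., cluster_bootstrap_ppv_gap, user_id) are hypothetical.

```python
import numpy as np

def ppv(y_true, y_pred):
    """Positive predictive value: P(Y = 1 | Yhat = 1)."""
    pred_pos = y_pred == 1
    return y_true[pred_pos].mean() if pred_pos.any() else np.nan

def ppv_gap(y_true, y_pred, group):
    """Difference in PPV between group 1 and group 0."""
    g1 = group == 1
    return ppv(y_true[g1], y_pred[g1]) - ppv(y_true[~g1], y_pred[~g1])

def cluster_bootstrap_ppv_gap(y_true, y_pred, group, user_id,
                              n_boot=2000, alpha=0.05, seed=0):
    """Cluster-bootstrap confidence interval for the PPV gap.

    Users are resampled as blocks so that within-user dependence,
    common in user-level industry data, is preserved in each replicate.
    """
    rng = np.random.default_rng(seed)
    users = np.unique(user_id)
    idx_by_user = {u: np.flatnonzero(user_id == u) for u in users}
    gaps = []
    for _ in range(n_boot):
        sampled = rng.choice(users, size=len(users), replace=True)
        idx = np.concatenate([idx_by_user[u] for u in sampled])
        gaps.append(ppv_gap(y_true[idx], y_pred[idx], group[idx]))
    lo, hi = np.nanquantile(np.asarray(gaps), [alpha / 2, 1 - alpha / 2])
    point = ppv_gap(y_true, y_pred, group)
    return point, (lo, hi)  # a gap whose interval excludes 0 flags potential bias
```

A usage example on synthetic data would pass arrays y_true, y_pred, a binary group indicator, and a user_id vector with repeated entries per user; the returned interval is then inspected for whether it covers zero.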