We make two contributions in the field of AI fairness over continuous
protected attributes. First, we show that the Hirschfeld-Gebelein-Rényi (HGR)
indicator (the only one currently available for this setting) is valuable but
subject to a few crucial limitations regarding semantics, interpretability, and
robustness. Second, we introduce a family of indicators that are: 1)
complementary to HGR in terms of semantics; 2) fully interpretable and
transparent; 3) robust over finite samples; 4) configurable to suit specific
applications. Our approach also allows us to define fine-grained constraints to
permit certain types of dependence and forbid others selectively. By expanding
the available options for continuous protected attributes, our approach
represents a significant contribution to the area of fair artificial
intelligence.

Comment: to be published in ICML2