Despite an abundance of fairness-aware machine learning (fair-ml) algorithms,
the moral justification of how these algorithms enforce fairness metrics is
largely unexplored. The goal of this paper is to articulate the moral implications
of a fair-ml algorithm. To this end, we first consider the moral justification
of the fairness metrics for which the algorithm optimizes. We present an
extension of previous work to arrive at three propositions that can justify the
fairness metrics. Unlike previous work, our extension highlights that
the consequences of predicted outcomes are important for judging fairness. We
draw from the extended framework and empirical ethics to identify moral
implications of the fair-ml algorithm. We focus on the two optimization
strategies inherent to the algorithm: group-specific decision thresholds and
randomized decision thresholds. We argue that the justification of the
algorithm can differ depending on one's assumptions about the (social) context
in which the algorithm is applied, even if the associated fairness metric is
the same. Finally, we sketch paths for future work towards a more complete
evaluation of fair-ml algorithms, beyond their direct optimization objectives.
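
To make the two optimization strategies named above concrete, the following is a minimal, hypothetical sketch (in Python with NumPy) of how group-specific and randomized decision thresholds can be applied to risk scores. All variable names and threshold values here are assumptions chosen for illustration; this is not the paper's algorithm or its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores and group membership (assumed inputs).
scores = rng.uniform(size=10)
groups = rng.integers(0, 2, size=10)  # two demographic groups, 0 and 1

# Strategy 1: group-specific decision thresholds.
# Each group g gets its own cutoff t_g (values here are made up).
thresholds = {0: 0.4, 1: 0.6}
decisions_group = np.array(
    [scores[i] >= thresholds[groups[i]] for i in range(len(scores))]
)

# Strategy 2: randomized decision thresholds.
# For each group, the effective threshold is a lottery: the lower
# cutoff is drawn with probability p, the upper cutoff otherwise.
lower = {0: 0.3, 1: 0.5}
upper = {0: 0.5, 1: 0.7}
p = {0: 0.5, 1: 0.5}
picked = np.array(
    [lower[g] if rng.random() < p[g] else upper[g] for g in groups]
)
decisions_random = scores >= picked

print(decisions_group)
print(decisions_random)
```

Realizing randomization as a lottery between two cutoffs per group is how randomized thresholds are commonly implemented in post-processing approaches to fairness (e.g., Hardt et al., 2016); the sketch adopts that convention only for illustration.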