Automated systems based on artificial intelligence (AI) increasingly support decisions with ethical implications, requiring decision makers to trust these systems. However, insights regarding trust in automated systems predominantly stem from
contexts where the main driver of trust is that systems produce accurate outputs (e.g., alarm systems for monitoring tasks).
It remains unclear whether what we know about trust in automated systems translates to application contexts where ethical
considerations (e.g., fairness) are crucial in trust development. In personnel selection, as a sample context where ethical
considerations are important, we investigate trust processes in light of a trust violation relating to unfair bias and a trust
repair intervention. Specifically, participants evaluated preselection outcomes (i.e., sets of preselected applicants) by either
a human or an automated system across twelve selection tasks. We additionally varied information regarding the imperfection of the human and the automated system. In task rounds five through eight, the preselected applicants were predominantly male,
thus constituting a trust violation due to potential unfair bias. Before task round nine, participants received an excuse for the
biased preselection (i.e., a trust repair intervention). The results of the online study showed that participants initially had less trust in the automated system. Furthermore, the trust violation and the trust repair intervention had weaker effects for the automated system. Some of these effects were stronger when system imperfection was highlighted. We conclude that insights
from classical areas of automation only partially translate to the many emerging application contexts of such systems where
ethical considerations are central to trust processes.