Artificial Intelligence (AI) is increasingly becoming a trusted advisor in
people's lives. A new concern is whether AI can persuade people to break
ethical rules for profit. Employing a large-scale behavioural experiment (N = 1,572),
we test whether AI-generated advice can corrupt people. We further test whether
transparency about AI presence, a commonly proposed policy, mitigates potential
harm of AI-generated advice. Using the natural language processing model
GPT-2, we generated honesty-promoting and dishonesty-promoting advice.
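For illustration, the sketch below shows one way such advice texts could be sampled from GPT-2 via the Hugging Face transformers library. The prompts and decoding parameters are hypothetical assumptions for demonstration, not the paper's actual experimental configuration.

```python
# Minimal sketch: sampling advice-style text from GPT-2 with the
# Hugging Face `transformers` library. The seed prompts and decoding
# parameters below are illustrative assumptions, not the paper's
# actual setup.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled outputs reproducible
generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompts nudging the model toward each advice type.
prompts = {
    "honesty": "My advice: always report the outcome truthfully, because",
    "dishonesty": "My advice: report whatever outcome pays the most, because",
}

for advice_type, prompt in prompts.items():
    out = generator(prompt, max_length=60, do_sample=True,
                    top_k=50, num_return_sequences=1)
    print(f"--- {advice_type} ---")
    print(out[0]["generated_text"])
```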
Participants read one type of advice before engaging in a task in which they
could lie for profit. By testing human behaviour in interaction with actual
AI outputs, we provide the first behavioural insights into the role of AI as an
advisor. Results reveal that AI-generated advice corrupts people, even when
they know the source of the advice. In fact, AI's corrupting force is as
strong as that of humans.

Comment: Leib & Köbis share first authorship.