Regulators of artificial intelligence (AI) emphasize the importance of human autonomy and oversight in AI-assisted decision-making (European Commission, Directorate-General for Communications Networks, Content and Technology, 2021; 117th Congress, 2022). Predictions are the foundation of all AI tools; thus, if AI can predict our decisions, how might these predictions influence our ultimate choices? We examine how salient, personalized AI predictions affect decision outcomes and investigate the role of reactance, i.e., an adverse reaction to a perceived reduction in individual freedom. We trained an AI tool on previous dictator game decisions to generate personalized predictions of dictators’ choices. In our AI treatment, dictators received this prediction before deciding. In a treatment involving human oversight, a previous participant (a ‘human overseer’) decided whether the dictator was shown the AI prediction. In the baseline, participants did not receive the prediction. We find that participants sent less to the recipient when they received a personalized prediction, but the strongest reduction occurred when the AI’s prediction was intentionally withheld by the human overseer. Our findings underscore the importance of considering human reactions to AI predictions when assessing the accuracy and impact of these tools, as well as the potential adverse effects of human oversight.