The need for fair and non-toxic generative artificial intelligence (GenAI) models is reflected in global regulatory changes, algorithmic developments, debiasing techniques, and prompt engineering. This paper highlights inconsistencies in GenAI text outputs and focuses on template-based prompts as an example, providing evidence that prompt design choices also influence whether an output is non-toxic. We use occupation- and respect-related prompt templates in past and present tense to develop prompts for a multicultural society. We analyse the text outputs that several GenAI models produce from the curated prompts, averaging across all demographic groups, and show that even a change from past to present tense can result in toxic outputs. The next stage of this research focuses on the impact of demographic groups on harmful outputs.