2 research outputs found

    Learning to Prompt in the Classroom to Understand AI Limits: A pilot study

    Artificial intelligence's progress holds great promise for helping society address pressing issues. In particular, Large Language Models (LLMs) and derived chatbots, such as ChatGPT, have greatly improved the natural language processing capabilities of AI systems, allowing them to process an unprecedented amount of unstructured data. The resulting hype has also backfired, fuelling negative sentiment despite the surprising contributions of novel AI methods. One cause, and an important issue in itself, is the growing and misleading feeling of being able to access and process any form of knowledge to solve problems in any domain, with no effort or prior expertise in AI or the problem domain, while disregarding current LLM limits such as hallucinations and weak reasoning. Acknowledging AI fallibility is crucial to counter dogmatic overconfidence in possibly erroneous suggestions generated by LLMs; at the same time, it can reduce fear and other negative attitudes toward AI. AI literacy interventions are needed that allow the public to understand such LLM limits and to use these tools more effectively, i.e. to learn how to "prompt". With this aim, a pilot educational intervention was performed in a high school with 30 students. It involved (i) presenting high-level concepts about intelligence, AI, and LLMs, (ii) an initial naive practice with ChatGPT on a non-trivial task, and finally (iii) applying currently accepted prompting strategies.
Encouraging preliminary results were collected, with students reporting a) high appreciation of the activity, b) improved quality of their interaction with the LLM during the educational activity, c) decreased negative sentiment toward AI, and d) increased understanding of LLM limitations. We aim to study factors that impact AI acceptance and to refine and repeat this activity in more controlled settings.
Comment: Submitted to AIXIA 2023, 22nd International Conference of the Italian Association for Artificial Intelligence, 6-9 Nov 2023, Rome, Italy

    Investigating complex dynamics of inclusion and exclusion in the Cyberball game: a trial-by-trial computational approach

    No full text
    Objective. While the detrimental outcomes experienced by victims of unambiguous ostracism have been extensively investigated, reactions to asymmetrical behavioural repertoires of social inclusion and exclusion are not yet fully clarified. Here, we evaluated how individuals react to others in a Cyberball experiment based on their perception of others’ differential inclusionary behaviour toward them, and whether this response pattern varied as a function of their prosocial attitudes. Method. To this aim, we adopted two Cyberball conditions: partial ostracism and partial over-inclusion. In these asymmetrical conditions, one co-player is programmed to interact inclusively in a balanced manner, while the second co-player is programmed either to exclude (i.e., almost never toss the ball) or to over-include (i.e., almost always toss the ball) the participant in different task blocks. Results. Across two studies (n=45 and n=106), participants correctly perceived the degree of inclusionary behaviour conveyed by the different co-players in each condition and reacted emotionally accordingly. Trial-by-trial analysis revealed that, under partial ostracism, prosocial participants reciprocated the inclusive co-player more, thereby excluding the ostracising co-player, while individualistic participants did not. Conversely, in the partial over-inclusion condition, participants, irrespective of their own prosocial attitudes, did not choose to ostracise a normally including player who maintained balanced engagement in the game over a player who displayed an over-inclusive attitude towards them. Conclusions. Taken together, our results suggest that prosocial individuals selectively tend to exclude ostracising, but not fairly including, interaction partners, likely in an attempt to restore the “norm” of social inclusion.