
    Three levels at which the user's cognition can be represented in artificial intelligence

    Artificial intelligence (AI) plays an important role in modern society. AI applications are omnipresent and assist many of the decisions we make in daily life. A common and important feature of such AI applications is the user model, which allows an AI application to adapt to a specific user. Here, we argue that user models in AI can be optimized by aligning them more closely with models of human cognition. We identify three levels at which insights from human cognition can be, and have been, integrated into user models. Such integration can be very loose, with user models only being inspired by general knowledge of human cognition, or very tight, with user models implementing specific cognitive processes. Using AI-based applications in the context of education as a case study, we demonstrate that user models that are more deeply rooted in models of cognition offer more valid and more fine-grained adaptations to an individual user. We propose that such user models can also advance the development of explainable AI.

    Attention for future reward

    When stimuli are consistently paired with reward, attention toward these stimuli becomes biased (e.g., Abrahamse, Braem, Notebaert, & Verguts, Psychological Bulletin 142:693–728, 2016, https://doi.org/10.1037/bul0000047). An important premise is that participants need to repeatedly experience stimulus–reward pairings to obtain these effects (e.g., Awh, Belopolsky, & Theeuwes, Trends in Cognitive Sciences 16:437–443, 2012, https://doi.org/10.1016/j.tics.2012.06.010). This idea is based on associative learning theories (e.g., Pearce & Bouton, Annual Review of Psychology 52:111–139, 2001), which suggest that exposure to stimulus–reward pairings leads to the formation of stimulus–reward associations and a transfer of the salience of the reward to the neutral stimulus. However, novel learning theories (e.g., De Houwer, Learning and Motivation 53:7–23, 2016, https://doi.org/10.1016/j.lmot.2015.11.001) suggest that such effects are not necessarily the result of associative learning but can also be caused by complex knowledge and expectancies. In the current experiment, we first instructed participants that a correct response to one centrally presented stimulus would be followed by a high reward, whereas a correct response to another centrally presented stimulus would be paired with a low reward. Before participants executed this task, they performed a visual probe task in which these stimuli were presented as distractors. We found that attention was drawn automatically toward high-reward stimuli relative to low-reward stimuli. This implies that complex inferences and expectancies can cause automatic attentional bias, challenging associative learning models of attentional control (Abrahamse et al., 2016; Awh et al., 2012).

    The effects of social presence on cooperative trust with algorithms

    Algorithms support many processes in modern society. Research using trust games frequently reports that people are less inclined to cooperate when they believe they are playing against an algorithm. Trust is, however, malleable by contextual factors, and social presence can increase the willingness to collaborate. We investigated whether situating cooperation with an algorithm in the presence of another person increases cooperative trust. Three groups of participants played a trust game against a pre-programmed algorithm in an online web-hosted experiment. The first group was told they played against another person who was present online. The second group was told they played against an algorithm. The third group was told they played against an algorithm while another person was present online. More cooperative responses were observed in the first group than in the second group, a difference in cooperation that replicates previous findings. In addition, cooperative trust dropped more over the course of the trust game when participants interacted with an algorithm in the absence of another person compared to the other two groups. This latter finding suggests that social presence can mitigate distrust in interacting with an algorithm. We discuss the cognitive mechanisms that can mediate this effect.

    MO-ParamILS: A Multi-objective Automatic Algorithm Configuration Framework

    Automated algorithm configuration procedures play an increasingly important role in the development and application of algorithms for a wide range of computationally challenging problems. Until very recently, these configuration procedures were limited to optimising a single performance objective, such as the running time or solution quality achieved by the algorithm being configured. However, in many applications there is more than one performance objective of interest. This gives rise to the multi-objective automatic algorithm configuration problem, which involves finding a Pareto set of configurations of a given target algorithm that characterises trade-offs between multiple performance objectives. In this work, we introduce MO-ParamILS, a multi-objective extension of the state-of-the-art single-objective algorithm configuration framework ParamILS, and demonstrate that it produces good results on several challenging bi-objective algorithm configuration scenarios compared to a baseline obtained from using a state-of-the-art single-objective algorithm configurator.
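    The central object in this formulation is the Pareto set: a configuration is kept only if no other evaluated configuration is at least as good on every objective and strictly better on at least one. A multi-objective configurator such as MO-ParamILS searches the configuration space while maintaining such a set of non-dominated configurations. The sketch below illustrates the dominance filter only; it is a minimal illustration, not the MO-ParamILS implementation, and the configuration names and objective values are invented.

        # Illustrative Pareto-dominance filter for bi-objective configuration.
        # Both objectives are minimised; names and values are hypothetical.

        def dominates(a, b):
            """True if vector a dominates b: no worse everywhere, better somewhere."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def pareto_set(configs):
            """Return the non-dominated configurations.

            configs: dict mapping configuration name -> tuple of objective values.
            """
            return {name: objs for name, objs in configs.items()
                    if not any(dominates(other, objs)
                               for other_name, other in configs.items()
                               if other_name != name)}

        # Hypothetical evaluations: (mean running time in s, mean solution cost).
        evaluated = {"cfg_A": (12.0, 0.80),
                     "cfg_B": (30.0, 0.55),   # slower, but better solution quality
                     "cfg_C": (15.0, 0.90)}   # dominated by cfg_A on both objectives
        print(pareto_set(evaluated))          # keeps cfg_A and cfg_B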

    Are Natural Faces Merely Labelled as Artificial Trusted Less?

    Artificial intelligence increasingly plays a crucial role in daily life. At the same time, artificial intelligence is often met with reluctance and distrust. Previous research demonstrated that faces that are visibly artificial are considered less trustworthy and are remembered less accurately compared to natural faces. Current technology, however, enables the generation of artificial faces that are indistinguishable from natural faces. In five experiments (total N = 867), we tested whether natural faces that are merely labelled as artificial are also trusted less. A meta-analysis of all five experiments suggested that natural faces merely labelled as artificial were judged to be less trustworthy. This bias did not depend on the degree of trustworthiness and attractiveness of the faces (Experiments 1–3). It was not modulated by changing raters' attitude towards artificial intelligence (Experiments 2–3) or by information communicated by the faces (Experiment 4). We also did not observe differences in recall performance between faces labelled as artificial or natural (Experiment 3). When participants only judged one type of face (i.e., either labelled as artificial or as natural), the difference in trustworthiness judgments was eliminated (Experiment 5), suggesting that the contrast between the natural and artificial categories within the same task promoted the labelling effect. We conclude that faces that are merely labelled as artificial are trusted less in situations that also include faces labelled as real. We propose that understanding and changing social evaluations of artificial intelligence goes beyond eliminating physical differences between artificial and natural entities.

    Cognitive resources and territorial development: a textual analysis applied to local sustainable development policies

    This paper focuses on how coherent regional development requires actors to share a certain level of cognitive resources. In the case studied here, the Nord Pas-de-Calais region, this cognitive proximity is built via local policies on sustainable development. To understand how local communities in the region activate this resource, we applied textual data analysis to about thirty interviews with public actors involved in sustainable development. The results suggest that this cognitive proximity relies on two fundamental elements: on the one hand, valuing the patrimonial infrastructures of the territory, and on the other hand, rebuilding the territorial identity of the region. Local policies then draw both on the values that underlie these elements and on rhetorical devices to drive in-depth changes that would have been more difficult to implement through the usual political levers.

    Faces Merely Labelled as Artificial are Trusted Less

    Artificial intelligence plays a crucial role in our daily lives. At the same time, artificial intelligence is often met with reluctance and distrust. Previous research demonstrated that faces that are visibly artificial are considered less trustworthy and are remembered less accurately compared to natural faces. Current technology, however, enables the generation of artificial faces that are indistinguishable from natural faces. Accordingly, we tested whether natural faces that are merely labelled as artificial are also trusted less. In three experiments (N = 399), we observed that natural faces merely labelled as artificial were judged to be less trustworthy. This bias was robust and did not depend on the degree of trustworthiness and attractiveness of the faces, nor could it be modulated by changing raters' attitude towards artificial intelligence. At the same time, we did not observe differences in recall performance. We conclude that understanding and changing social evaluations of artificial intelligence goes beyond eliminating physical differences between artificial and natural entities.