
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks; and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
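The bracketed statistic can be sanity-checked by hand: F(1, n) is the square of a t statistic with n degrees of freedom, and for n = 4 the t CDF has a simple closed form. A minimal sketch using only the values reported in the abstract:

```python
import math

# Reported result from the abstract: F(1, 4) = 2.565, p = 0.185.
f_value, df_error = 2.565, 4

# F(1, n) is the square of a t statistic with n degrees of freedom,
# and for n = 4 the t CDF has the closed form
#   F(t) = 1/2 + (3s/4) * (1 - s^2 / 3),  with s = t / sqrt(t^2 + 4).
t = math.sqrt(f_value)
s = t / math.sqrt(t * t + df_error)
cdf = 0.5 + 0.75 * s * (1.0 - s * s / 3.0)
p_value = 2.0 * (1.0 - cdf)   # two-tailed p == P(F >= f_value)

print(f"p = {p_value:.3f}")   # p = 0.185, matching the abstract
```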

    The Effect of Celebrity Gaze-Cueing on Binary Choice Decision Making

    Marketers have long used celebrities in advertisements to help viewers build strong brand and product associations; however, it is not well understood how the celebrity and the visual context affect visual attention to, and ultimately consumer decision making for, the endorsed product. Most prior studies have focused on qualitative surveys about brand equity, memory of the advertisement, and self-reported interest and intent to purchase. My study uses new methods from applied neuroscience that allow me to directly measure and analyze, without relying on verbal self-report, how celebrities in static advertisements impact consumer decisions. Furthermore, research has shown that humans automatically divert their visual attention in the direction of another’s gaze, known as “gaze-cueing” or “gaze-following” (Friesen and Kingstone, 1998; Kuhn and Kingstone, 2009). An overwhelming majority of celebrity-endorsed advertisements depict celebrities looking at the viewer, not toward the endorsed product, though academic research suggests that gaze-cueing at the product (instead of toward the viewer) increases visual attention toward the endorsed product (Hutton and Nolte, 2011). My project tests whether the increase in visual attention due to gaze-cueing at the product translates into an increase in the consumer’s subjective value of that product and consequently influences product choice. Results indicate that celebrity interventions in advertisements increased the subjective value of endorsed products yet, interestingly, did not drive more overt visual attention to them. Moreover, gaze-cueing was found to have a pronounced effect on guiding visual attention. These advertising cues impact choice, which could translate into larger profits for competitive consumer products.

    A Design Thinking Framework for Human-Centric Explainable Artificial Intelligence in Time-Critical Systems

    Artificial Intelligence (AI) has seen a surge in popularity as increased computing power has made it more viable and useful. The increasing complexity of AI, however, can lead to difficulty in understanding or interpreting the results of AI procedures, which can in turn lead to incorrect predictions, classifications, or analyses of outcomes. The result of these problems can be over-reliance on AI, under-reliance on AI, or simply confusion as to what the results mean. Additionally, the complexity of AI models can obscure the algorithmic, data, and design biases to which all models are subject, which may exacerbate negative outcomes, particularly with respect to minority populations. Explainable AI (XAI) aims to mitigate these problems by providing information on the intent, performance, and reasoning process of the AI. Where time or cognitive resources are limited, however, the burden of additional information can negatively impact performance. Ensuring XAI information is intuitive and relevant allows the user to quickly calibrate their trust in the AI, in turn improving trust in suggested task alternatives, reducing workload, and improving task performance. This study details a structured approach to the development of XAI in time-critical systems based on a design thinking framework that preserves the agile, fast-iterative approach characteristic of design thinking and augments it with practical tools and guides. The framework establishes a focus on shared situational perspective and deep understanding of both users and the AI in the empathy phase, provides a model with seven XAI levels and corresponding solution themes, and defines objective, physiological metrics for concurrent assessment of trust and workload.

    Does insecure attachment lead to (mis)wired brains? Emotion, cognition, and attachment: an outlook through psychophysiological pathways

    The evolutionary-based attachment theory (Bowlby, 1969, 1973, 1980) asserts that approach/attachment or avoidance/withdrawal tendencies may reflect distinct regulation strategies underlying individual differences in attachment styles. The influence of the internal working models of attachment on emotion and cognition, and more recently on their psychophysiological underpinnings, has been a central focus of research. Despite endeavours to clarify this modulatory influence on behaviour, inconsistent results have prevented definite answers. Aiming to contribute to current knowledge in the field, and embedded in a psychophysiological framework, the present thesis brings together findings of empirical studies focusing on regulation abilities in attentional bias towards emotional information. Following an integrative approach, these studies coupled behavioural responses with measures of skin conductance, heart rate, and eye movements. Findings of these studies converge to show distinctive features of the regulation strategies deployed by insecurely attached individuals when processing threat-related information in visual attention tasks, as measured by behavioural (Study I), sympathetic (Study II), and eye movement (Study III) responses. Taken together, these findings underscore the evolutionary value of the attachment behavioural system, providing support for fundamental distinctions between insecure attachment styles at both the behavioural and physiological level. Considering recent advances emerging in the field, results are discussed within a comprehensive and all-encompassing approach.

    Investigation of Mirror Image Bias: Evidence For the Use of Psychophysiological Measures as Indicators of Cognitive Heuristics

    The Mirror Imaging Bias (MIB) is gaining attention as a prominent quality factor in analysts' performance. MIB is an irrationality in which analysts perceive and process information through the filter of personal experience. As evidenced by notable historical events, the consequences of this bias can be dramatic. This work seeks a way to understand MIB in humans, exploring how analysts analyze data, how they are trained, and how they interact with biases. An experiment testing for the appearance of MIB was designed and completed, collecting eye-tracker data alongside physiological measures. Results show a significant correlation between pupil diameter and the appearance of MIB, and significant correlations of both response time and number of fixations with the viewpoint of the question. These results support the view that MIB serves as a shortcut to minimize mental workload when making decisions in uncertain situations.
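Correlational analyses of the kind reported above reduce to computing Pearson's r between paired measurements. A minimal sketch with hypothetical data (the study's actual measurements are not given in the abstract):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-trial data: pupil diameter (mm) and a bias score
pupil = [3.1, 3.4, 3.3, 3.9, 4.0, 4.2]
bias  = [0.2, 0.3, 0.2, 0.6, 0.7, 0.8]
r = pearson_r(pupil, bias)   # strong positive correlation for this toy data
```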

    The social brain: neural basis of social knowledge

    Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people—their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into those processes that are relatively automatic and driven by the stimuli, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, where there is a recent wealth of data primarily from functional neuroimaging. Here I provide a broad survey of these key abilities and processes, and of ways to relate them to data from cognitive neuroscience.

    Mapping the development of visual information use for facial expression recognition

    In this thesis, I aimed to map the development of facial expression recognition from early childhood up to adulthood by identifying, for the first time in the literature, the quantity and quality of visual information needed to recognise the six 'basic' emotions. Using behavioural and eye-tracking measures, the original contributions of this thesis include: 1. An unbiased fine-grained mapping of the continued development of facial expression recognition for the six basic emotions with the introduction of a psychophysical measure to the literature; 2. The identification of two main phases in the development of facial expression recognition, ranging from 5 to 12 years old and from 13 years old to adulthood; 3. The quantity of signal and intensity information needed to recognise the six basic emotions across development; 4. The finding that the processing of signal and intensity information becomes more discriminative during development, as less information is needed with age to recognise anger, disgust, surprise, and sadness; 5. Novel analysis of response profiles (the sequence of responses across trials), revealing subtle but important changes in the sequence of responses along a continuum of age - profiles become more similar with age due to less random erroneous categorizations; 6. The comparison of two recognition measures across the same cohort, revealing that two types of stimuli commonly used in facial emotion processing studies (expressions at full intensity vs. expressions of varying intensities) cannot be straightforwardly compared during development; 7. 
Novel eye movement analyses revealed the age at which perceptual strategies for the recognition of facial expressions of emotion become mature. An initial review of the literature revealed several less studied areas of the development of facial expression recognition, which I chose to focus on for my thesis. Firstly, at the outset of this thesis there were no studies of the continued development of facial expression recognition from early childhood up to adulthood. Similarly, there were no studies which examined all six of, what are termed, the 'basic emotions' and a neutral expression within the same paradigm. Therefore, the objective of the first study was to provide a fine-grained mapping of the continued development for all six basic expressions and neutral from the age of 5 up to adulthood by introducing a novel psychophysical method to the developmental literature. The psychophysical adaptive staircase procedure provided a precise measure of recognition performance across development. Using linear regression, we then charted the developmental trajectories for recognition of each of the 6 basic emotions and neutral. This mapping of recognition across development revealed expressions that showed a steep improvement with age – disgust, neutral, and anger; expressions that showed a more gradual improvement with age – sadness, surprise; and those that remained stable from early childhood – happiness and fear; indicating that the coding for these expressions is already mature by 5 years of age. Two main phases were identified in the development of facial expression recognition as recognition thresholds were most similar between the ages of 5 to 12 and 13 to adulthood. In the second study we aimed to take this fine-grained mapping of the development of facial expression recognition further by quantifying how much visual information is needed to recognise an expression across development by comparing two measures of visual information, signal and intensity. 
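An adaptive staircase of the general kind mentioned above can be sketched as follows. This is a generic 2-down/1-up rule with a simulated observer, not the specific procedure used in the thesis; all parameters are illustrative:

```python
import math
import random

def staircase_threshold(true_threshold, trials=400, start=1.0, step=0.05, seed=1):
    """Estimate a perceptual threshold with a generic 2-down/1-up staircase.

    Two consecutive correct responses make the next trial harder (less
    signal); one error makes it easier, so the level oscillates around the
    ~70.7%-correct point. The observer is simulated with a logistic
    psychometric function; all parameters here are illustrative.
    """
    rng = random.Random(seed)
    level, streak, direction = start, 0, -1
    reversals = []
    for _ in range(trials):
        p_correct = 1.0 / (1.0 + math.exp(-10.0 * (level - true_threshold)))
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:                 # 2 correct in a row -> harder
                streak = 0
                if direction == +1:         # movement flipped: a reversal
                    reversals.append(level)
                direction, level = -1, level - step
        else:
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction, level = +1, level + step   # 1 error -> easier
    # Threshold estimate: mean level at the last few reversals
    return sum(reversals[-10:]) / len(reversals[-10:])

estimate = staircase_threshold(true_threshold=0.5)
```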
Again, using a psychophysical approach, this time with a repeated measures design, the quantity of signal and intensity needed to recognise sad, angry, disgust, and surprise expressions decreased with age. Therefore, the processing of both types of visual information becomes more discriminative during development as less information is needed with age to recognize these expressions. Mutual information analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, expressions at full intensity (as in the signal condition) and expressions of varying intensities (as in the intensity condition) cannot be straightforwardly compared during development. While the first two studies of this thesis addressed how much visual information is needed to recognise an expression across development, the aim of the third study was to investigate which information is used across development to recognise an expression using eye-tracking. We recorded the eye movements of children from the age of 5 up to adulthood during recognition of the six basic emotions using natural viewing and gaze-contingent conditions. Multivariate statistical analysis of the eye movement data across development revealed the age at which perceptual strategies for the recognition of facial expressions of emotion become mature. The eye movement strategies of the oldest adolescent group, 17- to 18-year-olds, were most similar to adults for all expressions. A developmental dip in strategy similarity to adults was found for each emotional expression between 11- to 14-years, and slightly earlier, 7- to 8-years, for happiness. Finally, recognition accuracy for happy, angry, and sad expressions did not differ across age groups but eye movement strategies diverged, indicating that diverse approaches are possible for reaching optimal performance. 
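Mutual information between two discrete variables (for example, presented emotion category and response category) can be estimated directly from observation pairs. A minimal sketch with hypothetical data, not the thesis's actual analysis:

```python
import math
from collections import Counter

def mutual_information_bits(pairs):
    """Mutual information (in bits) between two discrete variables,
    estimated from a list of (x, y) observation pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    # I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Hypothetical confusion data: presented emotion vs. observer's response
stimuli   = ["happy", "happy", "sad", "sad", "anger", "anger"]
responses = ["happy", "happy", "sad", "anger", "anger", "anger"]
mi = mutual_information_bits(list(zip(stimuli, responses)))
```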
In sum, the studies map the intricate and non-uniform trajectories of the development of facial expression recognition by comparing visual information use from early childhood up to adulthood. The studies chart not only how well recognition of facial expressions develops with age, but also how facial expression recognition is achieved throughout development by establishing whether perceptual strategies are similar across age and at what stage they can be considered mature. The studies aimed to provide the basis of an understanding of the continued development of facial expression recognition which was previously lacking from the literature. Future work aims to further this understanding by investigating how facial expression recognition develops in relation to other aspects of cognitive and emotional processing and to investigate the potential neurodevelopmental basis of the developmental dip found in fixation strategy similarity

    The Psychology of Epistemic Judgment

    Human social intelligence includes a remarkable power to evaluate what people know and believe, and to assess the quality of well- or ill-formed beliefs. Epistemic evaluations emerge in a great variety of contexts, from moments of deliberate private reflection on tough theoretical questions, to casual social observations about what other people know and think. We seem to be able to draw systematic lines between knowledge and mere belief, to distinguish justified and unjustified beliefs, and to recognize some beliefs as delusional or irrational. This article outlines the main types of epistemic evaluations, and examines how our capacities to perform these evaluations develop, how they function at maturity, and how they are deployed in the vital task of sorting out when to believe what others say