
    Inversion Leads to Quantitative, Not Qualitative, Changes in Face Processing

    Humans are remarkably adept at recognizing objects across a wide range of views. A notable exception to this general rule is that turning a face upside down makes it particularly difficult to recognize [1–3]. This striking effect has prompted speculation that inversion qualitatively changes the way faces are processed. Researchers commonly assume that configural cues strongly influence the recognition of upright, but not inverted, faces [3–5]. Indeed, the assumption is so well accepted that the inversion effect itself has been taken as a hallmark of qualitative processing differences [6]. Here, we took a novel approach to understand the inversion effect. We used response classification [7–10] to obtain a direct view of the perceptual strategies underlying face discrimination and to determine whether orientation effects can be explained by differential contributions of nonlinear processes. Inversion significantly impaired performance in our face discrimination task. However, surprisingly, observers utilized similar, local regions of faces for discrimination in both upright and inverted face conditions, and the relative contributions of nonlinear mechanisms to performance were similar across orientations. Our results suggest that upright and inverted face processing differ quantitatively, not qualitatively; information is extracted more efficiently from upright faces, perhaps as a by-product of orientation-dependent expertise.
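    Response classification (the classification-image technique) has a standard computational core: the per-trial noise fields are averaged by stimulus–response combination, and noise correlated with one response is contrasted against noise correlated with the other. The sketch below is a minimal Python illustration of that recipe, assuming a two-alternative face-discrimination task and simulated data; the variable names and the simulated observer are placeholders, not the authors' analysis code.

```python
import numpy as np

def classification_image(noise_fields, stimuli, responses):
    """Compute a classification image from per-trial noise fields.

    noise_fields : (n_trials, H, W) noise added on each trial
    stimuli      : (n_trials,) 0/1, which base face was shown
    responses    : (n_trials,) 0/1, which face the observer reported
    """
    noise_fields = np.asarray(noise_fields, dtype=float)
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)

    def mean_noise(stim, resp):
        mask = (stimuli == stim) & (responses == resp)
        # Guard against empty stimulus-response cells.
        if not mask.any():
            return np.zeros(noise_fields.shape[1:])
        return noise_fields[mask].mean(axis=0)

    # Noise associated with "face 1" responses minus noise associated
    # with "face 0" responses, summed across both stimulus types.
    return (mean_noise(0, 1) + mean_noise(1, 1)) - (mean_noise(0, 0) + mean_noise(1, 0))

# Example with simulated data: 1000 trials of 64x64 noise fields.
rng = np.random.default_rng(0)
noise = rng.normal(size=(1000, 64, 64))
stim = rng.integers(0, 2, size=1000)
resp = rng.integers(0, 2, size=1000)  # a real observer's responses would go here
ci = classification_image(noise, stim, resp)
```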

    Use of a real-life practical context changes the relationship between implicit body representations and real body measurements

    A mismatch exists between people’s mental representations of their own body and their real body measurements, which may impact general well-being and health. We investigated whether this mismatch is reduced when contextualizing body size estimation in a real-life scenario. Using a reverse correlation paradigm, we constructed unbiased, data-driven visual depictions of participants’ implicit body representations. Across three conditions—own abstract, ideal, and own concrete body—participants selected the body that looked most like their own, like the body they would like to have, or like the body they would use for online shopping. In the own concrete condition only, we found a significant correlation between perceived and real hip width, suggesting that the perceived/real body match only exists when body size estimation takes place in a practical context, although the negative correlation indicated inaccurate estimation. Further, participants who underestimated their body size or who had more negative attitudes towards their body weight showed a positive correlation between perceived and real body size in the own abstract condition. Finally, our results indicated that different body areas were implicated in the different conditions. These findings suggest that implicit body representations depend on situational and individual differences, which has clinical and practical implications.

    LDC was supported by Ministerio de Ciencia, Innovación y Universidades Juan de la Cierva-Incorporación Grant IJC2018-038347-I and the CONEX-Plus programme funded by Universidad Carlos III de Madrid and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 801538. ATJ was supported by Ministerio de Economía, Industria y Competitividad of Spain Ramón y Cajal Grant RYC-2014-15421. This research was partly funded by the Spanish Agencia Estatal de Investigación (PID2019-105579RB-I00/AEI/10.13039/501100011033). The authors would like to thank Martin Mojica-Benavides for his help in preparing the psychometric curve analyses.

    Point-of-gaze analysis reveals visual search strategies

    Quantifying the informational value of classification images

    Reverse correlation is an influential psychophysical paradigm that uses a participant’s responses to randomly varying images to build a classification image (CI), which is commonly interpreted as a visualization of the participant’s mental representation. It is unclear, however, how to statistically quantify the amount of signal present in CIs, which limits the interpretability of these images. In this article, we propose a novel metric, infoVal, which assesses informational value relative to a resampled random distribution and can be interpreted like a z score. In the first part, we define the infoVal metric and show, through simulations, that it adheres to typical Type I error rates under various task conditions (internal validity). In the second part, we show that the metric correlates with markers of data quality in empirical reverse-correlation data, such as the subjective recognizability, objective discriminability, and test–retest reliability of the CIs (convergent validity). In the final part, we demonstrate how the infoVal metric can be used to compare the informational value of reverse-correlation datasets, by comparing data acquired online with data acquired in a controlled lab environment. We recommend a new standard of good practice in which researchers assess the infoVal scores of reverse-correlation data in order to ensure that they do not read signal into CIs where none is present. The infoVal metric is implemented in the open-source rcicr R package, to facilitate its adoption.
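    The core idea of an infoVal-style score can be illustrated compactly: summarize the observed CI with a single statistic, rebuild that statistic many times from CIs computed under permuted (random) responses, and express the observed value as a z score against the resampled distribution. The Python sketch below is a simplified illustration of that logic, not the rcicr implementation; the choice of the CI norm as the summary statistic and all variable names are assumptions.

```python
import numpy as np

def info_val_score(noise_fields, responses, n_resamples=1000, seed=0):
    """Simplified infoVal-style score: z of the observed CI norm against
    CIs built from randomly permuted responses. (Illustration only.)"""
    rng = np.random.default_rng(seed)
    noise = np.asarray(noise_fields, dtype=float).reshape(len(responses), -1)
    resp = np.asarray(responses)

    def ci_norm(r):
        # Simple difference-of-means CI: mean of selected noise minus mean of rejected noise.
        ci = noise[r == 1].mean(axis=0) - noise[r == 0].mean(axis=0)
        return np.linalg.norm(ci)

    observed = ci_norm(resp)
    null = np.array([ci_norm(rng.permutation(resp)) for _ in range(n_resamples)])
    return (observed - null.mean()) / null.std()

# Example: 500 trials of 32x32 noise with uninformative (random) responses
# should yield a score near zero.
rng = np.random.default_rng(1)
print(info_val_score(rng.normal(size=(500, 32, 32)), rng.integers(0, 2, size=500)))
```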

    What surprises the Mona Lisa? The relative importance of the eyes and eyebrows for detecting surprise in briefly presented face stimuli

    The classification image (CI) technique has been used to derive templates for judgements of facial emotion and reveal which facial features inform specific emotional judgements. For example, this method has been used to show that detecting an up- or down-turned mouth is a primary strategy for discriminating happy versus sad expressions. We explored the detection of surprise using CIs, expecting widened eyes, raised eyebrows, and open mouths to be dominant features. We briefly presented a photograph of a female face with a neutral expression embedded in random visual noise, which modulated the appearance of the face on a trial-by-trial basis. In separate sessions, we showed this face with or without eyebrows to test the importance of the raised eyebrow element of surprise. Noise samples were aggregated into CIs based on participant responses. Results show that the eye-region was most informative for detecting surprise. Unless attention was specifically directed to the mouth, we found no effects in the mouth region. The eye effect was stronger when the eyebrows were absent, but the eyebrow region was not itself informative and people did not infer eyebrows when they were missing. A follow-up study was conducted in which participants rated the emotional valence of the neutral images combined with their associated CIs. This verified that CIs for 'surprise' convey surprised expressions, while also showing that CIs for 'not surprise' convey disgust. We conclude that the eye-region is important for the detection of surprise. [Abstract copyright: Copyright © 2023 The Author(s). Published by Elsevier Ltd. All rights reserved.]

    Audiovisual Integration of Time-to-Contact Information for Approaching Objects

    Previous studies of time-to-collision (TTC) judgments of approaching objects focused on effectiveness of visual TTC information in the optical expansion pattern (e.g., visual tau, disparity). Fewer studies examined effectiveness of auditory TTC information in the pattern of increasing intensity (auditory tau), or measured integration of auditory and visual TTC information. Here, participants judged TTC of an approaching object presented in the visual or auditory modality, or both concurrently. TTC information provided by the modalities was jittered slightly against each other, so that auditory and visual TTC were not perfectly correlated. A psychophysical reverse correlation approach was used to estimate the influence of auditory and visual cues on TTC estimates. TTC estimates were shorter in the auditory than the visual condition. On average, TTC judgments in the audiovisual condition were not significantly different from judgments in the visual condition. However, multiple regression analyses showed that TTC estimates were based on both auditory and visual information. Although heuristic cues (final sound pressure level, final optical size) and more reliable information (relative rate of change in acoustic intensity, optical expansion) contributed to auditory and visual judgments, the effect of heuristics was greater in the auditory condition. Although auditory and visual information influenced judgments, concurrent presentation of both did not result in lower response variability compared to presentation of either one alone; there was no multimodal advantage. The relative weightings of heuristics and more reliable information differed between auditory and visual TTC judgments, and when both were available, visual information was weighted more heavily.
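    The cue-weighting logic described here, jittering the auditory and visual TTC values against each other and regressing observers' TTC estimates on both cues, can be sketched as below. The simulated data, variable names, and weights are placeholders for illustration and are not the authors' stimuli or analysis code.

```python
import numpy as np

# Simulated example: per-trial auditory and visual TTC (seconds) are jittered
# against each other, and the observer's estimate depends on both cues.
rng = np.random.default_rng(1)
n_trials = 400
ttc_visual = rng.uniform(1.0, 3.0, n_trials)
ttc_auditory = ttc_visual + rng.normal(0.0, 0.2, n_trials)   # jittered cue
estimates = 0.7 * ttc_visual + 0.3 * ttc_auditory + rng.normal(0.0, 0.15, n_trials)

# Multiple regression of TTC estimates on both cues recovers the relative
# weighting of visual vs. auditory information.
X = np.column_stack([np.ones(n_trials), ttc_visual, ttc_auditory])
beta, *_ = np.linalg.lstsq(X, estimates, rcond=None)
intercept, w_visual, w_auditory = beta
print(f"visual weight ~ {w_visual:.2f}, auditory weight ~ {w_auditory:.2f}")
```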

    Multiple decisions about one object involve parallel sensory acquisition but time-multiplexed evidence incorporation.

    The brain is capable of processing several streams of information that bear on different aspects of the same problem. Here, we address the problem of making two decisions about one object, by studying difficult perceptual decisions about the color and motion of a dynamic random dot display. We find that the accuracy of one decision is unaffected by the difficulty of the other decision. However, the response times reveal that the two decisions do not form simultaneously. We show that both stimulus dimensions are acquired in parallel for the initial ∼0.1 s but are then incorporated serially in time-multiplexed bouts. Thus, there is a bottleneck that precludes updating more than one decision at a time, and a buffer that stores samples of evidence while access to the decision is blocked. We suggest that this bottleneck is responsible for the long timescales of many cognitive operations framed as decisions.

    What you see is what you feel: Top-down emotional effects in face detection

    Face detection is an initial step of many social interactions involving a comparison between a visual input and a mental representation of faces, built from previous experience. Furthermore, whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. In four studies and a computational model, we investigated how emotions affect mental representations of faces and how facial representations could be used to transmit and communicate people’s emotional states. To this end, we used an adapted reverse correlation technique suggested by Gill et al. (2019), which was based on an earlier idea of the ‘Superstitious Approach’ (Gosselin & Schyns, 2003). In Experiment 1 we measured how naturally occurring anxiety and depression, caused by external factors, affected people’s mental representations of faces. In two sessions, on separate days, participants (coders) were presented with ‘colourful’ visual noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments identified by the coders as a face, we reconstructed the pictorial mental representation utilised by each participant in the identification process. Across coders, we found significant correlations between changes in the size of the mental representation of faces and changes in their level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations can reflect their emotional state, we conducted a validation study (Experiment 2) with a group of naïve participants (verifiers) who were asked to classify the reconstructed mental representations of faces by emotion. Thus, we assessed whether the mental representations communicate coders’ emotional states to others. The analysis showed no significant correlation between coders’ emotional states, as depicted in their mental representations of faces, and verifiers’ evaluation scores. In Experiment 3, we investigated how different induced moods, negative and positive, affected mental representations of faces. Coders underwent two different mood induction conditions during two separate sessions. They were presented with the same ‘colourful’ noise stimuli used in Experiment 1 and asked to detect faces. We were able to reconstruct pictorial mental representations of faces based on the identified fragments. The analysis showed a significant negative correlation between changes in coders’ mood along the dimension of arousal and changes in the size of their mental representation of faces. Similar to Experiment 2, we conducted a validation study (Experiment 4) to investigate whether coders’ mood could have been communicated to others through their mental representations of faces. As in Experiment 2, we found no correlation between coders’ mood, as depicted in their mental representations of faces, and verifiers’ evaluation of the intensity of the transmitted emotional expression. Lastly, we tested a preliminary computational model (Experiment 5) to classify and predict coders’ emotional states based on their reconstructed mental representations of faces. In spite of the small number of training examples and the high dimensionality of the input, the model performed just above chance level. Future studies should look at the possibility of improving the computational model by using a larger training set and testing other classifiers. Overall, the present work confirmed the presence of facial templates used during face detection. It provides an adapted version of a reverse correlation technique that can be used to access mental representations of faces with a significant reduction in the number of trials. Lastly, it provides evidence of how emotions can influence the size of mental representations of faces.