
    Visual Intuitions in the Absence of Visual Experience: The Role of Direct Experience in Concreteness and Imageability Judgements

    The strongest formulations of grounded cognition assume that perceptual intuitions about concepts involve the re-activation of sensorimotor experience we have had with their referents in the world. Within this framework, concreteness and imageability ratings are of crucial importance, operationalising the amount of perceptual interaction we have had with objects. Here we tested this assumption by asking whether visual intuitions about concepts can be accurate even when direct visual experience is absent. To this aim, we considered concreteness and imageability intuitions in blind people and tested whether these judgements are predicted by Image-based Frequency (IF, i.e. a data-driven estimate approximating the availability of a word's referent in the visual environment). Results indicated that IF predicts perceptual intuitions to a larger extent in sighted than in blind individuals, suggesting a role of direct experience in shaping our judgements. However, the effect of IF was significant not only in sighted but also in blind individuals. This indicates that having direct visual experience with objects does not play a critical role in making them concrete and imageable in a person's intuitions: people do not need visual experience to develop intuitions about the availability of things in the external visual environment, and they use these intuitions to inform concreteness/imageability judgements. Our findings fit closely with the idea that perceptual judgements are the outcome of introspection/abstraction tasks invoking high-level conceptual knowledge that is not necessarily acquired via direct perceptual experience.
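The group comparison described above can be sketched as a regression of perceptual ratings on log Image-based Frequency, fit separately for each group. The data below are synthetic and the generative slopes are illustrative assumptions, not the study's actual values.

```python
import random

def slope(x, y):
    # Ordinary least-squares slope of y on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

random.seed(0)
# Synthetic log Image-based Frequency values for 200 words.
log_if = [random.uniform(0, 10) for _ in range(200)]
# Illustrative generative model: sighted ratings track IF more strongly
# (slope 0.6) than blind ratings (slope 0.3); both are noisy.
sighted = [0.6 * v + random.gauss(0, 1) for v in log_if]
blind = [0.3 * v + random.gauss(0, 1) for v in log_if]

b_sighted = slope(log_if, sighted)
b_blind = slope(log_if, blind)
# Pattern reported in the abstract: a larger IF effect in sighted
# participants, but a positive (non-zero) effect in blind participants too.
print(b_sighted > b_blind, b_blind > 0)
```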

    The Flickr frequency norms: what 17 years of images tagged online tell us about lexical processing

    Word frequency is one of the best predictors of language processing. Typically, word frequency norms are based entirely on natural-language text data, thus representing what the literature refers to as purely linguistic experience. This study presents the Flickr frequency norms as a novel word frequency measure from a domain-specific corpus inherently tied to extra-linguistic information: words used as image tags on social media. To obtain Flickr frequency measures, we exploited the photo-sharing platform Flickr (containing billions of photos) and extracted the number of uploaded images tagged with each of the words considered in the lexicon. Here we systematically examine the peculiarities of the Flickr frequency norms and show that Flickr frequency is a hybrid metric, lying at the intersection between language and visual experience, with specific biases induced by being based on image-focused social media. Moreover, regression analyses indicate that Flickr frequency captures additional information beyond what is already encoded in existing norms of linguistic, sensorimotor, and affective experience. Therefore, these new norms capture aspects of language usage that are missing from traditional frequency measures: a portion of language usage reflecting the interplay between language and vision, which, as this study demonstrates, has its own impact on word processing. The Flickr frequency norms are openly available on the Open Science Framework (https://osf.io/2zfs3/).
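The core of such a measure, counting images tagged with each word and log-transforming the counts, can be sketched as follows. The tag counts here are hypothetical stand-ins, not values from the published norms.

```python
import math

# Hypothetical per-word tag counts: number of images tagged with each word
# (the real norms are derived from billions of uploaded Flickr photos).
tag_counts = {"dog": 2_500_000, "cat": 2_100_000, "justice": 4_300, "zymurgy": 12}

# Log-transform the counts, as is standard for word-frequency norms, so the
# measure scales roughly linearly with behavioural effects; +1 avoids log(0).
flickr_freq = {w: math.log10(c + 1) for w, c in tag_counts.items()}

# Picturable, concrete words dominate an image-tag corpus.
print(flickr_freq["dog"] > flickr_freq["justice"])
```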

    Sustained-Paced Finger Tapping: A Novel Approach to Measure Internal Sustained Attention

    Sustained attention is a fundamental prerequisite for all cognitive functions, and its impairment is a common consequence of both developmental and acquired neurological disorders. To date, sustained attention tasks rely heavily on selective attention to external stimuli. The interaction between selective and sustained attention represents a limit in the field of assessment and may mislead researchers or distort conclusions. The aim of the present perspective study was to propose a sustained version of the Paced Finger Tapping test (S-PFT) as a novel approach to measure sustained attention that does not leverage external stimuli. Here, we administered the S-PFT and other attentional tasks (visual sustained attention, visuospatial attention capacity, selective attention, and divided attention tasks) to 85 adolescents. We provide evidence suggesting that the S-PFT is effective in causing performance decrement over time, an important hallmark of sustained attention tasks. We also present descriptive statistics showing the relationship between the S-PFT and the other attentional tasks. These analyses show that, unlike visual sustained attention tests, performance on our task of internal sustained attention was not correlated with measures of selective attention and visuospatial attention capacity. Our results suggest that the S-PFT could represent a promising alternative tool both for empirical research and for clinical assessment of sustained attention.

    ViSpa (Vision Spaces): A computer-vision-based representation system for individual images and concept prototypes, with large-scale evaluation

    Quantitative, data-driven models for mental representations have long enjoyed popularity and success in psychology (for example, distributional semantic models in the language domain), but have largely been missing for the visual domain. To overcome this, we present ViSpa (Vision Spaces), high-dimensional vector spaces that include vision-based representations for naturalistic images as well as concept prototypes. These vectors are derived directly from visual stimuli through a deep convolutional neural network (DCNN) trained to classify images, and allow us to compute vision-based similarity scores between any pair of images and/or concept prototypes. We successfully evaluate these similarities against human behavioural data in a series of large-scale studies, including off-line judgments (visual similarity judgments for the referents of word pairs in Study 1 and for image pairs in Study 2, and typicality judgments for images given a label in Study 3) as well as on-line processing times and error rates in a discrimination task (Study 4) and a priming task (Study 5) with naturalistic image material. ViSpa similarities predict behavioural data across all tasks, which renders ViSpa a theoretically appealing model for vision-based representations and a valuable research tool for data analysis and the construction of experimental material: ViSpa allows for precise control over experimental material consisting of images (also in combination with words), and introduces a specifically vision-based similarity for word pairs. To make ViSpa available to a wide audience, this article a) includes (video) tutorials on how to use ViSpa in R, and b) presents a user-friendly web interface at http://vispa.fritzguenther.de.
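A minimal sketch of the two operations the abstract describes: cosine similarity between vectors, and concept prototypes formed by averaging image vectors. The toy four-dimensional activations below stand in for real DCNN features, which are far higher-dimensional.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prototype(vectors):
    # Concept prototype as the element-wise mean of its image vectors.
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

# Toy 4-dimensional "DCNN activations" for three dog images and one car image.
dog_imgs = [[0.9, 0.1, 0.2, 0.0], [0.8, 0.2, 0.1, 0.1], [1.0, 0.0, 0.3, 0.0]]
car_img = [0.1, 0.9, 0.0, 0.8]

dog_proto = prototype(dog_imgs)
# A new dog image should be closer to the dog prototype than the car image is.
new_dog = [0.85, 0.15, 0.2, 0.05]
print(cosine(new_dog, dog_proto) > cosine(car_img, dog_proto))
```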

    Data-driven computational models reveal perceptual simulation in word processing

    In their strongest formulation, theories of grounded cognition claim that concepts are made up of sensorimotor information. Following such equivalence, perceptual properties of objects should consistently influence processing, even in purely linguistic tasks, where perceptual information is neither solicited nor required. Previous studies have tested this prediction in semantic priming tasks, but they have not observed perceptual influences on participants' performance. However, those findings suffer from critical shortcomings, which may have prevented potential visually grounded/perceptual effects from being detected. Here, we investigate this topic by applying an innovative method expected to increase the sensitivity in detecting such perceptual effects. Specifically, we adopt an objective, data-driven, computational approach to independently quantify vision-based and language-based similarities for prime-target pairs on a continuous scale. We test whether these measures predict behavioural performance in a semantic priming mega-study with various experimental settings. Vision-based similarity was found to facilitate performance, but a dissociation between vision-based and language-based effects was also observed. Thus, in line with theories of grounded cognition, perceptual properties can facilitate word processing even in purely linguistic tasks, but the behavioural dissociation at the same time challenges strong claims of sensorimotor and conceptual equivalence.
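The regression logic, predicting behavioural performance from independently quantified vision-based and language-based similarity entered together, can be sketched with synthetic data. All coefficients and distributions below are illustrative assumptions, not the mega-study's values.

```python
import random

def center(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def ols2(x1, x2, y):
    # Two-predictor OLS on mean-centered variables (2x2 normal equations).
    x1, x2, y = center(x1), center(x2), center(y)
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * c for a, c in zip(x1, y))
    s2y = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

random.seed(1)
n = 500
# Synthetic prime-target pairs: language- and vision-based similarity on
# [0, 1], mildly correlated with each other, as in real norms.
lang = [random.random() for _ in range(n)]
vis = [0.5 * l + 0.5 * random.random() for l in lang]
# Illustrative response times: both similarities facilitate (negative
# slopes, i.e. faster responses for more similar pairs), plus noise.
rt = [700 - 40 * v - 25 * l + random.gauss(0, 20) for v, l in zip(vis, lang)]

b_vis, b_lang = ols2(vis, lang, rt)
# Each similarity contributes beyond the other: both slopes stay negative.
print(b_vis < 0, b_lang < 0)
```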

    Automated scoring for a Tablet-based Rey Figure copy task differentiates constructional, organisational, and motor abilities

    Accuracy in copying a figure is one of the most sensitive measures of visuo-constructional ability. However, drawing tasks also involve other cognitive and motor abilities, which may influence the final graphic produced. Nevertheless, these aspects are not taken into account in conventional scoring methodologies. In this study, we implemented a novel tablet-based assessment that acquires data and information for the entire execution of the Rey Complex Figure copy task (T-RCF). This system extracts 12 indices capturing various dimensions of drawing ability. We also analysed the structure of relationships between these indices and provided insights into the constructs that they capture. 102 healthy adults completed the T-RCF. A subgroup of 35 participants also completed a paper-and-pencil drawing battery from which constructional, procedural, and motor measures were obtained. Principal component analysis of the T-RCF indices was performed, identifying spatial, procedural, and kinematic components as distinct dimensions of drawing execution. Accordingly, a composite score for each dimension was determined. Correlational analyses provided indications of their validity by showing that spatial, procedural, and kinematic scores were associated with constructional, organisational, and motor measures of drawing, respectively. Importantly, final copy accuracy was found to be associated with all of these aspects of drawing. In conclusion, copying complex figures entails an interplay of multiple functions. The T-RCF provides a unique opportunity to analyse the entire drawing process and to extract scores for three critical dimensions of drawing execution.
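How a composite score for one dimension might be derived from z-scored indices can be sketched as follows. The index names and values are hypothetical, and the real pipeline additionally uses principal component analysis to decide which of the twelve indices group together.

```python
import statistics as st

# Hypothetical values for three indices loading on a "kinematic" component,
# one value per participant (names and numbers are illustrative only).
drawing_speed = [3.1, 2.8, 3.5, 2.2, 3.0]     # cm/s (higher = faster)
pause_time = [12.0, 15.5, 9.8, 20.1, 13.2]    # s (higher = slower execution)
pen_lifts = [30, 41, 25, 48, 33]              # count (higher = less fluent)

def zscores(xs):
    # Standardise an index across participants.
    m, s = st.mean(xs), st.stdev(xs)
    return [(x - m) / s for x in xs]

# Composite score: mean of z-scored indices, with "slowness" measures
# sign-flipped so that higher always means better kinematic performance.
z_speed = zscores(drawing_speed)
z_pause = [-z for z in zscores(pause_time)]
z_lifts = [-z for z in zscores(pen_lifts)]

kinematic = [st.mean(vals) for vals in zip(z_speed, z_pause, z_lifts)]
print(len(kinematic))
```

Because each index is z-scored, the composite is centred at zero across the sample; a participant's score reflects how far their drawing kinematics deviate from the group mean.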

    Tablet-based Rey–Osterrieth Complex Figure copy task: a novel application to assess spatial, procedural, and kinematic aspects of drawing in children

    The paper-and-pencil Rey–Osterrieth Complex Figure (ROCF) copy task has been extensively used to assess visuo-constructional skills in children and adults. The scoring systems utilised in clinical practice provide an integrated evaluation of the drawing process, without differentiating between its visuo-constructional, organisational, and motor components. Here, a tablet-based ROCF copy task capable of providing a quantitative assessment of the drawing process, differentiating between visuo-constructional, organisational, and motor skills, was trialed in 94 healthy children aged between 7 and 11 years. Through previously validated algorithms, 12 indices of performance in the ROCF copy task were obtained for each child. Principal component analysis of the 12 indices identified spatial, procedural, and kinematic components as distinct dimensions of the drawing process. A composite score for each dimension was determined, and correlation analysis between composite scores and conventional paper-and-pencil measures of visuo-constructional, procedural, and motor skills was performed. The results confirmed that the constructional, organisational, and motor dimensions underlie complex figure drawing in children, and that each dimension can be measured by a unique composite score. In addition, the composite scores obtained here from children were compared with previous results from adults, offering a novel insight into how the interplay between the three dimensions of drawing evolves with age.