
    Linking design intention and users' interpretation through image schemas

    Usability is often defined as the ease of use of a product, but this definition does not capture other important characteristics of product design, such as being effective, efficient, engaging, error-free and easy to learn. Usability is not only about measuring how people use a product; more importantly, it is about exploring the relationship between how designers intended their products to be used and how users interpret these designs. Previous research has shown the feasibility of using image schemas to evaluate intuitive interactions. This paper extends that research by proposing a method that uses image schemas to evaluate usability by measuring the gap between design intention and users’ interpretations of the design. The design intention is extracted from the user manual, while the way users interpret the design features is captured using direct observation, a think-aloud protocol and a structured questionnaire. The proposed method is illustrated with a case study involving 42 participants. The results show a close correlation between usability and the distance between design intent and users’ interpretation.
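    One simple way to quantify a gap between two sets of image-schema labels (this is an illustrative sketch, not the metric the paper itself proposes, and the schema labels below are invented) is the Jaccard distance between the schemas extracted from the manual and those inferred from user observation:

    ```python
    def jaccard_distance(intended: set, interpreted: set) -> float:
        """Jaccard distance between two sets of image-schema labels.
        0.0 means the sets overlap perfectly; 1.0 means no overlap."""
        if not intended and not interpreted:
            return 0.0
        union = intended | interpreted
        overlap = intended & interpreted
        return 1.0 - len(overlap) / len(union)

    # Hypothetical schema labels for one design feature
    design_intent = {"UP-DOWN", "CONTAINER", "PATH"}
    user_reading = {"UP-DOWN", "CONTAINER", "BLOCKAGE"}
    gap = jaccard_distance(design_intent, user_reading)  # 2 shared of 4 total -> 0.5
    ```

    A gap of 0 would indicate that users recover exactly the schemas the designer intended; values near 1 would flag features whose interpretation diverges sharply from the design intent.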

    Visual and auditory perceptual strength norms for 3,596 French nouns and their relationship with other psycholinguistic variables

    Perceptual experience plays a critical role in the conceptual representation of words. Higher levels of semantic variables such as imageability, concreteness, and sensory experience are generally associated with faster and more accurate word processing. Nevertheless, these variables tend to be assessed mostly on the basis of visual experience, which underestimates the potential contributions of other perceptual modalities. Accordingly, recent evidence has stressed the importance of providing modality-specific perceptual strength norms. In the present study, we developed French Canadian norms of visual and auditory perceptual strength (i.e., the modalities that have a major impact on word processing) for 3,596 nouns. We then explored the relationship between these newly developed variables and other lexical, orthographic, and semantic variables. Finally, we demonstrated the contributions of visual and auditory perceptual strength ratings to visual word processing beyond those of other semantic variables related to perceptual experience (e.g., concreteness, imageability, and sensory experience ratings). The ratings developed in this study are a meaningful contribution toward the implementation of new studies that will shed further light on the interaction between linguistic, semantic, and perceptual systems.

    Sensorimotor Norms: Perception and Action Strength norms for 40,000 words

    Sensorimotor information plays a fundamental role in cognition. However, datasets of ratings of sensorimotor experience have generally been restricted to several hundred words, leading to limited linguistic coverage and reduced statistical power for more complex analyses. Here, we present modality-specific and effector-specific norms for 39,954 concepts across six sensory modalities (touch, hearing, smell, taste, vision, and interoception) and five action effectors (mouth/throat, hand/arm, foot/leg, head excluding mouth, and torso), which were gathered from 4,557 participants who completed a total of 32,456 surveys using Amazon's Mechanical Turk platform. The dataset therefore represents one of the largest sets of semantic norms currently available. We describe the data collection procedures, provide summary descriptives of the dataset, demonstrate the utility of the norms in predicting lexical decision times and accuracy, offer new insights, and outline avenues for future research. Our findings will be of interest to researchers in embodied cognition, cognitive semantics, sensorimotor processing, and the psychology of language generally. The scale of this dataset will also facilitate computational modelling and big-data approaches to the analysis of language and conceptual representations.

    Feeling better: Tactile verbs speed up tactile detection

    Embodiment of action-related language in the motor system has been extensively documented, yet the case of sensory words, especially those referring to touch, remains overlooked. We investigated the influence of verbs denoting tactile sensations on tactile perception. In Experiment 1, participants detected tactile stimulations on their forearm, preceded by tactile or non-tactile verbs at one of three delays (170, 350, or 500 ms) reflecting different word-processing stages. Results revealed shorter reaction times to tactile stimulations following tactile than non-tactile verbs, irrespective of delay. To ensure that priming pertained to tactile, rather than motor, verb properties, Experiment 2 compared the impact of tactile verbs to both action and non-tactile verbs, while stimulations were delivered to the index finger. No priming emerged following action verbs, which does not support the motor-grounded interpretation. Facilitation by tactile verbs was, however, not observed either, possibly owing to methodological changes. Experiment 3, identical to Experiment 2 except that stimulation was delivered to participants’ forearm, replicated the priming effect. Importantly, tactile stimulations were detected faster after tactile verbs than after both non-tactile and action verbs, indicating that verbs’ tactile properties engage resources shared with sensory perception. Our findings suggest that language conveying tactile information can activate somatosensory representations and subsequently promote tactile detection.

    Effects of spatial language cues on attention and the perception of ambiguous images

    It’s a bird! It’s a plane! It’s Superman!? Sometimes there are things in our world that are ambiguous. An ambiguous object, for the purposes of this thesis, is any object that has more than one interpretation. The brain is designed to “fill in the blanks” and make sense of the world. Thus, it will use anything available, like language, to help resolve the ambiguity. Language can change how we perceive information in the world (Dils & Boroditsky, 2010) and where we direct our attention (Ostarek & Vigliocco, 2017; Estes, Verges, & Barsalou, 2008; Estes, Verges, & Adelman, 2015). Language can play a role in resolving ambiguity by directing attention in certain directions. For example, if I say “upward” and you see something in the sky, you might be inclined to perceive items that are typical of that location (e.g., a bird or a plane) rather than atypical items (e.g., a wrench) (Estes, Verges, & Adelman, 2015; Estes, Verges, & Barsalou, 2008). However, to date, no study has investigated whether such spatial language cues (like “upward” and “downward”) can affect the interpretation of an ambiguous stimulus. The aim of this thesis is to explore the effect of spatial language cues on the perception of ambiguous images.

    Toward a more embedded/extended perspective on the cognitive function of gestures

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask the question: how can gestures support the internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures lack the explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures give the cognitive system a stable external physical and visual presence that can provide a means to think with. We show that there is considerable overlap between the way the human cognitive system has been found to use its environment and how gestures are used during cognitive processes. Lastly, we provide several suggestions for how to investigate the embedded/extended perspective on the cognitive function of gestures.

    The Lancaster Sensorimotor Norms: Multidimensional measures of Perceptual and Action Strength for 40,000 English words

    Sensorimotor information plays a fundamental role in cognition. However, the existing materials that measure the sensorimotor basis of word meanings and concepts have been restricted in terms of their sample size and breadth of sensorimotor experience. Here we present norms of sensorimotor strength for 39,707 concepts across six perceptual modalities (touch, hearing, smell, taste, vision, and interoception) and five action effectors (mouth/throat, hand/arm, foot/leg, head excluding mouth/throat, and torso), gathered from a total of 3,500 individual participants using Amazon’s Mechanical Turk platform. The Lancaster Sensorimotor Norms are unique and innovative in a number of respects: they represent the largest-ever set of semantic norms for English, at 40,000 words × 11 dimensions (plus several informative cross-dimensional variables); they extend perceptual strength norming to the new modality of interoception; and they include the first norming of action strength across separate bodily effectors. In the first study, we describe the data collection procedures, provide summary descriptives of the dataset, and interpret the relations observed between sensorimotor dimensions. We then report two further studies, in which we (1) extracted an optimal single-variable composite of the 11-dimension sensorimotor profile (Minkowski 3 strength) and (2) demonstrated the utility of both perceptual and action strength in facilitating lexical decision times and accuracy in two separate datasets. These norms provide a valuable resource to researchers in diverse areas, including psycholinguistics, grounded cognition, cognitive semantics, knowledge representation, machine learning, and big-data approaches to the analysis of language and conceptual representations. The data are accessible via the Open Science Framework (http://osf.io/7emr6/) and an interactive web application (https://www.lancaster.ac.uk/psychology/lsnorms/).

    An embodied approach to language comprehension in probable Alzheimer’s Disease: could perceptuo-motor processing be a key to better understanding?

    One of the central tenets of the embodied theory of language comprehension is that the process of understanding prompts the same perceptuo-motor activity involved in actual perception and action. This activity is a component of comprehension that is not memory-dependent and is hypothesized to be intact in probable Alzheimer’s Disease (pAD). Each article in this thesis is aimed at answering the question whether individuals with pAD, healthy older adults and younger adults show differences in their performance on tests where perceptual and motoric priming take place during language comprehension. The second question each article asks is whether language comprehension in AD can be facilitated by the specific use of this perceptual and motoric priming. Article I examines whether the way individuals with pAD represent verbs spatially matches the way healthy older and younger adults do, and how stable these representations are. It also explores in what way spatial representations may relate to verb comprehension; more specifically, whether representations matching the norms translate into a better quality of verb comprehension. Article II tests the interaction between the verbs’ spatial representations taking place during comprehension and perceptual cues, compatible or incompatible with the representations, in order to investigate whether individuals with pAD show differences in susceptibility to perceptual cues, compared to healthy older and younger participants. The second aim of this article is to explore in what way performance on a word-picture verification task can be affected, given that in previous studies on young participants, both priming and interference have resulted from the interaction of linguistic and perceptual processing.
Article III explores the Action Compatibility Effect (ACE) (Glenberg & Kaschak, 2002) with the aim of finding out whether the ACE exists for volunteers with pAD and whether it can facilitate language comprehension. The order of presentation of language and movement is manipulated to establish whether there is a reciprocal relationship between them. This information could be crucial in view of possible applications to individuals with pAD. These articles test, for the first time, the effects of manipulating the perceptuo-motor component during language comprehension in individuals with pAD; they are intended as a methodological exploration contributing to a better understanding of the potential of embodiment principles to support language comprehension changes associated with pAD. Embodiment effects need to be studied further with a view to putting them to use in either clinical or real-life applications.

    Social cognitive consequences of differences in the emotional grounding of concepts: the role of embodiment

    The present work examines the affective grounding of first/native (L1) and second/learned (L2) languages, and how they differentially impact intra-individual, inter-individual and intergroup processes. In the first chapter we framed our work in the Socially Situated Cognition approach, and proposed the application of its assumptions to linguistic communication. In the second chapter we reviewed literature showing the differences in processing L1 and L2, and concluded that these languages are not likely to be grounded in the same way. In the first empirical chapter we examined this assumption in two affective priming experiments. Congruency effects were observed only in L1 for prime/target word pairs, and in both L1 and L2 for word/photo pairs (facial expressions). These results suggest different groundings of L1 and L2, and that the presence of facial expressions, which facilitate affective simulation processes, may overrule L2 constraints. The second set of three experiments revealed that L2 induces social distance and a more abstract type of processing. Moreover, the social distance induced by L2 was mediated by a more abstract construal level, which is consistent with the disembodied nature of L2. The last set of two experiments indicates that the evaluation of sentences with affective content, presented in L1 or L2, depends on their valence and on the group membership of the described targets. Affective simulation (measured with EMG) was more intense in L1 and for the in-group, and differences in the simulation of in-group/out-group sentences were enhanced in L2.
The last chapter presents a summary of the main findings, their contributions and limitations, and suggests future research directions.

    From Amodal to Grounded to Hybrid Accounts of Knowledge: New Evidence from the Investigation of the Modality-Switch Effect

    My dissertation sets out to contribute to the ongoing theoretical debate on the format of conceptual representation from both a theoretical (Part 1) and an experimental point of view (Part 2). From a theoretical point of view, it is argued that the amodal and grounded views do not make incompatible claims. On the contrary, grounded cognition has complemented traditional approaches by taking into account the modalities, the body, and the environment’s influence on cognitive mechanisms. From an experimental point of view, this dissertation is committed to testing predictions from grounded accounts of knowledge. Specifically, it aims to verify the assumption that modality-specific representations underlie concepts and conceptual processing, through the investigation of the Modality-Switch Effect: a cost to performance in terms of speed and accuracy that occurs when properties from two different sensory modalities alternate, compared to when properties from the same sensory modality are presented. Four experiments were conducted. Experiments 1 & 2 (Study 1) allowed the author to demonstrate that the Modality-Switch Effect is a robust, automatic effect arising during both reading and speech processing. Experiments 3 & 4 (Study 2) assessed the impact of the mode of presentation of stimuli (i.e., visual: through the monitor; aural: through a pair of headphones) on the conceptual Modality-Switch Effect. It is shown that the mode of presentation weakens the conceptual Modality-Switch Effect in both property verification and lexical decision priming paradigms. In sum, the extensive analysis of the amodal and grounded views, taken together with the novel findings reported in this dissertation, led the author to suggest that hybrid approaches, which combine aspects of both views, should be preferred over amodal-only and grounded-only accounts.