
    The role of phonological and executive working memory resources in simple arithmetic strategies

    The current study investigated the role of the central executive and the phonological loop in the arithmetic strategies used to solve simple addition problems (Experiment 1) and simple subtraction problems (Experiment 2). The choice/no-choice method was used to investigate strategy execution and strategy selection independently. The central executive was involved in both retrieval and procedural strategies, but played a larger role in the latter than in the former. Active phonological processes played a role in procedural strategies only. Finally, passive phonological resources were needed only when counting was used to solve subtraction problems. No effects of working memory load on strategy selection were observed.

    Listeners normalize speech for contextual speech rate even without an explicit recognition task

    Speech can be produced at different rates. Listeners take this rate variation into account by normalizing vowel duration for contextual speech rate: an ambiguous Dutch word /m?t/ is perceived as short /mAt/ when embedded in a slow context, but as long /ma:t/ in a fast context. Whilst some have argued that this rate normalization involves low-level automatic perceptual processing, there is also evidence that it arises at higher-level cognitive processing stages, such as decision making. Prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, involving both perceptual processing and decision making. This study tested whether speech rate normalization can be observed without explicit decision making, using a cross-modal repetition priming paradigm. Results show that a fast precursor sentence makes an embedded ambiguous prime (/m?t/) sound (implicitly) more /a:/-like, facilitating lexical access to the long target word "maat" in an (explicit) lexical decision task. This result suggests that rate normalization is automatic, taking place even in the absence of an explicit recognition task. Thus, rate normalization is placed within the realm of everyday spoken conversation, where explicit categorization of ambiguous sounds is rare.

    Gaze-based rehearsal in children under 7: A developmental investigation of eye movements during a serial spatial memory task

    The emergence of strategic verbal rehearsal at around 7 years of age is widely considered a major milestone in descriptions of the development of short‐term memory across childhood. Likewise, rehearsal is believed by many to be a crucial factor in explaining why memory improves with age. This apparent qualitative shift in mnemonic processes has also been characterized as a shift from passive visual to more active verbal mnemonic strategy use, but no investigation of the development of overt spatial rehearsal has informed this explanation. We measured serial spatial order reconstruction in adults and in groups of children aged 5–7 and 8–11 years while recording their eye movements. Children, particularly the youngest, overtly fixated late‐list spatial positions longer than adults did, suggesting that younger children are less likely than older children and adults to engage in covert rehearsal during stimulus presentation. However, during retention the youngest children overtly fixated more of the to‐be‐remembered sequences than any other group. Altogether, these data are inconsistent with the notion that children under 7 make no attempt to remember; they are most consistent with proposals that children's style of remembering shifts around age 7 from reactive, cue‐driven methods to proactive, covert methods, which may include cumulative rehearsal.

    Exploring Translation and Interpreting Hybrids. The Case of Sight Translation

    This article reports on a comparative study of written translation and sight translation, drawing on experimental data combining keystroke logging, eye-tracking and quality ratings of spoken and written output produced by professional translators and interpreters. Major differences in output rate were observed when comparing oral and written modalities. Evaluation of the translation products showed that the lower output rate in the written condition was not justified by significantly higher quality in the written products. Observations from the combination of data sources point to fundamental behavioural differences between interpreters and translators. Overall, working in the oral modality seems to have a lot to offer in terms of saving time and effort without compromising the output quality, and there seems to be a case for increasing the role of oral translation in translator training, incorporating it as a deliberate practice activity.

    Conversational topic moderates visual attention to faces in autism spectrum disorder

    Autism Spectrum Disorder (ASD) is often accompanied by atypical visual attention to faces. Previous studies have identified some predictors of atypical visual attention in ASD, but very few have explored the role of conversational context. In this study, the fixation patterns of 19 typically developing (TD) children and 18 children with ASD were assessed during a Skype conversation in which participants were asked to converse about mundane vs. emotion-laden topics. We hypothesized that 1) children with ASD would visually attend less to the eye region and more to the mouth region of the face compared to TD children, and that 2) this effect would be exaggerated in the emotion-laden conversation. With regard to hypothesis 1, we found no difference between groups for either number of fixations or fixation time; however, children with ASD did show significantly more off-screen looking time than their TD peers. An additional analysis showed that, compared to the TD group, the ASD group also had longer average fixation durations when looking at their speaking partner's face (both eyes and mouth) across conversational contexts. In support of hypothesis 2, eye-tracking data (corrected for conversation duration) revealed two interaction effects: compared to the TD group, the ASD group showed 1) fewer fixations to eyes and 2) longer fixation time to mouths, but only in the emotion-laden conversation. We also examined variables that predicted the decreased number of eye fixations and the increased mouth-looking in ASD in the emotion-laden conversation. Change scores (the degree to which visual attention shifted from the mundane to the emotion-laden condition) for the ASD group correlated negatively with age, perceptual reasoning skills, verbal ability, general IQ, theory of mind (ToM) competence, and executive function (EF) subscales, and positively with autism severity. Cognitive mechanisms at play and implications for theory and clinical practice are considered.

    What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video

    This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attracting force of holds is unaffected by changes in the social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental but ecologically valid explorations of cross-modal information processing.

    A distributional model of semantic context effects in lexical processing

    One of the most robust findings of experimental psycholinguistics is that the context in which a word is presented influences the effort involved in processing that word. We present a novel model of contextual facilitation based on word co-occurrence probability distributions, and empirically validate the model through simulation of three representative types of context manipulation: single-word priming, multiple priming and contextual constraint. In our simulations the effects of semantic context are modeled using general-purpose techniques and representations from multivariate statistics, augmented with simple assumptions reflecting the inherently incremental nature of speech understanding. The contribution of our study is to show that special-purpose mechanisms are not necessary in order to capture the general pattern of the experimental results, and that a range of semantic context effects can be subsumed under the same principled account.
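    The core idea of such distributional approaches can be illustrated with a minimal sketch. This is not the authors' implementation: the toy corpus, window size, and cosine-similarity measure are all assumptions chosen for illustration. The sketch builds co-occurrence probability distributions from windowed counts and uses distributional similarity between a prime and a target as a crude proxy for predicted priming facilitation.

```python
# Minimal sketch (assumed setup, not the paper's model): contextual
# facilitation approximated by similarity of co-occurrence distributions.
from collections import Counter
import math

# Toy corpus (hypothetical data for illustration only).
corpus = [
    "the doctor treated the patient in the hospital",
    "the nurse helped the doctor at the hospital",
    "the driver parked the car near the garage",
    "the mechanic fixed the car in the garage",
]

WINDOW = 3  # co-occurrence window size (an arbitrary choice)

def cooc_distribution(word):
    """Estimate P(context word | target word) from windowed co-occurrence counts."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def similarity(prime, target):
    """Cosine similarity between two co-occurrence distributions: a crude
    proxy for the priming-related reduction in processing effort."""
    p, q = cooc_distribution(prime), cooc_distribution(target)
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# A semantically related prime should yield higher distributional
# similarity (more predicted facilitation) than an unrelated one.
related = similarity("doctor", "nurse")
unrelated = similarity("driver", "nurse")
```

    On this toy corpus, "doctor" and "nurse" share context words and so score higher than "driver" and "nurse", mirroring the qualitative pattern of single-word priming; the published model additionally handles multiple priming and incremental contextual constraint, which this sketch does not attempt.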

    Best Practices for Evaluating Flight Deck Interfaces for Transport Category Aircraft with Particular Relevance to Issues of Attention, Awareness, and Understanding CAST SE-210 Output 2 Report 6 of 6

    Attention, awareness, and understanding on the part of the flight crew are critical contributors to safety, and the flight deck plays a critical role in supporting these cognitive functions. Changes to the flight deck need to be evaluated for whether the changed device provides adequate support for these functions. This report describes a set of diverse evaluation methods. The report recommends designing the interface evaluation to span the phases of device development, from early to late, and it provides methods appropriate at each phase. It describes the various ways in which an interface or interface component can fail to support awareness, as potential issues to be assessed in evaluation. It summarizes appropriate methods for evaluating different issues concerning inadequate support for these functions throughout the phases of development.

    Attentional Capture of Objects Referred to by Spoken Language

    Participants saw a small number of objects in a visual display and performed a visual-detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In Experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In Experiment 3, the cue was a small shift in the location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention.