Deficits of knowledge versus executive control in semantic cognition: Insights from cued naming
Deficits of semantic cognition in semantic dementia and in aphasia consequent on CVA (stroke) are qualitatively different. Patients with semantic dementia are characterised by progressive degradation of central semantic representations, whereas multimodal semantic deficits in stroke aphasia reflect impairment of executive processes that help to direct and control semantic activation in a task-appropriate fashion [Jefferies, E., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain 129, 2132-2147]. We explored interactions between these two aspects of semantic cognition by examining the effects of cumulative phonemic cueing on picture naming in case series of these two types of patient. The stroke aphasic patients with multimodal semantic deficits cued very readily and demonstrated near-perfect name retrieval when cumulative phonemic cues reached or exceeded the target name's uniqueness point. Therefore, knowledge of the picture names was largely intact for the aphasic patients, but they were unable to retrieve this information without cues that helped to direct activation towards the target response. Equivalent phonemic cues engendered significant but much more limited benefit to the semantic dementia patients: their naming was still severely impaired even when most of the word had been provided. In contrast to the pattern in the stroke aphasia group, successful cueing was mainly confined to the more familiar un-named pictures. We propose that this limited cueing effect in semantic dementia follows from the fact that concepts deteriorate in a graded fashion [Rogers, T. T., Lambon Ralph, M. A., Garrard, P., Bozeat, S., McClelland, J. L., & Hodges, J. R., et al. (2004). The structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review 111, 205-235]. 
For partially degraded items, the residual conceptual knowledge may be insufficient to drive speech production to completion, but these items might reach threshold when they are bolstered by cues.
Spontaneous and deliberate future thinking: A dual process account
© 2019 Springer Nature. This is the final published version of an article published in Psychological Research, licensed under a Creative Commons Attribution 4.0 International License. Available online at: https://doi.org/10.1007/s00426-019-01262-7. In this article, we address an apparent paradox in the literature on mental time travel and mind-wandering: how is it possible that future thinking is both constructive, yet often experienced as occurring spontaneously? We identify and describe two "routes" whereby episodic future thoughts are brought to consciousness, with each of the "routes" being associated with separable cognitive processes and functions. Voluntary future thinking relies on controlled, deliberate and slow cognitive processing. The other, termed involuntary or spontaneous future thinking, relies on automatic processes that allow "fully-fledged" episodic future thoughts to come freely to mind, often triggered by internal or external cues. To unravel the paradox, we propose that the majority of spontaneous future thoughts are "pre-made" (i.e., each spontaneous future thought is a re-iteration of a previously constructed future event), and are therefore based on simple, well-understood memory processes. We also propose that the pre-made hypothesis explains why spontaneous future thoughts occur rapidly, are similar to involuntary memories, and are predominantly about upcoming tasks and goals. We also raise the possibility that spontaneous future thinking is the default mode of imagining the future. This dual-process approach complements and extends standard theoretical approaches that emphasise constructive simulation, and outlines novel opportunities for researchers examining voluntary and spontaneous forms of future thinking.
Combining relevance information in a synchronous collaborative information retrieval environment
Traditionally, information retrieval (IR) research has focussed on a single-user interaction modality, in which one user searches to satisfy an information need. Recent advances in both web technologies, such as the sociable web of Web 2.0, and computer hardware, such as tabletop interface devices, have enabled multiple users to collaborate on many computer-related tasks. Because of these advances, there is an increasing need to support two or more users searching together at the same time in order to satisfy a shared information need, which we refer to as Synchronous Collaborative Information Retrieval (SCIR). SCIR represents a significant paradigmatic shift from traditional IR systems, and new techniques are required to coordinate users' activities in order to support an effective SCIR search. In this chapter we explore the effectiveness of a sharing-of-knowledge policy on a collaborating group. Sharing of knowledge refers to the process of passing relevance information across users: if one user finds items relevant to the search task, then the group should benefit in the form of improved ranked lists returned to each searcher. In order to evaluate the proposed techniques we simulate two users searching together through an incremental feedback system. The simulation assumes that the users decide on an initial query with which to begin the collaborative search and proceed through the search by providing relevance judgments to the system and receiving a new ranked list. In order to populate these simulations we extract data from the interaction logs of various experimental IR systems from previous Text REtrieval Conference (TREC) workshops.
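The sharing-of-knowledge policy described in this abstract can be illustrated with a minimal sketch. The chapter's actual incremental feedback system is not specified here, so everything below is an assumption: a Rocchio-style query expansion over bag-of-words term frequencies, in which the relevance judgments of two collaborating searchers are pooled into one expanded query that is then used to re-rank the collection for both users. All function names, weights, and example documents are illustrative.

```python
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a short document."""
    return Counter(text.lower().split())

def rocchio(query_vec, relevant_vecs, alpha=1.0, beta=0.75):
    """Rocchio update: move the query toward the centroid of judged-relevant docs."""
    new_q = Counter({t: alpha * w for t, w in query_vec.items()})
    for vec in relevant_vecs:
        for t, w in vec.items():
            new_q[t] += beta * w / len(relevant_vecs)
    return new_q

def rank(query_vec, docs):
    """Rank documents by dot-product similarity with the (expanded) query."""
    def score(doc):
        v = vectorize(doc)
        return sum(query_vec[t] * w for t, w in v.items())
    return sorted(docs, key=score, reverse=True)

docs = [
    "tabletop interface devices for collaboration",
    "sociable web technologies of web 2.0",
    "gardening tips for spring",
]
query = vectorize("collaborative web search")

# Each searcher judges a different document relevant; pooling both
# judgments yields a single shared query, so each user's next ranked
# list benefits from the other's finds.
judged_by_user_a = [vectorize(docs[0])]
judged_by_user_b = [vectorize(docs[1])]
shared_query = rocchio(query, judged_by_user_a + judged_by_user_b)
reranked = rank(shared_query, docs)
```

In this sketch the pooled query promotes documents matching either user's judgments ahead of unjudged, off-topic material, which is the intuition behind passing relevance information across the group.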
The use and function of gestures in word-finding difficulties in aphasia
Background: Gestures are spontaneous hand and arm movements that are part of everyday communication. The roles of gestures in communication are disputed. Most agree that they augment the information conveyed in speech. More contentiously, some argue that they facilitate speech, particularly when word-finding difficulties (WFD) occur. Exploring gestures in aphasia may further illuminate their role.
Aims: This study explored the spontaneous use of gestures in the conversation of participants with aphasia (PWA) and neurologically healthy participants (NHP). It aimed to examine the facilitative role of gesture by determining whether gestures particularly accompanied WFD and whether those difficulties were resolved.
Methods & Procedures: Spontaneous conversation data were collected from 20 PWA and 21 NHP. Video samples were analysed for gesture production, speech production, and WFD. Analysis 1 examined whether the production of semantically rich gestures in these conversations was affected by whether the person had aphasia, and/or whether there were difficulties in the accompanying speech. Analysis 2 identified all WFD in the data and examined whether these were more likely to be resolved if accompanied by a gesture, again for both groups of participants.
Outcomes & Results: Semantically rich gestures were frequently employed by both groups of participants, but with no effect of group. There was an effect of the accompanying speech, with gestures occurring most commonly alongside resolved WFD. An interaction showed that this was particularly the case for PWA. NHP, on the other hand, employed semantically rich gestures most frequently alongside fluent speech. Analysis 2 showed that WFD were common in both groups of participants. Unsurprisingly, these were more likely to be resolved for NHP than PWA. For both groups, resolution was more likely if a WFD was accompanied by a gesture.
Conclusions: These findings shed light on the different functions of gesture within conversation. They highlight the importance of gesture during WFD, both in aphasic and neurologically healthy language, and suggest that gesture may facilitate word retrieval.
Multimedia search without visual analysis: the value of linguistic and contextual information
This paper addresses the focus of this special issue by analyzing the potential contribution of linguistic content and other non-image aspects to the processing of audiovisual data. It summarizes the various ways in which linguistic content analysis contributes to enhancing the semantic annotation of multimedia content and, as a consequence, to improving the effectiveness of conceptual media access tools. A number of techniques are presented, including the time-alignment of textual resources, audio and speech processing, content reduction and reasoning tools, and the exploitation of surface features.
What can co-speech gestures in aphasia tell us about the relationship between language and gesture?: A single case study of a participant with Conduction Aphasia
Cross-linguistic evidence suggests that language typology influences how people gesture when using "manner-of-motion" verbs (Kita 2000; Kita & Özyürek 2003) and that this is due to "online" lexical and syntactic choices made at the time of speaking (Kita, Özyürek, Allen, Brown, Furman & Ishizuka, 2007). This paper attempts to relate these findings to the co-speech iconic gesture used by an English speaker with conduction aphasia (LT) and five controls describing a Sylvester and Tweety cartoon. LT produced co-speech gesture which showed distinct patterns, which we relate to different aspects of her language impairment and to the lexical and syntactic choices she made during her narrative.
TwNC: a Multifaceted Dutch News Corpus
This contribution describes the Twente News Corpus (TwNC), a multifaceted corpus for Dutch that is being deployed in a number of NLP research projects, among which are tracks within the Dutch national research programme MultimediaN, the NWO programme CATCH, and the Dutch-Flemish programme STEVIN.

The development of the corpus started in 1998 within the predecessor project DRUID, and the corpus currently has a size of 530M words. The text part has been built from four different sources: Dutch national newspapers, television subtitles, teleprompter (auto-cue) files, and both manually and automatically generated broadcast news transcripts along with the broadcast news audio. TwNC plays a crucial role in the development and evaluation of a wide range of tools and applications for the domain of multimedia indexing, such as large-vocabulary speech recognition, cross-media indexing, and cross-language information retrieval. Part of the corpus was fed into the Dutch written text corpus in the context of the Dutch-Belgian STEVIN project D-COI, which was completed in 2007. The sections below describe the rationale that was the starting point for the corpus development, outline the cross-media linking approach adopted within MultimediaN, and finally provide some facts and figures about the corpus.