419 research outputs found

    Visual features drive the category-specific impairments on categorization tasks in a patient with object agnosia

    Object and scene recognition both require mapping incoming sensory information onto existing conceptual knowledge about the world. A notable finding in brain-damaged patients is that they may show differentially impaired performance for specific categories, such as for “living exemplars”. While numerous patients with category-specific impairments have been reported, the explanations for these deficits remain controversial. In the current study, we investigated the ability of a brain-injured patient with a well-established category-specific impairment of semantic memory to perform two categorization experiments: ‘natural’ vs. ‘manmade’ scenes (experiment 1) and objects (experiment 2). Our findings show that the pattern of categorical impairment does not respect the natural versus manmade distinction, suggesting that the impairments may be better explained by differences in visual features than by category membership. Using Deep Convolutional Neural Networks (DCNNs) as ‘artificial animal models’, we explored this idea further. DCNNs with ‘lesions’ in higher-order layers showed similar response patterns, with decreased relative performance for manmade scenes (experiment 1) and natural objects (experiment 2), even though they have no semantic category knowledge apart from a mapping between pictures and labels. Collectively, these results suggest that the direction of category effects depends to a large extent, at least in MS's case, on the degree of perceptual differentiation called for, and not on semantic knowledge.
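    The ‘lesioning’ manipulation described above can be illustrated with a toy network: zero a random fraction of units in one layer and compare outputs to the intact model. Everything here (network sizes, layer names, lesion fraction) is illustrative and not taken from the paper, which used full DCNNs:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a DCNN: two hidden layers with random weights.
    W1 = rng.normal(size=(64, 32))   # input -> lower layer
    W2 = rng.normal(size=(32, 16))   # lower -> higher layer
    W3 = rng.normal(size=(16, 2))    # higher layer -> category logits

    def forward(x, lesion_frac=0.0, layer="higher"):
        """Run the toy network, optionally zeroing ('lesioning') a random
        fraction of units in one hidden layer."""
        h1 = np.maximum(x @ W1, 0.0)
        if layer == "lower" and lesion_frac > 0:
            h1 = h1 * (rng.random(h1.shape[-1]) >= lesion_frac)
        h2 = np.maximum(h1 @ W2, 0.0)
        if layer == "higher" and lesion_frac > 0:
            h2 = h2 * (rng.random(h2.shape[-1]) >= lesion_frac)
        return h2 @ W3

    x = rng.normal(size=(5, 64))                       # five "images"
    intact = forward(x)
    lesioned = forward(x, lesion_frac=0.5, layer="higher")
    print(intact.shape, lesioned.shape)
    ```

    Comparing per-category accuracy between the intact and lesioned passes is the analogue of comparing a patient's performance across stimulus categories.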

    Indoor Outdoor Scene Classification in Digital Images

    In this paper, we present a method to classify real-world digital images into indoor and outdoor scenes. The indoor class consists of four groups: bedroom, kitchen, laboratory and library. The outdoor class consists of four groups: landscape, roads, buildings and garden. The application targets real-time use and has a dedicated dataset. Input images are pre-processed: converted to grayscale and resized to 128x128 pixels. The pre-processed images are passed through Gabor filters, whose transfer functions are pre-computed and applied in the Fourier domain. The filtered output is then used for GIST feature extraction, and the images are classified with a kNN classifier. Most previous techniques have been based on texture and colour-space features. To date, we have achieved 80% accuracy on image classification.
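    A minimal sketch of the pipeline described above, with several assumptions: a hand-built Gabor kernel, a crude GIST-like descriptor (block-averaged filter responses computed in the Fourier domain), and a plain kNN vote. The filter parameters, grid size and k are illustrative, not the paper's settings:

    ```python
    import numpy as np

    def gabor_kernel(size=128, freq=0.1, theta=0.0):
        """Illustrative Gabor filter: Gaussian envelope times a cosine carrier."""
        y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * (size / 8) ** 2))
        return env * np.cos(2 * np.pi * freq * xr)

    def gist_like(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), grid=4):
        """Filter in the Fourier domain, then average responses over a grid
        of blocks -- a rough stand-in for the GIST descriptor."""
        F = np.fft.fft2(img)
        feats = []
        for t in thetas:
            K = np.fft.fft2(np.fft.ifftshift(gabor_kernel(img.shape[0], theta=t)))
            resp = np.abs(np.fft.ifft2(F * K))
            s = img.shape[0] // grid
            for i in range(grid):
                for j in range(grid):
                    feats.append(resp[i*s:(i+1)*s, j*s:(j+1)*s].mean())
        return np.array(feats)

    def knn_predict(query, train_feats, train_labels, k=3):
        """Majority vote among the k nearest training descriptors."""
        d = np.linalg.norm(train_feats - query, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        return np.bincount(nearest).argmax()

    # Usage on random stand-in "images" (0 = indoor, 1 = outdoor, hypothetically)
    rng = np.random.default_rng(0)
    train_imgs = rng.random((6, 128, 128))
    train_feats = np.stack([gist_like(im) for im in train_imgs])
    train_labels = np.array([0, 0, 0, 1, 1, 1])
    query = gist_like(rng.random((128, 128)))
    pred = knn_predict(query, train_feats, train_labels)
    print(pred)
    ```

    Pre-computing each filter's Fourier transform once, as the paper does, avoids re-transforming the kernel for every input image.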

    Is the dolphin a fish? ERP evidence for the impact of typicality during early visual processing in ultra-rapid semantic categorization in autism spectrum disorder

    Funding: Open Access funding enabled and organized by Projekt DEAL. JCC was financed by national funding through FCT - Fundação para a Ciência e a Tecnologia, I.P., under Norma Transitória DL57/2016/CP1439/CT02 and through the Research Center for Psychological Science of the Faculty of Psychology, University of Lisbon (UIDB/04527/2020; UIDP/04527/2020). A-KB was supported by the Rhineland-Palatinate Research Initiative (Potentialbereich Cognitive Science) of the Federal Ministry of Science, Further Education and Culture (MWWK).
    BACKGROUND: Neurotypical individuals categorize items even during ultra-rapid presentations (20 ms; see Thorpe et al. Nature 381: 520, 1996). In cognitively able autistic adults, these semantic categorization processes may be impaired and/or may require additional time, specifically for the categorization of atypical compared to typical items. Here, we investigated how typicality structures influence ultra-rapid categorization in cognitively able autistic and neurotypical male adults.
    METHODS: Images representing typical or atypical exemplars of two categories (food/animals) were presented for 23.5 vs. 82.3 ms (short/long). We analyzed detection rates, reaction times, and the event-related potential components dN150, N1, P2, N2, and P3 for each group.
    RESULTS: Behavioral results indicate slower and less accurate responses to atypical compared to typical images. This typicality effect was larger for the category with less distinct boundaries (food) and was observed in both groups. However, the electrophysiological data indicate a different time course of typicality effects, suggesting that neurotypical adults categorize atypical images based on simple features (P2), whereas cognitively able autistic adults categorize later, based on arbitrary features of atypical images (P3).
    CONCLUSIONS: We found evidence that all three factors under investigation (category, typicality, and presentation time) modulated specific aspects of semantic categorization. Additionally, we observed a qualitatively different pattern in the autistic adults, which suggests that they relied on different cognitive processes to complete the task.
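    Component amplitudes of the kind analyzed above (e.g., P2, N2, P3) are commonly quantified as the mean voltage in a fixed post-stimulus window of the trial-averaged ERP. A minimal sketch on simulated single-channel epochs; the window boundaries and sampling rate here are hypothetical, not the study's values:

    ```python
    import numpy as np

    fs = 500                                      # sampling rate in Hz (illustrative)
    times = np.arange(-0.1, 0.6, 1 / fs)          # epoch from -100 to 600 ms
    rng = np.random.default_rng(1)
    epochs = rng.normal(size=(40, times.size))    # 40 simulated trials, one channel

    # Hypothetical component windows in ms; the paper's exact windows are not given here.
    windows = {"P2": (150, 250), "N2": (250, 350), "P3": (350, 550)}

    erp = epochs.mean(axis=0)                     # average across trials
    for name, (t0, t1) in windows.items():
        sel = (times >= t0 / 1000) & (times < t1 / 1000)
        print(name, round(float(erp[sel].mean()), 3))   # mean amplitude per window
    ```

    Group or condition comparisons (typical vs. atypical, autistic vs. neurotypical) then reduce to statistics on these per-window means.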

    Mind in Action: Action Representation and the Perception of Biological Motion

    The ability to understand and communicate about the actions of others is a fundamental aspect of our daily activity. How can we talk about what others are doing? What qualities do different actions have such that they cause us to see them as being different or similar? What is the connection between what we see and the development of concepts and words or expressions for the things that we see? To what extent can two different people see and talk about the same things? Is there a common basis for our perception, and is there then a common basis for the concepts we form and the way in which those concepts become lexicalized in language? The broad purpose of this thesis is to relate aspects of perception, categorization and language to action recognition and conceptualization. This is achieved by empirically demonstrating a prototype structure for action categories and by revealing the effect this structure has on language via the semantic organization of verbs for natural actions. The results also show that implicit access to categorical information can affect the perceptual processing of basic actions. These findings indicate that our understanding of human actions is guided by the activation of high-level information in the form of dynamic action templates or prototypes. More specifically, the first two empirical studies investigate the relation between perception and the hierarchical structure of action categories, i.e., subordinate, basic, and superordinate level action categories. Subjects generated lists of verbs based on perceptual criteria. Analyses based on multidimensional scaling showed a significant correlation between the semantic organization of a subset of the verbs for English- and Swedish-speaking subjects. Two additional experiments were performed in order to further determine the extent to which action categories exhibit graded structure, which would indicate the existence of prototypes for action categories.
    The results from typicality ratings and category verification showed that typicality judgments reliably predict category-verification times for instances of different actions. Finally, the results from a repetition (short-term) priming paradigm suggest that high-level information about the categorical differences between actions can be implicitly activated and can facilitate the later visual processing of displays of biological motion. This facilitation occurs for upright displays but appears to be lacking for displays shown upside down. These results show that the implicit activation of information about action categories can play a critical role in the perception of human actions.
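    The multidimensional scaling used in the first two studies can be sketched with classical (Torgerson) MDS, which embeds items so that their pairwise distances approximate a given dissimilarity matrix. The verbs and dissimilarity values below are invented for illustration, not the thesis's data:

    ```python
    import numpy as np

    def classical_mds(D, dims=2):
        """Classical (Torgerson) MDS: double-center the squared dissimilarities,
        then embed via the top eigenvectors of the resulting Gram matrix."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(B)
        order = np.argsort(vals)[::-1][:dims]         # largest eigenvalues first
        return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

    # Hypothetical dissimilarities among four action verbs.
    verbs = ["walk", "run", "wave", "point"]
    D = np.array([[0, 1, 4, 4],
                  [1, 0, 4, 4],
                  [4, 4, 0, 1],
                  [4, 4, 1, 0]], dtype=float)
    coords = classical_mds(D)
    print(coords.shape)
    ```

    In the resulting embedding, walk and run land near each other and far from wave and point, mirroring how semantically related verbs cluster in the scaling solutions described above.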

    The Time-Course of Visual Categorizations: You Spot the Animal Faster than the Bird

    Background: Since the pioneering study by Rosch and colleagues in the 70s, it is commonly agreed that basic-level perceptual categories (dog, chair…) are accessed faster than superordinate ones (animal, furniture…). Nevertheless, the speed at which objects presented in natural images can be processed in a rapid go/no-go visual superordinate categorization task has challenged this ‘‘basic level advantage’’. Principal Findings: Using the same task, we compared human processing speed when categorizing natural scenes as containing either an animal (superordinate level) or a specific animal (bird or dog; basic level). Human subjects require an additional 40–65 ms to decide whether an animal is a bird or a dog, and most errors are induced by non-target animals. Indeed, processing time is tightly linked with the type of non-target objects. Without any exemplar of the sam…

    Object processing in the medial temporal lobe: Influence of object domain

    We live in a rich visual world, surrounded by many different kinds of objects. While we may not often reflect on it, our ability to recognize what an object is, to detect whether an object is familiar or novel, and to bring to mind our general knowledge about an object are all essential components of adaptive behavior. In this dissertation, I investigate the neural basis of object representations, focusing on medial temporal lobe (MTL) structures, namely perirhinal cortex, parahippocampal cortex, and hippocampus. I use the type of thing an object is, or more specifically the broader category (e.g., “face” or “house”) or domain (e.g., “animate” or “inanimate”) to which an object belongs, to probe MTL structures. In Chapter 2, I used fMRI to explore whether object representations in MTL structures are organized by animacy and/or real-world size. I found domain-level organization in all three MTL structures, with a distinct pattern of domain organization in each structure. In Chapter 3, I examined whether recognition-memory signals for objects are organized by category and domain in the same MTL structures. I found no evidence of category or domain specificity in recognition-memory signals, but did reveal a distinction between novel and familiar object representations across all categories. Finally, in Chapter 4, I used a neuropsychological approach to uncover a unique contribution of the hippocampus to object concepts. I found that an individual with developmental amnesia had normal intrinsic-feature knowledge but less extrinsic, or associative, feature knowledge of concepts. This decreased extrinsic-feature knowledge led to abnormalities specific to non-living object concepts. These results show that the hippocampus may play an important role in the development of object concepts, potentially through the same relational binding mechanism that links objects and context in episodic memory.
    Taken together, these findings suggest that using object category or domain to probe the function of MTL structures is a useful approach for gaining a deeper understanding of the similarities and differences between MTL structures, and of how they contribute more broadly to our perception and memory of the world.
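    Domain-level organization of the kind probed in Chapter 2 is often assessed by decoding: can the domain (animate vs. inanimate) be read out from multi-voxel activity patterns in a region? A minimal correlation-based decoding sketch on simulated patterns; the voxel count, noise level and two-domain setup are purely illustrative, with no relation to the dissertation's actual data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_voxels = 100

    # Each simulated domain has a characteristic voxel pattern plus trial noise.
    proto = {"animate": rng.normal(size=n_voxels),
             "inanimate": rng.normal(size=n_voxels)}

    def simulate(domain, n=20):
        return proto[domain] + 0.5 * rng.normal(size=(n, n_voxels))

    train = {d: simulate(d).mean(axis=0) for d in proto}   # mean training pattern
    test_patterns = [("animate", p) for p in simulate("animate", 5)] + \
                    [("inanimate", p) for p in simulate("inanimate", 5)]

    # Correlation-based decoding: assign each test pattern to the domain
    # whose mean training pattern it correlates with most strongly.
    correct = 0
    for true_domain, p in test_patterns:
        guess = max(train, key=lambda d: np.corrcoef(p, train[d])[0, 1])
        correct += (guess == true_domain)
    acc = correct / len(test_patterns)
    print(acc)
    ```

    Above-chance accuracy of this kind is what licenses the claim that a structure carries domain-level information.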

    Mapping Acoustic and Semantic Dimensions of Auditory Perception

    Auditory categorisation is a function of sensory perception that allows humans to generalise across the many different sounds present in the environment and classify them into behaviourally relevant categories. These categories cover not only the variance in the acoustic properties of the signal but also a wide variety of sound sources. However, it is unclear to what extent the acoustic structure of a sound is associated with, and conveys, different facets of semantic category information. Whether people use such information, and what drives their decisions when both acoustic and semantic information about a sound is available, also remains unknown. To answer these questions, we used existing methods broadly practised in linguistics, acoustics and cognitive science, and bridged these domains by delineating their shared space. Firstly, we took a model-free exploratory approach to examine the underlying structure and inherent patterns in our dataset, running principal components, clustering and multidimensional scaling analyses. At the same time, we mapped the topography of the sound labels’ semantic space based on corpus-based word-embedding vectors. We then built an LDA model predicting class membership and compared the model-free approach and the model predictions with the actual taxonomy. Finally, by conducting a series of web-based behavioural experiments, we investigated whether the acoustic and semantic topographies relate to perceptual judgements. This analysis pipeline showed that natural sound categories can be successfully predicted from acoustic information alone and that the perception of natural sound categories has some acoustic grounding. The results of our studies help to clarify the role of physical sound characteristics and their meaning in the process of sound perception, and give valuable insight into the mechanisms governing machine-based and human classification.
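    The LDA step can be sketched with Fisher's two-class linear discriminant: project features onto w = Sw⁻¹(μA − μB) and threshold at the midpoint of the projected class means. The two "acoustic features" and class parameters below are simulated stand-ins (e.g., something like spectral centroid and zero-crossing rate), not the study's data or classes:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two hypothetical sound classes in a two-dimensional acoustic feature space.
    A = rng.normal(loc=[0.2, 0.1], scale=0.05, size=(30, 2))   # class "A"
    B = rng.normal(loc=[0.6, 0.4], scale=0.05, size=(30, 2))   # class "B"

    # Fisher's linear discriminant: w = Sw^-1 (mu_A - mu_B),
    # with Sw the pooled within-class scatter.
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
    w = np.linalg.solve(Sw, mu_a - mu_b)
    threshold = w @ (mu_a + mu_b) / 2                          # midpoint of projected means

    def predict(x):
        """Classify a feature vector by which side of the threshold it projects to."""
        return "A" if x @ w > threshold else "B"

    acc = np.mean([predict(x) == "A" for x in A] + [predict(x) == "B" for x in B])
    print(acc)
    ```

    With well-separated classes, as here, the discriminant recovers the taxonomy almost perfectly; the interesting cases in the study are the categories where acoustic features alone leave residual confusion.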