
    An Essay on the Nature of Visual Perception

    In this dissertation, I address two distinct but related questions: (1) Is vision encapsulated from higher-level cognitive content? For example, do higher cognitive states like belief and desire alter the contents of vision? (2) What is the scope of visual content? Is the content of vision restricted to “low-level” properties like shape and color, or does vision involve a recognitional component? Regarding the first question, I argue that vision is cognitively penetrable: what we see depends in part on the particularities of our beliefs, expectations, and goals. Regarding the second question, I argue that we visually represent at least some relatively high-level, abstract properties, such as causal interactions, animacy, and facial categories. Both of these positions speak to broader issues concerning the epistemic status of our visual capacities. More specifically, we can no longer understand vision as an entirely non-epistemic capacity, one that merely provides us with a structural description of the environment; rather, the visual system carries ontological commitments, and by virtue of these commitments it imposes at least a primitive order on what we see.

    Visual impressions of active and inanimate resistance to impact from a moving object

    Images of moving objects presented on computer screens may be perceived as animate or inanimate. A simple hypothesis, consistent with much research evidence, is that objects are perceived as inanimate if there is visible external contact from another object immediately prior to the onset of motion, and as animate if that is not the case. Evidence is reported that is not consistent with that hypothesis. Objects (targets) moving on contact from another object (launcher) were perceived as actively resisting the impact of the launcher if the targets slowed rapidly. Rapid slowing is consistent with the laws of mechanics for objects moving in an environment that offers friction and air resistance. Despite that, ratings of inanimate motion were lower than ratings of active resistance for objects that slowed rapidly. The results are consistent with the hypothesis that there is a perceptual impression of active (animate) resistance that is evoked by the kinematic pattern of rapid slowing from an initial speed after contact from another object. KEYWORDS: causal perception, launching effect, perceived animacy.
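    The kinematic pattern at issue here — a launched target slowing rapidly after contact, as mechanics predicts for motion under friction and air resistance — can be sketched numerically. The following is a minimal illustrative simulation; the deceleration model and all coefficients are hypothetical assumptions, not values taken from the study.

    ```python
    # Illustrative sketch of launching-effect kinematics: a target starts
    # moving on contact with a launcher, then slows under constant friction
    # plus speed-proportional air drag. All coefficients are hypothetical.

    def simulate_target_speed(v0, mu_g=2.0, drag=0.5, dt=0.01, steps=100):
        """Return the speed profile (one entry per time step) of a launched
        target decelerating under friction (mu_g) and linear drag (drag)."""
        speeds = [v0]
        v = v0
        for _ in range(steps):
            a = -(mu_g + drag * v)       # friction + air-resistance deceleration
            v = max(0.0, v + a * dt)     # speed cannot go negative
            speeds.append(v)
        return speeds

    profile = simulate_target_speed(v0=5.0)
    # The profile decreases monotonically after contact, the "rapid slowing"
    # pattern that observers reportedly read as active resistance.
    assert all(later <= earlier for earlier, later in zip(profile, profile[1:]))
    ```

    Larger `mu_g` or `drag` values produce the faster slowing that, per the abstract, evokes the impression of active resistance.
    
    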

    How the brain grasps tools: fMRI & motion-capture investigations

    Humans’ ability to learn about and use tools is considered a defining feature of our species, yet most related neuroimaging investigations have involved proxy 2D picture-viewing tasks. Using a novel tool-grasping paradigm across three experiments, participants grasped 3D-printed tools (e.g., a knife) in ways considered typical (i.e., by the handle) or atypical (i.e., by the blade) for subsequent use. As a control, participants also performed grasps in corresponding directions on a series of 3D-printed non-tool objects, matched for properties including elongation and object size. Project 1 paired a powerful fMRI block design with visual localiser Region of Interest (ROI) and searchlight Multivoxel Pattern Analysis (MVPA) approaches. Most remarkably, ROI MVPA revealed that hand-selective, but not anatomically overlapping tool-selective, areas of the left Lateral Occipital Temporal Cortex and Intraparietal Sulcus represented the typicality of tool grasping. Searchlight MVPA found similar evidence within left anterior temporal cortex as well as right parietal and temporal areas. Project 2 measured hand kinematics using motion capture during a highly similar procedure, finding hallmark grip-scaling effects despite the unnatural task demands. Further, slower movements were observed when grasping tools relative to non-tools, with grip scaling also being poorer for atypical tool grasping compared with non-tool grasping. Project 3 used a slow event-related fMRI design to investigate whether representations of typicality were detectable during motor planning, but MVPA was largely unsuccessful, presumably due to a lack of statistical power. Taken together, the representations of typicality identified within areas of the ventral and dorsal, but not ventro-dorsal, pathways have implications for specific predictions made by leading theories about the neural regions supporting human tool use, including dual visual stream theory and the two-action systems model.

    Action observation and imitation in the healthy brain and in high-functioning adults with autism spectrum conditions

    Accurate action perception plays an important role in social interaction, enabling us to identify and appropriately respond to the behaviour of others. One such response is automatic imitation, the reflexive copying of observed body movements. Action perception is associated with activity in posterior brain areas, which feed into the Mirror Neuron System (MNS), a network of regions that has been associated with imitation and which is under the regulatory control of frontal brain areas. The fMRI study described in Chapter 2 demonstrated that in healthy adults, action perception can be subdivided into objective and subjective components, which are primarily associated with activity in different brain areas. Chapter 3 demonstrated that activity in MNS areas, as measured by MEG, comprises an automatic motoric simulation of the kinematics of observed actions. Chapters 2 and 3 therefore enhance knowledge of the neural mechanisms of action perception in the typical brain. Previous studies have linked Autism Spectrum Conditions (ASC) with action perception and imitation impairments. Chapters 4 and 5 demonstrated that adults with ASC exhibit atypical action perception, which is likely due to difficulties with subjective processing (i.e. knowing what a ‘natural’ human movement should look like) rather than with objective visual processing of human motion. Chapter 6 reported a lack of imitation in ASC: whereas typical adults imitated human movements more than robot movements, individuals with ASC failed to imitate. Chapter 7 suggested that problems with imitation in ASC may relate to difficulties with the control of imitation: whereas control participants showed increased levels of imitation when in a positive social frame of mind, individuals with ASC did not. Chapters 4 to 7 have implications for ASC: they suggest that atypical imitation may be due to atypical sensory input to the MNS (i.e. impaired action perception) and/or atypical control of imitation.

    The neurocognitive processing of plausibility and real-world knowledge: A cross-linguistic investigation

    Our knowledge about concepts and meanings is at the very heart of human cognition. In everyday life, we have to interact with our environment in a variety of different ways. Our actions are guided by what we know and believe about the world and this knowledge derives primarily from previous sensory and perceptual experiences. The fact that we are capable of engaging with our environment in an appropriate and efficient way means that we have learnt (how) to make sense of the events and entities we are faced with in day-to-day life. We are thus able to recognise and name both physical objects and abstract concepts, to categorise and associate them based on their specific properties, to interpret other people’s intentions, and to judge cause and effect of their actions as well as our own. Moreover, the ability to represent this wealth of knowledge about the real world in the conceptualised and symbolic form of language is believed to be exclusive to humans. Our language capacity allows us to communicate with others about past and future events or to describe fictitious scenarios by combining previously acquired concepts in a novel way without the need for external stimulation. Thus language forms a primary means of interacting with those around us by allowing us to express our own thoughts and comprehend those of others. As long as language processing proceeds in an undisturbed manner, we are largely unaware of the underlying mechanisms that support the seemingly effortless interpretation of linguistic input. The importance of these processes for successful communication, however, becomes all the more apparent when language processing is disrupted, for example, by brain lesions that render semantic analysis difficult or impossible. Scientific research that aims to uncover and define cognitive or neural mechanisms underlying semantic processing is inevitably faced with the complexity and wealth of semantic relationships that need to be taken into account. 
In the absence of noninvasive neurocognitive methods and insights gleaned from modern neurobiology, early research had a limited impact on our understanding of how semantic processing is implemented in the human brain. Traditional neurological models of language have been based primarily on lesion-deficit data, and thus supported the view that certain areas of the brain were exclusively dedicated to the processing of language-specific functions (Geschwind, 1970; Lichtheim, 1885; Wernicke, 1874). Furthermore, classical theories of sensory processing viewed the brain as a purely stimulus-driven system that retrieves and combines individual low-level aspects or features in an automated, passive and context-independent manner (Biederman, 1987; Burton & Sinclair, 1996; Hubel & Wiesel, 1965; Massaro, 1998). After a recent paradigm shift in the cognitive neurosciences, current theories of sensory processing are now based on the concept of the brain as a highly active, adaptive and dynamic device. In this sense, language comprehension, like many other higher cognitive functions, is shaped by a flexible interaction of a number of different processes and information sources that include so-called bottom-up signals, i.e., the actual sensory input and processes related to its forward propagation, and top-down processes that generate predictions and expectations based on prior experience and perceived probabilities. Therefore, accounts that view semantic processing as a dynamic and active construction of meaning that is highly sensitive to contextual influences seem most probable from a neurobiological perspective. Results from electrophysiological and neuroimaging research on semantic analysis in sentence and discourse context have provided evidence for top-down influences from the very beginning. In addition, recent ERP results have suggested that the interaction between top-down and bottom-up information is more flexible and dynamic than previously assumed.
Yet the importance of predictions and expectations has long been neglected in models of semantic processing and language comprehension in general. Neuroimaging data have provided us with a long list of brain regions that have been implicated in different aspects of semantic analysis. We are only beginning to understand the role(s) that these regions play and how they interact to support the flexible and efficient construction of meaning. The aim of the present thesis is to gain a more comprehensive view of the computational mechanisms underlying language processing by investigating how bottom-up and top-down information and processes interactively contribute to semantic analysis in sentences and discourse. To this end, we conducted a total of five studies that used either event-related potentials or functional neuroimaging to shed light on this matter from different perspectives. The thesis is divided into two main parts: Part I (chapters 1-5) provides an overview of previous results from electrophysiology and neuroimaging on semantic processing, as well as a description and discussion of the studies conducted in the present thesis. Part II (chapters 6-9) consists of three research articles that describe and discuss the results of five experimental studies. In Part I, Chapter 2 gives a brief introduction to the event-related potential and functional neuroimaging techniques and reviews the most relevant results and theories that have emerged from studies on sentence and discourse processing. Chapter 3 highlights the research questions targeted in each of the experimental studies and describes and discusses the most relevant findings against the background established by Chapter 2. Chapters 4 and 5 conclude Part I by placing the presented results in a broader context and by briefly outlining future directions. Part II begins with a survey of the three studies reported in the subsequent chapters.
Chapter 7 highlights the results of the first study, a German ERP experiment that investigated the impact of capitalisation, i.e., a purely form-based and contextually independent bottom-up manipulation, on the processing of semantic anomalies in single sentences. Chapter 8 comprises three ERP experiments that used both easy- and hard-to-detect semantic anomalies in German and English to corroborate the assumption that the weighting of top-down and bottom-up information cues might be determined in a language-specific way. Chapter 9, the final chapter of the thesis, describes and discusses the results of the third study, in which the impact of embedding context on the required depth of semantic processing was examined using functional neuroimaging.

    Plasticity and neuromodulation of the extended recurrent visual network

    The extended visual network, which includes occipital, temporal and parietal posterior cortices, is a system characterized by an intrinsic connectivity consisting of bidirectional projections. This network is composed of feedforward and feedback projections, some hierarchically arranged and others bypassing intermediate areas, allowing direct communication across early and late stages of processing. Notably, the early visual cortex (EVC) receives considerably more feedback and lateral inputs than feedforward thalamic afferents, placing it at the receiving end of a complex cortical processing cascade rather than just being the entrance stage of cortical processing of retinal input. The critical role of back-projections to visual cortices has been related to perceptual awareness, amplification of neural activity in lower-order areas and improvement of stimulus processing. Recently, significant behavioural evidence has suggested the importance of reentrant projections in the human visual system and demonstrated the feasibility of inducing their reversible modulation through a transcranial magnetic stimulation (TMS) paradigm named cortico-cortical paired associative stimulation (ccPAS). Here, a novel research line for the study of recurrent connectivity and its plasticity in the perceptual domain was put forward. In the present thesis, we used ccPAS with the aim of enhancing synaptic efficacy, and thus connectivity, between the nodes of the visuocognitive system to evaluate the impact on behaviour. We focused on driving plasticity in specific networks entailing the elaboration of relevant social features of human faces (Chapters I & II), alongside the investigation of targeted pathways of sensory decisions (Chapter III). This allowed us to characterize perceptual outcomes which endorse the prominent role of the EVC in visual awareness, fulfilled by the activity of back-projections originating from distributed functional nodes.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) further supports the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
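    The spoke-shift manipulation described above has a simple geometric form: each rectangle is moved radially along the line connecting it to fixation. The sketch below illustrates that geometry; coordinates are in degrees of visual angle with fixation at the origin, and the example eccentricity is a hypothetical value, not one reported in the abstract.

    ```python
    # Sketch of the radial "spoke" shift: move a stimulus position toward or
    # away from central fixation by a fixed amount (+/- 1 deg of visual angle)
    # along the imaginary spoke joining it to fixation. Example values are
    # illustrative only.
    import math

    def shift_along_spoke(x, y, shift_deg):
        """Shift a point (degrees of visual angle, fixation at the origin)
        radially outward (positive shift) or inward (negative shift)."""
        ecc = math.hypot(x, y)                # current eccentricity
        if ecc == 0:
            return (x, y)                     # fixation itself has no spoke
        scale = (ecc + shift_deg) / ecc       # rescale along the same spoke
        return (x * scale, y * scale)

    # A rectangle at 4 deg eccentricity on the horizontal meridian,
    # shifted +1 deg outward:
    print(shift_along_spoke(4.0, 0.0, +1.0))  # -> (5.0, 0.0)
    ```

    Because only eccentricity changes while polar angle is preserved, the global configuration of the array is distorted radially — the property that would disrupt a Gestalt grouping strategy.
    
    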

    The invisible body: the neural mechanisms of non-conscious and conscious processing of emotional bodies

    How do we process emotions expressed by bodies when we don’t realize we are looking at them? This research made body postures invisible to participants by using the “continuous flash suppression” method. It turned out that processing bodily emotions is very different from processing faces, and differs across emotions (e.g. neutral, fearful, angry), both when participants consciously see the stimuli and when they see them outside their awareness. The research also examined brain activity in detail with a 7T MRI scanner, and found that understanding bodily actions involves a large network across the brain. This research provides insights into the way we understand actions and emotions.

    Measuring and Modulating Mimicry: Insights from Virtual Reality and Autism

    Mimicry involves the unconscious imitation of other people’s behaviour. The social top-down response modulation (STORM) model suggests that mimicry is a socially strategic behaviour which is modulated according to the social context; for example, we mimic more when someone is looking at us or if we want to affiliate with them. There has been a long debate over whether mimicry is different in autism, a condition characterised by differences in social interaction. STORM predicts that autistic people can and do mimic but do not change their mimicry behaviour according to the social context. Using a range of mimicry measures, this thesis aimed to test STORM’s predictions. The first study employed a traditional reaction-time measure of mimicry and demonstrated that direct gaze socially modulated mimicry responses in non-autistic adults but did not do so in autistic participants, in line with STORM’s predictions. In the next two studies, I found that non-autistic participants mimicked the movement trajectory of both virtual characters and human actors during an imitation game. Autistic participants also mimicked but did so to a lesser extent. However, this type of mimicry was resistant to the effects of social cues, such as eye gaze and animacy, contrary to the predictions of STORM. In a fourth study, I manipulated the rationality of an actor’s movement trajectory and found that participants mimicked the trajectory even when it was rated as irrational. In a fifth study, I showed that people’s tendency to mimic the movements of others could change the choices that participants had previously made in private; this tendency was modulated by the kinematics of the character’s pointing movements. This thesis provides mixed support for STORM’s predictions, and I discuss the reasons why this might be. I also make suggestions for how to better measure and modulate mimicry.

    Social perception and cognition: processing of gestures, postures and facial expressions in the human brain

    Humans are a social species with the internal capability to process social information from other humans. To understand others' behavior and to react accordingly, it is necessary to infer their internal states, emotions and aims, which are conveyed by subtle nonverbal bodily cues such as postures, gestures, and facial expressions. This thesis investigates the brain functions underlying the processing of such social information. Studies I and II of this thesis explore the neural basis of perceiving pain from another person's facial expressions by means of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). In Study I, observing another's facial expression of pain activated the affective pain system (previously associated with self-experienced pain) in accordance with the intensity of the observed expression. The strength of the response in the anterior insula was also linked to the observer's empathic abilities. The cortical processing of facial pain expressions advanced from the visual to temporal-lobe areas at similar latencies (around 300–500 ms) to those previously shown for emotional expressions such as fear or disgust. Study III shows that perceiving a yawning face is associated with middle and posterior STS activity, and that the contagiousness of a yawn correlates negatively with amygdalar activity. Study IV explored the brain correlates of interpreting social interaction between two members of the same species, in this case human and canine. Observing interaction engaged brain activity in a very similar manner for both species. Moreover, the body- and object-sensitive brain areas of dog experts differentiated interaction from non-interaction in both humans and dogs, whereas in the control subjects similar differentiation occurred only for humans. Finally, Study V shows the engagement of the brain area associated with biological motion when exposed to the sounds produced by a single human being walking. 
However, a more complex pattern of activation with the walking sounds of several persons suggests that as the social situation becomes more complex, so does the brain response. Taken together, these studies demonstrate the roles of distinct cortical and subcortical brain regions in the perception and sharing of others' internal states via facial and bodily gestures, and the connection of brain responses to behavioral attributes.