
    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? The speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
    National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
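    The core computational claim here, noisy evidence accumulating in competing populations until one reaches a decision threshold, can be illustrated with a toy simulation. The sketch below is emphatically not the paper's Retina/LGN-V1-MT-MST-LIP model; it is a minimal leaky competing-accumulator caricature, and every parameter name and value (leak, beta, noise, threshold, tau) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(coherence, dt=0.001, tau=0.1, leak=1.0, beta=2.0,
                   noise=0.3, threshold=0.9, max_t=2.0):
    """Caricature of a recurrent competitive decision circuit: two
    populations accumulate noisy evidence for opposite motion directions
    and mutually inhibit each other. Because inhibition (beta) exceeds
    the leak, the symmetric state is unstable and one population wins.
    All parameters are illustrative, not taken from the paper."""
    x = np.zeros(2)
    evidence = np.array([1 + coherence / 2, 1 - coherence / 2])
    t = 0.0
    while t < max_t:
        drift = evidence - leak * x - beta * x[::-1]   # mutual inhibition
        x = np.maximum(x + drift * dt / tau
                       + noise * np.sqrt(dt) * rng.normal(size=2), 0.0)
        t += dt
        if x.max() >= threshold:              # reaction-time task: first to
            return int(np.argmax(x)), t       # threshold determines choice
    return int(np.argmax(x)), max_t           # fixed-duration fallback

# Less ambiguous input (higher coherence) -> faster, more accurate choices
for c in (0.05, 0.2, 0.5):
    trials = [simulate_trial(c) for _ in range(200)]
    acc = sum(choice == 0 for choice, _ in trials) / len(trials)
    mean_rt = sum(t for _, t in trials) / len(trials)
    print(f"coherence {c:.2f}: accuracy {acc:.2f}, mean RT {mean_rt:.2f} s")
```

    Raising coherence widens the input difference between the two populations, so the winner is more often the correct one and crosses threshold sooner, qualitatively matching the speed-accuracy covariation with input certainty described in the abstract.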

    The shape of motion perception: Global pooling of transformational apparent motion

    Transformational apparent motion (TAM) is a visual phenomenon highlighting the utility of form information in motion processing. In TAM, smooth apparent motion is perceived when shapes in certain spatiotemporal arrangements change. It has been argued that…

    Activity-dependence of synaptic vesicle dynamics

    The proper function of synapses relies on efficient recycling of synaptic vesicles. The small size of synaptic boutons has hampered efforts to define the dynamical states of vesicles during recycling. Moreover, whether vesicle motion during recycling is regulated by neural activity remains largely unknown. We combined nanoscale-resolution tracking of individual synaptic vesicles in cultured hippocampal neurons from rats of both sexes with advanced motion analyses to demonstrate that the majority of recently endocytosed vesicles undergo sequences of transient dynamical states, including epochs of directed, diffusional, and stalled motion. We observed that vesicle motion is modulated in an activity-dependent manner, with dynamical changes apparent in ∼20% of observed boutons. Within this subpopulation of boutons, 35% of observed vesicles exhibited acceleration and 65% exhibited deceleration, accompanied by corresponding changes in directed motion. Individual vesicles observed in the remaining ∼80% of boutons did not exhibit apparent dynamical changes in response to stimulation. More quantitative transient motion analyses revealed that the overall reduction of vesicle mobility, and specifically of the directed motion component, is the predominant activity-evoked change across the entire bouton population. Activity-dependent modulation of vesicle mobility may represent an important mechanism controlling vesicle availability and neurotransmitter release.
    SIGNIFICANCE STATEMENT: Mechanisms governing synaptic vesicle dynamics during recycling remain poorly understood. Using nanoscale-resolution tracking of individual synaptic vesicles in hippocampal synapses and advanced motion analysis tools, we demonstrate that synaptic vesicles undergo complex sets of dynamical states that include epochs of directed, diffusive, and stalled motion. Most importantly, our analyses revealed that vesicle motion is modulated in an activity-dependent manner, apparent as a reduction in overall vesicle mobility in response to stimulation. These results define the vesicle dynamical states during recycling and reveal their activity-dependent modulation. Our study thus provides fundamental new insights into the principles governing synaptic function.
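    The transient-state classification described above can be illustrated with a standard trick from single-particle tracking: fit the mean squared displacement (MSD) of a trajectory epoch to a power law and bin the exponent. The sketch below is not the study's analysis pipeline; the function names, the window length, and the alpha cutoffs separating directed, diffusional, and stalled motion are illustrative assumptions.

```python
import numpy as np

def msd(track, max_lag):
    """Mean squared displacement of a 2-D trajectory (N x 2 array of
    positions) for lags 1..max_lag."""
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def classify_epoch(track, dt, max_lag=10):
    """Fit MSD(t) ~ t**alpha on a log-log scale and bin the exponent into
    the three dynamical states named in the abstract. The cutoffs are
    illustrative placeholders, not the study's fitted thresholds."""
    lags = np.arange(1, max_lag + 1) * dt
    alpha = np.polyfit(np.log(lags), np.log(msd(track, max_lag)), 1)[0]
    if alpha > 1.4:
        return "directed", alpha      # superdiffusive: active transport
    if alpha < 0.6:
        return "stalled", alpha       # subdiffusive: confined / tethered
    return "diffusional", alpha       # alpha ~ 1: free diffusion

# Synthetic check: a vesicle drifting at constant velocity plus jitter
rng = np.random.default_rng(1)
t = np.arange(200)[:, None]
drift_track = 0.01 * t * np.array([1.0, 0.5]) + 0.005 * rng.normal(size=(200, 2))
print(classify_epoch(drift_track, dt=0.05))   # -> ('directed', alpha near 2)
```

    Running such a classifier over sliding windows of a single-vesicle track yields the sequences of transient states the abstract describes, and comparing the occupancy of the "directed" state before and after stimulation is one way to quantify the reported activity-dependent mobility reduction.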

    Explaining the Ontological Emergence of Consciousness

    Ontological emergentists about consciousness maintain that phenomenal properties are ontologically fundamental properties that are nonetheless non-basic: they emerge from reality only once the ultimate material constituents of reality (the "UPCs") are suitably arranged. Ontological emergentism has been challenged on the grounds that it is insufficiently explanatory. In this essay, I develop the version of ontological emergentism I take to be the most explanatorily promising, the causal theory of ontological emergence, in light of four challenges: the Collaboration Problem (how do UPCs jointly manifest their collective consciousness-generating power?); the Threshold Problem (under what circumstances do UPCs jointly manifest their collective consciousness-generating power?); the Subject Problem (which object is the bearer of emergent phenomenal states?); and the Specificity Problem (what determines which specific phenomenal state is generated?). In response to these challenges, I arrive at the following picture of ontological emergence. When UPCs that are parts of a suitably complex sensorimotor system become entangled, they jointly manifest a subject-forming power (where subjects are deeply unified composites of the UPCs responsible for generating them). The emergent subjects thereby formed exhibit a novel causal power: the power to generate phenomenal states, which they themselves instantiate: states that "interpret" what is going on in the brain.

    Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction

    A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend to a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception.
    Office of Naval Research (N00014-920J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F4620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
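    The model's central idea, unambiguous feature-tracking signals propagating through long-range filters and competition to capture ambiguous aperture signals, can be caricatured in a few lines. The sketch below is not the published network: the Gaussian kernel, the mixing rate, and the quadratic competition step are illustrative stand-ins for the model's long-range grouping and competitive mechanisms.

```python
import numpy as np

def propagate_motion(local_votes, n_iters=30, spread=2.0, mix=0.5):
    """Toy "motion capture": unambiguous direction evidence at
    feature-tracking points spreads through a long-range Gaussian filter,
    and a quadratic competition sharpens the winning direction at each
    position. All parameters are illustrative, not the model's."""
    n, _ = local_votes.shape
    pos = np.arange(n)
    kernel = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / spread) ** 2)
    kernel /= kernel.sum(axis=1, keepdims=True)   # normalized spatial pooling
    act = local_votes.astype(float).copy()
    for _ in range(n_iters):
        act = (1 - mix) * act + mix * (kernel @ act)  # long-range grouping
        act = act ** 2                                # competitive sharpening
        act /= act.sum(axis=1, keepdims=True)         # self-normalization
    return act.argmax(axis=1)

# A moving "line" sampled at 11 positions: the endpoints (feature-tracking
# points) signal direction 0 unambiguously; interior apertures see equal
# evidence for directions 0 and 1.
votes = np.full((11, 2), 0.5)
votes[0] = votes[-1] = [1.0, 0.0]
print(propagate_motion(votes))   # endpoints capture the interior: all zeros
```

    After a few iterations the endpoint signals dominate every interior position, which is the qualitative signature of the aperture-problem solution the abstract describes: unambiguous directions win without ever computing intersections of constraints.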

    The role of the posterior parietal cortex in cognitive-motor integration

    "When interacting with an object within the environment, one must combine visual information with the felt limb position (i.e. proprioception) in order compute an appropriate coordinated muscle plan for accurate motor control. Amongst the vast reciprocally connected parieto-frontal connections responsible for guiding a limb throughout space, the posterior parietal cortex (PPC) remains a front-runner as a crucial node within this network. Our brain is primed to reach directly towards a viewed object, a situation that has been termed ""standard"". Such direct eye-hand coordination is common across species and is crucial for basic survival. Humans, however, have developed the capacity for tool-use and thus have learned to interact indirectly with an object. In such ""non-standard"" situations, the directions of gaze and arm movement are spatially decoupled and rely on both the implementation of a cognitive rule and online feedback of the decoupled limb. The studies included within this dissertation were designed to further characterize the role of the PPC in different types of visually-guided reaching which require one to think and to act simultaneously (i.e. cognitive-motor integration). To address the relative contribution of different cortical networks responsible for cognitive-motor integration, we tested three patients with optic ataxia (OA; two unilateral - first study, and one bilateral -second study) as well as healthy participants during a cognitively-demanding dual task (third study) on a series of visually-guided reaching tasks each requiring a relative weighting between explicit cognitive control and implicit online control of the spatially decoupled limb. We found that the eye and hand movement performance during decoupled reaching was the most compromised in OA during situations relying on sensorimotor recalibration, and the most compromised in healthy participants during a dual task relying on strategic control. Taken together, these data presented in this dissertation provide further evidence for the existence of alternate task-dependent neural pathways for cognitive-motor integration.

    Agnosic vision is like peripheral vision, which is limited by crowding

    Visual agnosia is a neuropsychological impairment of visual object recognition despite near-normal acuity and visual fields. A century of research has provided only a rudimentary account of the functional damage underlying this deficit. We find that the object-recognition ability of agnosic patients viewing an object directly is like that of normally-sighted observers viewing it indirectly, with peripheral vision. Thus, agnosic vision is like peripheral vision. We obtained 14 visual-object-recognition tests that are commonly used for diagnosis of visual agnosia. Our "standard" normal observer took these tests at various eccentricities in his periphery. Analyzing the published data of 32 apperceptive agnosia patients and a group of 14 posterior cortical atrophy (PCA) patients on these tests, we find that each patient's pattern of object-recognition deficits is well characterized by one number: the equivalent eccentricity at which our standard observer's peripheral vision is like the central vision of the agnosic patient. In other words, each agnosic patient's equivalent eccentricity is conserved across tests. Across patients, equivalent eccentricity ranges from 4 to 40 deg, which rates the severity of the visual deficit. In normal peripheral vision, the size required to perceive a simple image (e.g., an isolated letter) is limited by acuity, and that for a complex image (e.g., a face or a word) is limited by crowding. In crowding, adjacent simple objects appear unrecognizably jumbled unless their spacing exceeds the crowding distance, which grows linearly with eccentricity. Besides conservation of equivalent eccentricity across object-recognition tests, we also find conservation, from eccentricity to agnosia, of the relative susceptibility of recognition across ten visual tests. These findings show that agnosic vision is like eccentric vision. Whence crowding? Peripheral vision, strabismic amblyopia, and possibly apperceptive agnosia are all limited by crowding, making it urgent to know what drives crowding. Acuity does not (Song et al., 2014), but neural density might: neurons per deg² in the crowding-relevant cortical area.
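    The quantitative backbone of this account is the linear growth of crowding distance with eccentricity (Bouma's law, with a proportionality constant conventionally around 0.5). The sketch below shows how that law turns a patient's deficit into a single "equivalent eccentricity" number; the Bouma factor and the example values here are ballpark assumptions, not the paper's fitted parameters.

```python
def crowding_distance(eccentricity_deg, bouma=0.5):
    """Bouma's law: the critical spacing below which flankers crowd a
    target grows roughly linearly with eccentricity. The factor ~0.5 is
    a conventional ballpark, not the paper's fitted value."""
    return bouma * eccentricity_deg

def recognizable(spacing_deg, eccentricity_deg, bouma=0.5):
    """A complex object (face, word) is recognizable only if its parts
    are spaced wider than the crowding distance at that eccentricity."""
    return spacing_deg > crowding_distance(eccentricity_deg, bouma)

# The paper's "equivalent eccentricity" idea: an agnosic patient's central
# vision behaves like a normal observer's vision at E deg in the periphery.
# A hypothetical patient with E = 10 deg fails on parts spaced 2 deg apart,
# while one with E = 3 deg succeeds on the same stimulus:
print(recognizable(spacing_deg=2.0, eccentricity_deg=10.0))  # False
print(recognizable(spacing_deg=2.0, eccentricity_deg=3.0))   # True
```

    Because the same linear law applies to every test, a single equivalent eccentricity predicts a patient's whole pattern of successes and failures, which is the conservation-across-tests result the abstract reports.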

    Annotated Bibliography: Anticipation
