
    Neural Dynamics of Motion Processing and Speed Discrimination

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different sizes are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition, and are proposed to occur in the V1→MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density, and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.

    Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-94-1-0597); Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-90-00530)
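
The size-covarying-threshold mechanism summarized in this abstract can be gestured at in a few lines. The response function and constants below are illustrative assumptions, not the model's actual equations:

```python
import numpy as np

# Illustrative sketch of the mechanism described above (the response
# function and constants are assumptions, not the model's equations):
# short-range filters of several sizes respond to a moving stimulus,
# output thresholds covary with filter size, and competition normalizes
# the result into a distributed population code of speed.
def speed_population(speed, filter_sizes):
    # Transient responses: larger filters respond best to faster motion.
    responses = speed * np.exp(-speed / filter_sizes)
    thresholds = 0.1 * filter_sizes            # threshold covaries with size
    activations = np.maximum(responses - thresholds, 0.0)
    total = activations.sum()                  # competitive normalization
    return activations / total if total > 0 else activations

sizes = np.array([1.0, 2.0, 4.0, 8.0])
slow = speed_population(0.5, sizes)   # population peak at the smallest filter
fast = speed_population(8.0, sizes)   # peak shifts toward the largest filter
```

The size-speed correlation appears because faster stimuli drive the larger filters above their (higher) thresholds, shifting the peak of the normalized population code.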

    Precis of neuroconstructivism: how the brain constructs cognition

    Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to proactively specified physical and social environments.

    It wasn't me! Motor activation from irrelevant spatial information in the absence of a response

    Embodied cognition postulates that perceptual and motor processes serve higher-order cognitive faculties like language. A major challenge for embodied cognition concerns the grounding of abstract concepts. Here we zoom in on abstract spatial concepts and ask to what extent the sensorimotor system is involved in processing them. Most of the empirical support in favor of an embodied perspective on (abstract) spatial information has derived from so-called compatibility effects, in which a task-irrelevant feature either facilitates (on compatible trials) or hinders (on incompatible trials) responding to the task-relevant feature. This type of effect has been interpreted in terms of (task-irrelevant) feature-induced response activation. The problem with such an approach is that incompatible features generate an array of task-relevant and task-irrelevant activations [e.g., in primary motor cortex (M1)], and lateral hemispheric interactions make it difficult to assign credit to the task-irrelevant feature per se in driving these activations. Here, we aim to obtain a cleaner indication of response activation on the basis of abstract spatial information. We employed transcranial magnetic stimulation (TMS) to probe response activation of effectors in response to semantic, task-irrelevant stimuli (i.e., the words left and right) that did not require an overt response. Results revealed larger motor evoked potentials (MEPs) for the right (left) index finger when the word right (left) was presented. Our findings provide support for the grounding of abstract spatial concepts in the sensorimotor system.

    Developmental disorders of vision

    This review of developmental disorders of vision focuses on a few of the many disorders that disrupt visual development. Given the extent of the visual system in the primate brain and the complexity of visual development, however, there are likely hundreds or thousands of distinct disorders affecting high-level vision. The rapid progress seen in developmental dyslexia and Williams syndrome demonstrates the possibilities and difficulties inherent in researching such disorders, and the authors hope that similar progress will be made for congenital prosopagnosia and other disorders in the near future.

    Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction

    A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points, and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. 
Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception.

    Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
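
The propagation-and-capture idea in this abstract — unambiguous signals at feature tracking points spreading to capture ambiguous interior points — can be caricatured as an iterative spread with clamped line ends. This is a toy sketch under assumed dynamics, not the model's circuitry:

```python
import numpy as np

# Toy sketch of motion capture (an illustrative caricature, not the
# model's circuitry): unambiguous direction signals at feature-tracking
# points (the line ends) spread through a long-range filter and capture
# the ambiguous interior points of a moving line.
def motion_capture(n_points, feature_signal=1.0, n_iter=200):
    field = np.zeros(n_points)               # ambiguous interior starts at 0
    field[0] = field[-1] = feature_signal    # clamp feature-tracking points
    for _ in range(n_iter):
        spread = 0.25 * (np.roll(field, 1) + np.roll(field, -1)) + 0.5 * field
        field = np.maximum(field, spread)    # stronger signal captures weaker
        field[0] = field[-1] = feature_signal
    return field

field = motion_capture(11)
# The interior inherits the direction signal computed at the line ends.
```

The clamped endpoints play the role of feature tracking points; repeated long-range averaging stands in for the grouping feedback that selects and propagates the correct direction.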

    Inside the brain of an elite athlete: The neural processes that support high achievement in sports

    Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance

    Brain-inspired conscious computing architecture

    What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon the physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious are given in the form of a specific architecture named the articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human experience.

    A feedback model of perceptual learning and categorisation

    Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
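
As a rough illustration of how feedback could bias categorisation during learning (a toy sketch under assumed mechanics, not the article's model): a top-down expectation added to the category layer can change which unit wins the competition, and hence which weights a Hebbian rule updates.

```python
import numpy as np

# Toy sketch (assumed mechanics, not the article's model): top-down
# feedback biases the category competition, and because only the winning
# category's weights are updated, feedback also biases what is learned.
rng = np.random.default_rng(1)
weights = 0.1 * rng.random((2, 4))    # two category units, four input features

def categorise(stimulus, feedback=np.zeros(2), lr=0.5):
    activation = weights @ stimulus + feedback    # bottom-up plus top-down
    winner = int(np.argmax(activation))
    # Hebbian-style update for the winning category only.
    weights[winner] += lr * (stimulus - weights[winner])
    return winner

stimulus = np.array([1.0, 1.0, 0.0, 0.0])
bottom_up = categorise(stimulus)                        # weights alone decide
top_down = categorise(stimulus, feedback=np.array([0.0, 5.0]))  # expectation
```

With a strong enough expectation, the biased unit wins regardless of the bottom-up match, so subsequent learning is funneled into that unit's weights.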

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed duration and reaction time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.

    National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
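
The recurrent competitive, self-normalizing choice dynamic described above can be gestured at with a two-unit shunting network. The equations and parameters below are illustrative assumptions, not those of the published model:

```python
import numpy as np

# Two-unit shunting sketch of a self-normalizing perceptual decision
# (illustrative equations and parameters, not the published model):
# each unit accumulates noisy motion evidence, excites itself, and
# inhibits its rival; the first to cross threshold is the choice.
def decide(coherence, threshold=0.5, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                           # two LIP-like decision units
    inputs = np.array([0.5 + coherence, 0.5 - coherence])
    for step in range(1, 10_001):
        drive = inputs + 0.02 * rng.standard_normal(2) + x  # evidence + self-excitation
        # Shunting terms bound each activity between 0 and 1.
        dx = -x + (1.0 - x) * drive - x * x[::-1]
        x = np.clip(x + dt * dx, 0.0, 1.0)
        if x.max() > threshold:
            return int(np.argmax(x)), step * dt   # choice and "reaction time"
    return int(np.argmax(x)), 10_000 * dt

choice, rt = decide(coherence=0.3)
# Stronger coherence drives faster threshold crossings on average.
```

The shunting terms keep each activity bounded and make the competition self-normalizing, which is the property the abstract contrasts with purely statistical (Bayesian) descriptions of choice.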