
    A ratio model of perceived speed in the human visual system

    The perceived speed of moving images changes over time. Prolonged viewing of a pattern (adaptation) leads to an exponential decrease in its perceived speed. Similarly, responses of neurones tuned to motion reduce exponentially over time. It is tempting to link these phenomena. However, under certain conditions, perceived speed increases after adaptation, and the time course of these perceptual effects varies widely. We propose a model that comprises two temporally tuned mechanisms whose sensitivities reduce exponentially over time. Perceived speed is taken as the ratio of these filters' outputs. The model captures increases and decreases in perceived speed following adaptation and describes our data well with just four free parameters. Whilst the model captures perceptual time courses that vary widely, parameter estimates for the time constants of the underlying filters are in good agreement with estimates of the time course of adaptation of direction-selective neurones in the mammalian visual system.
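    A minimal numerical sketch of the ratio read-out is given below (illustrative Python, not the authors' published code; the gains, adaptation depths, and time constants are made-up values rather than fitted parameters). Two channels lose sensitivity exponentially during adaptation, and perceived speed is read out as the ratio of their responses, giving four free parameters in total; whether the ratio falls or rises over time depends on which channel adapts more deeply and more quickly.

```python
import numpy as np

def perceived_speed(t, s0=1.0, k_fast=0.8, k_slow=0.4,
                    tau_fast=5.0, tau_slow=20.0):
    """Ratio-model sketch of perceived speed during adaptation.

    Two temporally tuned channels lose sensitivity exponentially, each with
    its own adaptation depth k and time constant tau (the model's four free
    parameters); perceived speed is the ratio of their outputs. All values
    here are illustrative, not fitted.
    """
    fast = s0 * (1.0 - k_fast * (1.0 - np.exp(-t / tau_fast)))
    slow = s0 * (1.0 - k_slow * (1.0 - np.exp(-t / tau_slow)))
    return fast / slow

# With these parameters the ratio falls during adaptation; making the
# denominator channel adapt more deeply (k_slow > k_fast) makes it rise.
for t in (0.0, 2.0, 10.0, 60.0):
    print(f"t = {t:5.1f} s   relative perceived speed = {perceived_speed(t):.3f}")
```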

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between the Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction-time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random-dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement. National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
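    To make the choice dynamics concrete, the toy below simulates a two-unit shunting recurrent competitive field in the spirit of the self-normalizing competition described above (an illustrative sketch, not the published multi-area model; the unit count, gains, noise level, and threshold are all assumed values). Noisy motion evidence drives two LIP-like populations that excite themselves and inhibit each other; the shunting terms keep activity bounded, and the first population to cross a threshold determines the choice and the reaction time.

```python
import numpy as np

rng = np.random.default_rng(0)

def shunting_decision(coherence=0.1, dt=0.001, t_max=2.0,
                      tau=0.1, threshold=0.6, noise=0.3):
    """Toy two-unit shunting competitive field for a 2-alternative motion decision.

    x[0] and x[1] stand in for LIP-like populations favoring the two motion
    directions. Each receives noisy bottom-up evidence, excites itself with a
    faster-than-linear signal, and inhibits its rival; the shunting terms
    (B - x) and -x bound activity in [0, B], giving self-normalizing,
    winner-take-all choice behavior. The first unit to cross `threshold`
    yields the choice and reaction time. All parameters are illustrative.
    """
    A, B = 1.0, 1.0                                  # passive decay, activity ceiling
    f = lambda v: 4.0 * np.maximum(v, 0.0) ** 2      # faster-than-linear feedback signal
    x = np.zeros(2)
    drive = np.array([0.5 + coherence, 0.5 - coherence])
    for step in range(int(t_max / dt)):
        inp = np.maximum(drive + noise * rng.standard_normal(2), 0.0)
        fx = f(x)
        dx = -A * x + (B - x) * (fx + inp) - x * fx[::-1]   # on-center, off-surround
        x = np.clip(x + (dt / tau) * dx, 0.0, B)
        if x.max() >= threshold:
            return int(np.argmax(x)), (step + 1) * dt
    return int(np.argmax(x)), t_max                  # no crossing: forced choice at timeout

choice, rt = shunting_decision(coherence=0.1)
print(f"chosen direction: {choice}, reaction time: {rt * 1000:.0f} ms")
```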

    Recommendation domains for pond aquaculture

    This publication introduces the methods and results of a research project that has developed a set of decision-support tools to identify places and sets of conditions for which a particular target aquaculture technology is considered feasible and therefore good to promote. The tools also identify the nature of constraints to aquaculture development and thereby shed light on appropriate interventions to realize the potential of the target areas. The project results will be useful for policy planners and decision makers in national, regional and local governments and development funding agencies, aquaculture extension workers in regional and local governments, and researchers in aquaculture systems and rural livelihoods. (Document contains 40 pages.)

    How Haptic Size Sensations Improve Distance Perception

    Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over the experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are that 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, and 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information. National Institutes of Health (U.S.) (NIH grant R01EY015261); University of Minnesota (UMN Graduate School Fellowship); National Science Foundation (U.S.) (Graduate Research Fellowship); University of Minnesota (UMN Doctoral Dissertation Fellowship); National Institutes of Health (U.S.) (NIH NRSA grant F32EY019228-02); Ruth L. Kirschstein National Research Service Award
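    The size/distance confound and the posterior-sampling read-out can be illustrated with a small grid-based sketch (an assumed toy, not the paper's fitted model: the units, noise levels, and flat priors over fixed ranges below merely stand in for the environmentally accurate priors the paper argues for). A noisy visual angle constrains only the ratio of size to distance; adding a noisy haptic size measurement disambiguates distance, and a single draw from the marginal posterior serves as the trial-by-trial judgment.

```python
import numpy as np

rng = np.random.default_rng(1)

def distance_judgment(theta_obs, size_haptic,
                      sigma_theta=0.005, sigma_haptic=0.01,
                      dist_range=(1.0, 4.0), size_range=(0.02, 0.10)):
    """Toy Bayesian distance estimate combining a visual-angle cue with a
    haptic size cue, read out by posterior sampling.

    The retinal angle confounds size and distance (theta ~ size / distance),
    so a haptic measurement of object size disambiguates distance. Flat grid
    priors, Gaussian noise, and all numerical values are illustrative
    assumptions, not the paper's fitted model.
    """
    dist = np.linspace(*dist_range, 200)          # candidate distances (m)
    size = np.linspace(*size_range, 200)          # candidate object sizes (m)
    D, S = np.meshgrid(dist, size, indexing="ij")

    theta_pred = S / D                            # predicted visual angle (rad)
    log_post = (-0.5 * ((theta_obs - theta_pred) / sigma_theta) ** 2
                - 0.5 * ((size_haptic - S) / sigma_haptic) ** 2)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    # Marginalize over size, then draw a single posterior *sample* (rather
    # than the posterior mean or mode) as the trial-by-trial judgment.
    p_dist = post.sum(axis=1)
    p_dist /= p_dist.sum()
    return rng.choice(dist, p=p_dist)

# Example: a 6 cm object at about 2 m subtends roughly 0.03 rad.
print(f"sampled distance judgment: {distance_judgment(0.03, 0.06):.2f} m")
```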

    The scene superiority effect: object recognition in the context of natural scenes

    Four experiments investigate the effect of background scene semantics on object recognition. Although past research has found that semantically consistent scene backgrounds can facilitate recognition of a target object, these claims have been challenged as reflecting post-perceptual response bias rather than the perceptual processes of object recognition itself. The current study takes advantage of a paradigm from linguistic processing known as the Word Superiority Effect: humans can better discriminate letters (e.g., D vs. K) in the context of a word (WORD vs. WORK) than in a non-word context (e.g., WROD vs. WROK), even when the context is non-predictive of the target identity. We apply this paradigm to objects in natural scenes, having subjects discriminate between objects presented in scene contexts. Because the target objects were equally semantically consistent with any given scene and could appear in either semantically consistent or inconsistent contexts with equal probability, response bias could not lead to an apparent improvement in object recognition. The current study found a benefit to object recognition from semantically consistent backgrounds, and the effect appeared to be modulated by awareness of the background scene semantics.
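    The bias argument lends itself to a short simulation. Under a simplified reading of the design, in which both response alternatives share the background's consistency status on every trial, a bias toward scene-consistent responses adds the same amount of evidence to target and foil and therefore cancels, so only a genuine change in sensitivity can change accuracy. The sketch below (trial structure and numbers are illustrative assumptions, not the experiment's actual stimuli or data) makes that cancellation explicit.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_trials(n=100_000, d_prime=1.0, consistency_bias=0.3):
    """Illustrative check that response bias alone cannot mimic a
    scene-consistency benefit in a design where both response alternatives
    share the background's consistency status on every trial (a simplified,
    assumed version of the paradigm, not the paper's actual experiment)."""
    accuracy = {}
    for context in ("consistent", "inconsistent"):
        # Internal evidence for the correct alternative and for the foil.
        target_evidence = d_prime + rng.standard_normal(n)
        foil_evidence = rng.standard_normal(n)
        # A bias toward scene-consistent responses boosts *both* alternatives
        # equally in this design, so it drops out of the comparison.
        bias = consistency_bias if context == "consistent" else 0.0
        accuracy[context] = float(np.mean(target_evidence + bias > foil_evidence + bias))
    return accuracy

print(run_trials())   # accuracies match: bias alone produces no consistency benefit
```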