
    The fast contribution of visual-proprioceptive discrepancy to reach aftereffects and proprioceptive recalibration

    Adapting reaches to altered visual feedback leads not only to motor changes, but also to shifts in perceived hand location, known as “proprioceptive recalibration”. These changes are robust to many task variations and can occur quite rapidly. For instance, our previous study found that both motor and sensory shifts arise in as few as 6 rotated-cursor training trials. The aim of this study is to investigate one of the training signals that contribute to these rapid sensory and motor changes. We do this by removing the visuomotor error signals associated with classic visuomotor rotation training and providing only experience with a visual-proprioceptive discrepancy during training. While a force channel constrains reach direction 30° away from the target, the cursor representing the hand unerringly moves straight to the target. The resulting visual-proprioceptive discrepancy drives significant and rapid changes in no-cursor reaches and felt hand position, again within only 6 training trials. The extent of the sensory change is unexpectedly larger following the visual-proprioceptive discrepancy training. Not surprisingly, the size of the reach aftereffects is substantially smaller than following classic visuomotor rotation training. However, the time course by which both changes emerge is similar in the two training types. These results suggest that mere exposure to a discrepancy between felt and seen hand location is a sufficient training signal to drive robust motor and sensory plasticity.

    Visual attention in the real world

    Humans typically direct their gaze and attention at locations important for the tasks they are engaged in. By measuring the direction of gaze, the relative importance of each location can be estimated, which can reveal how cognitive processes choose where gaze is to be directed. For decades, this has been done in laboratory setups, which have the advantage of being well controlled. Here, visual attention is studied in more life-like situations, which allows testing the ecological validity of laboratory results and allows the use of real-life setups that are hard to mimic in a laboratory. All four studies in this thesis contribute to our understanding of visual attention and perception in more complex situations than are found in traditional laboratory experiments. Bottom-up models of attention use the visual input to predict attention or even the direction of gaze. In such models, the input image is first analyzed for each of several features. In the classic Saliency Map model, these features are color contrast, luminance contrast and orientation contrast. The “interestingness” of each location in the image is represented in a ‘conspicuity map’, one for each feature. The Saliency Map model then combines these conspicuity maps by linear addition, and this additivity has recently been challenged. The alternative is to use the maxima across all conspicuity maps. In the first study, the features color contrast and luminance contrast were manipulated in photographs of natural scenes to test which of these mechanisms is the best predictor of human behavior. It was shown that linear addition, as in the original model, matches human behavior best. As all the assumptions of the Saliency Map model on the processes preceding the linear addition of the conspicuity maps are based on physiological research, this result constrains future models in their mechanistic assumptions.
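The two combination rules compared in the first study can be sketched in a few lines. The maps below are random stand-ins for real conspicuity maps; all names and values are illustrative, not taken from the thesis:

```python
import numpy as np

# Hypothetical conspicuity maps for one image: normalized local
# feature contrasts in [0, 1], one map per feature channel.
rng = np.random.default_rng(0)
color_map = rng.random((32, 32))
luminance_map = rng.random((32, 32))
orientation_map = rng.random((32, 32))

maps = np.stack([color_map, luminance_map, orientation_map])

# Classic Saliency Map model: linear addition of the conspicuity maps.
saliency_linear = maps.sum(axis=0)

# Challenged alternative: the maximum across maps at each location.
saliency_max = maps.max(axis=0)

# The predicted fixation target is the location of the saliency peak.
peak_linear = np.unravel_index(saliency_linear.argmax(), saliency_linear.shape)
peak_max = np.unravel_index(saliency_max.argmax(), saliency_max.shape)
```

Both rules produce a single saliency map; they differ in whether a location must be conspicuous in several channels (addition rewards this) or in just one (maximum suffices), which is what the gaze data adjudicates.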
If models of visual attention are to have ecological validity, comparing visual attention in laboratory and real-world conditions is necessary, and this is done in the second study. In the first condition, eye movements and head-centered, first-person perspective movies were recorded while participants explored 15 real-world environments (“free exploration”). Clips from these movies were shown to participants in two laboratory tasks. First, the movies were replayed as they were recorded (“video replay”), and second, a shuffled selection of frames was shown for 1 second each (“1s frame replay”). Eye-movement recordings from all three conditions revealed that, in comparison to 1s frame replay, the video replay condition was qualitatively more similar to the free exploration condition with respect to the distribution of gaze and the relationship between gaze and model saliency, and was quantitatively better able to predict free exploration gaze. Furthermore, the onset of a new frame in 1s frame replay evoked a reorientation of gaze towards the center. That is, the event of presenting a stimulus in a laboratory setup affects attention in a way unlikely to occur in real life. In conclusion, video replay is a better model for real-world visual input. The hypothesis that walking on more irregular terrain requires visual attention to be directed more at the path was tested on a local street (“Hirschberg”) in the third study. Participants walked on both sides of this inclined street: a cobbled road and the immediately adjacent, irregular steps. The environment and instructions were kept constant. Gaze was directed at the path more when participants walked on the steps as compared to the road. This was accomplished by pointing both the head and the eyes lower on the steps than on the road, while only eye-in-head orientation was spread out more along the vertical on the steps, indicating more or larger eye movements on the more irregular steps.
These results confirm earlier findings that eye and head movements play distinct roles in directing gaze in real-world situations. Furthermore, they show that implicit tasks (not falling, in this case) affect visual attention as much as explicit tasks do. The last study asks whether actions affect perception. An ambiguous stimulus that is alternately perceived as rotating clockwise or counterclockwise (the ‘percept’) was used. When participants had to rotate a manipulandum continuously in a pre-defined direction – either clockwise or counterclockwise – and reported their concurrent percept with a keyboard, percepts were not affected by movements. If participants had to use the manipulandum to indicate their percept – by rotating either congruently or incongruently with the percept – the movements did affect perception. This shows that ambiguity in visual input is resolved by relying on motor signals, but only when they are relevant for the task at hand. Whether by using natural stimuli, by comparing behavior in the laboratory with behavior in the real world, by performing an experiment on the street, or by testing how two diverse but everyday sources of information are integrated, the faculty of vision was studied in more life-like situations. The validity of some laboratory work has been examined and confirmed, and some first steps in doing experiments in real-world situations have been made. Both seem to be promising approaches for future research.

    Pupil Dilation Signals Surprise: Evidence for Noradrenaline’s Role in Decision Making

    Our decisions are guided by the rewards we expect. These expectations are often based on incomplete knowledge and are thus subject to uncertainty. While the neurophysiology of expected rewards is well understood, less is known about the physiology of uncertainty. We hypothesize that uncertainty – or, more specifically, errors in judging uncertainty – is reflected in pupil dilation, a marker that has frequently been associated with decision making, but so far has remained largely elusive to quantitative models. To test this hypothesis, we measure pupil dilation while observers perform an auditory gambling task. This task dissociates two key decision variables – uncertainty and reward – and their errors from each other and from the act of the decision itself. We first demonstrate that the pupil does not signal expected reward or uncertainty per se, but instead signals surprise, that is, errors in judging uncertainty. While this general finding is independent of the precise quantification of these decision variables, we then analyze this effect with respect to a specific mathematical model of uncertainty and surprise, namely risk and risk prediction error. Using this quantification, we find that pupil dilation and risk prediction error are indeed highly correlated. Under the assumption of a tight link between noradrenaline (NA) and pupil size under constant illumination, our data may be interpreted as empirical evidence for the hypothesis that NA plays a similar role for uncertainty as dopamine does for reward, namely the encoding of error signals.
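The risk and risk-prediction-error quantification mentioned above can be sketched for a simple two-outcome gamble. The payoffs and probabilities below are illustrative, not the study's actual auditory task:

```python
# Illustrative two-outcome gamble (not the paper's actual task).
outcomes = [1.0, 0.0]   # possible rewards
probs = [0.75, 0.25]    # their probabilities

expected_reward = sum(p * r for p, r in zip(probs, outcomes))

# Risk: the expected squared reward prediction error (reward variance).
risk = sum(p * (r - expected_reward) ** 2 for p, r in zip(probs, outcomes))

def risk_prediction_error(observed_reward):
    """Surprise signal: squared reward prediction error minus anticipated risk."""
    delta = observed_reward - expected_reward
    return delta ** 2 - risk
```

By construction, the risk prediction error averages to zero over outcomes when uncertainty is judged correctly, so systematic deviations from zero indicate mis-estimated uncertainty, i.e. surprise.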

    Gaze in Visual Search Is Guided More Efficiently by Positive Cues than by Negative Cues

    Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to different search behavior compared to positive cues. We asked observers to search for a target defined by a certain shape singleton (a broken line among solid lines). Each line was embedded in a colored disk. In "positive cue" blocks, participants were informed about the possible colors of the target item. In "negative cue" blocks, participants were informed about the colors of disks that could not contain the target. Search displays were designed such that, with both the positive and negative cues, the same number of items could potentially contain the broken line ("relevant items"). Thus, both cues were equally informative. We measured response times and eye movements. Participants exhibited longer response times when provided with negative cues compared to positive cues. Although negative cues did guide the eyes to relevant items, there were marked differences in eye movements. Negative cues resulted in smaller proportions of fixations on relevant items, longer fixation durations, and higher rates of fixations per item as compared to positive cues. The effectiveness of both cue types, as measured by fixations on relevant items, increased over the course of each search. In sum, a negative color cue can guide attention to relevant items, but it is less efficient than a positive cue of the same informational value.

    Faces in Places: Humans and Machines Make Similar Face Detection Errors

    The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock formation on Mars or the roast pattern on a slice of toast. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications (the “Viola-Jones” algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections (“real faces”), false positives (“illusory faces”) and correctly rejected locations (“non faces”). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increased overall performance, but did not change the pattern of results. When the eye movement was replaced by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems to be of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features like those encoded in the early visual system.
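The "simple features" at the heart of the Viola-Jones algorithm are Haar-like rectangle contrasts read off an integral image. A minimal sketch of that building block (not the full detector cascade; the feature layout here is an illustrative assumption):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: lets any rectangle sum be read off in O(1),
    the core efficiency trick behind Viola-Jones."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w], computed from the integral image ii."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: brightness of the lower half minus the
    upper half (roughly an eyes-vs-cheeks contrast in frontal faces)."""
    upper = rect_sum(ii, top, left, h // 2, w)
    lower = rect_sum(ii, top + h // 2, left, h // 2, w)
    return lower - upper
```

Any image patch whose rectangle contrasts happen to match the learned face layout will trigger a detection, which is exactly why arbitrary textures can yield "illusory faces".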

    Full nonperturbative QCD simulations with 2+1 flavors of improved staggered quarks

    Dramatic progress has been made over the last decade in the numerical study of quantum chromodynamics (QCD) through the use of improved formulations of QCD on the lattice (improved actions), the development of new algorithms, and the rapid increase in computing power available to lattice gauge theorists. In this article we describe simulations of full QCD using the improved staggered quark formalism, “asqtad” fermions. These simulations were carried out with two degenerate flavors of light quarks (up and down) and with one heavier flavor, the strange quark. Several light quark masses, down to about 3 times the physical light quark mass, and six lattice spacings have been used. These enable controlled continuum and chiral extrapolations of many low energy QCD observables. We review the improved staggered formalism, emphasizing both advantages and drawbacks. In particular, we review the procedure for removing unwanted staggered species in the continuum limit. We then describe the asqtad lattice ensembles created by the MILC Collaboration. All MILC lattice ensembles are publicly available, and they have been used extensively by a number of lattice gauge theory groups. We review physics results obtained with them, and discuss the impact of these results on phenomenology. Topics include the heavy quark potential, the spectrum of light hadrons, quark masses, decay constants of light and heavy-light pseudoscalar mesons, semileptonic form factors, nucleon structure, scattering lengths and more. We conclude with a brief look at highly promising future prospects. (157 pages; prepared for Reviews of Modern Physics.)

    Phase field modeling of electrochemistry I: Equilibrium

    A diffuse interface (phase field) model for an electrochemical system is developed. We describe the minimal set of components needed to model an electrochemical interface and present a variational derivation of the governing equations. With a simple set of assumptions – mass and volume constraints, Poisson's equation, ideal solution thermodynamics in the bulk, and a simple description of the competing energies in the interface – the model captures the charge separation associated with the equilibrium double layer at the electrochemical interface. The decay of the electrostatic potential in the electrolyte agrees with the classical Gouy-Chapman and Debye-Hückel theories. We calculate the surface energy, surface charge, and differential capacitance as functions of potential and find qualitative agreement between the model and existing theories and experiments. In particular, the differential capacitance curves exhibit complex shapes with multiple extrema, as exhibited in many electrochemical systems. (To be published in Phys. Rev. E.)
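The classical decay against which the model is checked can be stated compactly. In the linearized (Debye-Hückel) limit for a symmetric electrolyte, the potential falls off exponentially from the interface; the symbols below follow standard electrochemistry conventions, not necessarily the paper's own notation:

```latex
% Linearized Gouy-Chapman (Debye-Hückel) decay of the electrostatic
% potential at distance x from the electrode surface:
\phi(x) \approx \phi_0 \, e^{-x/\lambda_D},
\qquad
\lambda_D = \sqrt{\frac{\varepsilon k_B T}{2 n_0 z^2 e^2}}
% \lambda_D: Debye screening length; \varepsilon: permittivity of the
% electrolyte; n_0: bulk ion number density; z: ion valence;
% e: elementary charge; \phi_0: potential at the interface.
```

Recovering this exponential screening profile from the diffuse-interface equations is what ties the phase field model back to the classical double-layer theories.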