663 research outputs found
Gaze Distribution Analysis and Saliency Prediction Across Age Groups
Knowledge of the human visual system helps to develop better computational
models of visual attention. State-of-the-art models mimic the visual attention
system of young adults but largely ignore the variations that occur with age.
In this paper, we investigate how visual scene processing changes with age and
propose an age-adapted framework for developing a computational model that
predicts saliency across different age groups. Our analysis uncovers how the
explorativeness of an
observer varies with age, how well saliency maps of an age group agree with
fixation points of observers from the same or different age groups, and how age
influences the center bias. We analyzed the eye movement behavior of 82
observers belonging to four age groups while they explored visual scenes.
Explorativeness was quantified as the entropy of a saliency map, and the area
under the curve (AUC) metric was used to quantify both the agreement and the
center bias. These results were used to develop age-adapted saliency models.
Our results suggest that the proposed age-adapted saliency model outperforms
existing saliency models in predicting the regions of interest across age
groups.
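The abstract names two measures without giving implementation details. As a minimal sketch of how they are commonly computed (the exact formulation used by the authors is not stated, so treat this as an illustration): entropy of a saliency map normalized to a probability distribution, and a fixation-based AUC expressed as the pairwise rank statistic (fraction of fixated/non-fixated pixel pairs where the fixated pixel has higher saliency).

```python
import numpy as np

def saliency_entropy(smap):
    """Shannon entropy (bits) of a saliency map treated as a probability
    distribution; higher entropy suggests more spread-out, explorative gaze."""
    p = smap.ravel().astype(float)
    p = p / p.sum()
    p = p[p > 0]  # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

def fixation_auc(smap, fix_rows, fix_cols):
    """AUC agreement between a saliency map and fixated pixel locations,
    computed as the normalized pairwise comparison (Mann-Whitney) statistic."""
    s = smap.astype(float)
    fix_mask = np.zeros(s.shape, dtype=bool)
    fix_mask[fix_rows, fix_cols] = True
    pos = s[fix_mask][:, None]   # saliency at fixated pixels
    neg = s[~fix_mask][None, :]  # saliency everywhere else
    # fraction of (fixated, non-fixated) pairs ranked correctly; ties count 0.5
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())
```

A perfectly discriminative map scores an AUC of 1.0, a map unrelated to the fixations about 0.5; a uniform H-by-W map has the maximal entropy log2(H*W).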
A biologically inspired focus of attention model
With high-definition, high-resolution technology becoming ever more popular, the vast amount of input available to modern object recognition systems can become overwhelming. Given an image taken from a high-resolution digital camera, a target object may be very small in comparison to the entire image. Additionally, any non-target objects in the input are considered unnecessary data, or clutter. While many modern object recognition systems achieve over 90% accuracy on the recognition task, adding large amounts of clutter to an input quickly degrades both the speed and accuracy of many models. To reduce both the size and amount of clutter in an input, a biologically inspired focus of attention model is developed. Utilizing biologically inspired feature extraction techniques, a feature-based saliency model is built and used to simulate the psychological concept of a mental spotlight. The simulated mental spotlight searches through each frame of a video, focusing on small sub-regions of the larger input which are likely to contain important objects that need to be processed in further detail. Each of these interesting sub-regions can then be used as input by a modern object recognition system instead of raw camera data, increasing both the speed and accuracy of the recognition model.
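The abstract does not specify how the spotlight selects sub-regions; a common minimal scheme, sketched here under that assumption, is greedy winner-take-all selection over a saliency map with inhibition of return (zero out each attended window so the next-most-salient region wins the following pass):

```python
import numpy as np

def spotlight_regions(saliency, win=2, n_regions=2):
    """Greedy 'mental spotlight' sketch: repeatedly pick the win-by-win
    window with the highest total saliency, then suppress it
    (inhibition of return) so later passes attend elsewhere.
    Returns the top-left (row, col) corner of each attended window."""
    s = saliency.astype(float).copy()
    regions = []
    for _ in range(n_regions):
        best_score, best_rc = -np.inf, (0, 0)
        for r in range(s.shape[0] - win + 1):
            for c in range(s.shape[1] - win + 1):
                score = s[r:r + win, c:c + win].sum()
                if score > best_score:
                    best_score, best_rc = score, (r, c)
        r, c = best_rc
        regions.append((r, c))
        s[r:r + win, c:c + win] = 0.0  # suppress the attended region
    return regions
```

Each returned corner identifies a crop that could be handed to a downstream recognizer in place of the full frame, which is the speed/accuracy trade the abstract describes.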
What does the amygdala contribute to social cognition?
The amygdala has received intense recent attention from neuroscientists investigating its function at the molecular, cellular, systems, cognitive, and clinical levels. It clearly contributes to processing emotionally and socially relevant information, yet a unifying description and computational account have been lacking. The difficulty of tying together the various studies stems in part from the sheer diversity of approaches and species studied, in part from the amygdala's inherent heterogeneity in terms of its component nuclei, and in part because different investigators have simply been interested in different topics. Yet a synthesis now seems close at hand in combining new results from social neuroscience with data from neuroeconomics and reward learning. The amygdala processes a psychological stimulus dimension related to saliency or relevance; mechanisms have been identified to link it to processing unpredictability; and insights from reward learning have situated it within a network of structures that include the prefrontal cortex and the ventral striatum in processing the current value of stimuli. These aspects help to clarify the amygdala's contributions to recognizing emotion from faces, to social behavior toward conspecifics, and to reward learning and instrumental behavior.
Nociceptive-Evoked Potentials Are Sensitive to Behaviorally Relevant Stimulus Displacements in Egocentric Coordinates.
Feature selection has been extensively studied in the context of goal-directed behavior, where it is heavily driven by top-down factors. A more primitive version of this function is the detection of bottom-up changes in stimulus features in the environment. Indeed, the nervous system is tuned to detect fast-rising, intense stimuli that are likely to reflect threats, such as nociceptive somatosensory stimuli. These stimuli elicit large brain potentials maximal at the scalp vertex. When elicited by nociceptive laser stimuli, these responses are labeled laser-evoked potentials (LEPs). Although it has been shown that changes in stimulus modality and increases in stimulus intensity evoke large LEPs, it has yet to be determined whether stimulus displacements affect the amplitude of the main LEP waves (N1, N2, and P2). Here, in three experiments, we identified a set of rules that the human nervous system obeys to identify changes in the spatial location of a nociceptive stimulus. We showed that the N2 wave is sensitive to: (1) large displacements between consecutive stimuli in egocentric, but not somatotopic coordinates; and (2) displacements that entail a behaviorally relevant change in the stimulus location. These findings indicate that nociceptive-evoked vertex potentials are sensitive to behaviorally relevant changes in the location of a nociceptive stimulus with respect to the body, and that the hand is a particularly behaviorally important site
Do You See What Eyes See? Implementing Inattentional Blindness
This paper presents a computational model of visual attention incorporating a cognitive imperfection known as inattentional blindness. We begin by presenting four factors that determine successful attention allocation: conspicuity, mental workload, expectation, and capacity. We then propose a framework to study the effects of those factors on an unexpected object and conduct an experiment to measure the corresponding subjective awareness level. Finally, we discuss the application of a visual attention model for conversational agents.
Magnocellular bias in exogenous attention to biologically salient stimuli as revealed by manipulating their luminosity and color
This is the author's final version of the article, which has been accepted for publication in Journal of Cognitive Neuroscience. Exogenous attention is a set of mechanisms that allow us to detect and reorient toward salient events, whether appetitive or aversive, that appear outside the current focus of attention. The nature of these mechanisms, particularly the involvement of the parvocellular and magnocellular visual processing systems, was explored. Thirty-four participants performed a demanding digit categorization task while salient (spiders, S) and neutral (wheels, W) stimuli were presented as distractors under two figure–ground formats: heterochromatic/isoluminant (exclusively processed by the parvocellular system, Par trials) and isochromatic/heteroluminant (preferentially processed by the magnocellular system, Mag trials). This resulted in four conditions: SPar, SMag, WPar, and WMag. Behavioral (RTs and error rates in the task) and electrophysiological (ERPs) indices of exogenous attention were analyzed. Behavior showed greater attentional capture by SMag than by SPar distractors and enhanced modulation of SMag capture as fear of spiders reported by participants increased. ERPs reflected a sequence from magnocellular-dominant processing (P1p, ≃120 msec) to combined magnocellular and parvocellular processing (N2p and P2a, ≃200 msec). Importantly, amplitudes in one N2p subcomponent were greater to SMag than to SPar and WMag distractors, indicating greater magnocellular sensitivity to saliency. Taken together, the results support a magnocellular bias in exogenous attention toward distractors of any nature during initial processing, a bias that remains in later stages when biologically salient distractors are present.
A reinforcement-learning model of top-down attention based on a potential-action map.
No abstract available.
Cognitive Control of Escape Behaviour
When faced with potential predators, animals instinctively decide whether there is a threat they should escape from, and also when, how, and where to take evasive action. While escape is often viewed in classical ethology as an action that is released upon presentation of specific stimuli, successful and adaptive escape behaviour relies on integrating information from sensory systems, stored knowledge, and internal states. From a neuroscience perspective, escape is an incredibly rich model that provides opportunities for investigating processes such as perceptual and value-based decision-making, or action selection, in an ethological setting. We review recent research from laboratory and field studies that explore, at the behavioural and mechanistic levels, how elements from multiple information streams are integrated to generate flexible escape behaviour.
- …