38 research outputs found

    New Young Star Candidates in BRC 27 and BRC 34

    We used archival Spitzer Space Telescope mid-infrared data to search for young stellar objects (YSOs) in the immediate vicinity of two bright-rimmed clouds, BRC 27 (part of CMa R1) and BRC 34 (part of the IC 1396 complex). These regions both appear to be actively forming young stars, perhaps triggered by the proximate OB stars. In BRC 27, we find clear infrared excesses around 22 of the 26 YSOs or YSO candidates identified in the literature, and identify 16 new YSO candidates that appear to have IR excesses. In BRC 34, the one literature-identified YSO has an IR excess, and we suggest 13 new YSO candidates in this region, including a new Class I object. Considering the entire ensemble, both BRCs are likely of comparable ages, within the uncertainties of small number statistics and without spectroscopy to confirm or refute the YSO candidates. Similarly, no clear conclusions can yet be drawn about any possible age gradients that may be present across the BRCs. Comment: 54 pages, 19 figures, accepted by A

    Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task

    Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation’s duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general

    Does it look safe? An eye tracking study into the visual aspects of fear of crime

    Studies of fear of crime often focus on demographic and social factors, but these can be difficult to change. Studies of visual aspects have suggested that features reflecting incivilities, such as litter, graffiti, and vandalism, increase fear of crime, but methods often rely on participants actively mentioning such aspects, and more subtle, less conscious aspects may be overlooked. To address these concerns, the present study examined people’s eye movements while they judged scenes for safety. Forty current and former university students were asked to rate images of day-time and night-time scenes of Lincoln, UK (where they studied) and Egham, UK (an unfamiliar location) for safety, maintenance, and familiarity, while their eye movements were recorded. Another twenty-five observers not from Lincoln or Egham rated the same images in an internet survey. Ratings showed a strong association between safety and maintenance, and lower safety ratings for night-time scenes, for both groups, in agreement with earlier findings. Eye movements of the Lincoln participants showed increased dwell times on buildings, houses, and vehicles during safety judgments, and increased dwell times on streets, pavements, and markers of incivilities during maintenance judgments. Results confirm that maintenance plays an important role in perceptions of safety, but eye movements suggest that observers also look for indicators of current or recent presence of people

    Defining eye-fixation sequences across individuals and tasks: the Binocular-Individual Threshold (BIT) algorithm

    We propose a new fully automated velocity-based algorithm to identify fixations from eye-movement records of both eyes, with individual-specific thresholds. The algorithm is based on robust minimum covariance determinant (MCD) estimators and control chart procedures, and is conceptually simple and computationally attractive. To determine fixations, it uses velocity thresholds based on the natural within-fixation variability of both eyes. It improves over existing approaches by automatically identifying fixation thresholds that are specific to (a) both eyes, (b) x- and y-directions, (c) tasks, and (d) individuals. We applied the proposed Binocular-Individual Threshold (BIT) algorithm to two large datasets collected on eye-trackers with different sampling frequencies, and computed descriptive statistics of fixations for large samples of individuals across a variety of tasks, including reading, scene viewing, and search on supermarket shelves. Our analysis shows that there are considerable differences in the characteristics of fixations not only between these tasks, but also between individuals
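    The core idea of a velocity-based fixation filter with data-driven, individual-specific thresholds can be illustrated with a simplified sketch. This is not the authors' MCD-and-control-chart implementation: it substitutes a median/MAD estimate for the robust covariance step, operates on a one-dimensional velocity trace rather than binocular x/y records, and the `k` multiplier and `min_len` values are illustrative assumptions.

    ```python
    from statistics import median

    def robust_threshold(velocities, k=6.0):
        """Estimate a participant-specific velocity threshold as
        median + k * MAD -- a simple robust stand-in for the
        MCD-based estimate used by the BIT algorithm."""
        med = median(velocities)
        mad = median(abs(v - med) for v in velocities)
        return med + k * mad

    def detect_fixations(velocities, threshold, min_len=3):
        """Label runs of consecutive below-threshold samples as
        fixations; return (start, end) indices, end exclusive."""
        fixations, start = [], None
        for i, v in enumerate(velocities):
            if v < threshold:
                if start is None:
                    start = i
            else:
                if start is not None and i - start >= min_len:
                    fixations.append((start, i))
                start = None
        if start is not None and len(velocities) - start >= min_len:
            fixations.append((start, len(velocities)))
        return fixations

    # Toy trace: two low-velocity fixations separated by a saccade.
    vel = [1, 2, 1, 1, 50, 60, 2, 1, 1, 2]
    fixations = detect_fixations(vel, robust_threshold(vel))
    ```

    Because the threshold is derived from each participant's own velocity distribution rather than a fixed cut-off, the same code adapts to individuals and tasks with different within-fixation variability, which is the point the abstract emphasizes.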

    Looking to Score: The Dissociation of Goal Influence on Eye Movement and Meta-Attentional Allocation in a Complex Dynamic Natural Scene

    Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers’ beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The subjective reports suggest that observers believed they allocated their attention more to goal-related items (e.g., court lines) when they performed the goal-specific task. However, we found no effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers’ beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior

    Temporal Information Processing in Short- and Long-Term Memory of Patients with Schizophrenia

    Cognitive deficits of patients with schizophrenia have been largely recognized as core symptoms of the disorder. One neglected factor that contributes to these deficits is the comprehension of time. In the present study, we assessed temporal information processing and manipulation from short- and long-term memory in 34 patients with chronic schizophrenia and 34 matched healthy controls. On the short-term memory temporal-order reconstruction task, an incidental or intentional learning strategy was deployed. Patients showed worse overall performance than healthy controls. The intentional learning strategy led to dissociable performance improvement in both groups. Whereas healthy controls improved on a performance measure (serial organization), patients improved on an error measure (inappropriate semantic clustering) when using the intentional instead of the incidental learning strategy. On the long-term memory script-generation task, routine and non-routine events of everyday activities (e.g., buying groceries) had to be generated in either chronological or inverted temporal order. Patients were slower than controls at generating events in the chronological routine condition only. They also committed more sequencing and boundary errors in the inverted conditions. The number of irrelevant events was higher in patients in the chronological, non-routine condition. These results suggest that patients with schizophrenia imprecisely access temporal information from short- and long-term memory. In short-term memory, processing of temporal information led to a reduction in errors rather than, as was the case in healthy controls, to an improvement in temporal-order recall. When accessing temporal information from long-term memory, patients were slower and committed more sequencing, boundary, and intrusion errors. 
Together, these results suggest that patients with schizophrenia can access and process temporal information only imprecisely, providing evidence for impaired time comprehension. This could contribute to symptomatic cognitive deficits and strategic inefficiency in schizophrenia

    Scenes, saliency maps and scanpaths

    The aim of this chapter is to review some of the key research investigating how people look at pictures. In particular, my goal is to provide theoretical background for those who are new to the field, while also explaining some of the relevant methods and analyses. I begin by introducing eye movements in the context of natural scene perception. As in other complex tasks, eye movements provide a measure of attention and information processing over time, and they tell us about how the foveated visual system determines what to prioritise. I then describe some of the many measures which have been derived to summarize where people look in complex images. These include global measures, analyses based on regions of interest, and comparisons based on heat maps. A particularly popular approach for trying to explain fixation locations is the saliency map approach, and the first half of the chapter is mostly devoted to this topic. A large number of papers and models are built on this approach, but it is also worth spending time on this topic because the methods involved have been used across a wide range of applications. The saliency map approach is based on the fact that the visual system has topographic maps of visual features, that contrast within these features seems to be represented and prioritized, and that a central representation can be used to control attention and eye movements. This approach, and the underlying principles, have led to an increase in the number of researchers using complex natural scenes as stimuli. It is therefore important that those new to the field are familiar with saliency maps, their usage, and their pitfalls. I describe the original implementation of this approach (Itti & Koch, 2000), which uses spatial filtering at different levels of coarseness and combines the results in an attempt to identify the regions which stand out from their background. Evaluating this model requires comparing fixation locations to model predictions. 
Several different experimental and comparison methods have been used, but most recent research shows that bottom-up guidance is rather limited in terms of predicting real eye movements. The second part of the chapter is largely concerned with measuring eye movement scanpaths. Scanpaths are the sequential patterns of fixations and saccades made when looking at something for a period of time. They show regularities which may reflect top-down attention, and some have attempted to link these to memory and an individual’s mental model of what they are looking at. While not all researchers will be testing hypotheses about scanpaths, an understanding of the underlying methods and theory will be of benefit to all. I describe the theories behind analyzing eye movements in this way, and various methods which have been used to represent and compare them. These methods allow one to quantify the similarity between two viewing patterns, and this similarity is linked to both the image and the observer. The last part of the chapter describes some applications of eye movements in image viewing. The methods discussed can be applied to complex images, and therefore these experiments can tell us about perception in art and marketing, as well as about machine vision
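    The centre-surround contrast principle at the heart of the saliency map approach can be sketched in a few lines. The snippet below is only a toy illustration of that principle, not the Itti & Koch implementation, which uses Gaussian pyramids, multiple feature channels (intensity, colour, orientation), and a normalization scheme; the window radii here are arbitrary assumptions.

    ```python
    def box_mean(img, r, c, radius):
        """Mean intensity in a square window of the given radius,
        clipped at the image borders."""
        rows = range(max(0, r - radius), min(len(img), r + radius + 1))
        cols = range(max(0, c - radius), min(len(img[0]), c + radius + 1))
        vals = [img[i][j] for i in rows for j in cols]
        return sum(vals) / len(vals)

    def center_surround(img, center_radius=0, surround_radius=2):
        """Toy centre-surround map: at each pixel, the absolute
        difference between a fine (centre) and a coarse (surround)
        local average. High values mark locations that stand out
        from their local background."""
        return [[abs(box_mean(img, r, c, center_radius) -
                     box_mean(img, r, c, surround_radius))
                 for c in range(len(img[0]))]
                for r in range(len(img))]

    # A single bright pixel on a dark background produces a
    # saliency peak at its location; a uniform image produces
    # a flat, all-zero map.
    img = [[0] * 5 for _ in range(5)]
    img[2][2] = 10
    saliency = center_surround(img)
    ```

    In a fuller model this map would be computed per feature channel and per scale, then normalized and summed into a single master map whose peaks predict candidate fixation locations.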

    Pupillary Stroop effects

    We recorded the pupil diameters of participants performing the color-naming Stroop task (i.e., naming the ink color of a word that itself names a color). Non-color words were used as a baseline to firmly establish the effects of semantic relatedness induced by color-word distractors. We replicated the classic Stroop effects of color congruency and color incongruency with pupillary diameter recordings: relative to non-color words, pupil diameters increased for color distractors that differed from the color responses, whereas they decreased for color distractors that were identical to the color responses. Analyses of the time courses of pupil responses revealed further differences between color-congruent and color-incongruent distractors, with the latter inducing a steep increase in pupil size and the former a relatively shallower increase. Consistent with previous findings that pupil size increases as task demands rise, the present results indicate that pupillometry is a robust measure of Stroop interference, and it represents a valuable addition to the cognitive scientist’s toolbox

    Does oculomotor inhibition of return influence fixation probability during scene search?

    Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This “foraging” hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1° of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search