90,354 research outputs found
Constructing Social Systems through Computer-Mediated Communication
This paper asks whether computer-mediated communication can support the formation of genuine social systems. Our hypothesis, that technology creates
new forms of social systems beyond real-life milieus, includes the idea that the
technology itself may influence how social binding emerges within on-line environments.
In real-life communities, a precondition for social coherence is the existence of social
conventions. By observing interaction in virtual environments, we found the use of a
range of social conventions. These results were analyzed to determine how the use and
emergence of conventions might be influenced by the technology. One factor contributing
to the coherence of on-line social systems, but not the only one, appears to be the degree
of social presence mediated by the technology. We suggest that social systems can emerge through computer-mediated communication and that they are shaped by the media of the specific environment.
Objects predict fixations better than early saliency
Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as “saliency maps,” are often built on the assumption that “early” features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to “interesting” objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural
scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting
through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of attention and of object recognition need to be integrated.
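The comparison described in this abstract can be illustrated with a toy version of its scoring step: build a map from object locations weighted by recall frequency, then test how well that map ranks fixated pixels above random control pixels. This is a minimal sketch, not the authors' pipeline; the Gaussian spread, object centers, and recall weights below are hypothetical stand-ins for the paper's annotated object regions.

```python
import numpy as np

def object_map(shape, centers, recall_weights, sigma=15.0):
    """Sum of Gaussians at object centers, weighted by how often each
    object was recalled (hypothetical values, for illustration only)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape)
    for (cy, cx), w in zip(centers, recall_weights):
        m += w * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return m / m.max()

def fixation_auc(pred, fixations, n_samples=2000, rng=None):
    """ROC-style score: how often the map ranks a fixated pixel above a
    uniformly drawn control pixel (0.5 = chance, 1.0 = perfect)."""
    rng = rng or np.random.default_rng(0)
    pos = np.array([pred[y, x] for y, x in fixations])
    neg = pred[rng.integers(0, pred.shape[0], n_samples),
               rng.integers(0, pred.shape[1], n_samples)]
    return float((pos[:, None] > neg[None, :]).mean())
```

Scoring an early-saliency map with the same `fixation_auc` would give the head-to-head comparison the abstract reports.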
A bottom–up model of spatial attention predicts human error patterns in rapid scene recognition
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom–up visual processing (attentional selection and/or recognition) or top–down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of “surprise” in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences
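The information-theoretic "surprise" used above is commonly formalized as the Kullback-Leibler divergence between an observer's posterior and prior beliefs after seeing a stimulus. A minimal sketch with a conjugate Gaussian belief follows; the prior, observation noise, and stimulus values are illustrative and are not the model actually fitted in the study.

```python
import math

def gaussian_update(mu0, var0, x, noise_var):
    """Conjugate Bayesian update of a Gaussian belief from one observation."""
    precision = 1.0 / var0 + 1.0 / noise_var
    var1 = 1.0 / precision
    mu1 = var1 * (mu0 / var0 + x / noise_var)
    return mu1, var1

def kl_gauss(mu1, var1, mu0, var0):
    """KL(posterior || prior) for two Gaussians, in nats."""
    return 0.5 * (math.log(var0 / var1) + (var1 + (mu1 - mu0) ** 2) / var0 - 1.0)

def surprise(mu0, var0, x, noise_var):
    """Bayesian surprise: how much one observation moves the belief."""
    mu1, var1 = gaussian_update(mu0, var0, x, noise_var)
    return kl_gauss(mu1, var1, mu0, var0)
```

An observation near the prior mean yields little surprise; an outlying observation yields much more, which is the sense in which surprising adjacent frames can capture attention away from the target.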
Hysteresis in human binocular fusion: temporalward and nasalward ranges
Fender and Julesz [J. Opt. Soc. Am. 57, 819 (1967)] moved pairs of retinally stabilized images across the temporalward
visual fields and found significant differences between the disparities that elicited fusion and the disparities at
which fusion was lost. They recognized this phenomenon as an example of hysteresis. In the work reported in this
paper, binocular retinally stabilized images of vertical dark bars on white backgrounds were moved into horizontal
disparity in both the nasalward and the temporalward directions. The limits of Panum's fusional area and the
hysteresis demonstrated by these limits were measured for two observers. The following results were obtained: (1)
the nasalward limits of Panum's fusional area and the hysteresis demonstrated by the nasalward limits do not differ
significantly from the temporalward limits and the hysteresis demonstrated by the temporalward limits; (2) the
limits of Panum's fusional area and the hysteresis demonstrated by these limits are not significantly different if one
stimulus moves across each retina or if one stimulus is held still on one retina and the other stimulus is moved across
the other retina; (3) the use of nonstabilized cross hairs for fixation decreases the hysteresis; and (4) the full
hysteresis effect can be elicited with a rate of change of disparity of 2 arcmin/sec
A detection theory account of change detection
Previous studies have suggested that visual short-term memory (VSTM) has a storage limit of approximately four items. However, the type of high-threshold (HT) model used to derive this estimate is based on a number of assumptions that have been criticized in other experimental paradigms (e.g., visual search). Here we report findings from nine experiments in which VSTM for color, spatial frequency, and orientation was modeled using a signal detection theory (SDT) approach. In Experiments 1-6, two arrays composed of multiple stimulus elements were presented for 100 ms with a 1500 ms ISI. Observers were asked to report in a yes/no fashion whether there was any difference between the first and second arrays, and to rate their confidence in their response on a 1-4 scale. In Experiments 1-3, only one stimulus element difference could occur (T = 1) while set size was varied. In Experiments 4-6, set size was fixed while the number of stimuli that might change was varied (T = 1, 2, 3, and 4). Three general models were tested against the receiver operating characteristics generated by the six experiments. In addition to the HT model, two SDT models were tried: one assuming summation of signals prior to a decision, the other using a max rule. In Experiments 7-9, observers were asked to directly report the relevant feature attribute of a stimulus presented 1500 ms previously, from an array of varying set size. Overall, the results suggest that observers encode stimuli independently and in parallel, and that performance is limited by internal noise, which is a function of set size
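The max-rule SDT model named in this abstract can be sketched by Monte Carlo simulation: each display item yields a noisy difference signal, and the observer reports a change when the largest signal exceeds a criterion. The d', criterion, and trial counts below are arbitrary illustrations, not values fitted to the experiments.

```python
import numpy as np

def max_rule_change_detection(set_size, dprime, criterion,
                              n_trials=20000, seed=1):
    """Max-rule SDT sketch of yes/no change detection.
    Returns (hit rate, false-alarm rate) for one condition."""
    rng = np.random.default_rng(seed)
    # No-change trials: every item's difference signal is pure noise.
    fa = float((rng.standard_normal((n_trials, set_size)).max(axis=1)
                > criterion).mean())
    # Change trials: one item additionally carries a signal of size d'.
    sig = rng.standard_normal((n_trials, set_size))
    sig[:, 0] += dprime
    hit = float((sig.max(axis=1) > criterion).mean())
    return hit, fa
```

Because the maximum over more noise samples is larger on average, false alarms rise with set size under this rule, one concrete way that internal noise "as a function of set size" limits performance without any fixed item limit.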
Ground-based hyperspectral analysis of the urban nightscape
Airborne hyperspectral cameras provide the basic information to estimate the energy wasted skywards by outdoor lighting systems, as well as to locate and identify their sources. However, a complete characterization of urban light pollution levels also requires evaluating these effects from the city dwellers' standpoint, e.g. the energy waste associated with excessive illuminance on walls and pavements, light trespass, or the luminance distributions causing potential glare, to mention but a few. On the other hand, the spectral irradiance at the entrance of the human eye is the primary input for evaluating the possible health effects associated with exposure to artificial light at night, according to the most recent models available in the literature. In this work we demonstrate the possibility of using a hyperspectral imager (routinely used in airborne campaigns) to measure the ground-level spectral radiance of the urban nightscape and to retrieve several magnitudes of interest for light pollution studies. We also present preliminary results from a field campaign carried out in downtown Barcelona.
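One such magnitude of interest derivable from ground-level spectral radiance is photopic luminance, obtained by weighting the measured spectrum with the CIE luminosity function: L_v = 683 ∫ L_e(λ) V(λ) dλ. The sketch below is a generic implementation of that standard formula, not the authors' processing chain; the sampling grid and values in the example are hypothetical, and a real V(λ) table spans roughly 380-780 nm.

```python
import numpy as np

def photopic_luminance(wavelengths_nm, spectral_radiance, v_lambda):
    """Luminance in cd/m^2 from spectral radiance in W/(m^2 sr nm),
    via trapezoidal integration of L_e(lambda) * V(lambda).
    683 lm/W is the luminous efficacy of radiation at 555 nm."""
    integrand = spectral_radiance * v_lambda
    steps = np.diff(wavelengths_nm)
    integral = np.sum((integrand[:-1] + integrand[1:]) / 2.0 * steps)
    return 683.0 * float(integral)
```

With per-pixel spectra from a hyperspectral imager, applying this weighting pixel-wise yields the luminance maps needed for glare assessment.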
Controlled Interaction: Strategies For Using Virtual Reality To Study Perception
Immersive virtual reality systems employing head-mounted displays offer great promise for the investigation of perception and action, but there are well-documented limitations to most virtual reality systems. In the present article, we suggest strategies for studying perception/action interactions that rely on both scale-invariant metrics (such as power function exponents) and careful consideration of the requirements of the interactions under investigation. New data concerning the effect of pincushion distortion on the perception of surface orientation are presented, as well as data documenting the perception of dynamic distortions associated with head movements with uncorrected optics. A review of several successful uses of virtual reality to study the interaction of perception and action emphasizes scale-free analysis strategies that can achieve theoretical goals while minimizing assumptions about the accuracy of virtual simulations.
Multi-line Adaptive Perimetry (MAP): A New Procedure for Quantifying Visual Field Integrity for Rapid Assessment of Macular Diseases.
Purpose: In order to monitor visual defects associated with macular degeneration (MD), we present a new psychophysical assessment, multi-line adaptive perimetry (MAP), that measures visual field integrity by simultaneously estimating regions associated with perceptual distortions (metamorphopsia) and visual sensitivity loss (scotoma).
Methods: We first ran simulations of MAP with a computerized model of a human observer to determine optimal test design characteristics. In experiment 1, predictions of the model were assessed by simulating metamorphopsia with an eye-tracking device in 20 healthy-vision participants. In experiment 2, eight patients (16 eyes) with macular disease completed two MAP assessments separated by about 12 weeks, while a subset (10 eyes) also completed repeated Macular Integrity Assessment (MAIA) microperimetry and Amsler grid exams.
Results: Results revealed strong repeatability of MAP and high accuracy, sensitivity, and specificity (0.89, 0.81, and 0.90, respectively) in classifying patient eyes with severe visual impairment. We also found a significant relationship between the spatial patterns of performance across visual field loci derived from MAP and from MAIA microperimetry. However, there was a lack of correspondence between MAP and subjective Amsler grid reports in isolating perceptually distorted regions.
Conclusions: These results highlight the validity and efficacy of MAP in producing quantitative maps of visual field disturbances, including simultaneous mapping of metamorphopsia and sensitivity impairment.
Translational relevance: Future work will be needed to assess the applicability of this examination for potential early detection of MD symptoms and/or portable assessment on a home device or computer.
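The accuracy, sensitivity, and specificity figures reported above follow the standard confusion-matrix definitions, sketched here with hypothetical counts rather than the study's data.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix summaries.
    tp/fn: impaired eyes correctly/incorrectly classified;
    tn/fp: unimpaired eyes correctly/incorrectly classified."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity
```

For example, with 8 of 10 impaired eyes and 9 of 10 unimpaired eyes classified correctly, these definitions give accuracy 0.85, sensitivity 0.80, and specificity 0.90.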