Fearful faces have a sensory advantage in the competition for awareness
Only a subset of visual signals gives rise to a conscious percept. Threat signals, such as fearful faces, are particularly salient to human vision. Research suggests that fearful faces are evaluated without awareness and preferentially promoted to conscious perception. This agrees with evolutionary theories that posit a dedicated pathway specialized in processing threat-relevant signals. We propose an alternative explanation for this "fear advantage." Using psychophysical data from continuous flash suppression (CFS) and masking experiments, we demonstrate that awareness of facial expressions is predicted by effective contrast: the relationship between their Fourier spectrum and the contrast sensitivity function. Fearful faces have higher effective contrast than neutral expressions, and this, not threat content, predicts their enhanced access to awareness. Importantly, our findings do not support the existence of a specialized mechanism that promotes threatening stimuli to awareness. Rather, our data suggest that evolutionary or learned adaptations have molded the fearful expression to exploit our general-purpose sensory mechanisms.
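The "effective contrast" idea above can be sketched numerically: weight an image's Fourier amplitude spectrum by a contrast sensitivity function (CSF) and summarize the result. The CSF form and its parameters below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def csf(f, peak=4.0):
    # Simple band-pass CSF approximation (hypothetical parameters):
    # sensitivity rises with spatial frequency, peaks near `peak`
    # cycles/degree, then falls off.
    f = np.maximum(f, 0.0)
    return f * np.exp(-f / peak)

def effective_contrast(image, cycles_per_degree=30.0):
    """Mean CSF-weighted Fourier amplitude of a grayscale image."""
    img = image - image.mean()                  # remove the DC component
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))     # cycles per pixel, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w))     # cycles per pixel, horizontal
    # Radial spatial frequency, converted to cycles/degree by an assumed
    # viewing geometry (`cycles_per_degree` at the Nyquist scale).
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij")) * cycles_per_degree
    return float((amp * csf(fr)).mean())
```

Under this sketch, an expression with more energy where the CSF is sensitive yields a higher score, regardless of its semantic content.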
Visually Lossless Perceptual Image Coding Based on Natural-Scene Masking Models
Perceptual coding is a subdiscipline of image and video coding that uses models of human visual perception to achieve improved compression efficiency. Nearly all image and video coders have included some perceptual coding strategies, most notably visual masking. Today, modern coders capitalize on various basic forms of masking, such as the fact that distortion is harder to see in very dark and very bright regions, in regions with higher-frequency content, and in temporal regions with abrupt changes. However, beyond these obvious forms of masking, there are many other masking phenomena that occur (and co-occur) when viewing natural imagery. In this chapter, we present our latest research in perceptual image coding using natural-scene masking models. We specifically discuss: (1) how to predict local distortion visibility using improved natural-scene masking models and (2) how to apply the models to high efficiency video coding (HEVC). As we will demonstrate, these techniques can offer 10–20% fewer bits than baseline HEVC in the ultra-high-quality regime.
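The basic masking effects named above (distortion hides in very dark/bright regions and in textured regions) can be combined into a per-block visibility-threshold map. This is a minimal sketch of the idea only; the weights and exponents are illustrative assumptions, not the chapter's fitted natural-scene models.

```python
import numpy as np

def masking_map(luma, block=8):
    """Per-block visibility-threshold multiplier for an 8-bit luma image.

    Larger values mean more distortion can be hidden in that block.
    """
    h, w = luma.shape[0] // block, luma.shape[1] // block
    out = np.ones((h, w))
    for i in range(h):
        for j in range(w):
            b = luma[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            mean, std = b.mean(), b.std()
            # Luminance masking: deviation from mid-gray raises the threshold.
            lum = 1.0 + abs(mean - 128.0) / 128.0
            # Texture masking: local activity raises the threshold.
            tex = 1.0 + std / 32.0
            out[i, j] = lum * tex
    return out
```

An encoder could scale its quantization step per block by such a map, spending fewer bits where distortion is predicted to be invisible.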
Automated Satellite-Based Landslide Identification Product for Nepal
Landslide event inventories are a vital resource for landslide susceptibility and forecasting applications. However, landslide inventories can vary in accuracy, availability, and timeliness as a result of varying detection methods, reporting, and data availability. This study presents an approach that uses publicly available satellite data and open-source software to automate a landslide detection process called the Sudden Landslide Identification Product (SLIP). SLIP utilizes optical data from the Landsat 8 OLI sensor, elevation data from the Shuttle Radar Topography Mission (SRTM), and precipitation data from the Global Precipitation Measurement (GPM) mission to create a reproducible and spatially customizable landslide identification product. The SLIP software applies change detection algorithms to identify areas of new bare-earth exposure that may be landslide events. The study also presents a precipitation monitoring tool that runs alongside SLIP, called the Detecting Real-time Increased Precipitation (DRIP) model, which helps identify the timing of potential landslide events detected by SLIP. Used together, SLIP and DRIP improve landslide detection by reducing the problems of accuracy, availability, and timeliness that are prevalent in state-of-the-art landslide detection. A case study and validation exercise were performed in Nepal for images acquired between 2014 and 2015. Preliminary validation results suggest 56% model accuracy, with errors of commission often resulting from newly cleared agricultural areas. These results suggest that SLIP is an important first attempt at an automated framework for medium-resolution regional landslide detection, although it requires refinement before being fully realized as an operational tool.
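The change-detection step described above, flagging new bare-earth exposure between two acquisition dates, is commonly expressed as a drop in a vegetation index. The sketch below uses NDVI from red and near-infrared reflectance; the 0.4 threshold and two-band formulation are assumptions for illustration, not SLIP's published parameters.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from reflectance arrays."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def bare_earth_mask(red_t0, nir_t0, red_t1, nir_t1, drop=0.4):
    """True where NDVI decreased by more than `drop` between two dates,
    a simple proxy for new bare-earth exposure (possible landslide)."""
    return (ndvi(red_t0, nir_t0) - ndvi(red_t1, nir_t1)) > drop
```

A real pipeline would add cloud masking, a slope filter from the SRTM elevation data, and the DRIP-style precipitation check to narrow candidates to plausible landslide events.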
Automatic Video Quality Measurement System And Method Based On Spatial-temporal Coherence Metrics
An automatic video quality (AVQ) metric system for evaluating the quality of processed video and deriving an estimate of a subjectively determined function called mean time between failures (MTBF). The AVQ system has a blockiness metric, a streakiness metric, and a blurriness metric. The blockiness metric can be used to measure compression artifacts in the processed video. The streakiness metric can be used to measure network artifacts in the processed video. The blurriness metric can measure the degradation (i.e., blurriness) of the images in the processed video to detect compression artifacts.
Georgia Tech Research Corporation
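A blockiness metric of the kind named above typically compares gradient energy at coded block boundaries against gradient energy elsewhere. The patent's actual formulation may differ; this is a hedged sketch of the common boundary-ratio idea for 8x8 coding blocks.

```python
import numpy as np

def blockiness(luma, block=8):
    """Ratio of column-boundary gradient energy to off-boundary energy.

    Values near 1 suggest no blocking; large values suggest visible 8x8
    block edges from compression.
    """
    diff = np.abs(np.diff(luma.astype(float), axis=1))  # |I[:, x+1] - I[:, x]|
    cols = np.arange(diff.shape[1])
    at_boundary = (cols % block) == (block - 1)         # edges between blocks
    boundary = diff[:, at_boundary].mean()
    interior = diff[:, ~at_boundary].mean()
    return boundary / max(interior, 1e-6)
```

A smooth gradient scores about 1.0, while a frame made of flat 8-pixel-wide tiles, the signature of coarse quantization, scores far higher.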
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003 Vision Research 43 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
The stellar halo of isolated central galaxies in the Hyper Suprime-Cam imaging survey
We study the faint stellar halo of isolated central galaxies by stacking galaxy images in the HSC survey and accounting for the residual sky background sampled with random points. The surface brightness profiles in the HSC band are measured for a wide range of galaxy stellar masses and out to 120 kpc. Failing to account for the stellar halo below the noise level of individual images leads to underestimates of the total luminosity. Splitting galaxies according to the concentration parameter of their light distributions, we find that the surface brightness profiles of low-concentration galaxies drop faster between 20 and 100 kpc than those of high-concentration galaxies. Despite the large galaxy-to-galaxy scatter, we find a strong self-similarity of the stellar halo profiles: they take a unified form once the projected distance is scaled by the halo virial radius. The colour of galaxies is redder in the centre and bluer outside, with high-concentration galaxies having redder and flatter colour profiles. There are indications of a colour minimum, beyond which the colour of the outer stellar halo turns red again. This colour minimum, however, is very sensitive to the completeness of satellite-galaxy masking. We also examine the effect of the extended PSF on the measurement of the stellar halo, which is particularly important for low-mass or low-concentration galaxies. The PSF-corrected surface brightness profile can be measured down to 31 mag arcsec⁻² at 3σ significance. The PSF also slightly flattens the measured colour profiles.
Comment: accepted by MNRAS. Significant changes have been made compared with the first version, including discussions of the extended PSF wings, the robustness of our results to source detection and masking thresholds, and more detailed investigation of the indications of a positive colour gradient.
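The core measurement described above, a surface brightness profile with a residual-sky correction, can be sketched in miniature: average the flux of a (stacked) image in circular annuli and subtract a background level estimated elsewhere. Binning, centering, and units here are illustrative, not the paper's pipeline.

```python
import numpy as np

def sb_profile(image, center, nbins=10, rmax=None, sky=0.0):
    """Mean flux in circular annuli around `center`, after subtracting an
    externally estimated residual sky level (e.g. from random points)."""
    y, x = np.indices(image.shape)
    r = np.hypot(y - center[0], x - center[1])
    rmax = rmax if rmax is not None else r.max()
    edges = np.linspace(0.0, rmax, nbins + 1)
    prof = np.empty(nbins)
    for i in range(nbins):
        sel = (r >= edges[i]) & (r < edges[i + 1])
        prof[i] = image[sel].mean() - sky   # azimuthal average, sky-subtracted
    return prof
```

The real measurement additionally requires masking satellite galaxies and correcting for the extended PSF wings, both of which the abstract flags as dominant systematics at these surface brightness levels.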