23 research outputs found

    Recovering metric properties of objects through spatiotemporal interpolation

    Abstract: Spatiotemporal interpolation (STI) refers to perception of complete objects from fragmentary information across gaps in both space and time. It differs from static interpolation in that requirements for interpolation are not met in any static frame. It has been found that STI produced objective performance advantages in a shape discrimination paradigm for both illusory and occluded objects when contours met conditions of spatiotemporal relatability. Here we report psychophysical studies testing whether spatiotemporal interpolation allows recovery of metric properties of objects. Observers viewed virtual triangles specified only by sequential partial occlusions of background elements by their vertices (the STI condition) and made forced choice judgments of the object’s size relative to a reference standard. We found that length could often be accurately recovered for conditions where fragments were relatable and formed illusory triangles. In the first control condition, three moving dots located at the vertices provided the same spatial and timing information as the virtual object in the STI condition but did not induce perception of interpolated contours or a coherent object. In the second control condition oriented line segments were added to the dots and mid-points between the dots in a way that did not induce perception of interpolated contours. Control stimuli did not lead to accurate size judgments. We conclude that spatiotemporal interpolation can produce representations, from fragmentary information, of metric properties in addition to shape.

    Task set and instructions influence the weight of figural priors: A psychophysical study with extremal edges and familiar configuration

    No full text
    In figure–ground organization, the figure is defined as a region that is both “shaped” and “nearer.” Here we test whether changes in task set and instructions can alter the outcome of the cross-border competition between figural priors that underlies figure assignment. Extremal edge (EE), a relative distance prior, has been established as a strong figural prior when the task is to report “which side is nearer?” In three experiments using bipartite stimuli, EEs competed and cooperated with familiar configuration, a shape prior for figure assignment, in a “which side is shaped?” task. Experiment 1 showed small but significant effects of familiar configuration for displays sketching upright familiar objects, although “shaped-side” responses were predominantly determined by EEs. In Experiment 2, instructions regarding the possibility of perceiving familiar shapes were added. Now, although EE remained the dominant prior, the figure was perceived on the familiar-configuration side of the border on a significantly larger percentage of trials across all display types. In Experiment 3, both task set (nearer/shaped) and the presence versus absence of instructions emphasizing that familiar objects might be present were manipulated within subjects. With familiarity thus “primed,” effects of task set emerged when EE and familiar configuration favored opposite sides as figure. Thus, changing instructions can modulate the weighing of figural priors for shape versus distance in figure assignment in a manner that interacts with task set. Moreover, we show that the influence of familiar parts emerges in participants without medial temporal lobe/perirhinal cortex brain damage when instructions emphasize that familiar objects might be present.

    Convexity vs. Implied-Closure in Figure-Ground Organization

    No full text

    Advance information modulates the global effect even without instruction on where to look

    No full text
    When observers are asked to make an eye movement to a visual target in the presence of a near distractor, their eyes tend to land on a position in between the target and the distractor, an effect known as the global effect. While it was initially believed that the global effect is a mandatory eye movement strategy, recent studies have shown that explicit instructions to make an eye movement to a certain part of the scene can overrule the effect. We here investigate whether such top-down influences are also found when people are not actively involved in an explicit eye movement task but instead make eye movements in the service of another task. Participants were presented with arrays of yellow and green discs, each containing a letter, and were asked to identify a target letter. Because the discs were presented away from fixation, participants made an eye movement to the array of discs on most of the trials. An analysis of the landing sites of these eye movements revealed that, even without an explicit instruction, observers take the advance information about the colour of the disc containing the target into account before moving their eyes. Moreover, when asking participants to maintain fixation for intervals of different durations, it was found that the implicit top-down influences operated on a very similar time-scale as previously observed for explicit eye movement instructions.

    Rapid Communication: Relative image size, not eye position, determines eye dominance switches

    No full text
    Abstract: A recent paper examined eye dominance with the eyes in forward and eccentric gaze [Vision Res. 41 (2001) 1743]. When observers were looking to the left, the left eye tended to dominate, and when they were looking to the right, the right eye tended to dominate. The authors attributed the switch in eye dominance to extra-retinal signals associated with horizontal eye position. However, when one looks at a near object on the left, the image in the left eye is larger than the one in the right eye, and when one looks to the right, the opposite occurs. Thus, relative image size could also trigger switches in eye dominance. We used a cue-conflict paradigm to determine whether eye position or relative image size is the determinant of eye-dominance switches with changes in gaze angle. When eye position and relative image size were varied independently, there was no consistent effect of eye position. Relative image size appears to be the sole determinant of the switch.

    Low-level pixelated representations suffice for aesthetically pleasing contrast adjustment in photographs

    No full text
    Today’s web-based automatic image enhancement algorithms decide to apply an enhancement operation by searching for “similar” images in an online database of images and then applying the same level of enhancement as the image in the database. Two key bottlenecks in these systems are the storage cost for images and the cost of the search. Based on the principles of computational aesthetics, we consider storing task-relevant aesthetic summaries, a set of features which are sufficient to predict the level at which an image enhancement operation should be performed, instead of the entire image. The empirical question, then, is to ensure that the reduced representation indeed maintains enough information so that the resulting operation is perceived to be aesthetically pleasing to humans. We focus on the contrast adjustment operation, an important image enhancement primitive. We empirically study the efficacy of storing a pixelated summary of the 16 most representative colors of an image and performing contrast adjustments on this representation. We tested two variants of the pixelated image: a “mid-level pixelized version” that retained spatial relationships and allowed for region segmentation and grouping as in the original image, and a “low-level pixelized-random version” which only retained the colors by randomly shuffling the 50 × 50 pixels. In an empirical study on 25 human subjects, we demonstrate that the preferred contrast for the low-level pixelized-random image is comparable to the original image even though it retains very few bits and no semantic information, thereby making it ideal for image matching and retrieval for automated contrast editing. In addition, we use an eye tracking study to show that users focus only on a small central portion of the low-level image, thus improving the performance of image search over commonly used computer vision algorithms to determine interesting key points.
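    The two summary variants described in the abstract can be approximated with a simple k-means color quantization. The sketch below is illustrative only: the function name, the block-averaging downsampling step, the plain k-means loop, and the row-shuffling step are assumptions, not the authors' actual pipeline; the abstract specifies only a 16-color, 50 × 50 pixelated summary and a color-shuffled variant.

    ```python
    import numpy as np

    def pixelated_summary(image, k=16, grid=50, seed=0, iters=10):
        """Reduce an RGB image to a grid x grid summary using its k most
        representative colors (hypothetical analogue of the paper's summaries).
        Returns the spatial ("mid-level") summary and a color-shuffled
        ("low-level") variant that keeps only the color distribution."""
        rng = np.random.default_rng(seed)
        h, w, _ = image.shape
        # Downsample to grid x grid by block averaging (assumes divisible dims).
        small = image.reshape(grid, h // grid, grid, w // grid, 3).mean(axis=(1, 3))
        pixels = small.reshape(-1, 3)
        # Plain k-means to find the k representative colors.
        centers = pixels[rng.choice(len(pixels), k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
            labels = dists.argmin(axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = pixels[labels == c].mean(axis=0)
        quantized = centers[labels].reshape(grid, grid, 3)
        # "Low-level" variant: shuffle pixel positions, keeping only the colors.
        flat = quantized.reshape(-1, 3).copy()
        rng.shuffle(flat, axis=0)
        return quantized, flat.reshape(grid, grid, 3)
    ```

    By construction the shuffled variant has exactly the same color histogram as the spatial summary, which is the property the abstract's "low-level pixelized-random" condition isolates.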

    Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    No full text
    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness-of-fit measures to the original data set and in a cross-validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
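    The regression analysis described above can be illustrated with an ordinary least-squares fit of difficulty ratings on image metrics. Everything in this sketch is a hypothetical stand-in: the function name, the predictor set, and the plain OLS model are assumptions for illustration, not the authors' actual (richer) model or predictors.

    ```python
    import numpy as np

    def fit_difficulty_model(X, y):
        """Ordinary least-squares regression of a difficulty measure y on a
        matrix X of image metrics (e.g. intensity, contrast, fingerprint area).
        Returns the coefficients (intercept first) and the R^2 of the fit."""
        # Add an intercept column, then solve the least-squares problem.
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        fitted = A @ coef
        # Coefficient of determination: fraction of variance explained.
        ss_res = np.sum((y - fitted) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return coef, 1.0 - ss_res / ss_tot
    ```

    In practice one would also inspect per-predictor significance (as in the note to the response-time table below) rather than the overall fit alone.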

    Predictors for response time model.

    No full text
    Note: ** p < 0.01. Estimates are arranged by coefficient magnitude in descending order (see text). L – latent, K – known print, L×K – interaction.