
    Perception of Relative Depth Interval: Systematic Biases in Perceived Depth

    Given an estimate of the binocular disparity between a pair of points and an estimate of the viewing distance, or knowledge of eye position, it should be possible to obtain an estimate of their depth separation. Here we show that, when points are arranged in different vertical geometric configurations across two intervals, many observers find this task difficult. Those who can do the task tend to perceive the depth interval in one configuration as very different from that in the other. We explore two plausible explanations for this effect. The first is the tilt of the empirical vertical horopter: points perceived along an apparently vertical line correspond to a physical line of points tilted backwards in space. Second, the eyes can rotate in response to a particular stimulus. Without compensation for this rotation, biases in depth perception would result. We measured cyclovergence indirectly, using a standard psychophysical task, while observers viewed our depth configuration. Biases predicted from error due either to cyclovergence or to the tilted vertical horopter were not consistent with the depth configuration results. Our data suggest that, even for the simplest scenes, we do not have ready access to metric depth from binocular disparity.
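    The disparity-to-depth computation the abstract refers to can be sketched with the standard small-angle geometry (this is the textbook approximation, not the paper's own model; the 0.065 m interocular distance is an assumed typical value):

```python
import math

def depth_from_disparity(disparity_rad, viewing_distance_m, iod_m=0.065):
    """Approximate depth interval from relative binocular disparity.

    Standard small-angle approximation: depth ~= disparity * D**2 / I,
    where D is viewing distance and I is interocular distance.
    """
    return disparity_rad * viewing_distance_m ** 2 / iod_m

# Example: a 5 arcmin relative disparity viewed at 1 m
disparity = (5 / 60) * math.pi / 180      # arcmin -> radians
depth = depth_from_disparity(disparity, 1.0)   # roughly 22 mm
```

    The quadratic dependence on viewing distance is why an accurate distance estimate (or eye-position signal) is essential for recovering metric depth.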

    Perceptual uncertainty and line-call challenges in professional tennis

    Fast-moving sports such as tennis require both players and match officials to make rapid, accurate perceptual decisions about dynamic events in the visual world. Disagreements arise regularly, leading to disputes about decisions such as line calls. A number of factors must contribute to these disputes, including lapses in concentration, bias and gamesmanship. Fundamental uncertainty or variability in the sensory information supporting decisions must also play a role. Modern technological innovations now provide detailed and accurate physical information that can be compared against the decisions of players and officials. The present paper uses these psychophysical data to assess the significance of perceptual limitations as a contributor to real-world decisions in professional tennis. A detailed analysis is presented of a large body of data on line-call challenges in professional tennis tournaments over the last 2 years. Results reveal that the vast majority of challenges can be explained in a direct, highly predictable manner by a simple model of uncertainty in perceptual information processing. Both players and line judges are remarkably accurate at judging ball bounce position, with a positional uncertainty of less than 40 mm. Line judges are more reliable than players. Judgements are more difficult for balls bouncing near base and service lines than for those bouncing near side and centre lines. There is no evidence for significant errors in localization due to image motion.

    Seeing and Hearing a Word: Combining Eye and Ear Is More Efficient than Combining the Parts of a Word

    To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60:1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word's image and sound, and combine them efficiently. The brain's machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation. © 2013 Dubois et al.
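    The definitions in the abstract translate directly into two ratios. A minimal sketch (the threshold values in the example are illustrative, chosen only to reproduce the 20% and 1% efficiency figures quoted above):

```python
def sensitivity(human_threshold_energy):
    """Sensitivity = 1 / human threshold energy."""
    return 1.0 / human_threshold_energy

def efficiency(ideal_threshold_energy, human_threshold_energy):
    """Efficiency = ideal-observer threshold / human threshold.

    1.0 means the human performs as well as the best possible
    algorithm; smaller values mean more energy is wasted.
    """
    return ideal_threshold_energy / human_threshold_energy

# Illustrative numbers: a human needing 5x the ideal energy to detect
# a tone has 20% efficiency; needing 100x for a word gives 1%.
tone_efficiency = efficiency(ideal_threshold_energy=1.0, human_threshold_energy=5.0)
word_efficiency = efficiency(ideal_threshold_energy=1.0, human_threshold_energy=100.0)
```

    The key point carried by the ratio is that the ideal observer's threshold does not grow with the number of parts, so any drop in efficiency with object complexity is attributable to how humans combine those parts.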

    Superstitious perceptions reveal properties of internal representations

    Everyone has seen a human face in a cloud, a pebble, or blots on a wall. Evidence of superstitious perceptions has been documented since classical antiquity, but has received little scientific attention. In the study reported here, we used superstitious perceptions in a new principled method to reveal the properties of unobservable object representations in memory. We stimulated the visual system with unstructured white noise. Observers firmly believed that they perceived the letter S in Experiment 1 and a smile on a face in Experiment 2. Using reverse correlation and computational analyses, we rendered the memory representations underlying these superstitious perceptions.
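    The reverse-correlation method mentioned can be sketched in a toy form: average the noise fields on trials where the observer reports the target, subtract the average on the remaining trials, and the difference (the classification image) recovers the shape of the hidden internal template. The 1-D template and the simulated observer below are illustrative assumptions, not the study's stimuli:

```python
import random

random.seed(0)

TEMPLATE = [1.0, -1.0, 1.0, -1.0, 1.0]   # hypothetical internal representation

def observer_says_yes(noise):
    # Toy observer: reports seeing the target when the noise
    # correlates positively with its internal template.
    return sum(n * t for n, t in zip(noise, TEMPLATE)) > 0

yes_trials, no_trials = [], []
for _ in range(20000):
    noise = [random.gauss(0.0, 1.0) for _ in range(len(TEMPLATE))]
    (yes_trials if observer_says_yes(noise) else no_trials).append(noise)

def mean_field(trials):
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(TEMPLATE))]

# Classification image: mean noise on "yes" trials minus mean on "no"
# trials. Its sign pattern recovers the hidden template.
classification_image = [y - m for y, m in
                        zip(mean_field(yes_trials), mean_field(no_trials))]
```

    Because the noise itself carries no signal, any structure in the classification image must come from the observer's internal representation, which is what makes superstitious percepts usable as data.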

    Creating Personalized Digital Human Models of Perception For Visual Analytics

    Abstract. Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model the physiological differences and use the models to adapt and personalize designs, user interfaces and artifacts? Within many disciplines Digital Human Models and Standard Observer Models are widely used and have proven to be very useful for modeling users and simulating humans. In this paper, we create personalized digital human models of perception (Individual Observer Models), particularly focused on how humans see. Individual Observer Models capture how our bodies shape our perceptions. Individual Observer Models are useful for adapting and personalizing user interfaces and artifacts to suit individual users' bodies and perceptions. We introduce and demonstrate an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. An evaluation of the simulated eyes finds that they see eye charts as humans do. Also demonstrated is the Individual Observer Model successfully making predictions about how easy or hard it is to see visual information and visual designs. The ability to predict and adapt visual information to maximize its effectiveness is an important problem in visual design and analytics.
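    A legibility prediction of the kind the abstract describes can be sketched from standard acuity conventions (20/20 vision resolves roughly 1 arcmin of detail, and letter height is about 5x the critical stroke detail). Everything here, including the function names and the 5x factor, is an illustrative assumption rather than the paper's actual model:

```python
def legible(eye_acuity_arcmin, text_size_arcmin, safety_factor=1.0):
    """Predict whether text is resolvable by a simulated eye.

    eye_acuity_arcmin: smallest detail (in arcmin) the eye resolves;
    1.0 corresponds to standard 20/20 vision. Letter height is taken
    as 5x the critical detail, as on a standard eye chart.
    """
    critical_detail = text_size_arcmin / 5.0
    return critical_detail >= eye_acuity_arcmin * safety_factor

# A 20/20 eye can resolve a 5 arcmin letter; an eye with 2 arcmin
# acuity (roughly 20/40) cannot.
```

    Running such a check across a population of simulated eyes is one way a design tool could flag text or visual encodings that only some users will be able to resolve.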