21 research outputs found

    Lightness perception in simple images: Testing the anchoring rules

    One approach toward understanding how vision computes surface lightness is to first determine what principles govern lightness in simple stimuli and then to test whether these principles hold for more complex stimuli.

    Visual computation of surface lightness: Local contrast vs. frames of reference

    Seeing black, white, and gray surfaces, called lightness perception, might seem simple because white surfaces reflect 90% of the light they receive while black surfaces reflect only 3%, and the human retina is composed of light-sensitive cells. The problem is that, because illumination varies from time to time and from place to place, any amount of light can be reflected from any shade of gray. Thus the amount of light reflected by an object, called luminance, says nothing about its lightness. Experts agree that the lightness of a surface can be computed only by using the surrounding context, but they disagree about how the context is used. We have tested an image in which two major classes of theory, contrast theories and frame-of-reference theories, make very different predictions regarding what gray shades will be seen by human observers. We show that when the frame of reference is varied while contrast is held constant, lightness varies strongly. But when contrast is varied while the frame of reference is held constant, little or no variation is seen. These results suggest that efforts to discover the exact algorithm by which the human visual system segments the image received by the retina into frames of reference should be given high priority.
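
    As a minimal sketch of the distinction being tested, the snippet below assumes the anchoring rule from anchoring theory (the highest luminance within a frame of reference appears white, reflectance about 0.90, and other surfaces are scaled by their luminance ratio to that anchor); the luminance values are invented for illustration and are not the authors' stimuli.

        def anchored_lightness(luminances):
            # Assumed anchoring rule: the highest luminance in a framework is seen
            # as white (reflectance ~0.90); every other surface is scaled by its
            # luminance ratio to that anchor.
            anchor = max(luminances)
            return [0.90 * lum / anchor for lum in luminances]

        # The target (luminance 30) borders the same neighbour (luminance 60) in
        # both frameworks, so local contrast at the target's edge is identical.
        with_bright_framework = [300.0, 60.0, 30.0]   # framework also contains a 300 patch
        with_dim_framework    = [60.0, 30.0]          # 60 is the framework maximum

        print(anchored_lightness(with_bright_framework)[-1])  # 0.09 -> near black
        print(anchored_lightness(with_dim_framework)[-1])     # 0.45 -> middle gray

    With local contrast fixed and only the framework varied, the predicted gray shade of the target moves from near black to middle gray, mirroring the frame-of-reference result described above.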

    Perceived Dynamic Range of HDR Images with no Semantic Information

    Computing the dynamic range of high dynamic range (HDR) content is an important procedure when selecting test material, designing and validating algorithms, or analyzing aesthetic attributes of HDR content. It can be computed at a pixel-based level, measured through subjective tests, or predicted using a mathematical model. However, all of these methods have certain limitations. This paper investigates whether the dynamic range of modeled images with no semantic information, but with the same first-order statistics as the original, natural content, is perceived the same as for the corresponding natural images. If so, it would be possible to improve the perceived dynamic range (PDR) predictor model by using additional objective metrics more suitable for such synthetic content. Within the subjective study, three experiments were conducted with 43 participants. The results show a significant correlation between the mean opinion scores for the two image groups. Nevertheless, natural images still seem to provide better cues for the evaluation of PDR.
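
    As a rough illustration of the pixel-based route, a dynamic-range estimate can be taken as the log ratio between the brightest and darkest luminances in the image. The sketch below uses percentile clipping to ignore isolated outliers; the 1%/99% cut-offs and the stops (log2) unit are illustrative assumptions, not the specific metric used in the paper.

        import numpy as np

        def pixel_dynamic_range(luminance, low_pct=1.0, high_pct=99.0):
            # Pixel-based dynamic range in stops: log2 of the ratio between a
            # robust maximum and minimum luminance. Percentile clipping guards
            # against isolated dark or bright pixels (illustrative choice).
            lum = luminance[luminance > 0]            # ignore non-positive samples
            lo = np.percentile(lum, low_pct)
            hi = np.percentile(lum, high_pct)
            return float(np.log2(hi / lo))

        # Synthetic HDR-like luminance map spanning several orders of magnitude.
        rng = np.random.default_rng(0)
        img = rng.lognormal(mean=0.0, sigma=3.0, size=(512, 512))
        print(f"{pixel_dynamic_range(img):.1f} stops")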

    The Cortex and the Critical Point

    How the cerebral cortex operates near a critical phase transition point for optimum performance. Individual neurons have limited computational powers, but when they work together, it is almost like magic. Firing synchronously and then breaking off to improvise by themselves, they can be paradoxically both independent and interdependent. This happens near the critical point: when neurons are poised between a phase where activity is damped and a phase where it is amplified, information processing is optimized and complex emergent activity patterns arise. The claim that neurons in the cortex work best when they operate near the critical point is known as the criticality hypothesis. In this book John Beggs—one of the pioneers of this hypothesis—offers an introduction to the critical point and its relevance to the brain. Drawing on recent experimental evidence, Beggs first explains the main ideas underlying the criticality hypothesis and emergent phenomena. He then discusses the critical point and its two main consequences—first, scale-free properties that confer optimum information processing; and second, universality, or the idea that complex emergent phenomena, like those seen near the critical point, can be explained by relatively simple models that are applicable across species and scale. Finally, Beggs considers future directions for the field, including research on homeostatic regulation, quasicriticality, and the expansion of the cortex and intelligence. An appendix provides technical material; many chapters include exercises that use freely available code and data sets.
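
    A toy branching-process model, a standard simplification in the criticality literature, illustrates the damped/amplified distinction: each active unit triggers on average sigma units at the next time step, and sigma = 1 marks the critical point. The code and parameters below are an illustrative sketch, not material from the book.

        import numpy as np

        def branching_trace(sigma, steps=200, seed_active=10, cap=100_000):
            # Toy branching process: each active unit at time t triggers, on
            # average, sigma units at time t + 1 (Poisson offspring).
            # sigma < 1 damps activity, sigma > 1 amplifies it, sigma = 1 is
            # the critical point.
            rng = np.random.default_rng(1)
            active, trace = seed_active, [seed_active]
            for _ in range(steps):
                active = min(int(rng.poisson(sigma * active)), cap)  # cap keeps the run finite
                trace.append(active)
                if active == 0:      # activity has died out
                    break
            return trace

        for sigma in (0.9, 1.0, 1.1):
            print(sigma, branching_trace(sigma)[-5:])   # tails: damped, marginal, amplified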

    Neuro-Architecture

    Architectural design and neuroscience may at first glance appear to be two very different fields, but for centuries architects have intuitively been designing according to principles of neuroscience. Through trial and error, architects have gained knowledge of specific architectural elements and the potential these elements have to affect the user. Recently this intuition has been coined “neuro-architecture”. With the advancement of technology, neuroscientists can now determine with accuracy how the human body will react to specific architectural stimuli. The proposal is focused on encouraging and furthering the symbiotic relationship between architecture and neuroscience in an attempt to promote architectural design that moves and elevates the human condition. The purpose of this thesis is to investigate the findings of neuroscience and promote their implementation in architectural design, creating a deeper understanding of how the human body relates to its architectural surroundings. The methodology closely follows the research typologies used in evidence-based design. The first step is a literature review of findings in neuroscience research and their application to architectural design. The second is an understanding of the anatomy of the body, the senses, and neurobiology, as this is the basis for determining the body’s primal reaction to architectural stimuli. The final step is to create a prototypical design in which the research findings bridge and reinforce the connection between neuroscience and architecture, resulting in a design with the potential to elevate the human experience.

    Computational mechanisms for colour and lightness constancy

    Attributes of colour images have been found which allow colour and lightness constancy to be computed without prior knowledge of the illumination, even in complex scenes with three-dimensional objects and multiple light sources of different colours. The ratio of surface reflectance colour can be immediately determined between any two image points, however distant. It is possible to determine the number of spectrally independent light sources, and to isolate the effect of each. Reflectance edges across which the illumination remains constant can be correctly identified. In a scene illuminated by multiple distant point sources of distinguishable colours, the spatial angle between the sources and their brightness ratios can be computed from the image alone. If there are three or more sources then reflectance constancy is immediately possible without use of additional knowledge. The results are an extension of Edwin Land's Retinex algorithm. They account for previously unexplained data such as Gilchrist's veiling luminances and his single-colour rooms. The validity of the algorithms has been demonstrated by implementing them in a series of computer programs. The computational methods do not follow the edge- or region-finding paradigms of previous vision mechanisms. Although the new reflectance constancy cues occur in all normal scenes, it is likely that human vision makes use of only some of them. In a colour image, all the pixels of a single surface colour lie in a single structure in flux space. The dimension of the structure equals the number of illumination colours. The reflectance ratio between two regions is determined by the transformation between their structures. Parallel tracing of edge pairs in their respective structures identifies an edge of constant illumination, and gives the lightness ratio of each such edge. Enhanced noise-reduction techniques for colour pictures follow from the natural constraints on the flux structures.
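
    A small sketch of the underlying Retinex-style cue, assuming a toy image-formation model in which recorded flux is reflectance times illumination per colour channel: between two points lit by the same source, the per-channel flux ratio equals the reflectance ratio regardless of the illuminant. The thesis extends this idea to points under different illuminants via structures in flux space; the numbers below are invented.

        import numpy as np

        # Toy image-formation model: recorded flux = reflectance * illumination,
        # per colour channel. If two points are lit by the same source, their
        # per-channel flux ratio equals their reflectance ratio whatever the
        # illuminant happens to be.
        refl_a = np.array([0.60, 0.30, 0.10])   # surface A (RGB reflectance, invented)
        refl_b = np.array([0.20, 0.30, 0.50])   # surface B

        for illum in (np.array([1.0, 1.0, 1.0]),     # neutral light
                      np.array([5.0, 3.0, 0.5])):    # strongly coloured light
            flux_a, flux_b = refl_a * illum, refl_b * illum
            print(flux_a / flux_b)               # [3.  1.  0.2] under both illuminants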