
    Spatial Stimuli Gradient Based Multifocus Image Fusion Using Multiple Sized Kernels

    Multi-focus image fusion extracts the focused areas from all the source images and combines them into a new image that contains all focused objects. This paper proposes a spatial-domain fusion scheme for multi-focus images using multiple sized kernels. First, the source images are pre-processed with a contrast-enhancement step, and then soft and hard decision maps are generated by applying a sliding-window technique with multiple sized kernels to the gradient images. The hard decision map selects the accurate focus information from the source images, whereas the soft decision map selects the basic focus information and contains a minimum of falsely detected focused/unfocused regions. These decision maps are further processed to compute the final focus map. The gradient images are constructed through a state-of-the-art edge detection technique, the spatial stimuli gradient sketch model, which computes the local stimuli from perceived brightness and hence enhances the essential structural and edge information. Detailed experimental results demonstrate that the proposed multi-focus image fusion algorithm performs better than other well-known state-of-the-art multi-focus image fusion methods, in terms of both subjective visual perception and objective quality-evaluation metrics.
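    The pipeline described above can be sketched in a few lines. This is an illustrative approximation, not the paper's exact algorithm: a Sobel gradient magnitude stands in for the spatial stimuli gradient sketch model, and a direct per-pixel comparison of window-averaged gradient energy stands in for the soft/hard decision-map refinement. The function names and window sizes are assumptions for the sketch.

```python
# Minimal sketch of gradient-based multi-focus fusion with multiple
# window sizes (an approximation of the scheme described above).
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def focus_map(img, sizes=(5, 9, 15)):
    """Per-pixel focus score: gradient energy averaged over several windows."""
    g = np.hypot(sobel(img, axis=0), sobel(img, axis=1))  # gradient magnitude
    return np.mean([uniform_filter(g, s) for s in sizes], axis=0)

def fuse(img_a, img_b):
    """Select, per pixel, the source image with the stronger focus score."""
    mask = focus_map(img_a) >= focus_map(img_b)
    return np.where(mask, img_a, img_b)

# Example: two synthetic sources, each sharp in a different half.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = uniform_filter(sharp, 7)
a = sharp.copy(); a[:, 32:] = blurred[:, 32:]   # left half in focus
b = sharp.copy(); b[:, :32] = blurred[:, :32]   # right half in focus
fused = fuse(a, b)
```

    Away from the focus boundary the decision map recovers the sharp source almost everywhere, so the fused image is much closer to the fully sharp reference than either input.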

    A 2.5-D representation of the human hand

    Primary somatosensory maps in the brain represent the body as a discontinuous, fragmented set of 2-D skin regions. We nevertheless experience our body as a coherent 3-D volumetric object. The links between these different aspects of body representation, however, remain poorly understood. Perceiving the body’s location in external space requires that immediate afferent signals from the periphery be combined with stored representations of body size and shape. At least for the back of the hand, this body representation is massively distorted, in a highly stereotyped manner. Here we test whether a common pattern of distortions applies to the entire hand as a 3-D object, or whether each 2-D skin surface has its own characteristic pattern of distortion. Participants judged the location in external space of landmark points on the dorsal and palmar surfaces of the hand. By analyzing the internal configuration of judgments, we produced implicit maps of each skin surface. Qualitatively similar distortions were observed in both cases. The distortions were correlated across participants, suggesting that the two surfaces are bound into a common underlying representation. The magnitude of distortion, however, was substantially smaller on the palmar surface, suggesting that this binding is incomplete. The implicit representation of the human hand may be a hybrid, intermediate between a 2-D representation of individual skin surfaces and a 3-D representation of the hand as a volumetric object.

    Mach Bands: How Many Models are Possible? Recent Experimental Findings and Modeling Attempts

    Mach bands are illusory bright and dark bands seen where a luminance plateau meets a ramp, as in half-shadows or penumbras. A tremendous amount of work has been devoted to studying the psychophysics and the potential underlying neural circuitry of this phenomenon. A number of theoretical models have also been proposed, originating in the seminal studies of Mach himself. The present article reviews the main experimental findings after 1965 and the main recent theories of early vision that have attempted to account for the effect. It is shown that the different theories share working principles and can be grouped into three classes: a) feature-based; b) rule-based; and c) filling-in. In order to evaluate individual proposals it is necessary to consider them in the larger picture of visual science and to determine how they contribute to the understanding of vision in general. Air Force Office of Scientific Research (F49620-92-J-0334); Office of Naval Research (N00014-J-4100); COPPE/UFRJ, Brazil.
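    The phenomenon, and the flavour of a feature- or filter-based account, can be illustrated with a short 1-D simulation: convolving a plateau-ramp-plateau luminance profile with a centre-surround (difference-of-Gaussians) kernel yields an undershoot at the dark knee and an overshoot at the bright knee, where the illusory bands are seen. This is a generic sketch, not any of the specific models reviewed; the kernel widths and ramp geometry are arbitrary choices for illustration.

```python
# Centre-surround response to a plateau-ramp-plateau luminance profile:
# dips and peaks appear at the ramp knees, mimicking Mach bands.
import numpy as np

def dog_kernel(x, sigma_c=2.0, sigma_s=6.0):
    """Difference-of-Gaussians kernel (excitatory centre, inhibitory surround)."""
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return g(sigma_c) - g(sigma_s)

x = np.arange(-30, 31)
kernel = dog_kernel(x)  # sums to ~0, so flat regions give ~0 response

# Luminance profile: dark plateau (0.2), linear ramp, bright plateau (0.8).
lum = np.concatenate([np.full(100, 0.2),
                      np.linspace(0.2, 0.8, 100),
                      np.full(100, 0.8)])

response = np.convolve(lum, kernel, mode="same")
# The response dips below baseline near index 100 (dark band at the foot
# of the ramp) and rises above baseline near index 200 (bright band).
```

    Because the kernel approximates a negative second derivative, the response tracks the curvature of the luminance profile: convex knees produce undershoots, concave knees overshoots.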

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Computational physics of the mind

    In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance not only to answer the original psychophysical questions but also to create models of mind. In this paper several approaches relevant to the modeling of mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

    Olfactory Orientation and Navigation in Humans.

    Although predicted by theory, there is no direct evidence that an animal can define an arbitrary location in space as a coordinate location on an odor grid. Here we show that humans can do so. Using a spatial match-to-sample procedure, humans were led to a random location within a room diffused with two odors. After brief sampling and spatial disorientation, they had to return to this location. Over three conditions, participants had access to different sensory stimuli: olfactory only, visual only, and a final control condition with no olfactory, visual, or auditory stimuli. Humans located the target more accurately in the olfaction-only condition than in the control condition, and performed above chance. Thus a mechanism long proposed for the homing pigeon, the ability to define a location on a map constructed from chemical stimuli, may also be a navigational mechanism used by humans.