
    From primal sketches to the recovery of intensity and reflectance representations

    A local change in intensity (an edge) is a characteristic that is preserved when an image is passed through a bandpass filter. Primal sketch representations of images, built from the bandpass-filtered data, have been in common use since Marr proposed his model of early human vision. Here, researchers move beyond primal sketch extraction to the recovery of intensity and reflectance representations using only the bandpass-filtered data. Assessing the response of an ideal step edge to the Laplacian of Gaussian (∇²G) filter, they found that the filtered data preserves the original change of intensity that created the edge, in addition to the edge location. Using the filtered data, they can construct the primal sketches and recover the original (relative) intensity levels between the boundaries. It was also found that filtering an ideal step edge with the Intensity-Dependent Spatial Summation (IDS) filter preserves the actual intensity on both sides of the edge, in addition to the edge location. The IDS filter also preserves the reflectance ratio at the edge location. Therefore, one can recover the intensity levels between the edge boundaries as well as the (relative) reflectance representation. The recovery of the reflectance representation is of special interest, as it removes shadowing degradations and other dependencies on the illumination. This method offers a new approach to low-level vision processing as well as to high-ratio data-compression coding: high compression can be gained by transmitting only the information associated with the edge locations (edge primitives) that is necessary for the recovery.
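The step-edge analysis described above is easy to illustrate numerically. The following sketch is not the authors' implementation (the kernel width, step height and all names are illustrative); it shows the two properties the abstract relies on: a 1-D Laplacian-of-Gaussian response localizes an ideal step edge at its zero crossing, and its amplitude is linear in the step height, so the relative intensity change survives the filtering.

```python
import numpy as np

def log_kernel(sigma=2.0, radius=10):
    # Discrete 1-D Laplacian of Gaussian (the del-squared-G operator), made
    # zero-mean so that constant image regions filter to exactly zero.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = (x**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))
    return k - k.mean()

def filter_step(height, edge=50, n=100):
    # Ideal step edge: intensity 0 left of `edge`, `height` from `edge` onward.
    step = np.where(np.arange(n) < edge, 0.0, height)
    return np.convolve(step, log_kernel(), mode="same")

r = filter_step(20.0)

# Edge location: the zero crossing between the two lobes of the response.
peak = int(np.argmax(r))                        # positive lobe, just left of the edge
zero_cross = peak + int(np.argmax(r[peak:] <= 0.0))

# Intensity change: the response scales linearly with the step height.
ratio = np.max(np.abs(filter_step(40.0))) / np.max(np.abs(r))
```

Doubling the step height exactly doubles the filtered response, which is the linearity that lets the (relative) intensity levels between edge boundaries be recovered from the filtered data alone.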

    Hydrodynamic object recognition using pressure sensing

    Hydrodynamic sensing is instrumental to fish and some amphibians. It also represents, for underwater vehicles, an alternative way of sensing the fluid environment when visual and acoustic sensing are limited. To assess the effectiveness of hydrodynamic sensing and gain insight into its capabilities and limitations, we investigated the forward and inverse problem of detecting and identifying a stationary obstacle, described using a general shape representation, from the hydrodynamic pressure in its neighbourhood. Based on conformal mapping and a general normalization procedure, our obstacle representation accounts for all specific features of progressive perceptual hydrodynamic imaging reported experimentally. Size, location and shape are encoded separately. The shape representation rests upon an asymptotic series which embodies the progressive character of hydrodynamic imaging through pressure sensing. A dynamic filtering method is used to invert noisy nonlinear pressure signals for the shape parameters. The results highlight the dependence of the sensitivity of hydrodynamic sensing not only on the relative distance to the disturbance but also on its bearing.
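For intuition about the distance-and-bearing dependence reported above, consider the simplest obstacle covered by a conformal-map description: a circular cylinder in a uniform potential flow. This is a textbook sketch, not the paper's general shape representation, and the free-stream speed, radius and sensor positions are illustrative.

```python
# Pressure coefficient near a circular cylinder in uniform potential flow.
U, a = 1.0, 0.5          # free-stream speed and cylinder radius (illustrative)

def cp(z):
    # Complex velocity dw/dz for the potential w(z) = U * (z + a**2 / z);
    # Bernoulli's equation then gives the pressure coefficient
    # Cp = 1 - |v|**2 / U**2 at the complex sensor position z.
    v = U * (1.0 - a**2 / z**2)
    return 1.0 - abs(v)**2 / U**2

near = cp(2j * a)        # sensor abeam of the cylinder, two radii out
far = cp(4j * a)         # same bearing, four radii out
ahead = cp(-2 * a + 0j)  # same range, but directly upstream
```

The pressure disturbance decays quickly with range (to leading order as 1/d² in Cp) and, at equal range, differs between the abeam and upstream sensors, consistent with the sensitivity depending on both distance and bearing.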

    Interplay of Mott Transition and Ferromagnetism in the Orbitally Degenerate Hubbard Model

    A slave-boson representation for the degenerate Hubbard model is introduced. The location of the metal-insulator transition that occurs at commensurate densities is shown to depend only weakly on the band degeneracy M. The relative weights of the Hubbard sub-bands, as well as the magnetic properties, depend strongly on M. It is also shown that a sizable Hund's rule coupling is required for a ferromagnetic instability to appear, and that the metal-insulator transition driven by an increase in temperature is a strong function of this coupling. Comment: 5 pages, revtex, 5 postscript figures, submitted to Phys. Rev.

    The effects of a task-irrelevant visual event on spatial working memory.

    In the present experiment, we investigated whether the memory of a location is affected by the occurrence of an irrelevant visual event. Participants had to memorize the location of a dot. During the retention interval, a task-irrelevant stimulus was presented with abrupt onset somewhere in the visual field. Results showed that the spatial memory representation was affected by the occurrence of the external irrelevant event relative to a control condition in which there was no external event. Specifically, the memorized location was shifted towards the location of the task-irrelevant stimulus. This effect was only present when the onset was close in space to the memory representation. These findings suggest that the “internal” spatial map used for keeping a location in spatial working memory and the “external” spatial map that is affected by exogenous events in the outside world are either the same or tightly linked.

    Spatial Proximity as a Determinant of Cognitive Control Context

    The speed and flexibility of cognitive control is exemplified by the context-specific proportion congruency (CSPC) effect. Two locations on a computer screen may be biased to present either mostly congruent (MC) stimuli or mostly incongruent (MI) stimuli, necessitating rapid shifts of cognitive control in order to maximize speed and accuracy of responding. The episodic retrieval account has posited that the speed and flexibility of control can be explained by attentional settings being bound with contextual cues (e.g., the location at which a stimulus appears) into an episodic representation, allowing for settings to be retrieved automatically. However, what determines which setting is bound with which location cue has not yet been investigated. The present study posited that relative spatial proximity determines which setting is applied to a given location. In Experiment 1, six locations were arranged to manipulate relative spatial proximity. A biased (e.g., MC) location was placed at the top edge of a screen and a biased (e.g., MI) location was placed at the bottom. At the middle of the screen, two MC (above fixation) and two MI (below fixation) locations were placed in close proximity. A CSPC effect was found between the outer locations at the edges, while the middle locations were treated as a single 50% congruent location. Experiment 2 separated the middle locations to be closer to the outer locations of their same congruency. A CSPC effect was then found between the middle locations. Results are interpreted within the relative proximity hypothesis, which posits that multiple locations can influence the formation of an episodic representation when they are placed closer to one another relative to other locations.

    Towards modelling group-robot interactions using a qualitative spatial representation

    This paper tackles the problem of finding a suitable qualitative representation for robots to reason about activity spaces where they carry out tasks while interacting with a group of people. The Qualitative Spatial model for Group Robot Interaction (QS-GRI) defines Kendon formations depending on: (i) the relative location of the robot with respect to the other individuals involved in the interaction; (ii) the individuals' orientation; (iii) the shared peri-personal distance; and (iv) the role of the individuals (observer, main character or interactive). The evolution of Kendon formations is also studied, that is, how one formation is transformed into another. These transformations can depend on the role that the robot has and on the number of people involved.

    Emergence of Object Segmentation in Perturbed Generative Models

    We introduce a novel framework to build a model that can learn how to segment objects from a collection of images without any human annotation. Our method builds on the observation that the location of object segments can be perturbed locally relative to a given background without affecting the realism of a scene. Our approach is to first train a generative model of a layered scene. The layered representation consists of a background image, a foreground image and the mask of the foreground. A composite image is then obtained by overlaying the masked foreground image onto the background. The generative model is trained in an adversarial fashion against a discriminator, which forces the generative model to produce realistic composite images. To force the generator to learn a representation where the foreground layer corresponds to an object, we perturb the output of the generative model by introducing a random shift of both the foreground image and mask relative to the background. Because the generator is unaware of the shift before computing its output, it must produce layered representations that are realistic for any such random perturbation. Finally, we learn to segment an image by defining an autoencoder consisting of an encoder, which we train, and the pre-trained generator as the decoder, which we freeze. The encoder maps an image to a feature vector, which is fed as input to the generator to give a composite image matching the original input image. Because the generator outputs an explicit layered representation of the scene, the encoder learns to detect and segment objects. We demonstrate this framework on real images of several object categories. Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), spotlight presentation.
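The random-shift compositing step at the heart of the method can be sketched in a few lines. This is a toy illustration with hypothetical names, not the paper's code; np.roll stands in for the shift, so the foreground wraps around at the borders rather than being padded.

```python
import numpy as np

rng = np.random.default_rng(0)

def composite(background, foreground, mask, max_shift=8):
    # Shift the foreground layer and its mask by the same random offset,
    # then alpha-composite the shifted layer over the unshifted background.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    fg = np.roll(foreground, (dy, dx), axis=(0, 1))
    m = np.roll(mask, (dy, dx), axis=(0, 1))
    return m * fg + (1.0 - m) * background

bg = np.zeros((32, 32))        # blank background
fg = np.ones((32, 32))         # uniform foreground texture
mask = np.zeros((32, 32))
mask[12:20, 12:20] = 1.0       # an 8x8 foreground object
img = composite(bg, fg, mask)
```

Because the generator cannot anticipate the offset, producing composites that look realistic under every such shift pushes the foreground layer and its mask to contain a coherent, movable object rather than smearing object content into the background.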