
    How is Gaze Influenced by Image Transformations? Dataset and Model

    Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality, prototypical stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset comprising the fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations over transformed versus original images. Third, we use the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas other DATs that severely impact human gaze degrade performance. These label-preserving, valid augmentation transformations provide a way to enlarge existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified UNet is proposed as the generator of GazeGAN, which combines classic skip connections with a novel center-surround connection (CSC) in order to leverage multi-level features. We also propose a histogram loss based on the Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in terms of luminance distribution. Extensive experiments and comparisons over 3 datasets indicate that GazeGAN achieves the best performance on popular saliency evaluation metrics and is more robust to various perturbations. Our code and data are available at: https://github.com/CZHQuality/Sal-CFS-GAN
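    The histogram loss mentioned in the abstract can be illustrated with a minimal sketch. This compares the luminance histograms of a predicted and a ground-truth saliency map using the Alternative Chi-Square distance; the function name, bin count, and normalization here are assumptions for illustration, not the paper's exact formulation (see the linked repository for the authors' implementation).

    ```python
    import numpy as np

    def acs_hist_loss(pred, target, bins=64, eps=1e-8):
        """Sketch of a histogram loss based on the Alternative Chi-Square
        distance between the luminance histograms of a predicted and a
        ground-truth saliency map (values assumed in [0, 1])."""
        # Build normalized luminance histograms for both maps.
        h_p, _ = np.histogram(pred, bins=bins, range=(0.0, 1.0))
        h_t, _ = np.histogram(target, bins=bins, range=(0.0, 1.0))
        h_p = h_p / max(h_p.sum(), 1)
        h_t = h_t / max(h_t.sum(), 1)
        # Alternative Chi-Square distance: 2 * sum((p - q)^2 / (p + q)).
        return float(np.sum(2.0 * (h_p - h_t) ** 2 / (h_p + h_t + eps)))
    ```

    The loss is zero when the two maps share the same luminance distribution and grows as their histograms diverge, which is why it can refine a generator's output toward ground-truth luminance statistics.
    
    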

    Project SEMACODE: a scale-invariant object recognition system for content-based queries in image databases

    For the efficient management of large image databases, the automated characterization of images and the use of that characterization for search and ordering tasks are highly desirable. The purpose of the project SEMACODE is to combine the still-unsolved problem of content-oriented characterization of images with scale-invariant object recognition and model-based compression methods. To achieve this goal, existing techniques as well as new concepts related to pattern matching, image encoding, and image compression are examined. The resulting methods are integrated in a common framework with the aid of a content-oriented conception. For the application, an image database at the library of the University of Frankfurt/Main (StUB; about 60,000 images), the required operations are developed. The search and query interfaces are defined in close cooperation with the StUB project “Digitized Colonial Picture Library”. This report describes the fundamentals and first results of the image encoding and object recognition algorithms developed within the scope of the project.

    The segmentation of visual form

    The argument of this work is that, despite the massive body of literature that has accumulated in the decades since the discovery of 'gestalt' as the ruling principle of perception, little genuine progress has been made in solving the problem posed by the visual perception of form. This state of affairs is attributed, moreover, to a fundamentally inadequate formulation of the problem. It is not enough merely to revise this or that theory, or this or that experimental design, if the argument is correct; rather, it is necessary to revise the formulation of the form problem upon which theory and experimental design rest. Thus, the reformulation suggested is that (a) form is the unit which segments space, and consequently that (b) the problem posed by this unit is essentially that of its segmentation/formation of space, rather than that of its recognition/conservation through change in space; the former is the primary, the latter the secondary, (psycho-physical) problem posed by the visual perception of form. This work also contains a segmentation (spatial/holistic) theory of form, and five experiments designed to test this theory against current recognition (dimensional/analytic) theories of form (for example, see Corcoran, 1971); these experiments are all concerned with different facets of the role played by contour in visual perception, and they provide some evidence for the former, and against the latter, type of theory. It should be pointed out that both in the main body of the text, and in an appendix, it is argued that segmentation is primarily two-dimensional rather than three-dimensional: two-dimensional 'figure' form is primary over three-dimensional 'object' form in perceptual development, and indeed, the latter is constructed from the former.
(This hypothesis is part of a more general point of view about cognition, namely that there is an a priori spatial system which is used to process perceptual input, and establish in it the spatial structure of perceptual experience, but one whose conceptual implications and properties become available for symbolisation and thinking when it is freed from the task of perceptual processing by being lifted out of perception into a visual form of representation which Bruner terms 'ikonic' (see Bruner et al., 1966).)