
    The Effect of Predictability on Subjective Duration

    Events can sometimes appear longer or shorter in duration than other events of equal length. For example, in a repeated presentation of auditory or visual stimuli, an unexpected object of equivalent duration appears to last longer. Illusions of duration distortion raise an important question of time representation: when durations dilate or contract, does time in general slow down or speed up during that moment? In other words, what entailments do duration distortions have with respect to other timing judgments? We here show that when a sound or visual flicker is presented in conjunction with an unexpected visual stimulus, neither the pitch of the sound nor the frequency of the flicker is affected by the apparent duration dilation. This demonstrates that subjective time in general is not slowed; instead, duration judgments can be manipulated with no concurrent impact on other temporal judgments. Like spatial vision, time perception appears to be underpinned by a collaboration of separate neural mechanisms that usually work in concert but are separable. We further show that the duration dilation of an unexpected stimulus is not enhanced by increasing its saliency, suggesting that the effect is more closely related to prediction violation than to enhanced attention. Finally, duration distortions induced by violations of progressive number sequences implicate high-level predictability, suggesting the involvement of areas higher than primary visual cortex. We suggest that duration distortions can be understood in terms of repetition suppression, in which neural responses to repeated stimuli are diminished.

    Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

    We present a deep neural network-based approach to image quality assessment (IQA). The network is trained end-to-end and comprises ten convolutional layers and five pooling layers for feature extraction, and two fully connected layers for regression, which makes it significantly deeper than related IQA models. Unique features of the proposed architecture are that: 1) with slight adaptations it can be used in a no-reference (NR) as well as in a full-reference (FR) IQA setting and 2) it allows for joint learning of local quality and local weights, i.e., relative importance of local quality to the global quality estimate, in a unified framework. Our approach is purely data-driven and does not rely on hand-crafted features or other types of prior domain knowledge about the human visual system or image statistics. We evaluate the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the LIVE In the Wild Image Quality Challenge database and show superior performance to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
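    The joint learning of local quality and local weights described above amounts, at pooling time, to a weight-normalized average over image patches. A minimal sketch of that aggregation step (the function name and toy scores are illustrative, not the authors' code):

```python
# Sketch of the weighted pooling the abstract describes: each patch i
# yields a local quality estimate q_i and a learned local weight w_i;
# the global quality is the weight-normalized average of the q_i.
# All names and inputs here are hypothetical, for illustration only.

def global_quality(local_qualities, local_weights, eps=1e-8):
    """Combine per-patch quality scores using per-patch importance weights."""
    assert len(local_qualities) == len(local_weights)
    num = sum(q * w for q, w in zip(local_qualities, local_weights))
    den = sum(local_weights) + eps  # eps guards against all-zero weights
    return num / den

# A patch judged important (weight 0.9) dominates one judged
# uninformative (weight 0.1):
score = global_quality([30.0, 80.0], [0.9, 0.1])
```

    In the paper's setting both the qualities and the weights are network outputs, so the pooling is differentiable and trained end-to-end with the rest of the model.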

    How is Gaze Influenced by Image Transformations? Dataset and Model

    Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality, stereotyped stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset including fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations over transformed versus original images. Third, we utilize the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas some other DATs that severely impact human gaze degrade the performance. These label-preserving valid augmentation transformations provide a solution to enlarge existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified U-Net is proposed as the generator of the GazeGAN, which combines classic skip connections with a novel center-surround connection (CSC) in order to leverage multi-level features. We also propose a histogram loss based on the Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in terms of luminance distribution. Extensive experiments and comparisons over 3 datasets indicate that GazeGAN achieves the best performance in terms of popular saliency evaluation metrics, and is more robust to various perturbations. Our code and data are available at: https://github.com/CZHQuality/Sal-CFS-GAN
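    The abstract names the ACS HistLoss only at a high level. A common form of the alternative chi-square distance between two normalized histograms is sketched below; the exact form, binning, and names used by GazeGAN may differ, so treat this as an assumption:

```python
# Alternative chi-square distance between two normalized histograms
# (e.g., luminance histograms of predicted vs. ground-truth saliency maps).
# A per-bin squared difference is normalized by the bins' combined mass,
# so well-populated bins do not dominate the comparison.

def acs_hist_distance(h1, h2, eps=1e-8):
    """Alternative chi-square distance between equal-length histograms."""
    assert len(h1) == len(h2)
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

identical = acs_hist_distance([0.5, 0.5], [0.5, 0.5])  # -> 0.0
disjoint = acs_hist_distance([1.0, 0.0], [0.0, 1.0])   # -> ~2.0 (maximal)
```

    Used as a loss term, such a distance pushes the luminance distribution of the generated saliency map toward that of the ground truth, complementing pixel-wise losses.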

    Neuronal Distortions of Reward Probability without Choice

    Reward probability crucially determines the value of outcomes. A basic phenomenon, defying explanation by traditional decision theories, is that people often overweight small and underweight large probabilities in choices under uncertainty. However, the neuronal basis of such reward probability distortions and their position in the decision process are largely unknown. We assessed individual probability distortions with behavioral pleasantness ratings and brain imaging in the absence of choice. Dorsolateral frontal cortex regions showed experience-dependent overweighting of small, and underweighting of large, probabilities, whereas ventral frontal regions showed the opposite pattern. These results demonstrate distorted neuronal coding of reward probabilities in the absence of choice, stress the importance of experience with probabilistic outcomes, and contrast with linear probability coding in the striatum. Input of the distorted probability estimations to decision-making mechanisms is likely to contribute to well-known inconsistencies in preferences formalized in theories of behavioral economics.
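    The overweight-small/underweight-large pattern is conventionally summarized by a probability weighting function. The one-parameter Tversky-Kahneman form below is the standard textbook sketch from behavioral economics, not the estimation procedure used in this study:

```python
# Standard one-parameter probability weighting function w(p) from
# prospect theory (Tversky & Kahneman): for gamma < 1 it produces the
# inverse-S shape, overweighting small and underweighting large
# probabilities. The default gamma = 0.61 is a commonly cited estimate.

def weight(p, gamma=0.61):
    """Subjective decision weight for an objective probability p in [0, 1]."""
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

w_small = weight(0.05)  # > 0.05: small probability is overweighted
w_large = weight(0.90)  # < 0.90: large probability is underweighted
```

    The study's point is that such distorted weights already appear in frontal cortex coding during passive ratings, i.e., upstream of any choice mechanism.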

    Closed loop models for analyzing the effects of simulator characteristics

    The optimal control model of the human operator is used to develop closed-loop models for analyzing the effects of (digital) simulator characteristics on predicted performance and/or workload. Two approaches are considered: the first utilizes a continuous approximation to the discrete simulation in conjunction with the standard optimal control model; the second involves a more exact discrete description of the simulator in a closed-loop multirate simulation in which the optimal control model simulates the pilot. Both models predict that simulator characteristics can have significant effects on performance and workload.
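    The qualitative effect both models predict, that a simulator's discrete update rate changes closed-loop performance, can be illustrated with a toy multirate simulation. Everything below (a proportional "pilot" instead of the optimal control model, the disturbance, all gains and rates) is a hypothetical sketch, not the report's model:

```python
# Toy closed-loop multirate simulation: the plant integrates at a fine
# step dt, but the "pilot" only sees the state at the simulator's frame
# rate (zero-order hold every `frame_steps` integration steps). A stale
# display delays disturbance rejection and raises RMS tracking error.
import math

def simulate(frame_steps, K=2.0, dt=0.01, t_end=10.0):
    """Return RMS tracking error for a given display-update interval."""
    n = int(t_end / dt)
    x, x_displayed, sq = 0.0, 0.0, 0.0
    for k in range(n):
        if k % frame_steps == 0:       # simulator frame update
            x_displayed = x
        u = -K * x_displayed           # proportional "pilot" control
        d = math.sin(2.0 * k * dt)     # disturbance to be rejected
        x += dt * (u + d)              # Euler step of x' = u + d
        sq += x * x
    return math.sqrt(sq / n)

rms_fast = simulate(frame_steps=1)     # ~100 Hz display update
rms_slow = simulate(frame_steps=20)    # ~5 Hz display update
```

    With these (arbitrary) parameters, the coarser frame rate yields a larger RMS error, mirroring the report's conclusion that simulator characteristics alone can measurably affect predicted performance.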