
    Serial dependence in timing at the perceptual level being modulated by working memory

    Recent experiences bias the perception of subsequent stimuli, as has been demonstrated in many kinds of visual perception experiments. This phenomenon, known as serial dependence, may reflect mechanisms that maintain perceptual stability. In the current study, we examined several key properties of serial dependence in temporal perception. First, we examined the source of the serial dependence effect in temporal perception. We found that perception without motor reproduction is sufficient to induce the sequential effect; motor reproduction produces a stronger effect, which operates by biasing the perception of the upcoming target duration rather than by directly influencing the subsequent movement. Second, we asked how working memory influences serial dependence in a temporal reproduction task. By varying the delay between the standard duration and its reproduction, we showed that the strength of serial dependence increased as the delay grew. These features of serial dependence are consistent with those observed in visual perceptual tasks, such as orientation or location perception. The similarities between the visual and the timing tasks may suggest a shared neural coding mechanism for magnitude across visual stimuli and durations.

    The two‐ to three‐second time window of shot durations in movies

    Movie shots of singular scenes have a preferential duration of 2 to 3 s, regardless of producer, movie type, or cultural environment. This observation suggests that the temporal structure of movies matches a neural mechanism of information processing in the time domain.

    Detecting Extra Dimension By the Experiment of the Quantum Gravity Induced Entanglement of Masses

    It is believed that gravity may be regarded as a quantum coherent mediator. In this work we propose a scheme based on the Quantum Gravity Induced Entanglement of Masses (QGEM) experiment to test for an extra dimension. The experiment involves two freely falling test masses passing through a Stern-Gerlach-like device. We study their entanglement witness in the framework of the Randall-Sundrum II (RS-II) model. It turns out that the system would become entangled more rapidly in the presence of an extra dimension; the effect is more pronounced for a large extra-dimension radius.
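    The faster entanglement claimed in the abstract can be sketched with two standard ingredients (this is an illustrative outline, not the paper's derivation; the specific form of the RS-II correction and the notation $\ell$ for the extra-dimension radius are assumptions): the gravitational phase accumulated between two superposed masses, and the short-range correction that a warped extra dimension adds to the Newtonian potential.

    ```latex
    % Entanglement phase accumulated by branch pair (i,j) over time tau:
    \phi_{ij} = \frac{1}{\hbar} \int_0^{\tau} V(r_{ij}(t))\, dt,
    % with the RS-II corrected potential at separations r \gg \ell:
    V(r) = -\frac{G m_1 m_2}{r}\left(1 + \frac{2\ell^2}{3 r^2}\right).
    % Since the \ell-dependent term makes |V| larger at small r, the branch
    % phases differ faster, so the entanglement witness crosses its
    % threshold sooner -- more so for larger \ell.
    ```

    The qualitative point matches the abstract: any correction that strengthens the potential at the experiment's separations shortens the time needed to certify entanglement.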

    Representing Volumetric Videos as Dynamic MLP Maps

    This paper introduces a novel representation of volumetric videos for real-time view synthesis of dynamic scenes. Recent advances in neural scene representations demonstrate their remarkable capability to model and render complex static scenes, but extending them to represent dynamic scenes is not straightforward due to their slow rendering speed or high storage cost. To solve this problem, our key idea is to represent the radiance field of each frame as a set of shallow MLP networks whose parameters are stored in 2D grids, called MLP maps, and dynamically predicted by a 2D CNN decoder shared by all frames. Representing 3D scenes with shallow MLPs significantly improves the rendering speed, while dynamically predicting MLP parameters with a shared 2D CNN instead of explicitly storing them leads to low storage cost. Experiments show that the proposed approach achieves state-of-the-art rendering quality on the NHR and ZJU-MoCap datasets, while being efficient for real-time rendering with a speed of 41.7 fps for 512×512512 \times 512 images on an RTX 3090 GPU. The code is available at https://zju3dv.github.io/mlp_maps/.Comment: Accepted to CVPR 2023. The first two authors contributed equally to this paper. Project page: https://zju3dv.github.io/mlp_maps
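    The core data structure described above — a 2D grid where each cell holds the parameters of its own tiny MLP — can be sketched in a few lines of NumPy. This is a minimal illustration of the lookup-and-evaluate pattern only; the grid here is filled with random parameters, and the cell choice, layer sizes, and function names are invented for the example (the paper predicts the grids with a shared 2D CNN, which is omitted here).

    ```python
    import numpy as np

    def param_count(in_dim, hidden, out_dim):
        # Flattened size of a two-layer MLP: W1, b1, W2, b2.
        return in_dim * hidden + hidden + hidden * out_dim + out_dim

    def eval_shallow_mlp(params, x, in_dim, hidden, out_dim):
        # Unpack one grid cell's flattened parameters and run the shallow MLP.
        i = 0
        W1 = params[i:i + in_dim * hidden].reshape(in_dim, hidden); i += in_dim * hidden
        b1 = params[i:i + hidden]; i += hidden
        W2 = params[i:i + hidden * out_dim].reshape(hidden, out_dim); i += hidden * out_dim
        b2 = params[i:i + out_dim]
        h = np.maximum(x @ W1 + b1, 0.0)   # single ReLU hidden layer
        return h @ W2 + b2                 # e.g. color + density

    # An "MLP map": every (u, v) cell stores one tiny MLP's parameters.
    H, W, in_dim, hidden, out_dim = 4, 4, 3, 8, 4
    rng = np.random.default_rng(0)
    mlp_map = rng.normal(size=(H, W, param_count(in_dim, hidden, out_dim)))

    # Query: project a 3D point to a grid cell, then evaluate that cell's MLP.
    point = np.array([0.1, -0.2, 0.3])
    u, v = 1, 2                            # toy projected cell index
    rgba = eval_shallow_mlp(mlp_map[u, v], point, in_dim, hidden, out_dim)
    print(rgba.shape)  # (4,)
    ```

    The speed/storage trade-off in the abstract follows from this layout: each query only evaluates a very small MLP (fast), and the grids of parameters are regressed by one shared CNN per frame rather than stored explicitly (compact).
    
    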