    A polar prediction model for learning to represent visual transformations

    All organisms make temporal predictions, and their evolutionary fitness depends on the accuracy of these predictions. In the context of visual perception, the motions of both the observer and objects in the scene structure the dynamics of sensory signals, allowing for partial prediction of future signals based on past ones. Here, we propose a self-supervised representation-learning framework that extracts and exploits the regularities of natural videos to compute accurate predictions. We motivate the polar architecture by appealing to the Fourier shift theorem and its group-theoretic generalization, and we optimize its parameters on next-frame prediction. Through controlled experiments, we demonstrate that this approach can discover the representation of simple transformation groups acting in the data. When trained on natural video datasets, our framework achieves better prediction performance than traditional motion compensation and rivals conventional deep networks, while maintaining interpretability and speed. Furthermore, the polar computations can be restructured into components resembling normalized simple and direction-selective complex cell models of primate V1 neurons. Thus, polar prediction offers a principled framework for understanding how the visual system represents sensory inputs in a form that simplifies temporal prediction.
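    The motivating idea can be sketched in a few lines: by the Fourier shift theorem, a global translation leaves each frequency's amplitude unchanged and advances its phase linearly, so a future frame can be predicted by measuring the per-frequency phase change between two frames and applying it once more. The toy NumPy sketch below illustrates that baseline on a translating stimulus; it is a simplified illustration of the motivation, not the learned polar prediction model described in the paper.

```python
import numpy as np

def fourier_phase_prediction(frame_prev, frame_curr):
    """Predict the next frame by advancing each Fourier coefficient's phase
    by the phase change observed between the two most recent frames."""
    F_prev = np.fft.fft2(frame_prev)
    F_curr = np.fft.fft2(frame_curr)
    # Per-frequency phase change between the two frames.
    phase_step = np.angle(F_curr * np.conj(F_prev))
    # Keep the current amplitudes, rotate the phases once more.
    F_pred = np.abs(F_curr) * np.exp(1j * (np.angle(F_curr) + phase_step))
    return np.real(np.fft.ifft2(F_pred))

# Toy example: a vertical bar translating one pixel per frame (circular shift).
frame0 = np.zeros((32, 32))
frame0[:, 10] = 1.0
frame1 = np.roll(frame0, 1, axis=1)
prediction = fourier_phase_prediction(frame0, frame1)
truth = np.roll(frame0, 2, axis=1)
# For a pure global shift the prediction is exact up to machine precision.
print("max prediction error:", np.abs(prediction - truth).max())
```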

    Neuromatch Academy: a 3-week, online summer school in computational neuroscience

    Neuromatch Academy (https://academy.neuromatch.io; van Viegen et al., 2021) was designed as an online summer school to cover the basics of computational neuroscience in three weeks. The materials cover dominant and emerging computational neuroscience tools, how they complement one another, and focus specifically on how they can help us to better understand how the brain functions. An original component of the materials is their focus on modeling choices, i.e., how to choose the right approach, how to build models, and how to evaluate them to determine whether they provide real (meaningful) insight. This meta-modeling component of the instructional materials asks what questions can be answered by different techniques, and how to apply them meaningfully to gain insight about brain function.

    LabForComputationalVision/pyrtools: v1.0.2

    No full text
    Small documentation-related updates. The main goal of this release is to trigger Zenodo, so we get a DOI.
    What's Changed:
    - Bug report template by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/19
    - Readthedocs by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/22
    - Update index.rst by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/23
    - Update README.md by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/24
    Full Changelog: https://github.com/LabForComputationalVision/pyrtools/compare/v1.0.1...v1.0.2
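    For context, pyrtools provides multi-scale image decompositions. Below is a minimal usage sketch, assuming the documented names pt.pyramids.LaplacianPyramid, pyr_coeffs, and recon_pyr; check the pyrtools documentation for exact signatures and options.

```python
import numpy as np
import pyrtools as pt  # class/attribute names below assumed from the pyrtools docs

# Build a Laplacian pyramid of a small test image.
img = np.random.rand(64, 64)
pyr = pt.pyramids.LaplacianPyramid(img)

# Coefficients are stored per scale; print each band's shape.
for key, band in pyr.pyr_coeffs.items():
    print(key, band.shape)

# Reconstructing from the full set of coefficients should be near-lossless.
recon = pyr.recon_pyr()
print("max reconstruction error:", np.abs(img - recon).max())
```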

    Pyrtools: tools for multi-scale image processing

    No full text
    Largely documentation-related updates.
    What's Changed:
    - Bug report template by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/19
    - Readthedocs by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/22
    - Update index.rst by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/23
    - Update README.md by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/24
    - Release 1.0.2 updates by @billbrod in https://github.com/LabForComputationalVision/pyrtools/pull/25: adds Zenodo badges, a citation guide, and a citation.cff file, and updates 1.0.1 to 1.0.2 throughout the package.
    Full Changelog: https://github.com/LabForComputationalVision/pyrtools/compare/v1.0.1...v1.0.2
    If you use any component of pyrtools, please cite it as below.