
    Supervised Generative Reconstruction: An Efficient Way To Flexibly Store and Recognize Patterns

    Matching animal-like flexibility in recognition, and the ability to quickly incorporate new information, remains difficult; these limits have yet to be adequately addressed in neural models and recognition algorithms. This work proposes a configuration for recognition that maintains the same function as conventional algorithms but avoids their combinatorial problems. Feedforward recognition algorithms, such as classical artificial neural networks and machine learning algorithms, are known to be subject to catastrophic interference and forgetting: modifying or learning new information (associations between patterns and labels) causes loss of previously learned information. I demonstrate using mathematical analysis how supervised generative models, with feedforward and feedback connections, can emulate feedforward algorithms yet avoid catastrophic interference and forgetting. Learned information in generative models is stored in a more intuitive form that represents the fixed points, or solutions, of the network, and moreover displays difficulties similar to known cognitive phenomena. The brain-like capabilities and limits associated with generative models suggest the brain may perform recognition and store information using a similar approach. Because of the central role of recognition, progress in understanding the underlying principles may reveal significant insight into how to better study and integrate with the brain.
    Comment: 2 figures, 1 table
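The claimed advantage, storing associations as explicit fixed points rather than as distributed feedforward weights, can be illustrated with a minimal hypothetical sketch (not the paper's actual model): each class is a stored prototype, recognition selects the prototype that best reconstructs the input, and learning a new class appends a prototype without modifying the old ones, so earlier associations cannot be overwritten.

```python
import numpy as np

# Hypothetical sketch: a generative "reconstruction" recognizer.
# Each class is stored as a prototype pattern; recognition picks the
# prototype with the smallest reconstruction error for the input.
class GenerativeStore:
    def __init__(self):
        self.prototypes = []  # list of (label, pattern) pairs

    def learn(self, label, pattern):
        # Adding a class only appends storage; existing prototypes
        # (the network's fixed points) are left untouched.
        self.prototypes.append((label, np.asarray(pattern, dtype=float)))

    def recognize(self, x):
        x = np.asarray(x, dtype=float)
        errs = [(np.sum((x - p) ** 2), label) for label, p in self.prototypes]
        return min(errs)[1]

store = GenerativeStore()
store.learn("A", [1, 0, 0])
store.learn("B", [0, 1, 0])
print(store.recognize([0.9, 0.1, 0]))  # "A"
store.learn("C", [0, 0, 1])            # new class, no retraining
print(store.recognize([0.9, 0.1, 0]))  # still "A": no interference
```

By contrast, a feedforward network trained by gradient descent would update shared weights when learning "C", which is what opens the door to catastrophic interference.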

    Evaluating the Contribution of Top-Down Feedback and Post-Learning Reconstruction

    Deep generative models and their associated top-down architecture are gaining popularity in neuroscience and computer vision. In this paper we link our previous work on regulatory feedback networks to generative models. We show that the equations of generative models and regulatory feedback models can share the same fixed points; thus, phenomena observed with regulatory feedback can also apply to generative models. This suggests that generative models can likewise be developed to identify mixtures of patterns, address problems associated with binding, and estimate numerosity.
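A minimal sketch of a regulatory-feedback-style fixed-point iteration (my reading of the general idea, not the authors' exact equations) shows how top-down reconstruction can identify a mixture of two overlapping patterns presented simultaneously:

```python
import numpy as np

# Sketch of a regulatory-feedback-style update: each output scales its
# activation by how well the current top-down reconstruction accounts
# for its input features. W[i, j] = 1 if feature j belongs to pattern i.
def regulatory_feedback(W, x, steps=50):
    W = np.asarray(W, dtype=float)
    x = np.asarray(x, dtype=float)
    y = np.ones(W.shape[0])             # initial output activations
    n = W.sum(axis=1)                   # features per pattern (normalizer)
    for _ in range(steps):
        c = W.T @ y                     # top-down reconstruction of input
        c = np.where(c == 0, 1e-12, c)  # guard against division by zero
        y = (y / n) * (W @ (x / c))     # multiplicative update toward fixed point
    return y

# Two overlapping patterns; the input is their superposition.
W = [[1, 1, 0],   # pattern A
     [0, 1, 1]]   # pattern B
x = [1, 2, 1]     # A and B presented together
print(np.round(regulatory_feedback(W, x), 3))  # [1. 1.]: both identified
```

At the fixed point the reconstruction `W.T @ y` matches the input exactly, so the same solution can be read as a generative model explaining the input as a mixture, which is the correspondence the paper draws.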