Memory Encoding Model
We explore a new class of brain encoding model by adding memory-related
information as input. Memory is an essential brain mechanism that works
alongside visual stimuli. During a vision-memory cognitive task, we found that
the non-visual brain is largely predictable from previously seen images. Our
Memory Encoding Model (Mem) won the Algonauts 2023 visual brain competition
even without model ensembling (single-model score 66.8, ensemble score 70.8);
our ensemble model without memory input (61.4) would still place 3rd.
Furthermore, we observe a periodic delayed brain response correlated with the
6th-7th prior images, and the hippocampus also showed correlated activity timed
with this periodicity. We conjecture that this periodic replay could be related
to a memory mechanism that enhances working memory.
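The abstract does not give implementation details, but the core idea of a memory encoding model can be sketched as a regression that predicts brain responses from the features of the current image plus the features of previously seen images. The following toy example (ridge regression, a lag-2 memory term, and all variable names are assumptions for illustration, not the paper's actual method) shows why adding past-image features can make otherwise unexplained activity predictable:

```python
import numpy as np

# Hedged sketch: a toy memory-augmented encoding model. The lag, feature
# dimensions, and ridge formulation are illustrative assumptions only.
rng = np.random.default_rng(0)

n_trials, feat_dim, n_voxels = 200, 16, 8
img_feats = rng.standard_normal((n_trials, feat_dim))  # per-image features

# Simulated brain response depends on the current image AND the image seen
# `lag` trials earlier (a stand-in for the delayed responses described).
w_now = rng.standard_normal((feat_dim, n_voxels))
w_lag = rng.standard_normal((feat_dim, n_voxels))
lag = 2
y = img_feats @ w_now
y[lag:] += img_feats[:-lag] @ w_lag
y += 0.1 * rng.standard_normal(y.shape)

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def r2(Y, Y_hat):
    """Coefficient of determination over all voxels."""
    ss_res = ((Y - Y_hat) ** 2).sum()
    ss_tot = ((Y - Y.mean(0)) ** 2).sum()
    return 1 - ss_res / ss_tot

Y = y[lag:]

# Visual-only model: current image features alone.
X_vis = img_feats[lag:]
score_vis = r2(Y, X_vis @ ridge_fit(X_vis, Y))

# Memory model: concatenate current and lagged image features.
X_mem = np.hstack([img_feats[lag:], img_feats[:-lag]])
score_mem = r2(Y, X_mem @ ridge_fit(X_mem, Y))

print(score_vis, score_mem)  # memory input should explain more variance
```

In this toy setup the visual-only model cannot account for the lagged component, so the memory-augmented model attains a markedly higher R^2, mirroring the abstract's claim that non-visual activity becomes predictable from previously seen images.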
Brain Decodes Deep Nets
We developed a tool for visualizing and analyzing large pre-trained vision
models by mapping them onto the brain, thus exposing their hidden internals. Our
innovation arises from a surprising usage of brain encoding: predicting brain
fMRI measurements in response to images. We report two findings. First,
explicit mapping between the brain and deep-network features across dimensions
of space, layers, scales, and channels is crucial. This mapping method,
FactorTopy, is plug-and-play for any deep network; with it, one can paint a
picture of the network onto the brain (literally!). Second, our visualization
shows how different training methods matter: they lead to remarkable
differences in hierarchical organization and scaling behavior, growing with
more data or network capacity. It also provides insight into fine-tuning: how
pre-trained models change when adapting to small datasets. We found that
brain-like, hierarchically organized networks suffer less from catastrophic
forgetting after fine-tuning.
Comment: Website: see https://huzeyann.github.io/brain-decodes-deep-nets .
Code: see https://github.com/huzeyann/BrainDecodesDeepNet
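FactorTopy itself is not described here, but the underlying brain-encoding idea, fitting each voxel from deep-network features and noting which layer predicts it best, can be sketched in a toy form. Everything below (the simulated layers, voxel count, and ridge-based layer selection) is an illustrative assumption, not the paper's method:

```python
import numpy as np

# Hedged sketch: assign each simulated voxel the "layer" whose features
# predict it best. A toy stand-in for layer-wise brain-to-network mapping.
rng = np.random.default_rng(0)

n_imgs, feat_dim, n_voxels, n_layers = 300, 12, 6, 3
# Stand-in features from three layers of a vision model.
layer_feats = [rng.standard_normal((n_imgs, feat_dim)) for _ in range(n_layers)]

# Simulate voxels: each voxel is driven by one ground-truth layer plus noise.
true_layer = [0, 0, 1, 1, 2, 2]
fmri = np.zeros((n_imgs, n_voxels))
for v, L in enumerate(true_layer):
    w = rng.standard_normal(feat_dim)
    fmri[:, v] = layer_feats[L] @ w + 0.1 * rng.standard_normal(n_imgs)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression for a single voxel."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def best_layer(v):
    """Pick the layer whose features give the highest R^2 for voxel v."""
    scores = []
    for X in layer_feats:
        pred = X @ ridge_fit(X, fmri[:, v])
        ss_res = ((fmri[:, v] - pred) ** 2).sum()
        ss_tot = ((fmri[:, v] - fmri[:, v].mean()) ** 2).sum()
        scores.append(1 - ss_res / ss_tot)
    return int(np.argmax(scores))

assigned = [best_layer(v) for v in range(n_voxels)]
print(assigned)  # recovers the layer that drives each voxel
```

Painting such per-voxel layer assignments onto a cortical surface is what makes it possible to "paint a picture of the network onto the brain" and compare hierarchical organization across training methods.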