Sensory processing and categorization in cortical and deep neural networks
Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience, but ideas from more complex paradigms such as decision-making are used far less. Although automated decision-making systems are ubiquitous (driverless cars, pilot support systems, medical diagnosis algorithms, etc.), achieving human-level performance on decision-making tasks remains a challenge; at the same time, many tasks that are hard for AI are easy for humans. Understanding human brain dynamics during decision-making tasks and modeling them with deep neural networks could therefore improve AI performance. Here we modelled some of the complex neural interactions during a sensorimotor decision-making task. We investigated how brain dynamics flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We used two different approaches to characterize neural representations, comparing brain responses to 1) the geometry of a sensory or category domain (domain selectivity) and 2) predictions from deep neural networks (computation selectivity). The two approaches gave similar results, supporting the validity of our analyses. Using the first approach, we found that neural representations changed depending on context. We then trained deep recurrent neural networks to perform the same tasks as the animals. Using the second approach, we found that computations in different brain areas also changed flexibly with context: color computations appeared to rely more on sensory processing, while motion computations relied more on abstract categories. Overall, our results shed light on the biological basis of categorization and on differences in selectivity and computation across brain areas. They also suggest a way to study sensory and categorical representations in the brain: compare brain responses to both a behavioral model and a deep neural network and test whether they give similar results.
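The comparison of representational geometries described above can be illustrated with a minimal sketch. This is not the authors' analysis pipeline: the data here are synthetic, and the geometry comparison is a simple correlation-distance, rank-correlation approach assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def dissimilarity_matrix(responses):
    """Pairwise (1 - correlation) distances between stimulus response
    patterns; responses is an (n_stimuli x n_units) array."""
    return 1.0 - np.corrcoef(responses)

def geometry_similarity(neural, model):
    """Rank-correlate the upper triangles of two dissimilarity matrices,
    avoiding the assumption of a linear relation between distances."""
    iu = np.triu_indices(neural.shape[0], k=1)
    a = dissimilarity_matrix(neural)[iu]
    b = dissimilarity_matrix(model)[iu]
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of distances
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Toy example: a shared latent structure drives both the "neural" and
# the "model" responses, so their geometries should agree.
latent = rng.normal(size=(8, 3))                 # 8 stimuli, 3 latent features
neural = latent @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(8, 50))
model = latent @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(8, 20))
print(geometry_similarity(neural, model))        # positive: shared geometry
```

The same function could be applied with either a behavioral/geometric model or a deep network's predicted responses in place of `model`, which is the two-pronged comparison the abstract proposes.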
In vivo ephaptic coupling allows memory network formation
It is increasingly clear that memories are distributed across multiple brain areas. Such “engram complexes” are important features of memory formation and consolidation. Here, we test the hypothesis that engram complexes are formed in part by bioelectric fields that sculpt and guide neural activity and tie together the areas participating in an engram complex. Like the conductor of an orchestra, the fields influence each musician, or neuron, and orchestrate the output, the symphony. Drawing on the theory of synergetics, machine learning, and data from a spatial delayed-saccade task, our results provide evidence for in vivo ephaptic coupling in memory representations.
National Institute of Mental Health (U.S.) (Grant R37MH087027)