
    A Brief History of Simulation Neuroscience

    Our knowledge of the brain has evolved over millennia in philosophical, experimental and theoretical phases. We suggest that the next phase is simulation neuroscience. The main drivers of simulation neuroscience are big data generated at multiple levels of brain organization and the need to integrate these data to trace the causal chain of interactions within and across all these levels. Simulation neuroscience is currently the only methodology for systematically approaching the multiscale brain. In this review, we attempt to reconstruct the deep historical paths leading to simulation neuroscience, from the first observations of the nerve cell to modern efforts to digitally reconstruct and simulate the brain. Neuroscience began with the identification of the neuron as the fundamental unit of brain structure and function and has evolved towards understanding the role of each cell type in the brain, how brain cells are connected to each other, and how the seemingly infinite networks they form give rise to the vast diversity of brain functions. Neuronal mapping is evolving from subjective descriptions of cell types towards objective classes, subclasses and types. Connectivity mapping is evolving from loose topographic maps between brain regions towards dense anatomical and physiological maps of connections between individual genetically distinct neurons. Functional mapping is evolving from psychological and behavioral stereotypes towards a map of behaviors emerging from structural and functional connectomes. We show how industrialization of neuroscience and the resulting large disconnected datasets are generating demand for integrative neuroscience, how the scale of neuronal and connectivity maps is driving digital atlasing and digital reconstruction to piece together the multiple levels of brain organization, and how the complexity of the interactions between molecules, neurons, microcircuits and brain regions is driving brain simulation to understand the interactions in the multiscale brain.

    Towards Cognitive Bots: Architectural Research Challenges

    Software bots operating in multiple virtual digital platforms must understand the platforms' affordances and behave like human users. Platform affordances or features differ from one application platform to another, or across a platform's life cycle, requiring such bots to be adaptable. Moreover, bots in such platforms could cooperate with humans or other software agents for work or to learn specific behavior patterns. However, beyond language processing and prediction, present-day bots, particularly chatbots, are far from reaching a human user's behavior level within complex business information systems. They lack the cognitive capabilities to sense and act in such virtual environments, rendering their development a challenge to artificial general intelligence research. In this study, we problematize and investigate assumptions in conceptualizing software bot architecture by directing attention to significant architectural research challenges in developing cognitive bots endowed with complex behavior for operation on information systems. As an outlook, we propose alternative architectural assumptions to consider in future bot design and bot development frameworks.

    A graph-based approach for representing, integrating and analysing neuroscience data: the case of the murine basal ganglia

    Purpose: Neuroscience data are spread across a variety of sources, typically provisioned through ad-hoc, non-standard approaches and formats, and often with no connection to related data sources. This makes it difficult for researchers to understand, integrate and reuse brain-related data. The aim of this study is to show that a graph-based approach offers an effective means of representing, analysing and accessing brain-related data, which are highly interconnected, evolve over time and are often needed in combination. Approach: The authors present an approach for organising brain-related data in a graph model. The approach is exemplified with a unique data set of quantitative neuroanatomical data about the murine basal ganglia, a group of nuclei in the brain essential for processing information related to movement. Specifically, the murine basal ganglia data set is modelled as a graph, integrated with relevant data from third-party repositories, published through a Web-based user interface and API, and analysed from exploratory and confirmatory perspectives using popular graph algorithms to extract new insights. Findings: The evaluation of the graph model, the results of the graph data analysis and a usability study of the user interface suggest that graph-based data management in the neuroscience domain is a promising approach, since it enables integration of disparate data sources and improves the understanding and usability of data. Originality: The study provides a practical and generic approach for representing, integrating, analysing and provisioning brain-related data, and a set of software tools to support the proposed approach.
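The graph-modelling step described above can be sketched with a plain adjacency-dict graph. Everything below is illustrative: the node names are standard basal ganglia nuclei, but the edges, attributes and the degree queries are simplified placeholders, not the paper's actual data set or tooling.

```python
# Hypothetical graph model of basal ganglia connectivity: nodes are nuclei,
# edges carry attributes (here, just the connection type).
graph = {
    "striatum": {"GPe": {"type": "inhibitory"}, "GPi": {"type": "inhibitory"}},
    "GPe":      {"STN": {"type": "inhibitory"}},
    "STN":      {"GPi": {"type": "excitatory"}},
    "GPi":      {"thalamus": {"type": "inhibitory"}},
    "thalamus": {},
}

def out_degree(g):
    """Number of outgoing connections per node."""
    return {node: len(targets) for node, targets in g.items()}

def in_degree(g):
    """Number of incoming connections per node."""
    counts = {node: 0 for node in g}
    for targets in g.values():
        for target in targets:
            counts[target] = counts.get(target, 0) + 1
    return counts

print(out_degree(graph)["striatum"])  # 2 efferent connections
print(in_degree(graph)["GPi"])        # 2 afferent connections
```

Simple degree queries like these already support the kind of exploratory analysis the abstract mentions (e.g. finding highly connected nuclei); a production system would use a graph database or library rather than raw dicts.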

    Braitenberg Vehicles as Developmental Neurosimulation

    The connection between brain and behavior is a longstanding issue in the areas of behavioral science, artificial intelligence, and neurobiology. Particularly in artificial intelligence research, behavior is generated by a black box approximating the brain. As is standard among models of artificial and biological neural networks, an analogue of the fully mature brain is presented as a blank slate. This model generates outputs and behaviors from a priori associations, yet this does not consider the realities of biological development and developmental learning. Our purpose is to model the development of an artificial organism that exhibits complex behaviors. We will introduce our approach, which is to use Braitenberg Vehicles (BVs) to model the development of an artificial nervous system. The resulting developmental BVs will generate behaviors that range from stimulus responses to group behavior that resembles collective motion. Next, we will situate this work in the domain of artificial brain networks. Then we will focus on broader themes such as embodied cognition, feedback, and emergence. Our perspective will then be exemplified by three software instantiations that demonstrate how a BV-genetic algorithm hybrid model, multisensory Hebbian learning model, and multi-agent approaches can be used to approach BV development. We introduce use cases such as optimized spatial cognition (vehicle-genetic algorithm hybrid model), hinges connecting behavioral and neural models (multisensory Hebbian learning model), and cumulative classification (multi-agent approaches). In conclusion, we will revisit concepts related to our approach and how they might guide future development.

    The Dynamical Renaissance in Neuroscience

    Although there is a substantial philosophical literature on dynamical systems theory in the cognitive sciences, the same is not the case for neuroscience. This paper attempts to motivate increased discussion via a set of overlapping issues. The first aim is primarily historical and is to demonstrate that dynamical systems theory is currently experiencing a renaissance in neuroscience. Although dynamical concepts and methods are becoming increasingly popular in contemporary neuroscience, the general approach should not be viewed as something entirely new to neuroscience. Instead, it is more appropriate to view the current developments as making central again approaches that facilitated some of neuroscience’s most significant early achievements, namely, the Hodgkin-Huxley and FitzHugh-Nagumo models. The second aim is primarily critical and defends a version of the “dynamical hypothesis” in neuroscience. Whereas the original version centered on defending a noncomputational and nonrepresentational account of cognition, the version I have in mind is broader and includes both cognition and the neural systems that realize it. In view of that, I discuss research on motor control as a paradigmatic example demonstrating that the concepts and methods of dynamical systems theory are increasingly and successfully being applied to neural systems in contemporary neuroscience. More significantly, such applications are motivating a stronger metaphysical claim, that is, understanding neural systems as being dynamical systems, which includes not requiring appeal to representations to explain or understand those phenomena. Taken together, the historical claim and the critical claim demonstrate that the dynamical hypothesis is undergoing a renaissance in contemporary neuroscience.
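The FitzHugh-Nagumo model cited above as an early dynamical-systems achievement can be made concrete with a short forward-Euler integration. The parameters are the common textbook values (a = 0.7, b = 0.8, τ = 12.5); the input current I = 0.5 is an illustrative choice that places the system in its oscillatory regime.

```python
def fitzhugh_nagumo(v0=-1.0, w0=1.0, I=0.5, a=0.7, b=0.8, tau=12.5,
                    dt=0.01, steps=20000):
    """Integrate the FitzHugh-Nagumo equations with forward Euler:
       dv/dt = v - v^3/3 - w + I   (fast, voltage-like variable)
       dw/dt = (v + a - b*w) / tau (slow recovery variable)"""
    v, w = v0, w0
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# With this sustained input the fixed point is unstable and the system
# settles onto a limit cycle: v oscillates (repetitive spiking) instead
# of resting at an equilibrium.
```

This two-variable reduction of Hodgkin-Huxley is exactly the kind of model the dynamical approach treats as explanatory in its own right: spiking appears as a limit cycle of the system, without appeal to representations.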

    Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It

    Artificial Neural Networks have reached “grandmaster” and even “super-human” performance across a variety of games, from those involving perfect information, such as Go, to those involving imperfect information, such as “Starcraft”. Such technological developments from artificial intelligence (AI) labs have ushered in concomitant applications across the world of business, where an “AI” brand-tag is quickly becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong—an autonomous vehicle crashes, a chatbot exhibits “racist” behavior, automated credit-scoring processes “discriminate” on gender, etc.—there are often significant financial, legal, and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, as Pearl suggests, is to replace “reasoning by association” with “causal reasoning”—the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: “we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets—often using an approach known as ‘Deep Learning’—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.” In this paper, foregrounding what in 1949 Gilbert Ryle termed “a category mistake”, I will offer an alternative explanation for AI errors; it is not so much that AI machinery cannot “grasp” causality, but that AI machinery (qua computation) cannot understand anything at all.

    Linking brain structure, activity and cognitive function through computation

    Understanding the human brain is a “Grand Challenge” for 21st century research. Computational approaches enable large and complex datasets to be addressed efficiently, supported by artificial neural networks, modeling and simulation. Dynamic generative multiscale models, which enable the investigation of causation across scales and are guided by principles and theories of brain function, are instrumental for linking brain structure and function. An example of a resource enabling such an integrated approach to neuroscientific discovery is the BigBrain, which spatially anchors tissue models and data across different scales and ensures that multiscale models are supported by the data, making the bridge to both basic neuroscience and medicine. Research at the intersection of neuroscience, computing and robotics has the potential to advance neuro-inspired technologies by taking advantage of a growing body of insights into perception, plasticity and learning. To render data, tools and methods, theories, basic principles and concepts interoperable, the Human Brain Project (HBP) has launched EBRAINS, a digital neuroscience research infrastructure, which brings together a transdisciplinary community of researchers united by the quest to understand the brain, with fascinating insights and perspectives for societal benefits.

    Neuromarketing’s socioeconomic status and racial discrimination and lack of transparency

    Neuromarketing, the scientific study of the nervous system applied to marketing, is evolving rapidly. Marketers recognize that brain and biometric studies lead to a deeper understanding of consumer preferences. This research provides a historically accurate timeline of significant discoveries and a review of present-day neuromarketing tools and corporate case studies. A systematic methodological review was conducted, and findings were synthesized using secondary peer-reviewed research, media reports, and academic studies by marketing associations. This study shows that this area of neuroscience has challenges due to a lack of transparency, minimal standardization, and little to no corroboration among scientists. Additionally, neuromarketing researchers may not address the effects of racism and socioeconomic status on the brain, which, if not considered, can render the sampling results incomplete. With multinational corporations driving the demand for neuromarketing, attaining reliable data from a diverse and robust cross-section of subjects should be prioritized.

    Diffusion MRI with b-tensor encoding: constrained spherical deconvolution and diffusional variance decomposition

    The human brain is composed of several billion neurons forming a multitude of connections, which group into white-matter (WM) fibres totalling more than 160,000 kilometres. Diffusion magnetic resonance imaging (dMRI) exploits the attenuation of the magnetic resonance signal caused by the diffusion of water molecules in the brain to study these underlying structures non-invasively. The diffusion tensor imaging (DTI) model provides various measures conveying information about the brain's mesostructure from dMRI data, but it lacks specificity with respect to the microscopic nature of the signal. The constrained spherical deconvolution (CSD) model accurately reconstructs a map of WM fibre orientation distribution functions (fODFs) in the brain from high angular resolution diffusion imaging (HARDI) data, which a tractography algorithm can then use to map the human structural connectome. To overcome the lack of specificity of DTI, diffusion MRI with b-tensor encoding emerged in the 2000s. This technique uses different diffusion gradient encodings (e.g., linear, planar and spherical encodings) to provide finer probes of the microscopic structure of brain tissue, in the form of novel microstructure measures. However, b-tensor-encoded dMRI data are complex and cannot be fed directly into the CSD model, and the impact of such data on fODF reconstruction is unknown. This thesis therefore aims to lay the mathematical and technical foundations of a CSD adapted to b-tensor-encoded dMRI data. The fODF reconstruction performance of this model is then evaluated on simulated data, alongside an assessment of the efficiency of computing the microstructure measures. The study reveals that adding a planar or spherical encoding to a linear encoding reduces the angular resolution of the reconstructed fODFs by only a few degrees. Moreover, combining linear and spherical encodings leads to accurate computation of the microstructure measures. The results of this work, including the proposal of a b-tensor-encoded dMRI protocol lasting 10 minutes and 30 seconds, open the door to fODF reconstruction combined with the computation of these valuable microstructure measures.