155,801 research outputs found

    Computational Cognitive Neuroscience

    This chapter provides an overview of the basic research strategies and analytic techniques deployed in computational cognitive neuroscience. On the one hand, “top-down” (or reverse-engineering) strategies are used to infer, from formal characterizations of behavior and cognition, the computational properties of underlying neural mechanisms. On the other hand, “bottom-up” research strategies are used to identify neural mechanisms and to reconstruct their computational capacities. Both of these strategies rely on experimental techniques familiar from other branches of neuroscience, including functional magnetic resonance imaging, single-cell recording, and electroencephalography. What sets computational cognitive neuroscience apart, however, is the explanatory role of analytic techniques from disciplines as varied as computer science, statistics, machine learning, and mathematical physics. These techniques serve to describe neural mechanisms computationally, but also to drive the process of scientific discovery by influencing which kinds of mechanisms are most likely to be identified. For this reason, understanding the nature and unique appeal of computational cognitive neuroscience requires not just an understanding of the basic research strategies that are involved, but also of the formal methods and tools that are being deployed, including those of probability theory, dynamical systems theory, and graph theory.

    Rough Sets: a Bibliometric Analysis from 2014 to 2018

    Over almost forty years, considerable research has been undertaken on rough set theory as a way to deal with vague information. Rough sets have proven extremely helpful for a diversity of computer-science problems (e.g., knowledge discovery, computational logic, machine learning, etc.) and numerous application domains (e.g., business economics, telecommunications, neurosciences, etc.). Accordingly, the literature on rough sets has grown steadily and is by now immense. This paper provides a comprehensive overview of the research published over the last five years. To do so, it analyzes 4,038 records retrieved from the Clarivate Web of Science database, identifying (i) the most prolific authors and their collaboration networks, (ii) the countries and organizations that are leading research on rough sets, (iii) the journals that are publishing most papers, (iv) the topics that are being most researched, and (v) the principal application domains.
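    The core rough-set construction underlying this literature can be sketched in a few lines: a set of attributes induces an equivalence relation over objects, and a target set is then bracketed by its lower approximation (objects certainly in the set) and upper approximation (objects possibly in the set). A minimal sketch on a toy information table; the table and names are illustrative, not drawn from the paper.

```python
def partition(objects, attrs, table):
    """Group objects into equivalence classes by their values on attrs."""
    classes = {}
    for obj in objects:
        key = tuple(table[obj][a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def approximations(target, objects, attrs, table):
    """Return the (lower, upper) rough approximations of the target set."""
    lower, upper = set(), set()
    for cls in partition(objects, attrs, table):
        if cls <= target:      # class lies entirely inside the target: certain
            lower |= cls
        if cls & target:       # class overlaps the target: possible
            upper |= cls
    return lower, upper

# Toy table: patients described by two symptoms; target = diagnosed patients.
table = {
    "p1": {"fever": 1, "cough": 1},
    "p2": {"fever": 1, "cough": 1},
    "p3": {"fever": 0, "cough": 1},
    "p4": {"fever": 0, "cough": 0},
}
target = {"p1", "p3"}
lower, upper = approximations(target, table.keys(), ["fever", "cough"], table)
```

    Here p1 and p2 are indiscernible on the chosen attributes, so p1 cannot belong to the lower approximation even though it is in the target: the boundary region upper − lower is exactly where the information is too coarse to decide.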

    Predicting electronic structures at any length scale with machine learning

    The properties of electrons in matter are of fundamental importance. They give rise to virtually all molecular and material properties and determine the physics at play in objects ranging from semiconductor devices to the interior of giant gas planets. Modeling and simulation of such diverse applications rely primarily on density functional theory (DFT), which has become the principal method for predicting the electronic structure of matter. While DFT calculations have proven so useful as to be recognized with a Nobel Prize in 1998, their computational scaling limits them to small systems. We have developed a machine learning framework for predicting the electronic structure at any length scale. It shows up to three orders of magnitude speedup on systems where DFT is tractable and, more importantly, enables predictions on scales where DFT calculations are infeasible. Our work demonstrates how machine learning circumvents a long-standing computational bottleneck and advances science to frontiers intractable with any current solutions. This unprecedented modeling capability opens up an inexhaustible range of applications in astrophysics, novel materials discovery, and energy solutions for a sustainable future.
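    The general surrogate pattern described here can be sketched as follows: fit a cheap model mapping local structural descriptors to an electronic-structure quantity on small, DFT-tractable systems, then evaluate it on systems far too large for DFT. Plain linear least squares stands in for the paper's actual ML model, and all data below are synthetic; this illustrates only the workflow, not the method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Descriptors and targets from small systems where DFT is affordable.
X_small = rng.normal(size=(200, 5))            # 5 local descriptors per sample
true_w = np.array([0.5, -1.0, 2.0, 0.0, 0.3])  # hidden descriptor-target map
y_small = X_small @ true_w                     # stand-in for DFT-computed values

# Train the surrogate once on the small-system data.
w, *_ = np.linalg.lstsq(X_small, y_small, rcond=None)

# Predict on a system too large for DFT: cost grows linearly with size,
# instead of the cubic-or-worse scaling of a DFT calculation.
X_large = rng.normal(size=(10_000, 5))
y_pred = X_large @ w
```

    The speedup comes from the evaluation step being a local, per-sample operation, so the expensive physics is paid for only during training.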

    Network modeling helps to tackle the complexity of drug-disease systems

    From the (patho)physiological point of view, diseases can be considered emergent properties of living systems, stemming from the complexity of these systems. Complex systems display some typical features, including emergent behavior and organization into successive hierarchic levels. Drug treatments add to this complexity, and for some years network models have been used to describe drug-disease systems and to make predictions about them with regard to several aspects of drug discovery. Here, we review some recent examples with the aim of illustrating how network science tools can be very effective in addressing both tasks. We examine the use of bipartite networks, which lead to the important concept of the "disease module", as well as the introduction of more articulated models, like multi-scale and multiplex networks, able to describe disease systems at increasing levels of organization. Examples of predictive models are then discussed, considering both those that exploit approaches purely based on graph theory and those that integrate machine learning methods. A short account of both kinds of methodological applications is provided. Finally, we take stock of the present situation of modeling complex drug-disease systems, highlighting some open issues.
    This article is categorized under: Neurological Diseases > Computational Models; Infectious Diseases > Computational Models; Cardiovascular Diseases > Computational Models
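    The bipartite-network idea above can be sketched concretely: drugs and diseases form the two node sets, and projecting onto the disease side links two diseases whenever a drug treats both, with edge weights counting shared drugs. The associations below are illustrative toy data, not from any real pharmacological database.

```python
from collections import defaultdict
from itertools import combinations

# Toy bipartite edges: (drug, disease) treatment associations.
edges = [
    ("aspirin", "stroke"), ("aspirin", "heart disease"),
    ("metformin", "diabetes"), ("metformin", "heart disease"),
    ("statin", "heart disease"), ("statin", "stroke"),
]

# Collect the set of diseases each drug treats.
treats = defaultdict(set)
for drug, disease in edges:
    treats[drug].add(disease)

# One-mode projection onto diseases: weight = number of shared drugs.
projection = defaultdict(int)
for diseases in treats.values():
    for a, b in combinations(sorted(diseases), 2):
        projection[(a, b)] += 1
```

    High-weight edges in such a projection are one simple way that densely interconnected "disease modules" begin to surface; richer multi-scale and multiplex models build on the same projection step.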

    What Can Artificial Intelligence Do for Scientific Realism?

    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement, as it rejects the retrospective interpretations of scientific progress that brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for unconceived alternatives, providing modal knowledge of what is possible therein. As a result, the epistemic warrant of synthesised realist theories should emerge bolstered as the underdetermination by available evidence is reduced. While shifting the realist commitment away from theoretical artefacts towards modalities of the possibility spaces, the synthesis comes out as a kind of perspectival modelling.

    Big-Data-Driven Materials Science and its FAIR Data Infrastructure

    This chapter addresses the fourth paradigm of materials research -- big-data-driven materials science. Its concepts and state of the art are described, and its challenges and opportunities are discussed. For furthering the field, Open Data and all-embracing sharing, an efficient data infrastructure, and the rich ecosystem of computer codes used in the community are of critical importance. For shaping this fourth paradigm and contributing to the development or discovery of improved and novel materials, data must be what is now called FAIR -- Findable, Accessible, Interoperable, and Re-purposable/Re-usable. This sets the stage for advances of methods from artificial intelligence that operate on large data sets to find trends and patterns that cannot be obtained from individual calculations, nor even directly from high-throughput studies. Recent progress is reviewed and demonstrated, and the chapter concludes with a forward-looking perspective addressing important, as yet unsolved challenges.
    Comment: submitted to the Handbook of Materials Modeling (eds. S. Yip and W. Andreoni), Springer 2018/201

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing robots that can smoothly communicate with human users over the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
    Comment: submitted to Advanced Robotic