
    The Network Science Of Distributed Representational Systems

    From brains to science itself, distributed representational systems store and process information about the world. In brains, complex cognitive functions emerge from the collective activity of billions of neurons, and in science, new knowledge is discovered by building on previous discoveries. In both systems, many small individual units—neurons and scientific concepts—interact to inform complex behaviors in the systems they comprise. The patterns of interaction between units are telling: pairwise interactions do more than affect individual pairs of units; they also combine into structural and dynamic patterns at the larger scale of the network. Recently, network science has adapted methods from graph theory, statistical mechanics, information theory, algebraic topology, and dynamical systems theory to study such complex systems. In this dissertation, we use such cutting-edge methods in network science to study complex distributed representational systems in two domains: cascading neural networks in the domain of neuroscience and concept networks in the domain of science of science. In the domain of neuroscience, the brain is a system that supports complex behavior by storing and processing information from the environment on long time scales. Underlying such behavior is a network of millions of interacting neurons. Many recent studies measure neural activity on the scale of the whole brain with brain regions as units or on the scale of brain regions with individual neurons as units. While many studies have explored the neural correlates of behaviors on these scales, it is less well understood how neural activity can be decomposed into low-level patterns. Network science has shown potential to advance our understanding of large-scale brain networks, and here, we apply network science to further our understanding of low-level patterns in small-scale neural networks.
    Specifically, we explore how the structure and dynamics of biological neural networks support information storage and computation in spontaneous neural activity in slice recordings of rodent brains. Our results illustrate the relationships between network structure, dynamics, and information processing in neural systems. In the domain of science of science, the practice of science itself is a system that discovers and curates information about the physical and social world. For centuries, philosophers, historians, and sociologists of science have theorized about the process and practice of scientific discovery. Recently, the field of science of science has emerged to use a more data-driven approach to quantify the process of science. However, it remains unclear how recent advances in science of science either support or refute the various theories from the philosophies of science. Here, we use a network science approach to operationalize theories from prominent philosophers of science, and we test those theories using networks of hyperlinked articles in Wikipedia, the largest online encyclopedia. Our results support a nuanced view of philosophies of science—that science does not grow outward, as many may intuit, but by filling in gaps in knowledge. In this dissertation, we examine cascading neural networks first in Chapters 2 through 4 and then concept networks in Chapter 5. The studies in Chapters 2 through 4 highlight the role of patterns in the connections of neural networks in storing information and performing computations. The study in Chapter 5 describes patterns in the historical growth of concept networks of scientific knowledge from Wikipedia. Together, these analyses aim to shed light on the network science of distributed representational systems that store and process information about the world.
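    The abstract's central claim about concept networks — that science grows by filling in gaps between existing concepts rather than by expanding outward — can be operationalized on a timestamped link list. The sketch below is our own illustration, not the dissertation's method; the article names and years are invented, and real analyses would use full Wikipedia hyperlink histories.

    ```python
    from collections import defaultdict

    # Hypothetical timestamped hyperlinks: (year, source_article, target_article).
    # Articles and years are invented for illustration only.
    edges = [
        (2004, "Gravity", "Newton's laws"),
        (2005, "Gravity", "General relativity"),
        (2006, "General relativity", "Newton's laws"),
        (2007, "Quantum gravity", "General relativity"),
        (2008, "Quantum gravity", "Gravity"),
    ]

    def classify_growth(edges):
        """Classify each new link as 'expansion' (it introduces a brand-new
        concept to the network) or 'gap_filling' (it connects two concepts
        that are already present)."""
        seen = set()
        counts = defaultdict(int)
        for _, src, dst in sorted(edges):
            if src in seen and dst in seen:
                counts["gap_filling"] += 1
            else:
                counts["expansion"] += 1
            seen.update((src, dst))
        return dict(counts)

    print(classify_growth(edges))  # → {'expansion': 3, 'gap_filling': 2}
    ```

    A rising share of gap-filling links over time would be consistent with the "filling in gaps" picture; a persistently high expansion share would suggest outward growth instead.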

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation, and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. During recent decades the question has therefore been raised whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness, and cognition. The purpose is to be able to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, like ion channels, synapses, sensors, and enzymes. This might evidently influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So, the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate. Comment: 38 pages

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent progress on basic GSP tools, including methods for sampling, filtering, and graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. Comment: To appear, Proceedings of the IEEE
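    The filtering idea the abstract mentions can be sketched concretely. In GSP, the eigenvectors of the graph Laplacian L = D - A play the role of Fourier modes, with small eigenvalues corresponding to smooth variation over the graph. The following minimal low-pass filter is our own illustration of that standard construction, assuming an undirected graph given as an adjacency matrix; the function name and example graph are ours, not the paper's.

    ```python
    import numpy as np

    def graph_lowpass(adjacency, signal, keep=2):
        """Project a graph signal onto the `keep` lowest graph frequencies.

        Computes the graph Fourier transform (GFT) via the eigendecomposition
        of the combinatorial Laplacian L = D - A, zeroes the high-frequency
        coefficients, and transforms back.
        """
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency
        eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues ascending
        gft = eigvecs.T @ signal                      # forward GFT
        gft[keep:] = 0.0                              # discard high frequencies
        return eigvecs @ gft                          # inverse GFT

    # A 4-node path graph and a non-smooth signal on its nodes.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    x = np.array([1.0, -0.5, 0.8, -0.2])
    x_smooth = graph_lowpass(A, x)
    ```

    The quadratic form x^T L x measures how much a signal varies across edges, so the filtered signal is guaranteed to vary no more than the original — the graph analogue of low-pass smoothing in conventional DSP.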