    Structure and Dynamics of Brain Lobes Functional Networks at the Onset of Anesthesia Induced Loss of Consciousness

    Anesthetic agents are neurotropic drugs able to induce dramatic alterations in the thalamo-cortical system, promoting a drastic reduction in awareness and level of consciousness. There is experimental evidence that general anesthesia impacts large-scale functional networks, leading to alterations in the brain state. However, the way anesthetics affect the structure assumed by functional connectivity in different brain regions has not yet been reported. Within this context, the present study sought to characterize the functional brain networks of the frontal, parietal, temporal and occipital lobes. In this experiment, electrophysiological neural activity was recorded with a dense ECoG-electrode array positioned directly over the cortical surface of an old-world monkey of the species Macaca fuscata. Networks were serially estimated over time, every five seconds, while the animal was under the controlled experimental conditions of an anesthetic induction process. In each of the four cortical lobes, prominent alterations in distinct network properties evidenced a transition in network architecture, which occurred within about one and a half minutes after administration of the anesthetics. The characterization of functional brain networks performed in this study represents important experimental evidence and brings new knowledge towards the understanding of the neural correlates of consciousness in terms of the structure and properties of functional brain networks.

    Comment: 41 pages; 30 figures; 30 tables. arXiv admin note: substantial text overlap with arXiv:1604.0000
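
    As a rough illustration of the serial network estimation described above (the paper's exact connectivity measure, threshold, and tracked properties are not given in the abstract, so every detail below is an assumption), one could build a functional network for each five-second window by thresholding pairwise correlations between ECoG channels and then follow simple graph properties over time:

```python
# Hypothetical sketch: sliding-window functional networks from multichannel ECoG.
# Assumed (not from the paper): correlation-based connectivity, a fixed
# threshold, and mean degree / clustering as the tracked network properties.
import numpy as np
import networkx as nx

def window_networks(ecog, fs, win_s=5.0, thresh=0.5):
    """ecog: array of shape (n_channels, n_samples); fs: sampling rate in Hz."""
    n_ch, n_samp = ecog.shape
    win = int(win_s * fs)
    metrics = []
    for start in range(0, n_samp - win + 1, win):
        corr = np.corrcoef(ecog[:, start:start + win])  # pairwise channel correlations
        adj = (np.abs(corr) > thresh).astype(int)       # threshold into a binary graph
        np.fill_diagonal(adj, 0)
        g = nx.from_numpy_array(adj)
        metrics.append({
            "t": start / fs,
            "mean_degree": 2 * g.number_of_edges() / n_ch,
            "clustering": nx.average_clustering(g),
        })
    return metrics

# Synthetic stand-in for the recording: 16 channels, 60 s at 1 kHz.
rng = np.random.default_rng(0)
series = window_networks(rng.standard_normal((16, 60_000)), fs=1000.0)
```

    An abrupt, sustained shift in such per-window metrics is the kind of signature the authors interpret as a transition in network architecture.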

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

    This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA
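
    As one concrete instance of the kind of low-dimensional dynamical model this review surveys (the specific models discussed are not reproduced in the abstract; the Hindmarsh-Rose neuron below is a standard example, with commonly cited parameter values), three coupled ODEs already produce rhythmic spiking and chaotic bursting:

```python
# Hindmarsh-Rose neuron: a classic 3-variable model whose bursting regime
# illustrates dynamical chaos in even very small neural systems. Parameter
# values are textbook defaults, not taken from the review.
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, y, a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0,
                   x_rest=-1.6, I=3.0):
    x, u, z = y                          # membrane potential, fast and slow currents
    return [u - a * x**3 + b * x**2 - z + I,
            c - d * x**2 - u,
            r * (s * (x - x_rest) - z)]

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [0.0, 0.0, 3.0], max_step=0.05)
print(sol.y[0].max(), sol.y[0].min())    # fast spikes riding a slow bursting envelope
```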

    Neural synchrony in cortical networks : history, concept and current status

    Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update of the status of this hypothesis by summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and near-zero phase-lag synchronization. In the second part, we discuss the role of neural synchrony in expectancy during perceptual organization and in conscious experience. This is followed by evidence indicating that, in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders such as schizophrenia and autism spectrum disorders. We conclude with suggestions for further research as well as critical issues that need to be addressed in future studies.
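
    To make the notion of zero versus near-zero phase-lag synchronization concrete (the authors' analysis methods are not detailed in the abstract; the sketch below uses a standard Hilbert-transform phase estimate on synthetic signals):

```python
# Hypothetical sketch: estimating the phase lag between two oscillatory signals.
# Zero-lag synchronization appears as a phase difference concentrated at 0 rad.
import numpy as np
from scipy.signal import hilbert

def mean_phase_lag(x, y):
    """Circular mean of the instantaneous phase difference between x and y."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.angle(np.exp(1j * dphi).mean())   # radians; 0 means zero-lag sync

fs, f = 1000.0, 40.0                     # assumed 40 Hz (gamma-band) oscillation
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * f * t + 0.1) + 0.3 * rng.standard_normal(t.size)
print(mean_phase_lag(x, y))              # about -0.1 rad: near-zero phase lag
```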

    What It Is To Be Conscious: Exploring the Plausibility of Consciousness in Deep Learning Computers

    As artificial intelligence and robotics progress further and faster every day, designing and building a conscious computer appears to be on the horizon. Recent technological advances have allowed engineers and computer scientists to create robots and computer programs that were previously impossible. The development of these highly sophisticated robots and AI programs has thus prompted the age-old question: can a computer be conscious? The answer depends on two key sub-problems. The first is the nature of consciousness: what makes a system conscious, and what properties does consciousness have? The second is whether the physical make-up of the robot or computer matters: is a particular composition necessary for consciousness, or is consciousness unaffected by differences in physical properties? My aim is to explore these issues with respect to deep-learning computer programs. These programs use artificial neural networks and learning algorithms to create highly sophisticated, seemingly intelligent computers that are comparable to, yet fundamentally different from, a human brain. Additionally, I discuss the steps we must take to reach a consensus on the consciousness of deep-learning computers.

    Spectral Modes of Network Dynamics Reveal Increased Informational Complexity Near Criticality

    What does the informational complexity of dynamical networked systems tell us about the intrinsic mechanisms and functions of these complex systems? Recent complexity measures such as integrated information have sought to operationalize this problem from a whole-versus-parts perspective, wherein one explicitly computes the amount of information generated by a network as a whole over and above that generated by the sum of its parts during state transitions. While several numerical schemes for estimating network integrated information exist, it is instructive to pursue an analytic approach that computes integrated information as a function of network weights. Our formulation of integrated information uses a Kullback-Leibler divergence between the multivariate distribution on the set of network states and the corresponding factorized distribution over its parts. Implementing stochastic Gaussian dynamics, we perform computations for several prototypical network topologies. Our findings show increased informational complexity near criticality, which remains consistent across network topologies. Spectral decomposition of the system's dynamics reveals how informational complexity is governed by eigenmodes of both the network's covariance and adjacency matrices. We find that as the dynamics of the system approach criticality, high integrated information is exclusively driven by the eigenmode corresponding to the leading eigenvalue of the covariance matrix, while sub-leading modes are suppressed. The implication of this result is that it might be favorable for complex dynamical networked systems, such as the human brain or communication systems, to operate near criticality so that efficient information integration can be achieved.
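
    A minimal numerical sketch of the kind of quantity described above, under simplifying assumptions not stated in the abstract: take linear stochastic (Ornstein-Uhlenbeck) dynamics on a weighted network, obtain the stationary covariance from the Lyapunov equation, and measure informational complexity as the Kullback-Leibler divergence between the full zero-mean Gaussian and the product of its marginals (the total correlation). The paper's actual measure is defined over state transitions and partitions; this is only a static analogue:

```python
# Simplified sketch (my assumptions, not the paper's exact formulation):
# OU dynamics dx = A x dt + dW with A = W - k I; the stationary covariance S
# solves A S + S A^T + I = 0, and complexity is approximated by the total
# correlation D_KL( N(0, S) || prod_i N(0, S_ii) ).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def total_correlation(weights, margin=0.1):
    n = weights.shape[0]
    rho = np.max(np.abs(np.linalg.eigvals(weights)))   # spectral radius of W
    a = weights - (rho + margin) * np.eye(n)           # margin > 0 keeps A stable
    cov = solve_continuous_lyapunov(a, -np.eye(n))     # solves A S + S A^T = -I
    # For zero-mean Gaussians the KL reduces to 0.5 * (sum_i log S_ii - log det S),
    # because the trace term tr(diag(S)^-1 S) equals n and cancels.
    return 0.5 * (np.sum(np.log(np.diag(cov))) - np.linalg.slogdet(cov)[1])

rng = np.random.default_rng(2)
w = rng.uniform(0.0, 0.5, (8, 8))
w = (w + w.T) / 2                                      # symmetric toy network
np.fill_diagonal(w, 0.0)
print(total_correlation(w, margin=1.0))                # far from criticality
print(total_correlation(w, margin=0.05))               # near criticality: larger
```

    As the stability margin shrinks, the leading covariance eigenmode dominates and the total correlation grows, mirroring the abstract's finding that informational complexity peaks near criticality.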

    Nature of Intelligence

    The human brain is the substrate of human intelligence. By simulating the human brain, artificial intelligence builds computational models that have learning capabilities and perform intelligent tasks approaching the human level. Deep neural networks consist of multiple computation layers that learn representations of data and have improved the state of the art in many recognition domains. However, the essence of the intelligence shared by humans and AI remains unknown. Here, we show that the nature of intelligence is a series of mathematically functional processes that minimize system entropy by establishing functional relationships between datasets over space and time. Humans and AI achieve intelligence by implementing these entropy-reducing processes in a reinforced manner that consumes energy. With this hypothesis, we establish mathematical models of language, unconsciousness and consciousness, predicting evidence to be found by neuroscience and achieved by AI engineering. Furthermore, we conclude that the total entropy of the universe is conserved, and that intelligence counters spontaneous processes to decrease entropy by physically or informationally connecting datasets that originally exist in the universe but are separated across space and time. This essay is intended as a starting point for a deeper understanding of the universe and of ourselves as human beings, and for achieving sophisticated AI models equal or even superior to human intelligence. It also argues that intelligence more advanced than ours should exist, provided it reduces entropy in a more energy-efficient way.
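
    The essay's central quantitative claim, that intelligence reduces entropy by linking datasets, can at least be restated in standard information-theoretic terms (the toy computation below is my illustration, not a model from the essay): describing two variables through their relationship rather than independently saves exactly their mutual information in entropy, since H(X, Y) = H(X) + H(Y) - I(X; Y).

```python
# Toy illustration (not from the essay): the entropy saved by exploiting the
# functional relationship between two variables equals their mutual information.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))      # Shannon entropy in bits

# Joint distribution of two correlated binary variables (X usually equals Y).
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
h_joint = entropy(joint.ravel())
h_x = entropy(joint.sum(axis=1))
h_y = entropy(joint.sum(axis=0))
print(h_x + h_y - h_joint)              # mutual information: ~0.28 bits saved
print(h_joint, "<", h_x + h_y)          # joint description has lower entropy
```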

    Perceptual and Conceptual Content of Human Consciousness – A Perspective of the Philosophy of Mind

    The relation between the perceptual and the conceptual aspects of human mental states and processes is discussed in light of some recent discussions. Several philosophical arguments for and against the conclusion that perceptual content is a non-conceptual type of representation are presented and critically assessed. The possibility of an objective criterion for resolving the issue, independent of introspective reports and intuitive conjectures, is considered.