52 research outputs found

    To bee or not to bee?

    Klein & Barron’s (2016) (K & B’s) case for insect consciousness is a welcome development in an area that, in all of the science and philosophy of mind, is probably the most anthropocentric. In this commentary, we seek to strengthen K & B’s side of the argument by appealing not just to putative neural mechanisms but also to the computational theory that supports them (section 1). We also offer some remarks on three distinctions that are relevant to K & B’s thesis and are central to phenomenal awareness: between the capacity for awareness and its contents (section 2); between awareness and selfhood (section 3); and between “easy” and “hard” problems in consciousness research (section 4).

    System, Subsystem, Hive: boundary problems in computational theories of consciousness

    A computational theory of consciousness should include a quantitative measure of consciousness, or MoC, that (i) would reveal to what extent a given system is conscious, (ii) would make it possible to compare not only different systems, but also the same system at different times, and (iii) would be graded, because consciousness itself is graded. However, unless its design is properly constrained, such an MoC gives rise to what we call the boundary problem: an MoC that labels a system as conscious will do so for some – perhaps most – of its subsystems, as well as for irrelevantly extended systems (e.g., the original system augmented with physical appendages that contribute nothing to the properties supposedly supporting consciousness), and for aggregates of individually conscious systems (e.g., groups of people). This problem suggests that the properties being measured are epiphenomenal to consciousness, or else it implies a bizarre proliferation of minds. We propose that a solution to the boundary problem can be found by identifying properties that are intrinsic or systemic: properties that clearly differentiate between systems whose existence is a matter of fact, as opposed to those whose existence is a matter of interpretation (in the eye of the beholder). We argue that if a putative MoC can be shown to be systemic, this ipso facto resolves any associated boundary issues. As test cases, we analyze two recent theories of consciousness in light of our definitions: the Integrated Information Theory and the Geometric Theory of consciousness.

    Multi-Electrode Alpha tACS During Varying Background Tasks Fails to Modulate Subsequent Alpha Power

    Transcranial alternating-current stimulation (tACS) for entraining alpha activity holds potential for influencing mental function, both in laboratory and clinical settings. While initial results of alpha entrainment are promising, questions remain regarding its translational potential—namely, whether tACS alpha entrainment is sufficiently robust to context, and to what extent it can be scaled up to the multi-electrode arrangements needed to direct currents into precise brain loci. We set out to explore these questions by administering alternating current through a multi-electrode montage (mtACS) while varying the background task. A multi-electrode analog of previously employed anterior/posterior stimulation failed to replicate the reported alpha entrainment, suggesting that further work is required to understand the scope of applicability of tACS alpha entrainment.

    The NIRS Analysis Package: Noise Reduction and Statistical Inference

    Near infrared spectroscopy (NIRS) is a non-invasive optical imaging technique that can be used to measure cortical hemodynamic responses to specific stimuli or tasks. While analyses of NIRS data are normally adapted from established fMRI techniques, there are nevertheless substantial differences between the two modalities. Here, we investigate the impact of NIRS-specific noise (e.g., systemic physiological fluctuations, motion-related artifacts, and serial autocorrelations) on the validity of statistical inference within the framework of the general linear model. We present a comprehensive framework for noise reduction and statistical inference, custom-tailored to the noise characteristics of NIRS. These methods have been implemented in a public domain Matlab toolbox, the NIRS Analysis Package (NAP). Finally, we validate NAP using both simulated and actual data, showing marked improvement in the detection power and reliability of NIRS.
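    The serial-autocorrelation issue the abstract raises can be illustrated with a minimal GLM sketch. This is not NAP (which is a Matlab toolbox); the boxcar design, AR(1) noise model, and Cochrane-Orcutt-style prewhitening below are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    # Boxcar task regressor (20-sample on/off blocks), a stand-in for a
    # convolved hemodynamic design.
    design = np.tile(np.r_[np.ones(20), np.zeros(20)], n // 40)
    X = np.c_[np.ones(n), design]          # intercept + task regressor
    beta_true = np.array([1.0, 0.8])

    # AR(1) serial autocorrelation, one of the NIRS-specific noise sources.
    rho_true = 0.6
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho_true * e[t - 1] + rng.normal(scale=0.3)
    y = X @ beta_true + e

    # Naive OLS ignores the autocorrelation, which inflates the apparent
    # confidence of inferences even though the point estimate is unbiased.
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

    # Cochrane-Orcutt-style prewhitening: estimate rho from the OLS
    # residuals, then difference both sides so the errors become
    # (approximately) white before refitting.
    resid = y - X @ beta_ols
    rho_hat = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])
    y_w = y[1:] - rho_hat * y[:-1]
    X_w = X[1:] - rho_hat * X[:-1]
    beta_w = np.linalg.lstsq(X_w, y_w, rcond=None)[0]
    ```

    After prewhitening, the residuals are close to white, so standard GLM inference on `beta_w` is valid; this is the general idea behind modality-specific noise handling, not a description of NAP's actual pipeline.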

    Optimizing complexity measures for FMRI data: algorithm, artifact, and sensitivity.

    INTRODUCTION: Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have a low signal-to-noise ratio, requires careful modality-specific optimization. METHODS: Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled across scanners, using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi's estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. RESULTS: Power-spectrum, Higuchi's fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. CONCLUSIONS: Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates.
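    Higuchi's fractal-dimension estimate, one of the better-performing measures above, can be sketched as follows. This is a minimal illustration, not the paper's optimized implementation; `kmax` and the test signals are arbitrary choices, and the mapping to a Hurst exponent uses the standard self-affine relation H ≈ 2 − D.

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=8):
        """Higuchi's estimate of the fractal dimension D of a 1-D series.
        For self-affine signals, D maps to a Hurst exponent as H ~ 2 - D."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        lk = []
        for k in range(1, kmax + 1):
            lm = []
            for m in range(k):
                idx = np.arange(m, n, k)                 # subsampled series
                length = np.abs(np.diff(x[idx])).sum()
                norm = (n - 1) / ((len(idx) - 1) * k)    # Higuchi's normalization
                lm.append(length * norm / k)
            lk.append(np.mean(lm))
        # D is the slope of log L(k) against log(1/k).
        slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
        return slope

    rng = np.random.default_rng(1)
    d_noise = higuchi_fd(rng.normal(size=2000))        # white noise: D near 2
    d_ramp = higuchi_fd(np.linspace(0.0, 1.0, 2000))   # smooth ramp: D near 1
    ```

    The two test signals bracket the range: white noise is maximally rough (D ≈ 2, H ≈ 0) while a smooth ramp is minimally rough (D ≈ 1, H ≈ 1), which is the sense in which such estimates can be normalized to Hurst exponents for cross-method comparison.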

    The representational capacity of cortical tissue


    Multiscale criticality measures as general-purpose gauges of proper brain function

    The brain is universally regarded as a system for processing information. If so, any behavioral or cognitive dysfunction should lend itself to depiction in terms of information processing deficiencies. Information is characterized by recursive, hierarchical complexity. The brain accommodates this complexity by a hierarchy of large/slow and small/fast spatiotemporal loops of activity. Thus, successful information processing hinges upon tightly regulating the spatiotemporal makeup of activity, to optimally match the underlying multiscale delay structure of such hierarchical networks. Reduced capacity for information processing will then be expressed as deviance from this requisite multiscale character of spatiotemporal activity. This deviance is captured by a general family of multiscale criticality measures (MsCr). MsCr measures reflect the behavior of conventional criticality measures (such as the branching parameter) across temporal scales. We applied MsCr to MEG and EEG data in several telling degraded information processing scenarios. Consistent with our previous modeling work, MsCr measures systematically varied with information processing capacity: MsCr fingerprints showed deviance in the four states of compromised information processing examined in this study: disorders of consciousness, mild cognitive impairment, schizophrenia, and even preictal activity. MsCr measures might thus be able to serve as general gauges of information processing capacity and, therefore, as normative measures of brain health.
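    The core idea of evaluating a criticality measure across temporal scales can be sketched as follows. This is illustrative only, not the authors' MsCr implementation: the driven branching process, the regression-slope estimator of the branching parameter, and the chosen bin sizes are all assumptions.

    ```python
    import numpy as np

    def branching_parameter(activity, bin_size):
        """Branching parameter at one temporal scale: the regression slope
        of binned activity A(t+1) on A(t)."""
        activity = np.asarray(activity, dtype=float)
        n_bins = len(activity) // bin_size
        a = activity[:n_bins * bin_size].reshape(n_bins, bin_size).sum(axis=1)
        slope, _ = np.polyfit(a[:-1], a[1:], 1)
        return slope

    # Driven branching process: each event spawns Poisson(sigma) descendants
    # plus a small external drive; sigma = 1 would be the critical point.
    rng = np.random.default_rng(2)
    sigma_true, drive, T = 0.9, 2.0, 20000
    a = np.zeros(T)
    a[0] = 10
    for t in range(1, T):
        a[t] = rng.poisson(sigma_true * a[t - 1] + drive)

    s1 = branching_parameter(a, 1)                   # recovers sigma_true
    fingerprint = [branching_parameter(a, b) for b in (1, 2, 4, 8)]
    ```

    The vector of estimates across bin sizes plays the role of a multiscale "fingerprint": it is the shape of this curve across scales, rather than any single-scale value, that a MsCr-style analysis would compare between healthy and compromised states.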