
    Exploring Cognitive States: Methods for Detecting Physiological Temporal Fingerprints

    Cognitive state detection based on observable physiological telemetry has been used in many human-machine and human-cybernetic applications. This paper aims to determine whether there are unique psychophysiological patterns over time, a physiological temporal fingerprint, associated with specific cognitive states. This preliminary work involves commercial airline pilots completing experimental benchmark tasks that induce three cognitive states: 1) Channelized Attention (CA); 2) High Workload (HW); and 3) Low Workload (LW). We model these "fingerprints" with Hidden Markov Models and entropy analysis to evaluate whether the transitions over time are complex or rhythmic/predictable in nature. Our results indicate that each cognitive state produces physiological sequences whose complexity is statistically distinct from that of the other states. Specifically, CA shows significantly higher temporal psychophysiological complexity than HW and LW in EEG and ECG telemetry, but lower temporal psychophysiological complexity than HW and LW in respiration telemetry. This preliminary work suggests that these underlying dynamics can be used to understand how humans transition between cognitive states and to improve cognitive state detection.
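
    A minimal sketch of the HMM-plus-entropy idea described above, assuming Python with hmmlearn and NumPy (the telemetry data, feature count, and number of hidden states are hypothetical, not from the paper): fit an HMM to a window of physiological telemetry, then score how predictable its state transitions are via the entropy rate of the learned transition matrix. A rhythmic/predictable state sequence yields a low entropy rate; a complex one yields a high rate.

```python
# Hedged sketch: HMM fit plus transition entropy rate. Data and
# hyperparameters are placeholders, not the authors' actual setup.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def entropy_rate(transmat, tol=1e-12):
    """Entropy rate of a Markov chain: H = -sum_i pi_i sum_j A_ij log2 A_ij."""
    # Stationary distribution pi: left eigenvector of A for eigenvalue 1.
    evals, evecs = np.linalg.eig(transmat.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    logA = np.log2(np.clip(transmat, tol, 1.0))  # avoid log(0)
    return -np.sum(pi[:, None] * transmat * logA)

# Hypothetical telemetry window: 1000 samples of 4 physiological features
# (e.g., an EEG band power, heart rate, respiration rate, ...).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(X)
print(f"entropy rate: {entropy_rate(model.transmat_):.3f} bits/step")
```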

    Diffusion of Context and Credit Information in Markovian Models

    This paper studies the ergodicity of transition probability matrices in Markovian models, such as hidden Markov models (HMMs), and how it makes the task of learning to represent long-term context for sequential data very difficult. This phenomenon hurts the forward propagation of long-term context information, as well as the learning of a hidden state representation of long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., when the transition probability matrices are sparse and the model essentially deterministic. These results apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm.
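
    A minimal numerical illustration of this diffusion effect, assuming NumPy (the matrices are illustrative, not from the paper): an ergodic, well-mixed transition matrix forgets its initial state geometrically fast, while a near-deterministic one with probabilities close to 0 or 1 retains that context for many more steps.

```python
# Hedged sketch: context diffusion in a Markov chain. Compare how fast the
# state distribution forgets its starting point under two transition matrices.
import numpy as np

def forgetting_curve(A, p0, steps=50):
    """Total-variation distance between p0 @ A^t and the stationary distribution."""
    evals, evecs = np.linalg.eig(A.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    dists, p = [], p0.copy()
    for _ in range(steps):
        p = p @ A
        dists.append(0.5 * np.abs(p - pi).sum())
    return dists

soft = np.array([[0.6, 0.4], [0.4, 0.6]])      # ergodic, well-mixed
hard = np.array([[0.99, 0.01], [0.01, 0.99]])  # near-deterministic
p0 = np.array([1.0, 0.0])                      # context: started in state 0

print(forgetting_curve(soft, p0)[:5])  # decays fast: context diffuses away
print(forgetting_curve(hard, p0)[:5])  # decays slowly: context survives
```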

    The Computational Structure of Spike Trains

    Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
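
    A heavily simplified sketch of the causal-state grouping idea, assuming Python with NumPy (the spike-train generator, history length, and tolerance are hypothetical, and a greedy probability merge stands in for the statistical tests and iterative splitting of the actual causal state splitting reconstruction algorithm): histories of the spike train whose conditional next-symbol distributions are indistinguishable are merged into one causal state.

```python
# Hedged sketch: group length-L histories of a binary spike train by their
# empirical next-spike probability; matching histories share a causal state.
import numpy as np
from collections import defaultdict

def causal_states(spikes, L=2, tol=0.05):
    # Empirical counts of the next symbol following each observed history.
    counts = defaultdict(lambda: [0, 0])
    for t in range(L, len(spikes)):
        past = tuple(spikes[t - L:t])
        counts[past][spikes[t]] += 1
    # Greedily merge histories whose next-spike probabilities agree within tol.
    states = []  # each entry: [representative P(spike), list of histories]
    for past, (n0, n1) in counts.items():
        p1 = n1 / (n0 + n1)
        for state in states:
            if abs(state[0] - p1) < tol:
                state[1].append(past)
                break
        else:
            states.append([p1, [past]])
    return states

# Hypothetical spike train: low spike probability right after a spike
# (refractoriness), higher probability otherwise.
rng = np.random.default_rng(1)
spikes = []
for _ in range(20000):
    p = 0.05 if spikes and spikes[-1] == 1 else 0.3
    spikes.append(int(rng.random() < p))

for p1, pasts in causal_states(spikes, L=2):
    print(f"P(spike)={p1:.2f}  histories={pasts}")
```

    For this order-1 process the sketch recovers two states, one for histories ending in a spike and one for the rest, which is the minimal predictive partition.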