    Channel-Independent and Sensor-Independent Stimulus Representations

    This paper shows how a machine, which observes stimuli through an uncharacterized, uncalibrated channel and sensor, can glean machine-independent information (i.e., channel- and sensor-independent information) about the stimuli. First, we demonstrate that a machine defines a specific coordinate system on the stimulus state space, with the nature of that coordinate system depending on the device's channel and sensor. Thus, machines with different channels and sensors "see" the same stimulus trajectory through state space, but in different machine-specific coordinate systems. For a large variety of physical stimuli, statistical properties of that trajectory endow the stimulus configuration space with differential geometric structure (a metric and parallel transfer procedure), which can then be used to represent relative stimulus configurations in a coordinate-system-independent manner (and, therefore, in a channel- and sensor-independent manner). The resulting description is an "inner" property of the stimulus time series in the sense that it does not depend on extrinsic factors like the observer's choice of a coordinate system in which the stimulus is viewed (i.e., the observer's choice of channel and sensor). This methodology is illustrated with analytic examples and with a numerically simulated experiment. In an intelligent sensory device, this kind of representation "engine" could function as a "front-end" that passes channel/sensor-independent stimulus representations to a pattern recognition module. After a pattern recognizer has been trained in one of these devices, it could be used without change in other devices having different channels and sensors.
    Comment: The results of a numerically simulated experiment, which illustrates the proposed method, have been added to the version submitted on October 27, 2004. This paper has been accepted for publication in the Journal of Applied Physics. For related papers, see http://www.geocities.com/dlevin2001
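    The coordinate-independence claim can be illustrated with a minimal linear toy model (not the paper's full differential-geometric construction, which also handles nonlinear channel/sensor maps via parallel transfer). If a metric is built from the velocity statistics of the observed trajectory, the metric length of a stimulus displacement comes out the same for two "machines" observing through different invertible linear maps. All numbers and names below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stimulus trajectory in the "true" state space (2-D, 10k samples).
    x = np.cumsum(rng.normal(size=(10_000, 2)), axis=0) * 0.01 \
        + rng.normal(size=(10_000, 2))

    # Two machines observe the same trajectory through different (invertible,
    # here linear) channel/sensor maps A1, A2 -- two coordinate systems.
    A1 = np.array([[1.0, 0.3], [-0.2, 2.0]])
    A2 = np.array([[0.5, -1.1], [1.4, 0.7]])
    y1, y2 = x @ A1.T, x @ A2.T

    def metric_from_trajectory(y):
        """Metric induced by velocity statistics: inverse covariance of dy."""
        v = np.diff(y, axis=0)
        return np.linalg.inv(np.cov(v.T))

    g1, g2 = metric_from_trajectory(y1), metric_from_trajectory(y2)

    def sq_length(y, g):
        """Squared metric length of the displacement between two samples."""
        d = y[500] - y[100]
        return d @ g @ d

    # The metric length of the same stimulus displacement agrees across
    # machines, even though the raw coordinates differ.
    print(sq_length(y1, g1), sq_length(y2, g2))
    ```

    For linear maps the agreement is exact up to floating-point rounding, because the metric transforms contravariantly to the displacement; the nonlinear case is where the paper's parallel-transfer machinery is needed.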

    An introduction to time-resolved decoding analysis for M/EEG

    The human brain is constantly processing and integrating information in order to make decisions and interact with the world, for tasks from recognizing a familiar face to playing a game of tennis. These complex cognitive processes require communication between large populations of neurons. The non-invasive neuroimaging methods of electroencephalography (EEG) and magnetoencephalography (MEG) provide population measures of neural activity with millisecond precision that allow us to study the temporal dynamics of cognitive processes. However, multi-sensor M/EEG data is inherently high-dimensional, making it difficult to separate important signal from noise. Multivariate pattern analysis (MVPA) or "decoding" methods offer vast potential for understanding high-dimensional M/EEG neural data. MVPA can be used to distinguish between different conditions and map the time courses of various neural processes, from basic sensory processing to high-level cognitive processes. In this chapter, we discuss the practical aspects of performing decoding analyses on M/EEG data as well as the limitations of the method, and then we discuss some applications for understanding representational dynamics in the human brain.
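    The core of time-resolved decoding can be sketched in a few lines, assuming scikit-learn and synthetic data in place of real M/EEG recordings (trial counts, sensor counts, effect latency, and effect size below are all invented): one cross-validated classifier is fit per timepoint, yielding a decoding-accuracy time course that reveals when condition information becomes available.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    n_trials, n_sensors, n_times = 200, 32, 50
    X = rng.normal(size=(n_trials, n_sensors, n_times))   # trials x sensors x time
    y = rng.integers(0, 2, size=n_trials)                 # condition labels

    # Inject a condition difference on a subset of sensors from t=20 onward,
    # mimicking a neural effect that emerges some latency after stimulus onset.
    X[y == 1, :8, 20:] += 0.6

    # Time-resolved decoding: one cross-validated classifier per timepoint.
    accuracy = np.array([
        cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ])

    print(f"pre-effect accuracy  ~ {accuracy[:20].mean():.2f}")  # near chance (0.5)
    print(f"post-effect accuracy ~ {accuracy[20:].mean():.2f}")  # above chance
    ```

    Real pipelines add preprocessing, epoching, and permutation-based significance testing, but the per-timepoint fit-and-score loop above is the essential structure.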

    Dynamic Construction of Stimulus Values in the Ventromedial Prefrontal Cortex

    Signals representing the value assigned to stimuli at the time of choice have been repeatedly observed in ventromedial prefrontal cortex (vmPFC). Yet it remains unknown how these value representations are computed from sensory and memory representations in more posterior brain regions. We used electroencephalography (EEG) while subjects evaluated appetitive and aversive food items to study how event-related responses modulated by stimulus value evolve over time. We found that value-related activity shifted from posterior to anterior, and from parietal to central to frontal sensors, across three major time windows after stimulus onset: 150–250 ms, 400–550 ms, and 700–800 ms. Exploratory localization of the EEG signal revealed a shifting network of activity moving from sensory and memory structures to areas associated with value coding, with stimulus value activity localized to vmPFC only from 400 ms onwards. Consistent with these results, functional connectivity analyses also showed a causal flow of information from temporal cortex to vmPFC. Thus, although value signals are present as early as 150 ms after stimulus onset, the value signals in vmPFC appear relatively late in the choice process, and seem to reflect the integration of incoming information from sensory and memory-related regions.
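    The windowed analysis behind such a posterior-to-anterior shift can be sketched with synthetic data (this is not the paper's pipeline; the sensor groups, windows, and modulation strength below are hypothetical): correlate single-trial stimulus value with mean EEG amplitude per sensor group and time window, and see where value coding appears early versus late.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_trials, n_sensors, n_times = 150, 16, 100   # 10 ms bins: 0-1000 ms
    value = rng.normal(size=n_trials)             # subjective value per item
    eeg = rng.normal(size=(n_trials, n_sensors, n_times))

    # Hypothetical modulation: posterior sensors early, frontal sensors late.
    eeg[:, :4, 15:25] += 0.5 * value[:, None, None]    # posterior, 150-250 ms
    eeg[:, 12:, 70:80] += 0.5 * value[:, None, None]   # frontal,   700-800 ms

    def value_correlation(window, sensors):
        """|Correlation| between stimulus value and windowed mean amplitude."""
        amp = eeg[:, sensors, window[0]:window[1]].mean(axis=(1, 2))
        return abs(np.corrcoef(value, amp)[0, 1])

    posterior, frontal = slice(0, 4), slice(12, 16)
    print("150-250 ms:", value_correlation((15, 25), posterior),
          value_correlation((15, 25), frontal))
    print("700-800 ms:", value_correlation((70, 80), posterior),
          value_correlation((70, 80), frontal))
    ```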

    Blind Normalization of Speech From Different Channels

    We show how to construct a channel-independent representation of speech that has propagated through a noisy reverberant channel. This is done by blindly rescaling the cepstral time series by a non-linear function, with the form of this scale function being determined by previously encountered cepstra from that channel. The rescaled time series is an invariant property of the original in the following sense: it is unaffected if the time series is transformed by any time-independent invertible distortion. Because a linear channel with stationary noise and impulse response transforms cepstra in this way, the new technique can be used to remove the channel dependence of a cepstral time series. In experiments, the method achieved greater channel-independence than cepstral mean normalization, and it was comparable to the combination of cepstral mean normalization and spectral subtraction, despite the fact that no measurements of channel noise or reverberations were required (unlike spectral subtraction).
    Comment: 25 pages, 7 figures
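    The invariance idea can be illustrated with a simple rank-based rescaling (histogram equalization), which realizes it for monotonically increasing per-coefficient distortions; the paper's construction is more general, and the series and distortion below are synthetic stand-ins for real cepstra and a real channel.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # A "cepstral" time series (one coefficient, 5000 frames), clean channel.
    c = rng.normal(size=5000)

    # The same series after an unknown monotonic channel distortion
    # (strictly increasing, so it is invertible).
    distorted = 2.5 * c**3 + 0.7 * c + 1.0

    def rank_normalize(series):
        """Rescale a series through its empirical CDF (histogram equalization).

        The result depends only on the ordering of values, so it is unchanged
        by any monotonically increasing distortion of the series.
        """
        ranks = np.argsort(np.argsort(series))
        return (ranks + 0.5) / len(series)

    # The normalized forms of the clean and distorted series are identical.
    print(np.allclose(rank_normalize(c), rank_normalize(distorted)))
    ```

    In this sketch the scale function is learned from the same series being normalized; the paper instead learns it from previously encountered cepstra of the channel, so it can be applied online.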

    Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled activation subunits, while the DA was modeled using uncoupled activation subunits. Implementations of DA with coupled subunits, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy and efficient DA implementation. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods.
    Comment: 32 text pages, 10 figures, 1 supplementary text + figure
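    The flavor of a diffusion approximation can be conveyed with the simplest possible case, a two-state (closed/open) channel integrated by Euler–Maruyama; the paper derives the SDE for arbitrary multi-state kinetic schemes, and the rates and channel count below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two-state channel (closed <-> open): opening rate alpha, closing rate
    # beta (per ms), N channels total; n is the fraction of open channels.
    alpha, beta, N = 0.5, 1.5, 1000
    dt, n_steps = 0.01, 100_000

    n = alpha / (alpha + beta)          # start at the deterministic steady state
    trace = np.empty(n_steps)
    for t in range(n_steps):
        drift = alpha * (1.0 - n) - beta * n
        # Channel-noise amplitude from the sum of opening and closing fluxes,
        # scaling as 1/sqrt(N) -- the diffusion term of the SDE.
        diff = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / N)
        n += drift * dt + diff * np.sqrt(dt) * rng.normal()
        n = min(max(n, 0.0), 1.0)       # keep the open fraction in [0, 1]
        trace[t] = n

    # The mean open fraction matches the Markov-chain steady state
    # alpha / (alpha + beta), with fluctuations around it due to channel noise.
    print(trace.mean(), alpha / (alpha + beta))
    ```

    A matching MC simulation would track N discrete channels and draw binomial transition counts per step; the SDE above replaces those counts with Gaussian increments of the same mean and variance, which is what buys the speedup.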

    Temporal characteristics of the influence of punishment on perceptual decision making in the human brain

    Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
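    Extracting a "discriminating component" of this kind can be sketched with scikit-learn's LDA on synthetic data (the sensor pattern, effect size, and trial counts below are hypothetical, not from the study): projecting each trial's sensor amplitudes onto the discriminant axis gives a single-trial component amplitude, and the weight vector determines the component's scalp topography.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)

    n_trials, n_sensors = 300, 24
    level = rng.integers(0, 2, size=n_trials)       # low vs. high punishment

    # Sensor amplitudes in a post-stimulus window; a hypothetical
    # fronto-central pattern is stronger on high-punishment trials.
    pattern = np.zeros(n_sensors)
    pattern[8:14] = 1.0
    X = rng.normal(size=(n_trials, n_sensors)) + 0.8 * level[:, None] * pattern

    lda = LinearDiscriminantAnalysis().fit(X, level)
    component = X @ lda.coef_.ravel()               # single-trial amplitude

    # The component amplitude separates punishment levels trial by trial.
    separation = component[level == 1].mean() - component[level == 0].mean()
    print(separation)
    ```

    In a full analysis the projection is computed within sliding time windows and cross-validated, which is how punishment components can be shown to emerge later than sensory-evidence components.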