
    Comparative power spectral analysis of simultaneous electroencephalographic and magnetoencephalographic recordings in humans suggests non-resistive extracellular media

    The resistive or non-resistive nature of the extracellular space in the brain is still debated, and it is an important issue for correctly modeling extracellular potentials. Here, we first show theoretically that if the medium is resistive, the frequency scaling should be the same for electroencephalogram (EEG) and magnetoencephalogram (MEG) signals at low frequencies (<10 Hz). To test this prediction, we analyzed the spectra of simultaneous EEG and MEG measurements in four human subjects. The frequency scaling of the EEG displays coherent variations across the brain, in general between 1/f and 1/f^2, and tends to be smaller in parietal/temporal regions. In a given region, although the variability of the frequency-scaling exponent was higher for MEG than for EEG, the two signals consistently scaled with different exponents. In some cases the scaling was similar, but only when the signal-to-noise ratio of the MEG was low. Several methods of correction for environmental and instrumental noise were tested, and all of them increased the difference between EEG and MEG scaling. In conclusion, there is a significant difference in frequency scaling between EEG and MEG, which can be explained if the extracellular medium (including other layers such as the dura mater and skull) is globally non-resistive. Comment: Submitted to Journal of Computational Neuroscience
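The frequency-scaling exponent described in this abstract (PSD ~ 1/f^alpha) is typically estimated by a straight-line fit to the power spectrum in log-log coordinates. The sketch below illustrates that idea on synthetic signals; the function name, Welch settings, and frequency band are illustrative choices, not the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import welch

def scaling_exponent(x, fs, fmin=0.25, fmax=10.0):
    """Estimate alpha in PSD ~ 1/f^alpha by a least-squares line fit
    to the Welch spectrum in log-log coordinates over [fmin, fmax] Hz."""
    f, psd = welch(x, fs=fs, nperseg=4 * int(fs))  # 4-second segments
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
    return -slope  # alpha = 1 for 1/f scaling, 2 for 1/f^2

rng = np.random.default_rng(0)
fs = 250.0
white = rng.standard_normal(60 * int(fs))  # flat spectrum, alpha ~ 0
brown = np.cumsum(white)                   # integrated noise, alpha ~ 2
alpha_white = scaling_exponent(white, fs)
alpha_brown = scaling_exponent(brown, fs)
```

On real EEG/MEG the fitted exponent falls between these two extremes, which is exactly the 1/f-to-1/f^2 range the abstract reports.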

    Image formation in synthetic aperture radio telescopes

    Next generation radio telescopes will be much larger and more sensitive, will have much larger observation bandwidths, and will be capable of pointing multiple beams simultaneously. Obtaining the sensitivity, resolution and dynamic range supported by the receivers requires the development of new signal processing techniques for array and atmospheric calibration, as well as new imaging techniques that are both more accurate and more computationally efficient, since data volumes will be much larger. This paper provides a tutorial overview of existing image formation techniques and outlines some of the future directions needed for information extraction from future radio telescopes. We describe the imaging process from the measurement equation to deconvolution, both as a Fourier inversion problem and as an array processing estimation problem. The latter formulation enables the development of more advanced techniques based on state-of-the-art array processing. We demonstrate the techniques on simulated and measured radio telescope data. Comment: 12 pages
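The Fourier-inversion view of imaging mentioned here can be illustrated by a direct (ungridded) transform of the measured visibilities onto an image grid, the so-called dirty image. This is a toy sketch under assumed conventions (a small-field direction-cosine grid, unit point source at the phase centre), not the paper's algorithm:

```python
import numpy as np

def dirty_image(u, v, vis, npix=65, fov=0.02):
    """Direct Fourier inversion of visibilities onto an (l, m) grid.
    u, v are baseline coordinates in wavelengths; fov is the half-width
    of the field in direction cosines."""
    l = np.linspace(-fov, fov, npix)
    L, M = np.meshgrid(l, l)
    img = np.zeros((npix, npix))
    for uk, vk, Vk in zip(u, v, vis):
        img += np.real(Vk * np.exp(2j * np.pi * (uk * L + vk * M)))
    return img / len(vis)

# A unit point source at the phase centre has visibility 1 on every
# baseline, so the dirty image peaks at the central pixel; the sidelobe
# pattern around it is what deconvolution must remove.
rng = np.random.default_rng(1)
u = rng.uniform(-500.0, 500.0, 200)
v = rng.uniform(-500.0, 500.0, 200)
img = dirty_image(u, v, np.ones(200, dtype=complex))
```

In practice the same inversion is done with gridding plus an FFT for efficiency; the explicit sum above just makes the Fourier relationship visible.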

    Feynman integrals and motives

    This article gives an overview of recent results on the relation between quantum field theory and motives, with an emphasis on two different approaches: a "bottom-up" approach based on the algebraic geometry of varieties associated to Feynman graphs, and a "top-down" approach based on the comparison of the properties of associated categorical structures. This survey is mostly based on joint work of the author with Paolo Aluffi, along the lines of the first approach, and on previous work of the author with Alain Connes on the second approach. Comment: 32 pages, LaTeX, 3 figures; to appear in the Proceedings of the 5th European Congress of Mathematics

    Exploration and Optimization of Noise Reduction Algorithms for Speech Recognition in Embedded Devices

    Environmental noise present in real-life applications substantially degrades the performance of speech recognition systems. An example is an in-car scenario where a speech recognition system has to support the man-machine interface. Several noise sources, such as the engine, wipers and wheels, interact with the speech. An open-window scenario poses a particular challenge, where traffic noise, parking noise, etc., must also be taken into account. The main goal of this thesis is to improve the performance of a speech recognition system based on a state-of-the-art hidden Markov model (HMM) using noise reduction methods. Performance is measured in terms of word error rate and by the method of mutual information. The noise reduction methods are based on weighting rules. Least-squares weighting rules in the frequency domain have been developed to enable continuous development on top of the existing system and to guarantee its low complexity and footprint for applications in embedded devices. The weighting-rule parameters are optimized with a multidimensional optimization procedure: a Monte Carlo method followed by a compass search. Root compression and cepstral smoothing methods have also been implemented to boost recognition performance. The additional complexity and memory requirements of the proposed system are minimal. The performance of the proposed system was compared to the European Telecommunications Standards Institute (ETSI) standardized system. The proposed system outperforms the ETSI system by up to 8.6% relative increase in word accuracy and achieves up to 35.1% relative increase in word accuracy compared to the existing baseline system on the ETSI Aurora 3 German task. A relative increase of up to 18% in word accuracy over the existing baseline system is also obtained with the proposed weighting rules on large-vocabulary databases.
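The abstract does not give the exact least-squares weighting rules developed in the thesis; as a hedged illustration of the general family, the sketch below applies a classic Wiener-style per-bin gain with a spectral-subtraction SNR estimate and a gain floor. All function names and parameters here are hypothetical:

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=0.1):
    """Least-squares (Wiener) per-bin gain G = SNR / (1 + SNR), with the
    SNR estimated by spectral subtraction; `floor` bounds the attenuation
    to limit musical-noise artifacts."""
    snr = np.maximum(noisy_psd / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)

def enhance_frame(frame, noise_psd, window):
    """Weight one windowed frame in the frequency domain, transform back."""
    spec = np.fft.rfft(frame * window)
    gain = wiener_gain(np.abs(spec) ** 2, noise_psd)
    return np.fft.irfft(gain * spec, n=len(frame))
```

High-SNR bins pass nearly unchanged while noise-dominated bins are attenuated to the floor, which is the low-complexity behavior an embedded front-end needs.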
An entropy-based feature-vector analysis method has also been developed to assess the quality of feature vectors. The entropy estimation is based on a histogram approach. The method has the advantage of objectively assessing feature-vector quality regardless of the acoustic modeling assumptions used in the speech recognition system.
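A histogram-based entropy estimate of the kind described can be sketched as follows; the bin count and the synthetic feature distributions are illustrative assumptions, not the thesis setup:

```python
import numpy as np

def histogram_entropy(x, bins=32):
    """Entropy (in bits) of a 1-D feature distribution, estimated from a
    normalized histogram of the observed values."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) contributes nothing
    return -np.sum(p * np.log2(p))

# A broadly spread feature carries more entropy (more information for the
# recognizer) than one concentrated in a few histogram bins.
rng = np.random.default_rng(2)
h_uniform = histogram_entropy(rng.uniform(size=10000))
h_peaked = histogram_entropy(rng.normal(scale=0.01, size=10000))
```

With 32 bins the entropy is bounded by log2(32) = 5 bits, so the estimate gives a model-free, bounded quality score per feature dimension.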