Comparison of BEAMFORMER, MUSIC, RAP-MUSIC (2nd recursion) and INN for two simulated sources with a large strength difference (the posterior source is 5 times weaker than the anterior one).
The true source locations and the xy-scales are the same as in Fig. 2 (10 mm distance between the sources). SNR is 3. The two sources were correlated with a correlation coefficient of 0.3, 0.8, 0.95 or 0.99. The black crosshairs indicate the true source locations and the blue circles the local maxima.
MEG Source Localization Using Invariance of Noise Space
We propose INvariance of Noise (INN) space as a novel method for source localization of magnetoencephalography (MEG) data. The method is based on the fact that modulations of source strengths across time change the energy in the signal subspace but leave the noise subspace invariant. We compare INN with classical MUSIC, RAP-MUSIC, and beamformer approaches using simulated data while varying signal-to-noise ratios as well as the distance and temporal correlation between two sources. We also demonstrate the utility of INN with actual auditory evoked MEG responses in eight subjects. In all cases, INN performed well, especially when the sources were closely spaced, highly correlated, or one source was considerably stronger than the other.
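The abstract does not spell out the INN cost function, but both INN and the MUSIC-family methods it is compared against start from a signal/noise subspace split of the sensor data. The sketch below is a minimal, hypothetical illustration in Python (NumPy) of how a noise subspace can be estimated from an SVD of the data and used in a classical MUSIC-style scan over candidate lead fields; it is not the authors' implementation, and the helper names are invented for illustration.

```python
import numpy as np

def noise_subspace(data, n_sources):
    """Estimate a noise subspace from sensor data (channels x time).

    Hypothetical helper: the signal/noise split is taken from the SVD of
    the data matrix, keeping all but the n_sources strongest components.
    """
    U, _, _ = np.linalg.svd(data, full_matrices=False)
    return U[:, n_sources:]          # columns spanning the noise subspace

def music_scan(noise_sub, lead_fields):
    """Classical MUSIC cost for each candidate source location.

    lead_fields: dict mapping location -> forward field vector (channels,).
    The cost peaks where the lead field is orthogonal to the noise subspace.
    """
    costs = {}
    for loc, lf in lead_fields.items():
        lf = lf / np.linalg.norm(lf)
        proj = noise_sub.T @ lf               # projection onto noise space
        costs[loc] = 1.0 / (proj @ proj)      # large when projection is small
    return costs
```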
The selection of reliable samples and the corresponding accuracy of BLDA.
The number of selected reliable samples and the corresponding classification accuracy when the probability threshold is fixed at 0.90 for the five subjects. The plot is derived from one run of the 5×10-fold cross-validation for the five subjects. The red bars (left axis) show the number of trials selected to expand the training set; the gray bars (right axis) show the classification accuracy when the reliable samples are used for classifier training.
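The caption describes a self-training step: unlabeled trials whose predicted class probability exceeds 0.90 are treated as reliable and added to the training set before retraining. A minimal sketch of that loop is given below; BLDA is not available in standard libraries, so an sklearn LogisticRegression is used as a stand-in probabilistic classifier, and the function name expand_training_set is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for BLDA

def expand_training_set(X_train, y_train, X_unlabeled, threshold=0.90):
    """Add 'reliable' unlabeled trials (probability above threshold) to the
    training set, using the classifier's own predictions as pseudo-labels.
    """
    clf = LogisticRegression().fit(X_train, y_train)
    proba = clf.predict_proba(X_unlabeled)
    reliable = proba.max(axis=1) >= threshold
    pseudo_labels = clf.classes_[proba.argmax(axis=1)]
    X_new = np.vstack([X_train, X_unlabeled[reliable]])
    y_new = np.concatenate([y_train, pseudo_labels[reliable]])
    return X_new, y_new, int(reliable.sum())
```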
Localization of Class 2 real auditory evoked responses using the different methods.
The red points indicate the peaks of the cost functions. The threshold was set to 80% of the peak of the cost function within the corresponding source region. To show the underlying anatomical structure, the transparency of the overlaid images was set to 50%. INN identified sources at the left and right supratemporal auditory cortices. MUSIC and BEAMFORMER only detected a source in the left auditory cortex. RAP-MUSIC (2nd recursion) placed a spurious source at the midline.
Activated regions of Subject K1 for the four motor imagery tasks: left hand, right hand, foot, and tongue movements.
The r² values in the figures are calculated according to equation (15). The optimal discrimination channels were found at C4 for the left hand, C3 for the right hand, Cz for the foot, and CP6 for the tongue.
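Equation (15) is not reproduced in this caption; in the BCI literature, a channel's r² usually denotes the squared (point-biserial) correlation between a single-channel feature and the binary class label. The sketch below computes that quantity under this assumption; whether it matches equation (15) exactly is not confirmed here.

```python
import numpy as np

def r_squared(feature, labels):
    """Squared correlation between a single-channel feature (e.g. band power
    per trial) and binary class labels -- a common discriminability index.
    """
    x = np.asarray(feature, dtype=float)
    y = np.asarray(labels, dtype=float)      # e.g. 0/1 class labels
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2
```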
The Kappa coefficients of the 5×10-fold cross-validations with BLDA and EBLDA for the experimental dataset.
The feature vectors are obtained by the one-versus-one CSP method. The performance of the BLDA and EBLDA classifiers is estimated with different training set sizes.
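For reference, Cohen's kappa corrects raw classification accuracy for chance agreement, which matters for the multi-class (one-versus-one) setting reported here. A minimal computation from true and predicted labels is sketched below; it follows the standard definition rather than any implementation detail of this study.

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa from the confusion matrix: agreement corrected for chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    n = len(y_true)
    cm = np.zeros((len(classes), len(classes)))
    for i, a in enumerate(classes):
        for j, b in enumerate(classes):
            cm[i, j] = np.sum((y_true == a) & (y_pred == b))
    p_o = np.trace(cm) / n                          # observed agreement
    p_e = np.sum(cm.sum(0) * cm.sum(1)) / n ** 2    # chance agreement
    return (p_o - p_e) / (1 - p_e)
```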
Localization of Class 3 real auditory evoked responses using different methods.
The red points indicate the peaks of the cost functions. The threshold was set to 80% of the peak of the cost function within the corresponding source region. To show the underlying anatomical structure, the transparency of the overlaid images was set to 50%. MUSIC found supratemporal sources in both hemispheres but also an additional spurious source at the midline. For MUSIC, the right-hemisphere source was strongest (normalized maximum cost function value = 1), followed by the midline (0.83) and left-hemisphere (0.63) sources. BEAMFORMER found bilateral sources that were rather deep in the white matter and an additional spurious source at the midline. RAP-MUSIC (2nd recursion) found only one source, in the left temporal lobe, since the right-hemisphere source was suppressed. Again, INN identified sources at the left and right supratemporal auditory cortices, in agreement with previous knowledge.
The scalp distributions of the CSP filters for subject K1 performing right and left motor imagery.
The two filters correspond to the largest and smallest eigenvalues of the CSP decomposition; the scalp distributions of these two filters reflect the ERDs evoked by the two tasks.
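The caption refers to the CSP filters associated with the largest and smallest eigenvalues. A minimal sketch of that decomposition is given below, assuming trial arrays of shape (n_trials, n_channels, n_samples); the covariance averaging and normalization choices follow common practice and are not taken from this paper.

```python
import numpy as np
from scipy.linalg import eigh

def csp_extreme_filters(trials_a, trials_b):
    """Return the CSP spatial filters with the largest and smallest eigenvalues
    for two classes of trials, each of shape (n_trials, n_channels, n_samples).
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    C_a, C_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: C_a w = lambda (C_a + C_b) w
    eigvals, eigvecs = eigh(C_a, C_a + C_b)
    # eigh returns eigenvalues in ascending order
    w_min, w_max = eigvecs[:, 0], eigvecs[:, -1]
    return w_max, w_min   # filters for the largest / smallest eigenvalue
```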
Comparison of BEAMFORMER, MUSIC, RAP-MUSIC (2nd recursion) and INN with two widely (90 mm) separated simulated sources at multiple SNRs and correlation coefficients.
The true source locations are (−5, 45, 40) mm and (−5, −45, 40) mm. In the first four columns, the SNR varies from 0.5 to 2 and r² = 0.99; in the rightmost column, the SNR is 0.5 and r² = 0.9. The white crosshairs indicate the true source locations and the blue circles the local maxima. The bottom left corner shows the x- and y-axis scales.
INN images for two simulated sources as a function of the dimensions of the selected noise subspace (DIM), SNR, and correlation coefficient r².
For each fixed SNR and r², DIM values were between 271 (leftmost column) and 22 (rightmost column). The true source locations and the xy-plane axis scales are the same as in Figure 2 (10 mm apart). Values of SNR and r² are shown to the right.