
    Addressing missing values in kernel-based multimodal biometric fusion using neutral point substitution

    In multimodal biometric information fusion, it is common to encounter missing modalities for which matching cannot be performed; at the match score level, this means that scores will be missing. We address the multimodal fusion problem involving missing modalities (scores) using support vector machines with the Neutral Point Substitution (NPS) method. The approach starts by processing each modality with a kernel. When a modality is missing, it is substituted at the kernel level by one that is unbiased with regard to the classification, called a neutral point. Critically, unlike conventional missing-data substitution methods, explicit calculation of the neutral points can be omitted, since they are incorporated implicitly within the SVM training framework. Experiments on the publicly available Biosecure DS2 multimodal (score) data set show that the SVM-NPS approach achieves very good generalization performance compared to sum-rule fusion, especially under severely missing modalities.
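    The scheme above can be illustrated with a toy sketch. All data and parameter choices below are hypothetical (not the Biosecure DS2 protocol), and the substitution is done naively in score space, whereas the paper incorporates the neutral points implicitly at the kernel level:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# synthetic genuine/impostor labels and match scores for two modalities
n = 400
y = rng.integers(0, 2, n)
scores = np.column_stack([
    y + 0.4 * rng.standard_normal(n),  # modality 1 score
    y + 0.6 * rng.standard_normal(n),  # modality 2 score
])

# simulate missing modalities: drop 30% of the modality-2 scores
missing = rng.random(n) < 0.3
scores[missing, 1] = np.nan

# neutral-value substitution: replace each missing score with the midpoint
# of the class-conditional means, a value biased toward neither class
# (a crude score-space analogue of the neutral point)
m1 = np.nanmean(scores[y == 1, 1])
m0 = np.nanmean(scores[y == 0, 1])
scores[np.isnan(scores[:, 1]), 1] = 0.5 * (m1 + m0)

# train an SVM on the completed score matrix
clf = SVC(kernel="rbf").fit(scores, y)
acc = clf.score(scores, y)
```

    The point of the substitution is that a genuinely missing score should not push the classifier toward either the genuine or the impostor decision.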

    Artificial data.

    A) Points without noise and with 1000% noise from the simulated data set. B) The covariance tensor used to generate samples of the simulated data set is defined as the Kronecker product of two covariance matrices, estimated from experimental data (Section Binary BCI) as the correlation matrices for the temporal and frequency modalities in the most informative time interval [0, −1.5] s and frequency band [50, 300] Hz.
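    A generator of the kind described in panel B can be sketched as follows; the small 2x2 matrices are hypothetical placeholders for the temporal and frequency correlation matrices estimated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-ins for the temporal and frequency correlation matrices
C_t = np.array([[1.0, 0.5],
                [0.5, 1.0]])   # temporal modality
C_f = np.array([[1.0, 0.3],
                [0.3, 1.0]])   # frequency modality

# covariance tensor as a Kronecker product of the two modality covariances
C = np.kron(C_t, C_f)

# draw vectorized samples with covariance C, then fold each one back into
# a (time x frequency) array
L = np.linalg.cholesky(C)
samples = rng.standard_normal((1000, C.shape[0])) @ L.T
X = samples.reshape(-1, C_t.shape[0], C_f.shape[0])
```

    The Kronecker structure is what lets per-modality covariances define the joint covariance of the full tensor sample.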

    Comparison of regression coefficients for different levels of noise.

    The mean values and standard deviations (over 10 realizations of random noise in the simulated dataset) of the distance between the “true” regression coefficients and: the RNPLS regression coefficients over the RNPLS recursive iterations (red lines); the coefficients produced by NPLS applied to the whole training dataset (blue lines); the coefficients produced by INPLS (black lines); and by UPLS (green lines).

    p-values of the difference between the quality evaluation criteria of the methods (ANOVA test, significance level α = 0.05).


    The RNPLS, NPLS, INPLS, UPLS and “true” regression coefficients.

    Comparison of the regression coefficients averaged over 10 realizations of noise in the simulated dataset (noise level 1000%): RNPLS, NPLS, INPLS, UPLS, and “true” coefficients.

    Adjustment of the RNPLS regression coefficients to abrupt changes in observations.

    A) The distance (mean and standard deviation) between the RNPLS and “true” coefficients versus the iterations of the RNPLS algorithm in the series of experiments (10 realizations of the simulated dataset, noise level 1000%). The RNPLS solution adjusts within 15 iterations to the abrupt change in the observations (at the 21st iteration), given the forgetting factor used. B) “True” coefficients before and after the abrupt change at the 21st iteration. C) Example of the adjustment of the regression coefficients over iterations.

    Recursive N-Way Partial Least Squares for Brain-Computer Interface

    This article considers tensor-input/tensor-output blockwise Recursive N-way Partial Least Squares (RNPLS) regression. The method combines multi-way tensor decomposition with a consecutive calculation scheme, allowing blockwise treatment of tensor data arrays of huge dimension as well as adaptive modeling of time-dependent processes with tensor variables. A numerical study of the algorithm is undertaken. The RNPLS algorithm demonstrates fast and stable convergence of the regression coefficients. Applied to the calibration of a brain-computer interface system, the algorithm provides efficient adjustment of the decoding model. Combining online adaptation with easy interpretation of results, the method can be applied effectively in a variety of multimodal neural activity flow modeling tasks.
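    The consecutive, blockwise character of such a scheme can be sketched in simplified form. The sketch below replaces the N-way PLS decomposition with plain recursive least squares on unfolded (vectorized) tensor inputs, so it illustrates only the blockwise update with a forgetting factor, not the actual RNPLS factor structure; all dimensions and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

d, block, lam = 8, 20, 0.99        # unfolded input dim, block size, forgetting factor
w_true = rng.standard_normal(d)    # "true" regression coefficients

P = 1e3 * np.eye(d)                # inverse-covariance estimate
w = np.zeros(d)                    # current coefficient estimate

for _ in range(50):                # data arrive block by block
    X = rng.standard_normal((block, d))
    y = X @ w_true + 0.1 * rng.standard_normal(block)
    for x_i, y_i in zip(X, y):     # recursive update with exponential forgetting
        Px = P @ x_i
        g = Px / (lam + x_i @ Px)          # gain vector
        w = w + g * (y_i - x_i @ w)        # correct by the prediction error
        P = (P - np.outer(g, Px)) / lam    # discount old information

err = np.linalg.norm(w - w_true)
```

    The forgetting factor lam < 1 discounts old blocks, which is what allows a recursive scheme of this kind to track abrupt changes in the underlying coefficients, as in the adjustment experiment above.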

    Comparison of prediction error RNPLS vs. NPLS.

    RMSE as a function of the number of factors on the test dataset. RNPLS (10): the training set is split into 10-point disjoint subsets; RNPLS (100): the training set is split into 100-point disjoint subsets; NPLS (1000): generic NPLS using the whole training dataset. Recordings from binary self-paced BCI experiments in a freely moving rat.

    RNPLS (10) calibration.

    A) The first and second factors (the most contributive of the 5): frequency, temporal, and spatial projections. The values of the elements of the spatial projector are shown in colors according to the color bar, and the positions of the electrodes are indicated by numbers. B) Factor weights in the final decomposition, i.e., the coefficients of the normalized model in the space of latent variables. C) Impact of the different modalities’ components on the predictive model according to MI analysis; the spatial modality is represented by the graph and the corresponding color map. Recordings from binary self-paced BCI experiments in a freely moving rat.