
    Domain Adaptation for Robust Workload Level Alignment Between Sessions and Subjects using fNIRS

    Significance: We demonstrated the potential of using domain adaptation on functional Near-Infrared Spectroscopy (fNIRS) data to classify different levels of n-back tasks that involve working memory. Aim: Domain shift in fNIRS data is a challenge for workload level alignment across different experiment sessions and subjects. To address this problem, two domain adaptation approaches, Gromov-Wasserstein (G-W) and Fused Gromov-Wasserstein (FG-W), were used. Approach: Specifically, we used labeled data from one session or one subject to classify trials in another session (within the same subject) or another subject. We applied G-W for session-by-session alignment and FG-W for subject-by-subject alignment to fNIRS data acquired during different n-back task levels. We compared these approaches with three supervised methods: multi-class Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN). Results: In a sample of six subjects, G-W achieved an alignment accuracy of 68 ± 4% (weighted mean ± standard error) for session-by-session alignment, and FG-W achieved 55 ± 2% for subject-by-subject alignment. In each case, 25% accuracy represents chance. Alignment accuracies from both G-W and FG-W are significantly greater than those from SVM, CNN, and RNN. We also showed that removal of motion artifacts from the fNIRS data plays an important role in improving alignment performance. Conclusions: Domain adaptation has potential for session-by-session and subject-by-subject alignment of mental workload using fNIRS data.
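Gromov-Wasserstein alignment matches trials across domains using only each domain's internal geometry (pairwise distances between trials), so no shared coordinate system between sessions is required. A minimal numpy sketch of entropic G-W with Sinkhorn projections, run on toy one-dimensional "trials" rather than real fNIRS features — the data, ε, and iteration counts are illustrative and not the paper's settings:

```python
import numpy as np

def entropic_gw(D1, D2, eps=0.1, outer_iters=50, sinkhorn_iters=100):
    """Entropic Gromov-Wasserstein coupling between two domains, given only
    their intra-domain pairwise distance matrices D1 (n x n) and D2 (m x m)."""
    n, m = D1.shape[0], D2.shape[0]
    p, q = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform trial weights
    T = np.outer(p, q)                               # initial coupling
    for _ in range(outer_iters):
        C = -2.0 * D1 @ T @ D2            # square-loss G-W cost linearised at T
        K = np.exp(-(C - C.min()) / eps)  # shifted exponent for stability
        u, v = np.ones(n), np.ones(m)
        for _ in range(sinkhorn_iters):   # Sinkhorn: project onto couplings
            u = p / (K @ v)
            v = q / (K.T @ u)
        T = u[:, None] * K * v[None, :]
    return T

# Toy "sessions": the same three trials, but session 2 stores them in reverse.
x = np.array([0.0, 1.0, 3.0])
y = np.array([3.0, 1.0, 0.0])
D1 = np.abs(x[:, None] - x[None, :])
D2 = np.abs(y[:, None] - y[None, :])
T = entropic_gw(D1, D2)
matches = T.argmax(axis=1)  # trial i in session 1 ↔ trial matches[i] in session 2
```

FG-W additionally blends a feature-distance term into the cost, which is what lets subject-by-subject alignment exploit label-bearing features when the raw geometries alone are ambiguous.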

    Transient Artifact Reduction Algorithm (TARA) Based on Sparse Optimization


    Unsupervised Learning from Shallow to Deep

    Machine learning plays a pivotal role in most state-of-the-art systems across many application domains. With the rise of deep learning, massive amounts of labeled data have become the standard route to feature learning, enabling models to learn representations automatically. Unfortunately, a trained deep learning model is hard to adapt to other datasets without fine-tuning, and the applicability of machine learning methods is limited by the amount of available labeled data. The aim of this thesis is therefore to alleviate the limitations of supervised learning by exploring algorithms that learn good internal representations and invariant feature hierarchies from unlabelled data. Firstly, we extend traditional dictionary learning and sparse coding algorithms to hierarchical image representations in a principled way. So that dictionary atoms capture additional information from extended receptive fields and attain improved descriptive capacity, we present a two-pass multi-resolution cascade framework for dictionary learning and sparse coding. This cascade allows collaborative reconstructions at different resolutions using dictionary atoms of the same dimension. The jointly learned dictionary comprises atoms that adapt both to the information available at the coarsest layer, where the support of atoms reaches its maximum range, and to the residual images, whose supplementary details progressively refine the reconstruction objective. Our method generates flexible and accurate representations using only a small number of coefficients, and is computationally efficient.
    In the following work, we propose to incorporate the traditional self-expressiveness property into deep learning to explore better representations for subspace clustering. The architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network-based method is able to cluster data points with complex (often nonlinear) structures. However, subspace clustering algorithms are notorious for their scalability issues, because building and processing large affinity matrices is demanding. We propose two methods to tackle this problem. One is based on k-Subspace Clustering: we introduce a method that simultaneously learns an embedding space along with the subspaces within it so as to minimize a notion of reconstruction error, thus addressing subspace clustering in an end-to-end learning paradigm. This in turn frees us from the need for an affinity matrix to perform clustering. The other replaces spectral clustering with a feed-forward network and learns the affinities of the data from the "self-expressive" layer. We introduce Neural Collaborative Subspace Clustering, which benefits from a classifier that determines whether a pair of points lies on the same subspace, under the supervision of the "self-expressive" layer. Essential to our model is the construction of two affinity matrices, one from the classifier and the other from a notion of subspace self-expressiveness, to supervise training in a collaborative scheme. In summary, this thesis contributes methods for performing unsupervised learning across several tasks. It starts from a traditional sparse coding and dictionary learning perspective in low-level vision. Then, we explore how to incorporate unsupervised learning into convolutional neural networks without label information and how to scale subspace clustering to large datasets. Furthermore, we also extend clustering to a dense prediction task (saliency detection).
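The self-expressiveness property says that each point in a union of subspaces can be written as a linear combination of the other points from its own subspace, so the coefficient matrix C reveals the cluster structure. A minimal numpy sketch using the classical closed-form ridge-regularised formulation (the deep variant described above instead learns C as the weights of a self-expressive layer trained by back-propagation); the toy data and λ are illustrative:

```python
import numpy as np

# Toy data: columns are points drawn from two orthogonal 1-D subspaces of R^3.
X = np.array([[1.0, 2.0, -1.0, 0.0, 0.0,  0.0],
              [0.0, 0.0,  0.0, 1.0, 3.0, -2.0],
              [0.0, 0.0,  0.0, 0.0, 0.0,  0.0]])

lam = 0.1                                    # ridge regulariser weight
G = X.T @ X                                  # Gram matrix of the points
C = np.linalg.solve(G + lam * np.eye(6), G)  # closed-form self-expression X ≈ XC
np.fill_diagonal(C, 0.0)                     # a point must not represent itself
A = np.abs(C) + np.abs(C).T                  # symmetric affinity matrix
```

Because the two subspaces are orthogonal here, A is exactly block-diagonal: affinities are positive within each subspace and zero across them, so spectral clustering on A recovers the two groups. The deep auto-encoder version exists precisely to obtain this structure for data where the subspaces are nonlinear.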

    Influence of Early Bilingual Exposure in the Developing Human Brain.

    Language acquisition is a process shaped both by mechanisms of cognitive development and by linguistic experience during the first years of life. Although it is a relatively complex process, infants show a remarkable ability to learn language. A bilingual language-learning environment could be considered even more complex, since infants are exposed to the linguistic features of two languages simultaneously. First, infants growing up in a bilingual environment must be able to notice that they are exposed to two different languages, and they must then separate and learn the specific features of each one; for example, the different phonemes, words, or grammatical structures. Although the total linguistic exposure of bilingual infants should be comparable to that of monolingual infants, their exposure to each of the languages in their environment is likely smaller, since they must divide their exposure time between the two. While bilingual infants seem to have no trouble coping with a potentially more complex learning context, reaching the milestones of language acquisition at a rate similar to that of monolingual infants, adaptations have been observed at the behavioral level and in brain function that may arise as a consequence of this context.
    Basque Center on Cognition, Brain and Language
