965 research outputs found

    Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts in one language could be used to predict neural activation during access to the same concepts in the other language, across different language modalities and tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) from the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the corresponding picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, activation in occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral gyri, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal and middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and the calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap between production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations.
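    The cross-language decoding logic can be sketched as follows. The voxel data here are synthetic stand-ins, and logistic regression is used as an illustrative linear classifier; the study's exact MVPA pipeline, classifier, and voxel selection are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_concepts, n_trials, n_voxels = 4, 20, 50

# Each concept gets a shared "semantic" voxel pattern that is partially
# preserved across the two languages; individual trials add independent noise.
base = rng.normal(size=(n_concepts, n_voxels))

def simulate():
    X = np.repeat(base, n_trials, axis=0) + rng.normal(
        scale=1.0, size=(n_concepts * n_trials, n_voxels))
    y = np.repeat(np.arange(n_concepts), n_trials)
    return X, y

X_l1, y_l1 = simulate()  # e.g. Dutch trials ("maan")
X_l2, y_l2 = simulate()  # e.g. French trials ("lune")

# Train on one language, test on the other: above-chance accuracy implies
# language-independent information in the voxel patterns.
clf = LogisticRegression(max_iter=1000).fit(X_l1, y_l1)
acc = clf.score(X_l2, y_l2)
chance = 1.0 / n_concepts
print(f"cross-language decoding accuracy: {acc:.2f} (chance: {chance:.2f})")
```

    In the real study the same train-on-one-condition, test-on-the-other scheme is what licenses the claim of shared representations: only pattern structure common to both languages can support above-chance transfer.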

    Multimodal emotion recognition

    Reading emotions from facial expressions and speech is a milestone in Human-Computer Interaction. Recent sensing technologies, namely the Microsoft Kinect sensor, provide basic input-modality data, such as RGB imaging, depth imaging and speech, that can be used in emotion recognition. Moreover, Kinect can track a face in real time and report its fiducial points, as well as 6 basic Action Units (AUs). In this work we exploit this information by gathering a new and exclusive dataset, an opportunity for the academic community and for progress on the emotion recognition problem. The dataset includes RGB, depth, audio, fiducial points and AUs for 18 volunteers and 7 emotions. We then present automatic emotion classification results on this dataset, employing k-Nearest Neighbor, Support Vector Machine and Neural Network classifiers with unimodal and multimodal approaches. Our conclusions show that multimodal approaches can attain better results.
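    A minimal sketch of the unimodal-versus-multimodal comparison, using feature-level (early) fusion by concatenation and k-Nearest Neighbors as one of the three classifiers mentioned above. The per-modality features are synthetic stand-ins; the actual RGB/depth/audio/fiducial-point features and the fusion strategy used in the work are assumptions here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_classes, n_per_class = 7, 40          # 7 emotions; toy sample counts
y = np.repeat(np.arange(n_classes), n_per_class)

# Synthetic per-modality features: class-dependent centers plus noise.
centers_face = rng.normal(size=(n_classes, 10))   # stand-in for AU features
centers_audio = rng.normal(size=(n_classes, 8))   # stand-in for speech features
X_face = centers_face[y] + rng.normal(size=(len(y), 10))
X_audio = centers_audio[y] + rng.normal(size=(len(y), 8))
X_fused = np.hstack([X_face, X_audio])  # early fusion: concatenate modalities

knn = KNeighborsClassifier(n_neighbors=5)
accs = {}
for name, X in [("face", X_face), ("audio", X_audio), ("fused", X_fused)]:
    accs[name] = cross_val_score(knn, X, y, cv=5).mean()
    print(f"{name}: {accs[name]:.2f}")
```

    Concatenation is the simplest fusion scheme; late fusion (combining per-modality classifier outputs) is a common alternative when modalities have very different scales or sampling rates.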

    Influence Of Predicate Sense On Word Order In Sign Languages: Intensional And Extensional Verbs

    We present evidence for the influence of semantics on the order of subject, object, and verb in Brazilian Sign Language (Libras) sentences. While some have argued for a prevailing SVO pattern in Libras, we find a strong tendency for this order in sentences that do not presuppose the existence of the verb's object, but not in sentences that do, which instead favor SOV. These findings are consistent with those of a recent study on gesture. We argue that the variable influence of the relevant predicates is particularly salient in sign languages, due to the iconic nature of the visual modality.

    Knowledge modeling of phishing emails

    This dissertation investigates whether malicious phishing emails are detected more accurately when a meaningful representation of the email bodies is available. The natural language processing theory of Ontological Semantics Technology is used for its ability to model the knowledge representation present in the email messages. Known-good and phishing emails were analyzed and their meaning representations fed into machine learning binary classifiers. Unigram language models of the same emails were used as a baseline for comparing the performance of the meaningful data. The results show that a binary classifier trained on meaningful data detects phishing emails better than a unigram language model binary classifier, at least with some of the selected machine learning algorithms.
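    The unigram baseline can be sketched as a bag-of-words binary classifier. The toy emails below are illustrative, and Naive Bayes is an assumed choice; the dissertation's specific classifiers and its Ontological Semantics features are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set: 1 = phishing, 0 = legitimate.
emails = [
    "Your account has been suspended, verify your password now",
    "Urgent: confirm your bank login to avoid account closure",
    "Meeting moved to 3pm, see attached agenda",
    "Lunch on Friday? Let me know if that works",
]
labels = [1, 1, 0, 0]

# Unigram model: each email becomes a vector of word counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

pred_new = model.predict(["Please verify your password to restore your account"])
print(pred_new)
```

    The point of such a baseline is that it sees only surface word counts; a meaning-representation classifier can, in principle, generalize to phishing emails that avoid the telltale vocabulary.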

    Automatic detection of multiple sclerosis lesions in brain magnetic resonance images using BIANCA

    The aim of this work was to design and optimize a workflow applying the machine learning classifier BIANCA (Brain Intensity AbNormalities Classification Algorithm) to detect lesions characterized by white-matter T2 hyperintensity in clinical Magnetic Resonance Multiple Sclerosis datasets. The designed pipeline includes pre-processing, lesion identification and optimization of BIANCA's options. The classifier was trained and tuned on the 15 cases making up the training dataset of the MICCAI 2016 (Medical Image Computing and Computer Assisted Interventions) challenge and then tested on 30 cases from the public dataset of Lesjak et al. The results obtained are in good agreement with those reported by the 13 teams that completed the MICCAI 2016 challenge, confirming that this algorithm can be a reliable tool to detect and classify Multiple Sclerosis lesions in Magnetic Resonance studies.
    Master's in Medical Imaging Technologies
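    BIANCA itself is an FSL tool, so its internals are not shown here; as a sketch of how such a pipeline is typically scored, the snippet below computes the Dice overlap between a thresholded lesion-probability map and a manual mask. The arrays are toy stand-ins for NIfTI volumes, and the 0.5 threshold is an assumption (BIANCA's threshold is one of the options tuned in the work).

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D stand-ins for a lesion-probability map and a manual segmentation.
prob_map = np.array([[0.1, 0.8, 0.9],
                     [0.2, 0.7, 0.1],
                     [0.0, 0.1, 0.6]])
manual   = np.array([[0, 1, 1],
                     [0, 1, 0],
                     [0, 0, 0]])

pred = prob_map > 0.5          # threshold the lesion-probability map
print(f"Dice = {dice(pred, manual):.2f}")
```

    Dice is the headline metric of the MICCAI 2016 lesion-segmentation challenge mentioned above, which is why threshold tuning on the training cases matters: it directly trades false positives against missed lesion voxels.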