
    Privacy, Security and Trust in the Internet of Neurons

    Arpanet, Internet, Internet of Services, Internet of Things, Internet of Skills. What next? We conjecture that in 15-20 years we will have the Internet of Neurons, a new Internet paradigm in which humans will be able to connect bi-directionally to the net using only their brain. The Internet of Neurons will provide tremendous new opportunities thanks to constant access to unlimited information. It will empower those outside the technical industry, indeed all human beings, to access and use technological products and services, as everybody will be able to connect even without possessing a laptop, a tablet or a smartphone. The Internet of Neurons will thus ultimately complete the currently still immature democratization of knowledge and technology. But it will also bring along several enormous challenges, especially concerning security (as well as privacy and trust). In this paper we speculate on the worldwide deployment of the Internet of Neurons by 2038 and brainstorm about its disruptive impact, discussing the main technological (and neurological) breakthroughs required to enable it, the new opportunities it provides and the security challenges it raises. We also elaborate on the novel system models, threat models and security properties that are required to reason about privacy, security and trust in the Internet of Neurons.
    Comment: 8 pages, 7 figures

    ToFFi – Toolbox for frequency-based fingerprinting of brain signals

    Spectral fingerprints (SFs) are unique power spectral signatures of human brain regions of interest (ROIs; Keitel & Gross, 2016). SFs allow for accurate ROI identification and can serve as biomarkers of differences exhibited by non-neurotypical groups. At present, there are no open-source, versatile tools to calculate spectral fingerprints. We have filled this gap by creating a modular, highly configurable MATLAB Toolbox for Frequency-based Fingerprinting (ToFFi). It can transform magnetoencephalographic and electroencephalographic signals into unique spectral representations using ROIs provided by anatomical (AAL, Desikan-Killiany), functional (Schaefer), or other custom volumetric brain parcellations. The toolbox design supports reproducibility and parallel computations.
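
    ToFFi itself is a MATLAB toolbox; purely as an illustration of the underlying idea (not of ToFFi's actual API), the sketch below computes crude per-ROI spectral fingerprints as normalized Welch power spectra. The input shapes, the roi_labels parcellation vector and the normalization step are assumptions made for the example.

```python
# Minimal sketch: per-ROI "spectral fingerprints" as normalized Welch power spectra.
# Assumes source-reconstructed signals of shape (n_sources, n_samples) and a
# parcellation vector roi_labels of length n_sources (illustrative, not ToFFi's API).
import numpy as np
from scipy.signal import welch

def spectral_fingerprints(source_ts, roi_labels, fs=1000.0, fmax=100.0):
    """Return (frequencies, dict mapping ROI id -> normalized mean power spectrum)."""
    freqs, psd = welch(source_ts, fs=fs, nperseg=int(2 * fs), axis=-1)
    keep = freqs <= fmax
    freqs, psd = freqs[keep], psd[:, keep]
    fingerprints = {}
    for roi in np.unique(roi_labels):
        roi_psd = psd[roi_labels == roi].mean(axis=0)  # average sources within the ROI
        fingerprints[roi] = roi_psd / roi_psd.sum()    # normalize so ROIs are comparable
    return freqs, fingerprints
```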

    Learning EEG Biometrics for Person Identification and Authentication

    EEG provides appealing biometrics by presenting unique attributes not possessed by common biometric modalities such as fingerprints, retina and face scans, in terms of robustness against forgery, secrecy and privacy compliance, aliveness detection, and the potential for continuous authentication. Meanwhile, the use of EEG to provide cognitive indicators of human workload, fatigue and emotion has created an environment where EEG is well integrated into systems, making it readily available for biometric purposes. Yet many challenges still need to be properly addressed before any actual deployment of EEG-based biometric systems in real-life scenarios: 1) subjects' inconvenience during the signal acquisition process, 2) relatively low recognition rates, and 3) a lack of robustness against diverse human states. To address these issues, this thesis is devoted to learning biometric traits from EEG signals for stable person identification and authentication.
    State-of-the-art studies of EEG biometrics are mainly divided into two categories: the event-related potential (ERP) category, which relies on tight control of the cognitive state of the subject, and the ongoing EEG category, which uses continuous EEG signals (mainly in the resting state) naturally produced by the brain without any particular sensory stimulation. Studies in the ERP category focus more on the design of proper signal elicitation protocols or paradigms, which usually require repetitive sensory stimulation. Ongoing EEG, on the contrary, is more flexible in terms of signal acquisition, but needs more advanced computational methods for feature extraction and classification. This thesis focuses on EEG biometrics using ongoing signals under diverse human states, without strict task-specific controls on brain response elicitation during signal acquisition; such a flexible system could lead to effective deployment in the real world. This is in contrast to previous studies that rely on specific sensory stimulation and synthetic cognitive tasks to tightly control the cognitive state reflected in the resulting EEG activity, or that use resting-state EEG signals. Relaxing the reliance on the user's cognitive state streamlines the signal acquisition process, which in turn facilitates the actual deployment of an EEG biometric system. Furthermore, not relying on sensory stimulation and cognitive tasks also allows for flexible and unobtrusive biometric systems that work in the background without interrupting users, which is especially important in continuous authentication scenarios.
    However, relaxing the system's reliance on the human state also means losing control over the EEG activity produced. As a result, EEG signals captured from the scalp may be contaminated by ongoing tasks and cognitive states such as workload and emotion, and it becomes a challenge to learn identity-bearing information from such complex signals to support highly stable EEG biometrics. Possible solutions are proposed and investigated from two main perspectives, feature extraction and pattern classification: graph features and learning models are proposed based on brain connectivity, graph theory, and deep learning algorithms.
    A comprehensive investigation is conducted to assess the performance of the proposed and existing methods in biometric identification and authentication, including continuous scenarios. The methods, experiments and the results obtained from data analysis are reported in detail in the corresponding chapters.
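
    The thesis proposes graph features and learning models built on brain connectivity, graph theory and deep learning; the sketch below only illustrates the general connectivity-graph idea with placeholder choices (absolute Pearson correlation as connectivity, node strength as the graph feature, an SVM as the classifier) and is not the thesis's actual method.

```python
# Minimal illustration of connectivity-graph features for EEG person identification.
# Feature choice (node strength of a correlation graph) and the classifier are
# placeholders for the general idea, not the thesis's proposed models.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def connectivity_features(epoch):
    """epoch: (n_channels, n_samples) EEG segment -> node-strength feature vector."""
    conn = np.abs(np.corrcoef(epoch))   # functional connectivity (|Pearson r|)
    np.fill_diagonal(conn, 0.0)         # ignore self-connections
    return conn.sum(axis=1)             # node strength per channel

def train_identifier(epochs, subject_ids):
    """epochs: iterable of (n_channels, n_samples) arrays; subject_ids: identity labels."""
    X = np.stack([connectivity_features(e) for e in epochs])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, subject_ids)
    return clf
```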

    Extended 2D CNN approach for the diagnosis of Alzheimer's disease from structural magnetic resonance images

    Advisors: Leticia Rittner, Roberto de Alencar Lotufo. Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: Alzheimer's disease (AD) is a type of dementia that affects millions of people around the world. To date, there is no cure for Alzheimer's, and its early diagnosis has been a challenging task. Current techniques for Alzheimer's disease diagnosis have explored the structural information of Magnetic Resonance Imaging (MRI) in T1-weighted images. Among these techniques, the deep convolutional neural network (CNN) is the most promising and has been successfully used in medical imaging for a variety of applications due to its ability to perform feature extraction.
    Before the great success of deep learning and CNNs, works that aimed to classify the different stages of AD explored classic machine learning approaches and meticulous feature engineering, mostly for binary classification tasks. Recently, some authors have combined deep learning techniques with small subsets of the Alzheimer's Disease Neuroimaging Initiative (ADNI) public dataset to predict an early stage of AD, exploring 3D CNN approaches usually combined with 3D convolutional autoencoder architectures. Others have also investigated a 3D CNN approach, with or without a pre-processing step for feature extraction. However, the majority of these papers focus on binary classification only, with no results for Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) classification. Our primary goal was to explore 2D CNN approaches to tackle this 3-class classification using T1-weighted MRI. As a secondary goal, we filled some gaps found in the literature by investigating the use of 2D CNN architectures for our problem, since most works explored either traditional machine learning or 3D CNN approaches. Our extended 2D CNN exploits the volumetric information of the MRI data while maintaining the low computational cost associated with a 2D approach, compared to 3D CNNs. Moreover, our results outperform the other strategies for the 3-class classification when comparing the performance of our model with traditional machine-learning and 3D CNN methods. We also investigated the role of different techniques widely used in CNN applications, for instance data pre-processing, data augmentation, transfer learning, and domain adaptation to a Brazilian dataset.
    Master's degree in Electrical Engineering (Computer Engineering area). Grant: 168468/2017-4 CNP
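
    The abstract does not spell out the extended 2D CNN architecture; one plausible reading, sketched below under that assumption, is a slice-wise 2D CNN whose per-slice outputs are aggregated into a single 3-class (AD / MCI / NC) prediction. Layer sizes, slice sampling and the mean aggregation are illustrative choices, not the dissertation's exact model.

```python
# Illustrative slice-based 2D CNN with cross-slice aggregation for 3-class
# (AD / MCI / NC) classification. Architecture and aggregation are assumptions,
# not the dissertation's exact model.
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, volume):
        # volume: (batch, n_slices, H, W); each slice is treated as an independent 2D image
        b, s, h, w = volume.shape
        slices = volume.reshape(b * s, 1, h, w)
        feats = self.features(slices).flatten(1)            # (b*s, 32)
        logits = self.classifier(feats).reshape(b, s, -1)   # per-slice class logits
        return logits.mean(dim=1)                            # aggregate across slices

# Example: a batch of 2 volumes, each with 32 axial slices of 128x128 voxels
model = SliceCNN()
scores = model(torch.randn(2, 32, 128, 128))   # -> (2, 3) class scores
```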

    Detecting and visualizing differences in brain structures with SPHARM and functional data analysis

    A new procedure for classifying brain structures described by SPHARM is presented. We combine a dimension reduction technique (functional principal component analysis or functional independent component analysis) with stepwise variable selection for linear discriminant classification. This procedure is compared with many well-known methods on a novel classification problem in neuroeducation, where the reversal error (a common error in mathematical problem solving) is analyzed using the left and right putamens of 33 participants. The comparison shows that our proposal not only provides outstanding predictive power but is also valuable in terms of interpretation, since it yields a linear discriminant function for 3D structures.
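
    The proposed pipeline combines functional PCA/ICA, stepwise variable selection and a linear discriminant classifier. A rough scikit-learn analogue is sketched below, with ordinary PCA standing in for the functional dimension reduction and sequential forward selection standing in for stepwise selection; the component counts and input shapes are illustrative.

```python
# Rough analogue of the described pipeline: dimension reduction + stepwise
# variable selection + linear discriminant classification. Ordinary PCA stands
# in for functional PCA/ICA, and sequential selection for stepwise selection.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import Pipeline

# X: (n_subjects, n_coefficients) flattened SPHARM coefficients per structure
# y: (n_subjects,) binary labels (e.g., reversal error vs. no reversal error)
def build_classifier(n_components=10, n_selected=3):
    lda = LinearDiscriminantAnalysis()
    return Pipeline([
        ("reduce", PCA(n_components=n_components)),
        ("select", SequentialFeatureSelector(lda, n_features_to_select=n_selected)),
        ("classify", lda),
    ])

# Usage with illustrative shapes: clf = build_classifier(); clf.fit(X, y); clf.predict(X_new)
```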