
    The saccadic spike artifact in MEG

    Electro- and magnetoencephalography (EEG/MEG) are the means to investigate the dynamics of neuronal activity non-invasively in the human brain. However, both EEG and MEG are also sensitive to non-neural sources, which can severely complicate the interpretation. The saccadic spike potential (SP) at saccade onset has been identified as a particularly problematic artifact in EEG because it closely resembles synchronous neuronal gamma band activity. While the SP and its confounding effects on EEG have been thoroughly characterized, the corresponding artifact in MEG, the saccadic spike field (SF), has not been investigated. Here we provide a detailed characterization of the SF. We simultaneously recorded MEG, EEG, gaze position and electrooculogram (EOG). We compared the SF in MEG for different saccade sizes and directions and contrasted it with the well-known SP in EEG. Our results reveal a saccade amplitude and direction dependent, lateralized saccadic spike artifact, which was most prominent in the gamma frequency range. The SF was strongest at frontal and temporal sensors but unlike the SP in EEG did not contaminate parietal sensors. Furthermore, we observed that the source configurations of the SF were comparable for regular and miniature saccades. Using distributed source analysis we identified the sources of the SF in the extraocular muscles. In summary, our results show that the SF in MEG closely resembles neuronal activity in frontal and temporal sensors. Our detailed characterization of the SF constitutes a solid basis for assessing possible saccadic spike related contamination in MEG experiments. Funding: the European Union; the German Federal Ministry of Education and Research.
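    As an illustration of this kind of saccade-locked analysis, the sketch below epochs MEG data around saccade onsets and computes gamma-band power, the range in which the spike field is reported to be most prominent. It assumes MNE-Python, a hypothetical FIF recording, and saccade-onset samples already extracted from the EOG or eye tracker; the file name, event samples and frequency grid are placeholders, not details taken from the study.

```python
# Hedged sketch: epoch MEG around saccade onsets and inspect gamma-band power.
# Assumes MNE-Python, a FIF recording, and saccade onsets already detected from EOG / eye tracking.
import numpy as np
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)   # hypothetical file name

# Saccade-onset samples (placeholder values) detected beforehand from the EOG or gaze signal.
saccade_samples = np.array([12000, 15400, 19900])
events = np.column_stack([saccade_samples,
                          np.zeros_like(saccade_samples),
                          np.ones_like(saccade_samples)])       # event id 1 = saccade onset

epochs = mne.Epochs(raw, events, event_id={"saccade": 1},
                    tmin=-0.2, tmax=0.4, baseline=(-0.2, -0.05),
                    picks="meg", preload=True)

# Time-frequency decomposition in the gamma range, where the spike field is most prominent.
freqs = np.arange(30.0, 101.0, 5.0)
power = mne.time_frequency.tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                                      return_itc=False)
power.plot_topo(baseline=(-0.2, -0.05), mode="logratio",
                title="Saccade-locked gamma power (sketch)")
```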

    On Motion Analysis in Computer Vision with Deep Learning: Selected Case Studies

    Motion analysis is one of the essential enabling technologies in computer vision. Despite significant recent advances, image-based motion analysis remains a very challenging problem. This challenge arises because the motion features are extracted directly from a sequence of images, without any other metadata, which makes extracting motion information inherently more difficult than in other computer vision disciplines. In traditional approaches, motion analysis is often formulated as an optimisation problem, with the motion model hand-crafted to reflect our understanding of the problem domain. The critical element of these traditional methods is a prior assumption about the model of motion believed to represent a specific problem. The recent trend in data analytics is to replace hand-crafted prior assumptions with a model learned directly from observational data, with no, or very limited, prior assumptions about that model. Although known for a long time, such machine-learning-based approaches have been shown to be competitive only very recently, owing to advances in so-called deep learning methodologies. The key aim of this work has been to investigate novel approaches, utilising deep learning methodologies, for motion analysis in which the motion model is learned directly from observed data. These new approaches have focused on investigating deep network architectures suitable for the effective extraction of spatiotemporal information. Due to the volume and structure of the estimated motion parameters, it is frequently difficult or even impossible to obtain relevant ground-truth data. Missing ground truth leads to the use of unsupervised learning methodologies, which are themselves challenging to apply to the already challenging, high-dimensional motion representation of an image sequence. The main challenge with unsupervised learning is to evaluate whether the algorithm can learn the data model directly from the data alone, without any prior knowledge being presented to the deep learning model during training. In this project, the emphasis has therefore been put on unsupervised learning approaches. Owing to the broad spectrum of computer vision problems and applications related to motion analysis, the research reported in the thesis has focused on three specific motion analysis challenges and corresponding practical case studies: motion detection and recognition, 2D motion field estimation, and 3D motion field estimation. Eye-blink quantification has been used as the case study for the motion detection and recognition problem. The approach proposed for this problem consists of a novel network architecture processing weakly corresponding images in an action-completion regime, with learned spatiotemporal image features fused using cascaded recurrent networks. The stereo-vision disparity estimation task has been selected as the case study for the 2D motion field estimation problem. The proposed method directly estimates occlusion maps using a novel convolutional neural network architecture that is trained with a custom-designed loss function in an unsupervised manner. The volumetric data registration task has been chosen as the case study for the 3D motion field estimation problem. The proposed solution is based on a 3D CNN, with a novel architecture featuring a Generative Adversarial Network used during training to improve network performance on unseen data.
All the proposed networks demonstrated state-of-the-art performance compared with other corresponding methods reported in the literature on a number of assessment metrics. In particular, the proposed architecture for 3D motion field estimation has been shown to outperform the previously reported manual, expert-guided registration methodology.
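    To make the unsupervised-training idea behind the 2D case study concrete, the sketch below shows a generic photometric reconstruction loss for stereo disparity in PyTorch: the right image is warped by the predicted disparity and compared with the left image, so no ground-truth disparity is required. This is a minimal illustration of the general technique under those assumptions, not the custom loss function or network architecture proposed in the thesis; the function names are hypothetical.

```python
# Hedged sketch of an unsupervised photometric loss for stereo disparity estimation:
# warp the right image by the predicted disparity and penalise the reconstruction error
# against the left image. Illustrative only; not the thesis's custom-designed loss.
import torch
import torch.nn.functional as F

def warp_right_to_left(right: torch.Tensor, disparity: torch.Tensor) -> torch.Tensor:
    """Sample the right image at x - d to reconstruct the left view.
    right:     (B, C, H, W) image tensor
    disparity: (B, 1, H, W) predicted left-view disparity in pixels
    """
    b, _, h, w = right.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=right.device),
                            torch.arange(w, device=right.device), indexing="ij")
    xs = xs.unsqueeze(0).float() - disparity.squeeze(1)      # shift columns by disparity
    ys = ys.unsqueeze(0).float().expand_as(xs)
    # Normalise coordinates to [-1, 1] as expected by grid_sample.
    grid = torch.stack([2.0 * xs / (w - 1) - 1.0,
                        2.0 * ys / (h - 1) - 1.0], dim=-1)
    return F.grid_sample(right, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def photometric_loss(left: torch.Tensor, right: torch.Tensor,
                     disparity: torch.Tensor) -> torch.Tensor:
    """L1 reconstruction error, the simplest self-supervision signal."""
    return (left - warp_right_to_left(right, disparity)).abs().mean()
```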

    Mental-State Estimation, 1987

    Reports on the measurement and evaluation of the physiological and mental state of operators are presented.

    The oscillatory fingerprints of self-prioritization : Novel markers in spectral EEG for self-relevant processing

    Funding Information: The research reported in this article was supported by a grant from the Deutsche Forschungsgemeinschaft (SCHA 2253/1–1). Open Access funding enabled and organized by Projekt DEAL.

    Retinal image quality assessment using deep convolutional neural networks

    Master's dissertation in Biomedical Engineering (specialization in Medical Informatics). Diabetic retinopathy (DR) and diabetic macular edema (DME) are forms of damage to the retina and are complications that can affect the diabetic population. Diabetic retinopathy is the most common disease, marked by the presence of exudates, and has three levels of severity (mild, moderate and severe), depending on the distribution of the exudates in the retina. For diabetic retinopathy screening or a population-based clinical study, a large number of digital fundus images are captured, and to make it possible to recognize the signs of DR and DME, the images must be of adequate quality, because low-quality images may force the patient to return for a second examination, wasting time and possibly delaying treatment. These images are evaluated by trained human experts, which can be a time-consuming and expensive task due to the number of images that need to be examined. Therefore, this is a field that would benefit greatly from the development of automated eye-fundus quality assessment and analysis systems. Such systems could facilitate health care in remote regions and in developing countries where image-reading expertise is scarce. Deep Learning is a kind of Machine Learning method that involves learning multi-level representations, beginning with the raw data input and moving gradually to more abstract levels through non-linear transformations. With enough training data and sufficiently deep architectures, neural networks such as Convolutional Neural Networks (CNNs) can learn very complex functions and discover complex structures in the data. Thus, Deep Learning emerges as a powerful tool for medical image analysis and for the evaluation of retinal image quality in computer-aided diagnosis. Therefore, the aim of this study is to automatically assess each of three quality parameters individually (focus, illumination and color), and then the overall quality of fundus images, classifying the images into the classes "accept" or "reject", with a Deep Learning approach using convolutional neural networks (CNNs). For the overall classification, the following results were obtained: test accuracy = 97.89%, SN = 97.9%, AUC = 0.98 and F1-score = 97.91%.
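    A minimal sketch of what an "accept"/"reject" fundus-quality classifier of this kind could look like, assuming tf.keras and images resized to 224x224 pixels; the layer sizes, optimizer and training call are illustrative defaults, not the architecture or hyperparameters used in the dissertation.

```python
# Hedged sketch of a binary fundus-image quality classifier ("accept" vs "reject").
# Layer sizes and hyperparameters are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_quality_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(accept)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Usage (hypothetical arrays of resized images and 0/1 quality labels):
# model = build_quality_classifier()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```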

    Technology applications

    A summary of NASA Technology Utilization programs for the period of 1 December 1971 through 31 May 1972 is presented. An abbreviated description of the overall Technology Utilization Applications Program is provided as background for the specific application examples. Subjects discussed fall under the broad headings of: (1) cancer, (2) cardiovascular disease, (3) medical instrumentation, (4) urinary system disorders, (5) rehabilitation medicine, (6) air and water pollution, (7) housing and urban construction, (8) fire safety, (9) law enforcement and criminalistics, (10) transportation, and (11) mine safety.

    Affective Brain-Computer Interfaces


    Early sensory attention and pupil size in cognitive control : an EEG approach


    Modelling early visual EEG responses according to the properties of the image stimulus

    Introduction: An experiment consisting of a visual search task was conducted in order to investigate the effects of top-down and bottom-up processing with natural stimuli. Gray-scale photographs of nature scenes were used as stimuli in two conditions: natural and scrambled. Natural images were unaltered photographs, while in scrambled images the global information was reduced. Aim: The aim of the experiment was to form statistical models explaining the amplitude of early visual potentials as functions of visual input, oculomotor variables and top-down factors. Methods: Magnetoencephalography (MEG), electroencephalography (EEG) and eye tracking were recorded simultaneously during the experiment. Within the scope of this thesis, only the EEG and eye-tracking data were analysed. Statistical models were generated using linear mixed-effects modelling. Results: The data analysis produced two models describing the early visual potential as a function of visual input and of oculomotor variables, respectively. The effect of top-down processing was investigated with an additional statistical test. Conclusion: Of the two generated models, the visual-input model was deemed more accurate due to its spatial focality and amplitude latency. The results of the study indicate that early visual responses in EEG correlate strongly with low-level visual input and, to a lesser degree, with oculomotor variables. No evidence of a correlation between response amplitude and top-down factors was observed.
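    The linear mixed-effects step described above can be sketched with statsmodels, modelling single-trial response amplitude from low-level image features and saccade metrics, with subject as a random effect. The column names and the file trials.csv are hypothetical, chosen only to illustrate the modelling approach; they are not the predictors used in the thesis.

```python
# Hedged sketch of a linear mixed-effects model for fixation-locked EEG amplitudes.
# One row per fixation; column names and file are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("trials.csv")   # columns: amplitude, luminance, contrast,
                                     #          saccade_amplitude, subject

# Fixed effects for visual input and oculomotor variables, random intercept per subject.
model = smf.mixedlm("amplitude ~ luminance + contrast + saccade_amplitude",
                    data=trials, groups=trials["subject"])
result = model.fit()
print(result.summary())
```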