130 research outputs found

    Exploiting Cross Domain Relationships for Target Recognition

    Get PDF
    Cross-domain recognition extracts knowledge from one domain to recognize samples from another domain of interest. The key to solving problems under this umbrella is to find the latent connections between different domains. In this dissertation, three cross-domain recognition problems are studied by explicitly exploiting the relationships between domains according to the specific real-world problem. First, the problem of cross-view action recognition is studied. The same action can look quite different when observed from different viewpoints, so the key question is how to use training samples from a given camera view to perform recognition in a new view. In this work, reconstructable paths between views are built to mirror labeled actions from a source view into a target view for learning an adaptable classifier. The path learning takes advantage of joint dictionary learning techniques and exploits hidden information in seemingly useless samples, making recognition robust and effective. Second, the problem of person re-identification is studied, which aims to match pedestrian images across non-overlapping camera views based on appearance features. In this work, we propose to learn a random kernel forest that discriminatively assigns a specific distance metric to each pair of local patches being matched between the two images. The forest is composed of multiple decision trees designed to partition the overall space of local patch pairs into subspaces, in each of which a simple but effective local metric kernel can be defined to minimize the distance between true matches. Third, the problem of multi-event detection and recognition in the smart grid is studied. A multi-event signal may not be a straightforward combination of single-event signals because of correlations among devices. In this work, the concept of a ''root-pattern'' is proposed; root-patterns can be extracted from a collection of single-event signals and are also transferable to analyzing the constituent components of multi-cascading-event signals, based on an over-complete dictionary designed according to the root-patterns with temporal information subtly embedded. The correctness and effectiveness of the proposed approaches have been evaluated by extensive experiments.
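    As an illustration of the root-pattern idea, the following is a minimal sketch, under stated assumptions, of decomposing a multi-event signal against an over-complete dictionary: the dictionary D of single-event "root-pattern" signatures, the weights, and the Lasso-based sparse coding are hypothetical stand-ins rather than the dissertation's exact method.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)

        # Hypothetical dictionary: each column is one single-event "root-pattern" signature.
        n_samples, n_atoms = 200, 8
        D = rng.standard_normal((n_samples, n_atoms))

        # A multi-event signal assumed to be a sparse combination of two root-patterns plus noise.
        true_coef = np.zeros(n_atoms)
        true_coef[[1, 5]] = [1.0, 0.7]
        signal = D @ true_coef + 0.01 * rng.standard_normal(n_samples)

        # Sparse decomposition: which root-patterns, with what weights, explain the signal?
        lasso = Lasso(alpha=0.05, fit_intercept=False).fit(D, signal)
        active = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
        print("constituent root-patterns:", active, "weights:", lasso.coef_[active])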

    False Data Injection Attacks in Smart Grids: State of the Art and Way Forward

    Full text link
    In recent years, cyberattacks on smart grids have become more frequent. Among the many malicious activities that can be launched against smart grids, False Data Injection (FDI) attacks have raised significant concern in both academia and industry. FDI attacks can affect the internal state estimation process, which is critical for smart grid monitoring and control, and can thus bypass conventional Bad Data Detection (BDD) methods. Hence, prompt detection and precise localization of FDI attacks is becoming of paramount importance for ensuring smart grid security and safety. Several recent papers have started to study and analyze this topic from different perspectives and to address existing challenges. Data-driven techniques and mathematical modeling are the major ingredients of the proposed approaches. The primary objective of this work is to provide a systematic review of, and insights into, joint detection and localization approaches for FDI attacks, considering that other surveys have mainly concentrated on detection without detailed coverage of localization. For this purpose, we select and inspect more than forty major research contributions, conducting a detailed analysis of their methodology and objectives in relation to FDI attack detection and localization. We summarize our key findings on the identified papers according to criteria such as the employed FDI attack localization techniques, utilized evaluation scenarios, investigated FDI attack types, application scenarios, adopted methodologies, and the use of additional data. Finally, we discuss open issues and future research directions.
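    To make the BDD-bypass property concrete, here is a toy sketch assuming a DC state-estimation model z = Hx + e: a chi-square residual test flags random bad data, while a structured injection a = Hc leaves the residual unchanged. The matrices, noise level and threshold are illustrative assumptions, not drawn from any surveyed paper.

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(1)

        # Toy DC state estimation: z = H x + noise, solved by least squares.
        m, n = 8, 3                      # measurements, states
        H = rng.standard_normal((m, n))
        x_true = rng.standard_normal(n)
        sigma = 0.01
        z = H @ x_true + sigma * rng.standard_normal(m)

        def chi_square_statistic(z):
            x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
            r = z - H @ x_hat                       # measurement residual
            return np.sum((r / sigma) ** 2)         # ~ chi-square with m - n dof

        threshold = chi2.ppf(0.99, df=m - n)
        print("clean data flagged:   ", chi_square_statistic(z) > threshold)

        # Stealthy FDI attack: inject a = H c so the residual, and hence BDD, is unaffected.
        c = np.array([0.5, -0.2, 0.1])
        print("attacked data flagged:", chi_square_statistic(z + H @ c) > threshold)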

    Audio source separation for music in low-latency and high-latency scenarios

    Get PDF
    This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, which are crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore the use of temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
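    As a rough illustration of Tikhonov-regularized spectrum decomposition, the sketch below solves argmin_a ||x - Ba||^2 + lam*||a||^2 in closed form for one spectral frame; the template matrix B, the observed frame x and the regularization weight are hypothetical placeholders, not the thesis's actual configuration.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical basis of spectral templates (columns) and one observed magnitude-spectrum frame.
        n_bins, n_templates = 513, 40
        B = np.abs(rng.standard_normal((n_bins, n_templates)))
        x = np.abs(rng.standard_normal(n_bins))

        # Tikhonov-regularized decomposition: a = (B^T B + lam I)^(-1) B^T x.
        # The closed-form per-frame solve keeps computational cost low, which suits low-latency use.
        lam = 0.1
        a = np.linalg.solve(B.T @ B + lam * np.eye(n_templates), B.T @ x)

        print("activation vector shape:", a.shape,
              "frame reconstruction error:", np.linalg.norm(x - B @ a))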

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Full text link
    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    Interactions of Viral and Cellular Helicases

    Get PDF
    The innate immune system is part of the first line of defense against virus infection. An important subset of the innate immune system consists of a group of intracellular pattern recognition receptors (PRRs) which recognize conserved features of bacteria and viruses and initiate an interferon response. The RIG-I-like receptors (RLRs) are PRRs that bind to RNA viruses (such as hepatitis C virus) and signal through the adaptor mitochondrial antiviral signaling protein (MAVS). Hepatitis C virus (HCV) is a small enveloped RNA virus that belongs to the Flaviviridae family. HCV infects hepatocytes and can cause a persistent infection. If a chronic infection is established, progressive liver damage along with cirrhosis and sometimes hepatocellular carcinoma may occur. The multi-functional HCV non-structural protein 3 (NS3) is essential for HCV replication and contains covalently linked protease and helicase/ATPase domains. A covalently linked protease and helicase is unique to the Flaviviridae family, and it is unclear why the two functions are linked. There are multiple effective direct-acting antivirals which target the protease, but none currently approved which inhibit the helicase. In addition to aiding viral replication, the NS3 protease helps HCV establish a persistent infection by cleaving the innate immune RIG-I adaptor protein, MAVS. The purpose of the studies contained in this thesis is to gain a greater understanding of the function and purpose of the covalently linked HCV NS3 protease and helicase. Förster resonance energy transfer (FRET) is used to explore the interaction of NS3 with RIG-I-like receptor proteins and the interactions of the RLRs with one another. The interaction of the NS3 protease and helicase domains was probed by exploring the mechanism of action of an NS3 inhibitor (HPI) which inhibits both the protease and helicase functions of NS3 without disrupting ATPase activity. The activity of HPI was determined in vitro using a fluorescent protease cleavage assay and a fluorescent helicase unwinding assay; HPI inhibits both functions with low micromolar EC50 values. Next, analysis of HPI inhibition of peptide hydrolysis by wild-type NS3 and a set of NS3 mutants, with mutations in the protease domain, the helicase domain, and the allosteric groove between them, suggested that HPI forms a bridge between the NS3 helicase RNA-binding site and the allosteric groove between the protease and helicase domains. The activity of HPI was also measured in cells using an HCV sub-genomic replicon tagged with a luciferase reporter, testing inhibition by HPI alone and in the presence of other protease inhibitors. HPI inhibits the HCV genotype 1b sub-genomic replicon, and when applied in conjunction with the first-generation protease inhibitors telaprevir and boceprevir, the inhibition was additive, as defined by the Bliss independence model. However, when HPI was used in conjunction with the macrocyclic protease inhibitors danoprevir and grazoprevir, modest synergy was observed. To examine the protein-protein interactions of the NS3 helicase and the RIG-I-like receptor helicases in live cells, a series of quantitative FRET spectrometry studies was employed.
    Quantitative micro-spectroscopic imaging (Q-MSI) is a technique which uses a fluorescent dye or fluorescent protein to identify sub-cellular regions and then calculates FRET efficiency and the concentrations of the donor and acceptor proteins. The technique was first applied in vitro with a fluorescently tagged NS3 helicase and fluorescently tagged DNA molecules. Next, the technique was applied to combinations of recombinant fluorescently tagged helicases expressed in HEK293T cells. The NS3 helicase, the RIG-I-like receptor helicases, the DDX1, DDX3, and DDX5 helicases, and MAVS were all expressed from plasmids that also encode and attach a fluorescent protein. The fluorescent proteins used were cyan fluorescent protein (CFP), enhanced green fluorescent protein-2 (GFP2), yellow fluorescent protein (YFP) or Venus fluorescent protein, and each combination included a donor (CFP or GFP2) and an acceptor (YFP or Venus). The combinations were tested in the presence or absence of polyinosinic-polycytidylic acid (poly I:C), a synthetic RNA analog capable of eliciting an RLR response. To localize the interaction to the mitochondria, the mitochondrial stain MitoTracker Red was used in some experiments. The experiments revealed a previously unknown, and possibly biologically relevant, interaction between NS3 and the RLR protein laboratory of genetics and physiology protein-2 (LGP2). In addition, relocation of LGP2 cytoplasmic foci was observed in cells over-expressing DDX3. Q-MSI was also used to visualize previously known interactions of RLRs at the mitochondria and in conjunction with MAVS.
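    For orientation only, the snippet below evaluates the textbook FRET relations (donor-quenching efficiency E = 1 - F_DA/F_D and the distance dependence E = 1/(1 + (r/R0)^6)); the intensities and Förster radius are made-up values and the code is not the Q-MSI pipeline used in this thesis.

        import numpy as np

        def fret_efficiency(donor_with_acceptor, donor_alone):
            """Apparent FRET efficiency from donor quenching: E = 1 - F_DA / F_D."""
            return 1.0 - donor_with_acceptor / donor_alone

        def donor_acceptor_distance(E, r0_nm):
            """Separation implied by E = 1 / (1 + (r / R0)^6)."""
            return r0_nm * ((1.0 / E) - 1.0) ** (1.0 / 6.0)

        # Hypothetical intensities for a CFP/YFP-style pair and an assumed Forster radius of 5 nm.
        E = fret_efficiency(donor_with_acceptor=620.0, donor_alone=1000.0)
        print(f"E = {E:.2f}, approximate donor-acceptor separation = "
              f"{donor_acceptor_distance(E, 5.0):.2f} nm")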

    Contributions to Ensemble Classifiers with Image Analysis Applications

    Get PDF
    134 p.
    This thesis has two main facets: on the one hand, the proposal of new classifier architectures and, on the other, their application to image analysis. From the standpoint of proposing new classification architectures, the thesis makes two main contributions. The first is an innovative ensemble of classifiers built on randomized architectures, such as Extreme Learning Machines (ELM), Random Forest (RF) and Rotation Forest, called the Hybrid Extreme Rotation Forest (HERF), together with its improvement, the Anticipative HERF (AHERF), which performs model selection based on prediction performance for each specific dataset. In addition, we provide formal proofs for AHERF and for the convergence of ensembles of ELM regressors, improving the usability and reproducibility of the results. On the application side, we have worked with two types of images: hyperspectral remote sensing images and medical images, covering both specific blood-vessel pathologies and images for the diagnosis of Alzheimer's disease. In all cases classifier ensembles have been the common tool, together with specific active learning strategies built on those ensembles. In the particular case of blood-vessel segmentation we addressed two problems: one concerning the thrombus of Abdominal Aortic Aneurysms in 3D computed tomography images, and the other the segmentation of blood vessels in the retina. The results in both cases, in terms of classification performance and of time saved with respect to manual segmentation, allow us to recommend these approaches for clinical practice.
    Chapter 1: Background and contributions. Given the limited space available for this thesis summary, we include a general overview of the most important points, a short introduction that can serve as background for the basic concepts of each topic addressed, and a list of the most important contributions.
    1.1 Classifier ensembles. The idea of classifier ensembles was proposed by Hansen and Salamon [4] in the context of artificial neural network learning. Their work showed that an ensemble of neural networks with a group consensus scheme could improve the result obtained with a single neural network. Classifier ensembles seek better classification results by combining weak, diverse classifiers [8, 9]. The initial ensemble proposal consisted of a homogeneous collection of individual classifiers. Random Forest is a clear example: it combines the outputs of a collection of decision trees through majority voting [2, 3], and it is built using a resampling technique over the dataset together with random feature selection.
    1.2 Active learning. Building a supervised classifier consists of learning a mapping from data to a given set of classes from a labeled training set. In many real-life situations, obtaining the labels of the training set is costly, slow and error-prone, which makes building the training set a cumbersome task requiring exhaustive manual analysis of the image. This is normally done by visual inspection of the images and pixel-by-pixel labeling. As a consequence, the training set is highly redundant and makes the model training phase very slow. Moreover, noisy pixels can interfere with the statistics of each class, which may lead to classification errors and/or overfitting. It is therefore desirable to build the training set in an intelligent way, meaning that it should correctly represent the class boundaries by sampling discriminant pixels. Generalization is the ability to correctly label previously unseen data that are therefore new to the model. Active learning exploits the interaction with a user, who provides the labels of the training samples, with the goal of obtaining the most accurate classification using the smallest possible training set.
    1.3 Alzheimer's disease. Alzheimer's disease is one of the most important causes of disability in the elderly. Given the population ageing that is a reality in many countries, with increasing life expectancy and a growing number of older people, the number of patients with dementia will also increase. Because of the socio-economic importance of the disease in Western countries, there is a strong international research effort focused on Alzheimer's disease. In the early stages of the disease, cerebral atrophy is usually subtle and spatially distributed over different brain regions, including the entorhinal cortex, the hippocampus, the lateral and inferior temporal structures, and the anterior and posterior cingulate. Many computational algorithms have been designed in the attempt to find imaging biomarkers that can be used for the non-invasive diagnosis of Alzheimer's and other neurodegenerative diseases.
    1.4 Blood-vessel segmentation. Blood-vessel segmentation [1, 7, 6] is one of the essential computational tools for the clinical assessment of vascular diseases. It consists of partitioning an angiogram into two non-overlapping regions: the vascular region and the background. Based on this partition, the vascular surfaces can be extracted, modeled, manipulated, measured and visualized. These structures are very useful and play a very important role in the endovascular treatment of vascular diseases, which are one of the main sources of morbidity and mortality worldwide. Abdominal Aortic Aneurysm: the Abdominal Aortic Aneurysm (AAA) is a local dilation of the aorta occurring between the renal and iliac arteries. Weakening of the aortic wall leads to its deformation and the generation of a thrombus. Generally, an AAA is diagnosed when the minimum antero-posterior diameter of the aorta reaches 3 centimeters [5]. Most aortic aneurysms are asymptomatic and without complications. Aneurysms that cause symptoms have a higher risk of rupture. Abdominal pain or back pain are the two main clinical features suggesting either recent expansion or leakage. Complications are often a matter of life or death and can occur within a short period of time. The challenge is therefore to diagnose the onset of symptoms as early as possible. Retinal images: the assessment of fundus images is a diagnostic tool for vascular and non-vascular pathology. Such an inspection can reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. The main challenges for retinal vessel segmentation are: (1) the presence of lesions that can be wrongly interpreted as blood vessels; (2) low contrast around the thinnest vessels; and (3) the multiple scales of vessel size.
    1.5 Contributions. This thesis has two types of contributions: computational contributions and application-oriented or practical contributions. From a computational point of view, the contributions are the following:
    - A new active learning scheme using Random Forest and uncertainty computation that allows fast, accurate and interactive image segmentation (see the sketch after this list).
    - The Hybrid Extreme Rotation Forest.
    - The Adaptative Hybrid Extreme Rotation Forest.
    - Spectral-spatial semi-supervised learning methods.
    - Non-linear unmixing and reconstruction using ensembles of ELM regressors.
    From a practical point of view:
    - Medical imaging:
      - Active learning combined with HERF for the segmentation of computed tomography images.
      - Improving active learning for computed tomography image segmentation with domain information.
      - Active learning with the bootstrapped dendritic classifier applied to medical image segmentation.
      - Meta-ensembles of classifiers for Alzheimer's detection with magnetic resonance images.
      - Random Forest combined with active learning for retinal image segmentation.
      - Automatic segmentation of subcutaneous and visceral fat using magnetic resonance imaging.
    - Hyperspectral imaging:
      - Non-linear unmixing and reconstruction using ensembles of ELM regressors.
      - Spectral-spatial semi-supervised learning methods with spatial correction using AHERF.
      - A semi-supervised classification method using ensembles of ELMs with spatial regularization.
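    The sketch below is a generic uncertainty-sampling loop with a Random Forest, included only to make the active-learning contribution concrete; the synthetic dataset, margin criterion and batch size are assumptions and do not reproduce the thesis's scheme.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

        labeled = list(rng.choice(len(X), size=20, replace=False))   # tiny initial training set
        pool = [i for i in range(len(X)) if i not in labeled]

        for _ in range(10):                                          # interactive labeling rounds
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[labeled], y[labeled])

            # Uncertainty = small margin between the two most probable classes over the tree votes.
            proba = np.sort(clf.predict_proba(X[pool]), axis=1)
            margins = proba[:, -1] - proba[:, -2]
            query = [pool[i] for i in np.argsort(margins)[:10]]      # ask the user for 10 labels

            labeled.extend(query)
            pool = [i for i in pool if i not in query]

        print("accuracy on the remaining pool:", clf.score(X[pool], y[pool]))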

    High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework

    Get PDF
    High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification. In this dissertation, we present a progressive multiple-target detection approach that estimates the number of targets sequentially and implement it using a mobile-agent framework. To further improve performance, we present a cluster-based distributed approach in which the estimated results from different clusters are fused. Experimental results show that the distributed scheme with Bayesian fusion performs best, in the sense that it has the highest detection probability and the most stable performance. In addition, the progressive intra-cluster estimation can reduce data transmission by 83.22% and conserve energy by 81.64% compared to the centralized scheme. For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement of classification accuracy across the levels of the hierarchy. Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information and of modifying the knowledge base dynamically. In this dissertation, we present a collaborative method to solve this problem among multiple sensors. When applied to the military-vehicle dataset collected in a field demo, about 80% of the unknown target samples can be recognized correctly, while the known-target classification accuracy stays above 95%.
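    As a toy illustration of the cluster-level Bayesian fusion step, the sketch below combines hypothetical per-cluster posteriors over the target count under a conditional-independence assumption with a uniform prior; the numbers are invented for illustration and do not come from the field demos.

        import numpy as np

        # Hypothetical per-cluster posteriors over the number of targets (0..3),
        # e.g. as produced by each cluster's progressive estimation step.
        cluster_posteriors = np.array([
            [0.05, 0.15, 0.70, 0.10],
            [0.10, 0.20, 0.60, 0.10],
            [0.02, 0.08, 0.75, 0.15],
        ])

        # Bayesian fusion with a uniform prior and clusters treated as conditionally independent:
        # the fused posterior is proportional to the product of the cluster posteriors.
        fused = np.prod(cluster_posteriors, axis=0)
        fused /= fused.sum()

        print("fused posterior:", np.round(fused, 3))
        print("estimated number of targets:", int(np.argmax(fused)))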

    Statistical Analysis of Disturbances in Power Transmission Systems

    Get PDF
    Disturbance analysis is essential to the study of power transmission systems. Traditionally, disturbances are described as megawatt (MW) events, but access to such data is inefficient due to the slow installation and authorization process for the monitoring devices. In this paper, we propose a novel approach to disturbance analysis conducted at the distribution level by exploiting the frequency recordings from Frequency Disturbance Recorders (FDRs) of the Frequency Monitoring Network (FNET/GridEye), based on the relationship between the frequency change and the power loss of a disturbance, which are linearly related through the Frequency Response. We first analyze the real disturbance records of North America (1992 to 2009), confirm their power-law distribution, and discover that small disturbances are log-normally distributed. Then, based on real EI records from 2011 to 2013, the disturbances in megawatts and the corresponding frequency-change records are studied in parallel. We show that the frequency change of disturbances and the corresponding megawatt records share a similar power-law distribution when the disturbances are large, and that the frequency change can be described by a log-normal distribution with numerically approximated coefficients when the disturbances are small. Meanwhile, activities such as FIDVR in the power system, reflected as voltage signature patterns recorded at the transmission level, are worth studying, since each pattern corresponds to a certain type of behavior. Pattern recognition is used for this problem. Initially, the records are preprocessed by eliminating ineligible records and rescaling. Feature extraction is applied to obtain a better representation of the signature dataset through amplitude statistics, the wavelet transform and the Fourier transform. With the extracted features, k-means, an unsupervised clustering algorithm, is used to generate root patterns; furthermore, we use a heuristic selection to remove mis-classified patterns. The extracted root patterns then serve as the training dataset for a support vector machine (SVM). After the kernel parameters of the SVM are optimized, a subset of voltage signature records is generated as a testing dataset, on which the performance of the SVM is evaluated. With all patterns we achieve a multi-label classification accuracy of 80.12%; when considering only dominant patterns, the accuracy reaches 86.20%.
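    To illustrate the two distributional claims, the sketch below fits a log-normal body and estimates a power-law tail exponent on synthetic disturbance magnitudes; the data, the 500 MW and 800 MW split points, and the Hill-style estimator are assumptions for demonstration, not the paper's records or procedure.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Synthetic disturbance magnitudes in MW (a stand-in for the FNET/GridEye-derived records).
        small_events = rng.lognormal(mean=4.0, sigma=0.6, size=3000)    # log-normal body
        large_events = 800.0 * (1 + rng.pareto(a=1.8, size=300))        # heavy power-law tail
        magnitudes = np.concatenate([small_events, large_events])

        # Log-normal fit for the small-disturbance regime.
        shape, loc, scale = stats.lognorm.fit(magnitudes[magnitudes < 500.0], floc=0)
        print(f"log-normal fit: sigma = {shape:.2f}, median = {scale:.1f} MW")

        # Power-law tail exponent via the continuous MLE alpha = 1 + n / sum(log(x / xmin)).
        xmin = 800.0
        tail = magnitudes[magnitudes >= xmin]
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
        print(f"power-law tail exponent: alpha = {alpha:.2f} over {len(tail)} events")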

    Predictability of epileptic seizures by fusion of scalp EEG and fMRI

    Get PDF
    The systems for prediction of epileptic seizures investigated in recent years mainly rely on traditional nonlinear analysis of brain signals from intracranial electroencephalography (EEG) recordings. The overall objective of this work is to investigate the predictability of seizures from scalp signals by applying effective blind source separation (BSS) techniques to scalp EEGs, in which the epileptic seizures are considered independent components of the scalp EEG. The ultimate goal of the work is to pave the way for epileptic seizure prediction from the scalp EEG. The main contributions of this research are summarized as follows. Firstly, a novel constrained topographic independent component analysis (CTICA) algorithm is developed for improved separation of epileptic seizure signals. The CTICA model is more suitable for brain signal separation because it relaxes the independence assumption: source signals that are geometrically close to each other are assumed to have some dependencies. By incorporating spatial and frequency information of the seizure signals as constraints, CTICA achieves better performance in separating the seizure signals than conventional ICA methods. Secondly, the predictability of seizures is investigated. A traditional method for quantifying the nonlinear dynamics of time series is employed to quantify the level of chaos of the estimated sources. Simultaneously recorded intracranial and scalp EEGs are used for comparison of the results. The experimental results demonstrate that the separated seizure sources have a transition trend similar to that obtained from the intracranial EEGs. Thirdly, simultaneously recorded EEG and functional Magnetic Resonance Imaging (fMRI) are studied in order to validate the activated brain areas related to the seizure sources. An effective method to remove fMRI scanner artifacts from the scalp EEG is established by applying a blind source extraction (BSE) algorithm; the results show that the effect of fMRI scanner artifacts on the scalp EEG recordings is reduced. Finally, a data-driven model, spatial ICA (SICA) with the EEG as a temporal constraint, is proposed in order to detect the Blood Oxygen Level Dependent (BOLD) response from the seizure fMRI. In contrast to the popular model-driven General Linear Model (GLM), SICA does not rely on any predefined hemodynamic response function; it is based on the fact that brain areas executing different tasks are spatially independent, and it is therefore well suited to non-event-related fMRI analysis such as seizure fMRI. By incorporating temporal information from the EEG as a constraint, the superiority of the proposed constrained SICA is validated in terms of better algorithm convergence and a higher correlation between the component time courses and the seizure EEG signals, as compared to plain SICA.
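    The sketch below is plain FastICA on a simulated mixture, included only to show the blind-source-separation setting in which a seizure-like rhythm is treated as an independent component of the scalp channels; it is not the constrained topographic ICA (CTICA) developed in the thesis, and the sources and mixing matrix are synthetic.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)
        t = np.linspace(0, 8, 2000)

        # Toy "sources": a late-onset 3 Hz seizure-like burst, an ongoing 10 Hz rhythm, and noise.
        seizure = np.sin(2 * np.pi * 3 * t) * (t > 5)
        background = np.sin(2 * np.pi * 10 * t)
        noise = 0.3 * rng.standard_normal(len(t))
        S = np.column_stack([seizure, background, noise])

        # Linear mixing onto six simulated scalp electrodes, then blind source separation.
        A = rng.standard_normal((6, 3))
        X = S @ A.T

        ica = FastICA(n_components=3, random_state=0)
        S_hat = ica.fit_transform(X)          # estimated independent components
        print("recovered component matrix shape:", S_hat.shape)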