5,009 research outputs found

    Improving the performance of translation wavelet transform using BMICA

    Research has shown the Wavelet Transform to be one of the best methods for denoising biosignals, and its Translation-Invariant form has been found to deliver the best performance. In this paper we merge this method with our newly created Independent Component Analysis method, BMICA. Different EEG signals are used to verify the method within the MATLAB environment. Results are then compared with those of the original Translation-Invariant algorithm and evaluated using the performance measures Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), Signal to Distortion Ratio (SDR), and Signal to Interference Ratio (SIR). Experiments revealed that the BMICA Translation-Invariant Wavelet Transform outperformed the basic Translation-Invariant Wavelet Transform algorithm in all four measures, producing cleaner EEG signals that can benefit diagnosis as well as clinical studies of the brain.
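Two of the four measures used above, MSE and PSNR, have simple closed forms. A minimal sketch in Python/NumPy (rather than the paper's MATLAB environment), with a toy sinusoid standing in for an EEG signal:

```python
import numpy as np

def mse(clean, denoised):
    """Mean Square Error between the reference and denoised signals."""
    clean = np.asarray(clean, dtype=float)
    denoised = np.asarray(denoised, dtype=float)
    return float(np.mean((clean - denoised) ** 2))

def psnr(clean, denoised):
    """Peak Signal to Noise Ratio in dB; higher means less residual noise."""
    err = mse(clean, denoised)
    peak = float(np.max(np.abs(clean)))
    return float(10.0 * np.log10(peak ** 2 / err))

# Toy example: a sinusoid with a small residual error after "denoising".
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))
denoised = clean + 0.01 * rng.standard_normal(512)
print(mse(clean, denoised), psnr(clean, denoised))
```

A denoiser that scores lower MSE and higher PSNR against a reference recording is leaving less residual noise, which is the sense in which the paper's comparison is made.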

    Compression algorithms for biomedical signals and nanopore sequencing data

    The massive generation of biological digital information creates various computing challenges such as its storage and transmission. For example, biomedical signals, such as electroencephalograms (EEG), are recorded by multiple sensors over long periods of time, resulting in large volumes of data. Another example is genome DNA sequencing data, where the amount of data generated globally is seeing explosive growth, leading to increasing needs for processing, storage, and transmission resources. In this thesis we investigate the use of data compression techniques for this problem, in two different scenarios where computational efficiency is crucial. First, we study the compression of multi-channel biomedical signals. We present a new lossless data compressor for multi-channel signals, GSC, which achieves compression performance similar to the state of the art, while being more computationally efficient than other available alternatives. The compressor uses two novel integer-based implementations of the predictive coding and expert advice schemes for multi-channel signals. We also develop a version of GSC optimized for EEG data. This version manages to significantly lower compression times while attaining similar compression performance for that specific type of signal. In a second scenario we study the compression of DNA sequencing data produced by nanopore sequencing technologies. We present two novel lossless compression algorithms specifically tailored to nanopore FASTQ files. ENANO is a reference-free compressor, which mainly focuses on the compression of quality scores. It achieves state of the art compression performance, while being fast and having low memory consumption compared to other popular FASTQ compression tools. On the other hand, RENANO is a reference-based compressor, which improves on ENANO by providing a more efficient base call sequence compression component.
For RENANO, two algorithms are introduced, corresponding to the following scenarios: a reference genome is available without cost to both the compressor and the decompressor; and the reference genome is available only on the compressor side, and a compacted version of the reference is included in the compressed file. Both algorithms of RENANO significantly improve on the compression performance of ENANO, with similar compression times and higher memory requirements.
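The predictive-coding idea behind a lossless multi-channel compressor like GSC can be illustrated with a deliberately simplified sketch: an order-1 integer predictor per channel whose residuals are small in magnitude and therefore cheap to entropy code. This is a toy under stated assumptions, not GSC's actual scheme (which combines predictive coding with expert advice):

```python
import numpy as np

def predict_residuals(x):
    """Order-1 integer predictor per channel: predict each sample by the
    previous one and keep the (typically small) integer residuals.
    x: (channels, samples) integer array."""
    x = np.asarray(x, dtype=np.int64)
    pred = np.zeros_like(x)
    pred[:, 1:] = x[:, :-1]      # previous-sample prediction
    return x - pred              # residuals to be entropy coded

def reconstruct(residuals):
    """Lossless inverse: a cumulative sum recovers the original samples."""
    return np.cumsum(residuals, axis=1)

# A smooth random walk stands in for a slowly varying biomedical signal.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.integers(-3, 4, size=(4, 1000)), axis=1)
res = predict_residuals(walk)
assert np.array_equal(reconstruct(res), walk)   # perfectly lossless
# Residuals have far smaller magnitude than the raw samples, so an
# entropy coder can represent them in fewer bits:
print(np.abs(walk).mean(), np.abs(res).mean())
```

Because integer arithmetic is exact, the round trip is lossless by construction; the compression gain comes entirely from the residuals being concentrated near zero.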

    Low-complexity algorithms for automatic detection of sleep stages and events for use in wearable EEG systems

    Objective: Diagnosis of sleep disorders is an expensive procedure that requires performing a sleep study, known as polysomnography (PSG), in a controlled environment. This study monitors the neural, eye and muscle activity of a patient using electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals, which are then scored into different sleep stages. Home PSG is often cited as an alternative to clinical PSG to make it more accessible; however, it still requires patients to use a cumbersome system with multiple recording channels that need to be precisely placed. This thesis proposes a wearable sleep staging system using a single channel of EEG. For realisation of such a system, this thesis presents novel features for REM sleep detection from EEG (normally detected using EMG/EOG), a low-complexity automatic sleep staging algorithm using a single EEG channel, and its complete integrated circuit implementation. Methods: The difference between Spectral Edge Frequencies (SEF) at 95% and 50% in the 8-16 Hz frequency band is shown to have high discriminatory ability for detecting REM sleep stages. This feature, together with other spectral features from single-channel EEG, is used with a set of decision trees controlled by a state machine for classification. The hardware for the complete algorithm is designed using low-power techniques and implemented on chip using a 0.18μm process node technology. Results: The use of SEF features from one channel of EEG resulted in 83% of REM sleep epochs being correctly detected. The automatic sleep staging algorithm, based on contextually aware decision trees, resulted in an accuracy of up to 79% on a large dataset. Its hardware implementation, which is also the very first complete circuit level implementation of any sleep staging algorithm, resulted in an accuracy of 98.7% with great potential for use in fully wearable sleep systems.
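The SEF95 − SEF50 feature described above is straightforward to compute from a power spectrum. A hedged sketch (the thesis's exact spectral estimator and windowing are not specified here; `sef_diff_feature` is an illustrative name, and the test signal is synthetic):

```python
import numpy as np

def spectral_edge_frequency(x, fs, edge, band=(8.0, 16.0)):
    """Frequency below which a fraction `edge` (e.g. 0.95) of the
    power in `band` lies, from a plain periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f, p = freqs[mask], power[mask]
    cum = np.cumsum(p) / np.sum(p)           # normalised cumulative power
    return float(f[np.searchsorted(cum, edge)])

def sef_diff_feature(x, fs):
    """SEF95 - SEF50 in the 8-16 Hz band, the proposed REM feature."""
    return (spectral_edge_frequency(x, fs, 0.95)
            - spectral_edge_frequency(x, fs, 0.50))

# Synthetic 30 s epoch: a strong 10 Hz tone plus a weaker 14 Hz tone.
fs = 128.0
t = np.arange(0, 30.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t) + 0.3 * np.sin(2 * np.pi * 14.0 * t)
print(sef_diff_feature(x, fs))
```

With most of the band power at 10 Hz and a minority at 14 Hz, SEF50 lands on the dominant tone while SEF95 is pulled up to the weaker one, so their difference captures how spread the in-band power is.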

    A Hybrid Fuzzy Cognitive Map/Support Vector Machine Approach for EEG-Based Emotion Classification Using Compressed Sensing

    © 2018, Taiwan Fuzzy Systems Association and Springer-Verlag GmbH Germany, part of Springer Nature. Due to the high dimensional, non-stationary and non-linear properties of the electroencephalogram (EEG), much about EEG analysis remains unexplored. In this paper, a novel approach to EEG-based human emotion study is presented using Big Data methods with a hybrid classifier. An EEG dataset is first compressed using compressed sensing, then wavelet transform features are extracted, and a hybrid Support Vector Machine (SVM) and Fuzzy Cognitive Map classifier is designed. The compressed data is only one-fourth of the original size, and the hybrid classifier achieves an average accuracy of 73.32%. Compared to a single SVM classifier, the average accuracy is improved by 3.23%. These outcomes show that physiological signals can be compressed without an explicit sparsity assumption. The stable and high accuracy classification system demonstrates that EEG signals can detect human emotion, and the findings further support the existence of inter-relationships between various regions of the brain.
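The "one-fourth of the original size" compression step can be sketched as a random Gaussian projection, the standard compressed-sensing measurement model; whether the paper uses exactly this measurement matrix is an assumption, and the signal below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1024                      # original epoch length (hypothetical)
m = n // 4                    # compressed length: one-fourth of the original

# Random Gaussian measurement matrix, as in standard compressed sensing.
phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)    # stand-in for one EEG epoch
y = phi @ x                   # compressed measurements

print(len(y) / len(x))        # the 4:1 size reduction
# Random projections approximately preserve norms and distances
# (Johnson-Lindenstrauss), which is why a classifier can still be
# trained directly on the compressed measurements:
rel_err = abs(np.linalg.norm(y) - np.linalg.norm(x)) / np.linalg.norm(x)
print(rel_err)
```

Training the SVM/Fuzzy Cognitive Map hybrid on `y` rather than `x` is then a 4× saving in input dimension, at the cost of the small geometric distortion quantified by `rel_err`.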

    Intracranial EEG fluctuates over months after implanting electrodes in human brain.

    OBJECTIVE: Implanting subdural and penetrating electrodes in the brain causes acute trauma and inflammation that affect intracranial electroencephalographic (iEEG) recordings. This behavior and its potential impact on clinical decision-making and algorithms for implanted devices have not been assessed in detail. In this study we aim to characterize the temporal and spatial variability of continuous, prolonged human iEEG recordings. APPROACH: Intracranial electroencephalography from 15 patients with drug-refractory epilepsy, each implanted with 16 subdural electrodes and continuously monitored for an average of 18 months, was included in this study. Time and spectral domain features were computed each day for each channel for the duration of each patient's recording. Metrics to capture post-implantation feature changes and inflexion points were computed on group and individual levels. A linear mixed model was used to characterize transient group-level changes in feature values post-implantation and independent linear models were used to describe individual variability. MAIN RESULTS: A significant decline in features important to seizure detection and prediction algorithms (mean line length, energy, and half-wave), as well as mean power in the Berger and high gamma bands, was observed in many patients over 100 days following implantation. In addition, spatial variability across electrodes declines post-implantation over a similar timeframe. All selected features decreased by 14-50% in the initial 75 days of recording on the group level, and at least one feature demonstrated this pattern in 13 of the 15 patients. Our findings indicate that iEEG signal features demonstrate increased variability following implantation, most notably in the weeks immediately post-implant.
    SIGNIFICANCE: These findings suggest that conclusions drawn from iEEG, both clinically and for research, should account for spatiotemporal signal variability, and that properly assessing the iEEG in patients, depending upon the application, may require extended monitoring.
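Line length and energy, two of the features the study tracks, have simple per-window definitions. A sketch showing how both drop when recorded amplitude declines post-implantation (the 0.7 attenuation factor and the test sinusoid are illustrative only, not data from the study):

```python
import numpy as np

def line_length(x):
    """Sum of absolute sample-to-sample differences over a window;
    a cheap proxy for signal amplitude and complexity."""
    return float(np.sum(np.abs(np.diff(np.asarray(x, dtype=float)))))

def energy(x):
    """Sum of squared samples over a window."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

# One second of a 10 Hz tone as a stand-in for an iEEG window.
fs = 256
t = np.arange(0, 1.0, 1.0 / fs)
baseline = np.sin(2 * np.pi * 10 * t)
attenuated = 0.7 * baseline          # illustrative post-implant decline

# Both features fall with the recorded amplitude, which is why a
# fixed detection threshold tuned early post-implant can drift out
# of calibration as the signal settles.
print(line_length(attenuated) < line_length(baseline))
print(energy(attenuated) / energy(baseline))
```

Energy scales with the square of the attenuation (0.7² = 0.49 here), so it declines faster than line length, which scales roughly linearly.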