
    A Comparison of ICA versus genetic algorithm optimized ICA for use in non-invasive muscle tissue EMG

    Includes bibliographical references. The patent developed by Dr. L. John [1] allows for the detection of deep muscle activation through a combination of specially positioned monopolar surface electromyography (sEMG) electrodes and a blind source separation algorithm. This concept was proved by Morowasi and John [2] in a 12-electrode prototype system around the bicep. The proof of concept showed that it is possible to extract the deep-tissue activity of the brachialis muscle in the upper arm; however, the effect of surface electrode positioning and of the number of electrodes on signal quality remained unclear. This research aims to extend that work: a genetic algorithm (GA) is implemented on top of the Fast Independent Component Analysis (FastICA) algorithm to reduce the number of electrodes needed to isolate the activity of all muscles in the upper arm, including deep tissue. The GA selects electrodes based on the amount of significant information they contribute to the ICA solution; in doing so, it generates a reduced electrode set and identifies alternative electrode positions, allowing a near-optimal electrode configuration to be produced for each user. The benefits of this approach are: 1. The generalized electrode array and the algorithm can select a near-optimal electrode arrangement with very minimal understanding of the underlying anatomy. 2. It can correct for small anatomical differences between test subjects and act as a calibration phase for individuals. As with any design there are also disadvantages: the electrode placement must be specifically customised for each user, and this calibration must initially be conducted with a larger number of electrodes.
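The electrode-selection idea above can be sketched as a small genetic algorithm over binary electrode masks. The thesis scores subsets by their information contribution to the ICA solution; that criterion is not given here, so the fitness below uses a channel-variance proxy with a per-electrode cost, purely as an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, recordings, penalty=0.05):
    """Score an electrode subset. Stand-in criterion: total signal
    variance captured by the selected channels minus a cost per
    electrode. The thesis's actual ICA-information criterion is
    not reproduced here; this proxy is illustrative only."""
    if mask.sum() == 0:
        return -np.inf
    return recordings[mask.astype(bool)].var(axis=1).sum() - penalty * mask.sum()

def ga_select(recordings, n_pop=30, n_gen=60):
    """Truncation-selection GA over binary electrode masks with
    one-point crossover and bit-flip mutation."""
    n_ch = recordings.shape[0]
    pop = rng.integers(0, 2, size=(n_pop, n_ch))
    for _ in range(n_gen):
        scores = np.array([fitness(m, recordings) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: n_pop // 2]]           # keep the best half
        children = parents.copy()
        cut = rng.integers(1, n_ch, size=len(children))
        for i in range(0, len(children) - 1, 2):     # one-point crossover
            c = cut[i]
            children[i, c:], children[i + 1, c:] = (
                parents[i + 1, c:].copy(), parents[i, c:].copy())
        flip = rng.random(children.shape) < 0.03     # bit-flip mutation
        children[flip] ^= 1
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m, recordings) for m in pop])
    return pop[np.argmax(scores)]

# 12 simulated channels: the first 4 carry strong activity, the rest mostly noise
X = rng.normal(0, 0.1, size=(12, 500))
X[:4] += rng.normal(0, 1.0, size=(4, 500))
best = ga_select(X)  # binary mask of retained electrodes
```

Under this proxy, the GA should keep the informative channels and prune most of the noise-only ones, mirroring how the thesis reduces the initial dense array to a near-optimal per-user subset.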

    FPGA Implementation of Blind Source Separation using FastICA

    Fast Independent Component Analysis (FastICA) is a statistical method used to separate signals from an unknown mixture without any prior knowledge of the signals. It has been used in many applications, such as the separation of fetal and maternal electrocardiogram (ECG) signals for pregnant women. This thesis presents a fixed-point implementation of FastICA on a field-programmable gate array (FPGA). The proposed design can separate up to four signals using four sensors. QR decomposition is used to speed up evaluation of the eigenvalues and eigenvectors of the covariance matrix. Moreover, symmetric orthogonalization of the unit-estimation algorithm is implemented using an iterative technique to accelerate the search for higher-order data input. The hardware is implemented on a Xilinx Virtex-5 XC5VLX50T chip. The proposed design can process 128 samples from the four sensors in less than 63 ns when simulated with a 10 MHz clock.
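The iterative symmetric orthogonalization mentioned above avoids an explicit eigendecomposition of W W^T, which is what makes it attractive in hardware. A minimal NumPy sketch of the standard iteration (scale W, then repeat W ← 1.5 W − 0.5 W W^T W until W W^T ≈ I), not the thesis's fixed-point RTL:

```python
import numpy as np

def symmetric_orthogonalize(W, tol=1e-10, max_iter=100):
    """Iterative symmetric orthogonalization as used in FastICA.
    First scale W so all singular values are <= 1, then iterate
    W <- 1.5*W - 0.5*W @ W.T @ W, which drives every singular
    value to 1 (i.e., W W^T -> I) without an eigendecomposition."""
    W = W / np.sqrt(np.linalg.norm(W @ W.T, ord=2))  # divide by largest singular value
    for _ in range(max_iter):
        W = 1.5 * W - 0.5 * W @ W.T @ W
        if np.max(np.abs(W @ W.T - np.eye(len(W)))) < tol:
            break
    return W

rng = np.random.default_rng(1)
W0 = rng.normal(size=(4, 4))      # four units, matching the 4-sensor design
Wo = symmetric_orthogonalize(W0)
```

Each update needs only matrix multiplies and adds, which map directly onto FPGA multiply-accumulate resources; the eigendecomposition it replaces does not.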

    Research on performance enhancement for electromagnetic analysis and power analysis in cryptographic LSI

    System: new; Report number: Kou 3785; Degree type: Doctor of Engineering; Date conferred: 2012/11/19; Waseda University degree record number: Shin 6161. Waseda University

    FPGA Implementation of Low-Complexity ICA Based Blind Multiple-Input-Multiple-Output OFDM Receivers

    In this thesis, Independent Component Analysis (ICA) based methods are used for blind detection in MIMO systems. ICA relies on higher-order statistics (HOS) to recover the transmitted streams from the received mixture; blind separation of the mixture is achieved under the assumption of mutual statistical independence of the source streams. The use of HOS makes ICA methods less sensitive to Gaussian noise. ICA increases spectral efficiency compared to conventional systems, since no training/pilot data are required. ICA is usually used for blind source separation (BSS) of signals from their mixtures by measuring non-Gaussianity with kurtosis. Many scientific problems require floating-point (FP) arithmetic with high precision in their calculations, and a large dynamic range of numbers is necessary for signal processing. FP arithmetic automatically scales numbers and represents a wider range than fixed-point arithmetic. Nevertheless, FP arithmetic is difficult to implement on an FPGA, because its complexity leads to excessive consumption of the FPGA's logic elements. A simplified 32-bit FP implementation includes an adder, subtractor, multiplier, divider, and square rooter. The FPGA design is based on a hierarchical concept, and the experimental results of the design are presented.
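The kurtosis measure of non-Gaussianity referred to above is simple to state: for a zero-mean, unit-variance signal, excess kurtosis is E[x^4] − 3, which vanishes for a Gaussian. A short NumPy illustration (not the thesis's fixed/floating-point hardware):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth-order non-Gaussianity measure: E[x^4] - 3 after
    standardizing to zero mean and unit variance. Zero for a
    Gaussian, positive for super-Gaussian (peaky) sources such as
    speech, negative for sub-Gaussian (flat) sources such as
    uniform noise or digital modulation symbols."""
    x = (x - x.mean()) / x.std()
    return np.mean(x ** 4) - 3.0

rng = np.random.default_rng(2)
gauss = rng.normal(size=100_000)        # excess kurtosis ~ 0
uniform = rng.uniform(-1, 1, size=100_000)  # ~ -1.2 (sub-Gaussian)
laplace = rng.laplace(size=100_000)     # ~ +3 (super-Gaussian)
```

ICA-based blind detection exploits exactly this: transmitted constellations are sub-Gaussian, so maximizing |kurtosis| steers the separating vectors toward the source streams while suppressing Gaussian channel noise.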

    Development of Novel Independent Component Analysis Techniques and their Applications

    Real-world problems very often provide minimal information about their causes, mainly due to system complexity and the non-invasive techniques scientists and engineers employ to study such systems. Signal- and image-processing techniques used to analyze such systems therefore tend to be blind. Earlier, training-signal-based techniques were used extensively for such analyses, but often these training signals are impractical for the analyzer to obtain or become a burden on the system itself. Hence blind signal/image processing techniques are becoming predominant in modern real-time systems. In fact, blind signal processing has become a very important topic of research and development in many areas, especially biomedical engineering, medical imaging, speech enhancement, remote sensing, communication systems, exploration seismology, geophysics, econometrics, data mining, and sensor networks. Blind signal processing has three major areas: blind signal separation and extraction, independent component analysis (ICA), and multichannel blind deconvolution and equalization. ICA techniques have also typically been applied to the other two areas. Hence ICA research, with its wide range of applications, is quite interesting and has been taken up as the central domain of the present work.

    Optimization of a hardware/software coprocessing platform for EEG eyeblink detection and removal

    The feasibility of implementing a real-time system for removing eyeblink artifacts from electroencephalogram (EEG) recordings using a hardware/software coprocessing platform was investigated. A software-based wavelet and independent component analysis (ICA) eyeblink detection and removal process was extended to allow variation of its processing parameters. Exploiting the efficiency of hardware and the reconfigurability of software, it was ported to a field-programmable gate array (FPGA) development platform, which proved capable of implementing the revised algorithm, although not in real time. The implemented hardware and software solution was applied to a collection of both simulated and clinically acquired EEG data with known artifact and waveform characteristics to assess its speed and accuracy. Configured for optimal accuracy (minimal false positives and negatives, while maintaining the integrity of the underlying EEG, especially for waveform patterns resembling eyeblink artifacts), the system processed a 10-second EEG epoch in an average of 123 seconds; configured for efficiency, with diminished accuracy, it required an average of 34 seconds. Varying the ICA contrast function showed that the Gaussian nonlinearity provided the best combination of reliability and accuracy, albeit with a long execution time; the cubic nonlinearity was fast but unreliable, while the hyperbolic tangent contrast function frequently diverged. It is believed that programmable logic with increased logic capacity and processing speed may enable this approach to achieve real-time operation.
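The three contrast functions compared above are the standard FastICA nonlinearities. The sketch below defines them and runs the single-unit fixed-point update with the Gaussian choice on a toy whitened two-channel mixture; it is an illustration of the contrast-function role, not the coprocessing platform's pipeline.

```python
import numpy as np

# Standard FastICA contrast-function derivatives (g, g'):
#   cubic: g(u) = u^3, fast but outlier-sensitive
#   tanh:  g(u) = tanh(u)
#   gauss: g(u) = u * exp(-u^2/2), robust but slower
nonlinearities = {
    "cubic": (lambda u: u ** 3, lambda u: 3 * u ** 2),
    "tanh":  (lambda u: np.tanh(u), lambda u: 1 - np.tanh(u) ** 2),
    "gauss": (lambda u: u * np.exp(-u ** 2 / 2),
              lambda u: (1 - u ** 2) * np.exp(-u ** 2 / 2)),
}

def fastica_step(w, Z, g, g_prime):
    """One FastICA fixed-point update for a single unit on whitened
    data Z: w <- E[Z g(w.T Z)] - E[g'(w.T Z)] w, then renormalize."""
    u = w @ Z
    w_new = (Z * g(u)).mean(axis=1) - g_prime(u).mean() * w
    return w_new / np.linalg.norm(w_new)

# Toy mixture: one super-Gaussian (eyeblink-like) source plus one Gaussian
rng = np.random.default_rng(3)
s = np.vstack([rng.laplace(size=20_000), rng.normal(size=20_000)])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X      # whitened mixture

g, gp = nonlinearities["gauss"]
w = np.array([1.0, 0.0])
for _ in range(50):
    w = fastica_step(w, Z, g, gp)
est = w @ Z                               # recovered non-Gaussian component
```

Swapping `"gauss"` for `"cubic"` or `"tanh"` changes only the (g, g') pair, which is why the study could compare them within one architecture.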

    Dimensionality reduction using parallel ICA and its implementation on FPGA in hyperspectral image analysis

    Hyperspectral images, although providing abundant information about the imaged object, also impose a high computational burden on data processing. This thesis studies the challenging problem of dimensionality reduction in hyperspectral image (HSI) analysis. Currently, there are two methods to reduce the dimension: band selection and feature extraction. This thesis presents a band-selection technique based on Independent Component Analysis (ICA), an unsupervised signal-separation algorithm. Given only the observed hyperspectral images, ICA-based band selection picks the independent bands that contain most of the spectral information of the original images. Due to the high volume of hyperspectral images, ICA-based band selection is a time-consuming process. This thesis develops a parallel ICA algorithm that divides the decorrelation process into internal decorrelation and external decorrelation, so that the computational burden can be distributed from a single processor to multiple processors and the ICA process can run in parallel. Hardware implementation is always a faster, real-time solution for HSI analysis, yet until now there have been few hardware designs for ICA-related processes. This thesis synthesizes the parallel ICA-based band selection on a Field Programmable Gate Array (FPGA), the best choice for moderate designs and fast implementations. Compared to other design syntheses, the synthesis presented in this thesis develops three reconfigurable ICA components for the purpose of reusability. In addition, this thesis demonstrates the relationship between the design and the capacity utilization of a single FPGA, then discusses the features of High Performance Reconfigurable Computing (HPRC) needed to accommodate large capacity and design requirements. Experiments are conducted on three data sets obtained from different sources. Experimental results show the effectiveness of the proposed ICA-based band selection, the parallel ICA, and its synthesis on FPGA.
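The internal/external decorrelation split can be illustrated with plain Gram-Schmidt decorrelation of ICA weight vectors: each processor decorrelates its own group of vectors independently (internal), then the groups are decorrelated against each other (external). This is an illustrative mapping under that assumption, not the thesis's exact parallel scheme.

```python
import numpy as np

def decorrelate_against(w, basis):
    """Gram-Schmidt step: remove from w its projections onto the
    already-accepted orthonormal vectors in `basis`, renormalize."""
    for b in basis:
        w = w - (w @ b) * b
    return w / np.linalg.norm(w)

def parallel_decorrelation(W, n_groups=2):
    """Illustrative internal/external split of weight-vector
    decorrelation. Internal: each group is decorrelated on its own
    (independent work, one group per processor). External: the
    groups are then decorrelated against each other when merged."""
    groups = np.array_split(W, n_groups)
    # internal decorrelation: embarrassingly parallel across groups
    internal = []
    for G in groups:
        basis = []
        for w in G:
            basis.append(decorrelate_against(w, basis))
        internal.append(basis)
    # external decorrelation: merge the groups sequentially
    merged = list(internal[0])
    for G in internal[1:]:
        for w in G:
            merged.append(decorrelate_against(w, merged))
    return np.vstack(merged)

rng = np.random.default_rng(5)
W0 = rng.normal(size=(6, 6))          # six un-decorrelated weight vectors
R = parallel_decorrelation(W0, n_groups=2)
```

The internal phase is where the speedup comes from: its cost is split across processors, while the shorter external phase restores global orthonormality.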

    IMPLEMENTATION OF NOISE CANCELLATION WITH HARDWARE DESCRIPTION LANGUAGE

    The objective of this project is to implement a noise-cancellation technique on an FPGA using a hardware description language. The performance of several adaptive algorithms is compared to determine the most suitable algorithm for an adaptive noise-cancellation system. The project focuses on implementing an adaptive filter with the least-mean-squares (LMS) or normalized least-mean-squares (NLMS) algorithm to cancel acoustic noise, which consists of extraneous or unwanted waveforms that can interfere with communication. Due to its simplicity and effectiveness, the adaptive noise-cancellation technique is used to remove the noise component from the desired signal. The project is divided into four main parts: research, Matlab simulation, ModelSim simulation, and hardware implementation. It starts with research on several noise-cancellation techniques; then, using Matlab code, Simulink, and the FDA tool, the adaptive noise-cancellation system is designed with the LMS, NLMS, and recursive-least-squares algorithms to remove the interfering noise. Using the Matlab code and Simulink, noise that interfered with a sinusoidal signal and a music recording can be removed; the original signal can in turn be retrieved from the noise-corrupted signal by adapting the filter coefficients. Since the filter is the key component in the adaptive filtering process, it is designed before the adaptive algorithm is added. A finite impulse response (FIR) filter is designed, and the desired functional- and timing-simulation results are obtained through ModelSim and the Integrated Software Environment (ISE) software, followed by FPGA implementation. Finally, the adaptive algorithm is added to the filter and implemented on the FPGA. The noise is greatly reduced in the Matlab simulation, functional simulation, and timing simulation. Hence the results of this project show that noise cancellation with an adaptive filter is feasible.
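The NLMS noise canceller described above fits in a few lines. A NumPy sketch of the algorithm (the project itself uses Matlab and HDL; the FIR noise path and step size here are made-up demo values): the primary input carries signal plus filtered noise, the reference input carries the raw noise, and the filter learns the noise path so the error output is the cleaned signal.

```python
import numpy as np

def nlms_cancel(d, x, n_taps=8, mu=0.1, eps=1e-8):
    """Normalized LMS adaptive noise canceller.
    d: primary input (desired signal + correlated noise)
    x: reference noise input
    The FIR filter w learns to predict the noise component of d
    from x; the error e = d - w.x is the cleaned signal."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]    # tap vector x[n], x[n-1], ...
        y = w @ u                            # noise estimate
        e[n] = d[n] - y                      # cleaned output
        w += mu * e[n] * u / (u @ u + eps)   # power-normalized update
    return e, w

rng = np.random.default_rng(4)
N = 5000
t = np.arange(N)
signal = np.sin(2 * np.pi * t / 50)                       # desired sinusoid
noise_ref = rng.normal(size=N)                            # reference noise
noise = np.convolve(noise_ref, [0.8, -0.3, 0.2])[:N]      # assumed FIR noise path
primary = signal + noise
clean, w = nlms_cancel(primary, noise_ref)
```

The normalization by `u @ u` is what distinguishes NLMS from plain LMS: the effective step size adapts to the reference power, which keeps convergence stable for non-stationary acoustic noise.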

    Automated and Reliable Low-Complexity SoC Design Methodology for EEG Artefacts Removal

    EEG is a non-invasive tool for neurodevelopmental disorder (NDD) diagnosis and treatment. However, the EEG signal is mixed with other biological signals, including ocular and muscular artefacts, making it difficult to extract the diagnostic features. The contaminated EEG channels are therefore often discarded by medical practitioners, which may result in less accurate diagnosis. Independent Component Analysis (ICA) and wavelet-based algorithms require reference electrodes, which create discomfort for the patient/children and hinder the diagnosis of NDD and Brain Computer Interface (BCI) use. It would therefore be ideal if these artefacts could be removed in real time, on a hardware platform, in an automated fashion, so that the denoised EEG can be used for online diagnosis in a pervasive personalised healthcare environment without the need for any reference electrode. In this thesis we propose a reliable, robust, and automated methodology to solve the aforementioned problem, and its subsequent hardware implementation results are also presented. 100 EEG recordings from the Physionet, Klinik für Epileptologie (Universität Bonn, Germany), and Caltech EEG databases, together with 3 EEG recordings from 3 subjects at the University of Southampton, UK, have been studied, and nine exhaustive case studies comprising real and simulated data have been formulated and tested. The performance of the proposed methodology is measured in terms of correlation, regression, and R-square statistics; the respective values lie above 80%, 79%, and 65%, with a gain in hardware complexity of 64.28% and in hardware delay of 53.58% compared to the state-of-the-art approach. We believe the proposed methodology will be useful in the next generation of pervasive healthcare for BCI and NDD diagnosis and treatment.
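The three figures of merit reported above (correlation, regression, R-square) are standard statistics between a known clean reference and the artefact-removed output. A small NumPy sketch of how such values could be computed (the synthetic data here are illustrative, not the thesis's EEG):

```python
import numpy as np

def evaluation_stats(reference, denoised):
    """Pearson correlation, OLS regression slope of denoised on
    reference, and R-square of that linear fit. For simple linear
    regression, R-square equals the squared correlation."""
    r = np.corrcoef(reference, denoised)[0, 1]
    slope = np.cov(reference, denoised, ddof=0)[0, 1] / np.var(reference)
    pred = slope * (reference - reference.mean()) + denoised.mean()
    ss_res = np.sum((denoised - pred) ** 2)
    ss_tot = np.sum((denoised - denoised.mean()) ** 2)
    return r, slope, 1 - ss_res / ss_tot

# Synthetic stand-in: denoised output closely tracking a clean reference
rng = np.random.default_rng(6)
reference = rng.normal(size=2000)
denoised = 0.9 * reference + 0.1 * rng.normal(size=2000)
r, slope, r2 = evaluation_stats(reference, denoised)
```

A denoiser meeting the thesis's reported thresholds would yield r above 0.80, slope above 0.79, and R-square above 0.65 on such a comparison.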