17 research outputs found

    Multichannel blind separation of sources algorithm based on cross-cumulant and the Levenberg-Marquardt method


    New image encryption method based on ICA

    In the last decade, independent component analysis (ICA) has become one of the most important signal processing tools. Many algorithms have been proposed to successfully separate one-dimensional signals from their observed mixtures. Recently, ICA has also been applied to the face recognition problem. In this manuscript, a new scheme for image encryption and decryption based on ICA is proposed. By using a mixing procedure as the encryption method, one can hide useful information transmitted over wireless channels. The main idea of our approach is to secure the transmitted information at two levels: a classical level using standard keys, and a second level (spatial diversity) using independent transmitters. At the second level, a hacker must intercept not just one channel but all of them in order to retrieve the information. At the designed receiver, one can easily apply ICA algorithms to decrypt the received signals and retrieve the information.
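    To make the scheme concrete, here is a minimal Python sketch of the mixing/unmixing idea using scikit-learn's FastICA; the signals, the mixing matrix and all parameter values are illustrative assumptions, not the paper's actual implementation.

        # Sketch: "encrypt" by mixing source signals with a secret matrix;
        # "decrypt" by blind separation at a receiver that sees all channels.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)

        # Two illustrative source signals (stand-ins for image or audio data).
        t = np.linspace(0, 1, 2000)
        sources = np.c_[np.sin(2 * np.pi * 5 * t),
                        np.sign(np.sin(2 * np.pi * 13 * t))]

        # Encryption: mix with a secret, well-conditioned matrix and send each
        # mixture over its own transmitter (spatial diversity).
        mixing = rng.normal(size=(2, 2))
        mixtures = sources @ mixing.T  # one row per sample, one column per channel

        # Decryption: a receiver that intercepts *all* channels can blindly
        # recover the sources (up to scale and permutation) with ICA.
        ica = FastICA(n_components=2, random_state=0)
        recovered = ica.fit_transform(mixtures)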

    Role of independent component analysis in intelligent ECG signal processing

    The electrocardiogram (ECG) reflects the activities and attributes of the human heart and reveals very important hidden information in its structure. This information is extracted by means of ECG signal analysis to gain insights that are crucial in explaining and identifying various pathological conditions. The feature extraction process can be accomplished directly by an expert through visual inspection of ECGs printed on paper or displayed on a screen. However, the complexity of ECG signals and the time required to inspect and analyse them manually make this a very tedious task that yields limited descriptions; manual ECG analysis is also prone to errors and human oversights. Computer-based ECG signal processing has therefore become a prevalent and effective tool for research and clinical practice. A typical computer-based ECG analysis system includes signal preprocessing, beat detection and feature extraction stages, followed by classification.

    Automatic identification of arrhythmias from the ECG is one important biomedical application of pattern recognition. This thesis focuses on ECG signal processing using independent component analysis (ICA), which has received increasing attention as a signal conditioning and feature extraction technique for biomedical applications. Long-term ECG monitoring is often required to reliably identify arrhythmias. Motion-induced artefacts are particularly common in ambulatory and Holter recordings, and are difficult to remove with conventional filters because they resemble the shape of ectopic beats. Feature selection has always been an important step towards more accurate, reliable and speedy pattern recognition, and better feature spaces are also sought after in ECG pattern recognition applications. Two new algorithms are proposed, developed and validated in this thesis: one removes non-trivial noise in ECGs using ICA, and the other deploys ICA-extracted features to improve the recognition of arrhythmias. First, independent component analysis was studied and found effective in this PhD project for separating out motion-induced artefacts in ECGs; the independent component corresponding to noise is then removed from the ECG according to kurtosis and correlation measurements.

    The second algorithm was developed for ECG feature extraction, in which independent component analysis is used to obtain a set of features, or basis functions, of the ECG signals generated hypothetically by different parts of the heart during normal and arrhythmic cardiac cycles. ECGs are then classified based on these basis functions along with other time-domain features. Selecting the appropriate feature set for the classifier was found to be important for better performance and quicker response. Artificial neural network based pattern recognition engines perform the final classification to measure the performance of the ICA-extracted features and the effectiveness of the ICA-based artefact reduction algorithm. The motion artefacts are effectively removed from the ECG signal, as shown by beat detection on noisy and cleaned ECG signals after ICA processing. Using the ICA-extracted feature sets, classification of ECG arrhythmias into eight classes is achieved with fewer independent components and very high classification accuracy.
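    As an illustration of the artefact-removal step, the hedged Python sketch below decomposes a multi-lead ECG into independent components, flags components with atypical kurtosis as noise, and reconstructs the signal without them. The function name and thresholds are hypothetical, and the thesis additionally uses correlation measurements that are not shown here.

        # Sketch of ICA-based motion-artefact removal guided by kurtosis.
        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        def remove_artefacts(ecg_leads, kurt_low=2.0, kurt_high=30.0):
            """ecg_leads: array of shape (n_samples, n_leads).
            The kurtosis thresholds are illustrative, not tuned values."""
            ica = FastICA(n_components=ecg_leads.shape[1], random_state=0)
            components = ica.fit_transform(ecg_leads)       # (n_samples, n_comp)
            k = kurtosis(components, axis=0, fisher=False)  # per-component kurtosis

            # Genuine ECG components tend to be strongly super-Gaussian (high
            # kurtosis); near-Gaussian or extreme components are treated as noise.
            keep = (k > kurt_low) & (k < kurt_high)
            components[:, ~keep] = 0.0
            return ica.inverse_transform(components)        # cleaned ECG leads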

    Adaptive antenna array beamforming using a concatenation of recursive least square and least mean square algorithms

    In recent years, adaptive or smart antennas have become a key component of various wireless applications, such as radar, sonar and cellular mobile communications including worldwide interoperability for microwave access (WiMAX). They increase the detection range of radar and sonar systems, and the capacity of mobile radio communication systems. These antennas are used as spatial filters for receiving desired signals coming from a specific direction or directions, while minimizing the reception of unwanted signals emanating from other directions.

    Because of its simplicity and robustness, the LMS algorithm has become one of the most popular adaptive signal processing techniques, adopted in many applications including antenna array beamforming. Over the last three decades, several improvements have been proposed to speed up the convergence of the LMS algorithm. These include the normalized LMS (NLMS) algorithm, the variable-length LMS algorithm, transform-domain algorithms, and more recently the constrained-stability LMS (CSLMS) algorithm and the modified robust variable step size LMS (MRVSS) algorithm. Yet another approach to speeding up the convergence of the LMS algorithm, without sacrificing too much of its error floor performance, is the variable step size LMS (VSSLMS) algorithm. All published VSSLMS algorithms use an initially large adaptation step size to speed up convergence; upon approaching the steady state, smaller step sizes are introduced to decrease the level of adjustment, hence maintaining a lower error floor. This convergence improvement increases the complexity from 2N for the LMS algorithm to 9N for the MRVSS algorithm, where N is the number of array elements.

    An alternative to the LMS algorithm is the RLS algorithm. Although the RLS algorithm has higher complexity than the LMS algorithm, it achieves faster convergence and thus better performance. Improvements have also been made to the RLS algorithm family to enhance tracking ability as well as stability; examples are the adaptive forgetting factor RLS (AFF-RLS) algorithm, the variable forgetting factor RLS (VFFRLS) algorithm and the extended kernel recursive least squares (EX-KRLS) algorithm. The multiplication complexities of the VFFRLS, AFF-RLS and EX-KRLS algorithms are 2.5N² + 3N + 20, 9N² + 7N, and 15N³ + 7N² + 2N + 4 respectively, while the RLS algorithm requires 2.5N² + 3N.

    All of the above well-known algorithms require an accurate reference signal for proper operation, and in some cases several additional operating parameters must be specified. For example, MRVSS needs twelve predefined parameters; as a result, its performance depends strongly on the input signal.

    In this study, two adaptive beamforming algorithms are proposed: the recursive least square - least mean square (RLMS) algorithm and the least mean square - least mean square (LLMS) algorithm. These algorithms are designed to meet future beamforming requirements, such as a very high convergence rate, robustness to noise and flexible modes of operation. The RLMS algorithm uses two individual algorithm stages, based on the RLS and LMS algorithms, connected in tandem via an array image vector. The LLMS algorithm, in turn, is a simpler version of the RLMS algorithm.
    It uses two LMS algorithm stages instead of the RLS-LMS combination employed in the RLMS algorithm. Unlike other adaptive beamforming algorithms, in both of these algorithms the error signal of the second stage is fed back and combined with the error signal of the first stage to form an overall error signal, which is used to update the tap weights of the first stage.

    Upon convergence, usually after a few iterations, the proposed algorithms can be switched to a self-referencing mode, in which the algorithm outputs replace the reference signals. In moving-target applications, the array image vector, F, must also be updated to the new position; this scenario is studied for both proposed algorithms, and a simple and effective method for calculating the required array image vector is proposed. Moreover, since the RLMS and LLMS algorithms employ the array image vector in their operation, they can generate fixed beams by presetting the values of the array image vector to the specified direction.

    The convergence of the RLMS and LLMS algorithms is analyzed for two different operating modes, namely with an external reference or self-referencing. Array image vector calculations, the ranges of step size values for stable operation, fixed beam generation, and fixed-point arithmetic are also studied in this thesis. All of these analyses have been confirmed by computer simulations under different signal conditions. Simulation results show that both proposed algorithms converge faster than the CSLMS, MRVSS, LMS, VFFRLS and RLS algorithms, and are quite insensitive to variations in input SNR and in the actual step size values used. Furthermore, the RLMS and LLMS algorithms remain stable even when their reference signals are corrupted by additive white Gaussian noise (AWGN), and they are robust when operating in the presence of Rayleigh fading. Finally, the fidelity of the signal at the output of the proposed beamformers is demonstrated by means of the resulting error vector magnitude (EVM) values and scatter plots. It is also shown that implementing an eight-element uniform linear array using the proposed algorithms with a wordlength of nine bits is sufficient to achieve performance close to that provided by full precision.
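    For reference, the following Python sketch shows the baseline complex LMS beamformer update on which the proposed two-stage schemes build; it is not the RLMS or LLMS algorithm itself, and the step size, shapes and names are illustrative.

        # Baseline complex LMS beamformer: per snapshot, the tap weights move
        # along the conjugate gradient of the squared error against a reference.
        import numpy as np

        def lms_beamformer(snapshots, reference, mu=0.01):
            """snapshots: (n_snapshots, N) complex array-element outputs;
            reference: (n_snapshots,) desired signal. Returns the N weights."""
            n_snap, N = snapshots.shape
            w = np.zeros(N, dtype=complex)
            for x, d in zip(snapshots, reference):
                y = np.vdot(w, x)            # array output y(n) = w^H x(n)
                e = d - y                    # error against the reference
                w = w + mu * np.conj(e) * x  # LMS update, roughly 2N multiplies
            return w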

    New Horizons in Time-Domain Diffuse Optical Spectroscopy and Imaging

    Jöbsis was the first to describe the in vivo application of near-infrared spectroscopy (NIRS), also called diffuse optical spectroscopy (DOS). NIRS was originally designed for the clinical monitoring of tissue oxygenation, and today it has also become a useful tool for neuroimaging studies (functional near-infrared spectroscopy, fNIRS). However, difficulties in the selective and quantitative measurement of tissue hemoglobin (Hb), which have been central to the NIRS field for over 40 years, remain to be solved. To overcome these problems, time-domain (TD) and frequency-domain (FD) measurements have been explored. Presently, a wide range of NIRS instruments is available, most commonly commercial instruments for continuous-wave (CW) measurements based on the modified Beer–Lambert law (steady-state measurements). Among these approaches, TD measurement is the most promising, although TD measurements are less common than CW and FD measurements because the instruments have been large and expensive, with poor temporal resolution and limited dynamic range. Thanks to technological developments, however, TD measurements are increasingly being used in research and in various clinical settings. This Special Issue highlights issues at the cutting edge of TD DOS and diffuse optical tomography (DOT). It covers all aspects of TD measurements, including advances in hardware, methodology, the theory of light propagation, and clinical applications.
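    As a concrete example of the CW processing mentioned above, the Python sketch below inverts the modified Beer–Lambert law at two wavelengths for oxy-/deoxy-hemoglobin concentration changes; the extinction coefficients, differential pathlength factor (DPF) and all numbers are placeholders, not calibrated constants.

        # Modified Beer-Lambert law: dOD(lambda) =
        #   (eHbO(lambda)*dHbO + eHbR(lambda)*dHbR) * pathlength * DPF,
        # inverted here as a 2x2 linear system over two wavelengths.
        import numpy as np

        def mbll_delta_hb(delta_od, extinction, pathlength_cm, dpf):
            """delta_od: (2,) optical-density changes at two wavelengths.
            extinction: (2, 2) rows [eHbO, eHbR] per wavelength.
            Returns (delta_HbO, delta_HbR)."""
            effective_path = pathlength_cm * dpf  # differential pathlength
            return np.linalg.solve(extinction * effective_path, delta_od)

        # Illustrative call with placeholder numbers:
        d_hbo, d_hbr = mbll_delta_hb(
            delta_od=np.array([0.012, 0.008]),
            extinction=np.array([[1.5, 0.8], [0.9, 1.8]]),  # hypothetical values
            pathlength_cm=3.0, dpf=6.0)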

    Super-resolution microscopy live cell imaging and image analysis

    Novel fundamental research results have provided new techniques that go beyond the diffraction limit. These recent advances, known as super-resolution microscopy, have been recognized with a Nobel Prize, as they promise new discoveries in biology and the life sciences. All of these techniques rely on complex signal and image processing. Their applicability in biology, and particularly for live cell imaging, remains challenging and needs further investigation. Focusing on image processing and analysis, this thesis is devoted to a significant enhancement of structured illumination microscopy (SIM) and super-resolution optical fluctuation imaging (SOFI) methods towards fast live cell and quantitative imaging. The thesis presents a novel image reconstruction method for both 2D and 3D SIM data that is compatible with weak signals and robust against unwanted image artifacts. This image reconstruction is efficient under low-light conditions, reduces phototoxicity and facilitates live cell observations. We demonstrate the performance of the new method by imaging long super-resolution video sequences of live U2-OS cells and by improving cell particle tracking. We develop an adapted 3D deconvolution algorithm for SOFI, which suppresses noise and makes 3D SOFI live cell imaging feasible by reducing the number of required input images. We introduce a novel linearization procedure for SOFI that maximizes the resolution gain, and show that SOFI and PALM can both be applied to the same dataset, revealing more insights about the sample. This combined PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of the sample through the estimation of molecular parameters. For quantifying the outcome of our super-resolution methods, the thesis presents a novel methodology for objective image quality assessment that measures spatial resolution and signal-to-noise ratio in real samples. We demonstrate our enhanced SOFI framework by high-throughput 3D imaging of live HeLa cells, acquiring a whole super-resolution 3D image in 0.95 s, by investigating focal adhesions in live MEF cells, by fast optical readout of fluorescently labelled DNA strands, and by unraveling the nanoscale organization of CD4 proteins on the plasma membrane of T-cells. Within the thesis, the unique open-source software packages SIMToolbox and the SOFI simulation tool were developed to facilitate the implementation of super-resolution microscopy methods.
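    The core SOFI computation referenced above can be stated in a few lines: the second-order SOFI image is the pixel-wise temporal cumulant of the fluctuating image stack, which for order 2 is simply the variance over time. The Python sketch below shows this minimal form, not the thesis's full 3D deconvolution and linearization pipeline.

        # Second-order SOFI: pixel-wise temporal variance of a blinking stack.
        import numpy as np

        def sofi2(stack):
            """stack: (n_frames, H, W) image sequence of a blinking sample.
            Returns the 2nd-order auto-cumulant SOFI image of shape (H, W)."""
            fluctuations = stack - stack.mean(axis=0)   # remove the mean image
            return np.mean(fluctuations ** 2, axis=0)   # 2nd cumulant = variance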

    Algorithmic Analysis Techniques for Molecular Imaging

    This study addresses image processing techniques for two medical imaging modalities, Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI), which can be used to study human body function and anatomy non-invasively. In PET, the so-called partial volume effect (PVE) is caused by the low spatial resolution of the modality. The efficiency of a set of PVE-correction methods is evaluated in the present study; these methods use information about tissue borders acquired with MRI. In addition, a novel method is proposed for MRI brain image segmentation. The standard approach in brain MRI segmentation is to use spatial prior information. While this works for adults and healthy neonates, the large anatomical variations in premature infants preclude its direct application. The proposed technique can be applied to brain MR images of both healthy and non-healthy premature infants. Diffusion weighted imaging (DWI) is an MRI-based technique that can be used to create images for measuring the physiological properties of cells at the structural level. We optimise the scanning parameters of DWI so that the required acquisition time can be reduced while still maintaining good image quality. In the present work, the PVE correction methods and physiological DWI models are also evaluated in terms of the repeatability of their results, which gives information on the reliability of the measures they produce. The evaluations are carried out using physical phantom objects, correlation measurements against expert segmentations, computer simulations with realistic noise modelling, and repeated measurements on real patients. In PET, the applicability and selection of a suitable partial volume correction method was found to depend on the target application. For MRI, data-driven segmentation offers an alternative when using a spatial prior is not feasible. For DWI, the distribution of b-values turns out to be a central factor affecting the time-quality ratio of the acquisition, and an optimal b-value distribution was determined. This helps to shorten the imaging time without hampering diagnostic accuracy.
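    To see why the b-value distribution matters, the hedged Python sketch below fits the simplest mono-exponential DWI model, S(b) = S0·exp(-b·ADC), by log-linear least squares over the acquired b-values, so their placement directly shapes the noise sensitivity of the estimate. The model and numbers are illustrative, not the physiological DWI models evaluated in the study.

        # Log-linear ADC fit: log S(b) = log S0 - b * ADC.
        import numpy as np

        def fit_adc(b_values, signals):
            """Least-squares fit of the mono-exponential diffusion model."""
            b = np.asarray(b_values, dtype=float)
            A = np.c_[np.ones_like(b), -b]          # columns: [log S0, ADC]
            log_s0, adc = np.linalg.lstsq(A, np.log(signals), rcond=None)[0]
            return np.exp(log_s0), adc

        # Illustrative acquisition: b in s/mm^2, ADC comes out near 1e-3 mm^2/s.
        s0, adc = fit_adc([0, 500, 1000], [1.00, 0.61, 0.37])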

    Advanced tensor based signal processing techniques for wireless communication systems and biomedical signal processing

    Many observed signals in signal processing applications, including wireless communications, biomedical signal processing, image processing, and machine learning, are multi-dimensional. Tensors preserve the multi-dimensional structure and provide a natural representation of these signals/data. Moreover, tensors often provide improved identifiability. Therefore, we benefit from using tensor algebra in the above-mentioned applications and many more. In this thesis, we present the benefits of utilizing tensor algebra in two signal processing areas: signal processing for MIMO (Multiple-Input Multiple-Output) wireless communication systems and biomedical signal processing. Moreover, we contribute to the theoretical aspects of tensor algebra by deriving new properties and ways of computing tensor decompositions. Often, we only have an element-wise or a slice-wise description of the signal model. This representation does not reveal the explicit tensor structure, so the derivation of all tensor unfoldings is not always obvious, and exploiting the multi-dimensional structure of these models is consequently not always straightforward. We propose an alternative representation of the element-wise multiplication or the slice-wise multiplication based on the generalized tensor contraction operator. Later in this thesis, we exploit this novel representation and the properties of the contraction operator to derive the final tensor models. There exist a number of different tensor decompositions that describe different signal models, such as the HOSVD (Higher Order Singular Value Decomposition), the CP/PARAFAC (Canonical Polyadic / PARallel FACtors) decomposition, the BTD (Block Term Decomposition), the PARATUCK2 (PARAfac and TUCker2) decomposition, and the PARAFAC2 (PARAllel FACtors2) decomposition. Among these decompositions, the CP decomposition is the most widespread and widely used. Therefore, the development of algorithms for the efficient computation of the CP decomposition is important for many applications. The SECSI (Semi-Algebraic framework for approximate CP decomposition via SImultaneous matrix diagonalization) framework is an efficient and robust tool for the calculation of the approximate low-rank CP decomposition via simultaneous matrix diagonalizations. In this thesis, we present five extensions of the SECSI framework that reduce the computational complexity of the original framework and/or introduce constraints on the factor matrices. Moreover, the PARAFAC2 decomposition and the PARATUCK2 decomposition are usually described using a slice-wise notation that can be expressed in terms of the generalized tensor contraction proposed in this thesis. We exploit this novel representation to derive explicit tensor models for the PARAFAC2 and PARATUCK2 decompositions. Furthermore, we use the PARAFAC2 model to derive an ALS (Alternating Least-Squares) algorithm for the computation of the PARAFAC2 decomposition. Moreover, we exploit the novel contraction properties for element-wise and slice-wise multiplications to model MIMO multi-carrier wireless communication systems. We show that this very general model can be used to derive the tensor model of the received signal for MIMO-OFDM (Multiple-Input Multiple-Output - Orthogonal Frequency Division Multiplexing), Khatri-Rao coded MIMO-OFDM, and randomly coded MIMO-OFDM systems.
    We propose the Khatri-Rao coding and random coding transmission techniques in order to impose an additional tensor structure on the transmit signal tensor, which otherwise does not have a particular structure. Moreover, we show that this model can be extended to other multi-carrier techniques such as GFDM (Generalized Frequency Division Multiplexing). Utilizing these models at the receiver side, we design several types of receivers for these systems that outperform traditional matrix-based solutions in terms of the symbol error rate. In the last part of this thesis, we show the benefits of using tensor algebra in biomedical signal processing by jointly decomposing EEG (ElectroEncephaloGraphy) and MEG (MagnetoEncephaloGraphy) signals. EEG and MEG signals are usually acquired simultaneously, and they capture aspects of the same brain activity. Therefore, EEG and MEG signals can be decomposed using coupled tensor decompositions such as the coupled CP decomposition. We exploit the proposed coupled SECSI framework (one of the proposed extensions of the SECSI framework) for the computation of the coupled CP decomposition, first to validate and analyze the photic driving effect. Moreover, we validate the effects of skull defects on the measured EEG and MEG signals by means of a joint EEG-MEG decomposition using the coupled SECSI framework. Both applications show that we benefit from coupled tensor decompositions, and that the coupled SECSI framework is a very practical tool for the analysis of biomedical data.
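    As background for the decompositions discussed above, the following Python sketch implements a textbook CP-ALS (alternating least squares) for a 3-way tensor; it is a plain baseline under standard unfolding conventions, not the semi-algebraic SECSI framework or its coupled extensions.

        # Textbook CP-ALS: fix two factor matrices, solve for the third by
        # linear least squares against the matching tensor unfolding.
        import numpy as np

        def khatri_rao(A, B):
            """Column-wise Kronecker product: (I, R) and (J, R) -> (I*J, R)."""
            return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

        def cp_als(X, rank, n_iter=100, seed=0):
            """X: real 3-way tensor of shape (I, J, K). Returns factors A, B, C
            with X approx. sum_r a_r outer b_r outer c_r."""
            rng = np.random.default_rng(seed)
            I, J, K = X.shape
            A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
            for _ in range(n_iter):
                A = np.linalg.lstsq(khatri_rao(C, B),
                                    X.transpose(2, 1, 0).reshape(-1, I),
                                    rcond=None)[0].T
                B = np.linalg.lstsq(khatri_rao(C, A),
                                    X.transpose(2, 0, 1).reshape(-1, J),
                                    rcond=None)[0].T
                C = np.linalg.lstsq(khatri_rao(B, A),
                                    X.transpose(1, 0, 2).reshape(-1, K),
                                    rcond=None)[0].T
            return A, B, C

        # Illustrative use: recover the factors of a synthetic rank-3 tensor.
        rng = np.random.default_rng(1)
        A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (6, 7, 8))
        X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
        A, B, C = cp_als(X, rank=3)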