25 research outputs found

    Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm

    Blind source separation based on bio-inspired intelligence optimization is computationally expensive. To address this problem, we propose an effective blind source separation algorithm based on the artificial bee colony (ABC) algorithm. In the proposed algorithm, the covariance ratio of the signals is used as the objective function, and the ABC algorithm is used to optimize it. Each source signal component that is separated out is then removed from the mixtures using the deflation method, so all the source signals can be recovered by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm achieves significant improvements in both computational cost and separation quality compared with previous algorithms.
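The deflation step described above is simple to illustrate. The following Python sketch uses a toy two-source mixture; the covariance-ratio objective and the ABC search itself are out of scope here, so the separating vector is assumed to have been found already (we take it from the true inverse of the mixing matrix):

```python
import numpy as np

# Toy instantaneous mixture: 2 sources, 2 observations.
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sin(2 * np.pi * 5 * t),                # smooth source
               np.sign(np.sin(2 * np.pi * 13 * t))])     # square-wave source
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

def deflate(X, y):
    """Remove the extracted source y from each mixture channel by
    least-squares regression (the deflation step)."""
    b = (X @ y) / (y @ y)            # per-channel regression coefficients
    return X - np.outer(b, y)

# Pretend the optimizer has already found a separating vector; here we
# use a row of A^{-1} so the extracted component equals the first source.
w = np.linalg.inv(A)[0]
y1 = w @ X
X_deflated = deflate(X, y1)

# After deflation only the second source remains, so the residual
# mixture matrix is (numerically) rank one.
sv = np.linalg.svd(X_deflated, compute_uv=False)
print(sv[1] / sv[0])   # ~0
```

Repeating extraction and deflation on `X_deflated` then recovers the remaining sources one at a time.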

    Efficient Blind Source Separation Algorithms with Applications in Speech and Biomedical Signal Processing

    Blind source separation/extraction (BSS/BSE) is a powerful signal processing method that has been applied extensively in fields such as biomedical science and speech signal processing to extract a set of unknown input sources from a set of observations. Many BSS algorithms have been proposed in the literature, and they merit further investigation with respect to the extraction approach, computational complexity, convergence speed, domain of operation (time or frequency), mixture properties, and extraction performance. This work presents three new BSS/BSE algorithms based on computing new transformation matrices used to extract the unknown signals. The signals considered in this dissertation are speech, Gaussian, and ECG signals. The first algorithm, the BSE parallel linear predictor filter (BSE-PLP), computes a transformation matrix from the covariance matrix of the whitened data; the matrix is then used as input to linear predictor filters whose coefficients yield the unknown sources. The algorithm converges very quickly, in two iterations. Simulation results using speech, Gaussian, and ECG signals show that the model is capable of extracting the unknown source signals and removing noise when the input signal-to-noise ratio is varied from -20 dB to 80 dB. The second algorithm, the BSE idempotent transformation matrix (BSE-ITM), computes its transformation matrix iteratively, with lower computational complexity. Simulation results show that this algorithm separates the source signals with better performance measures than the other approaches used in the dissertation. The third algorithm, the null space idempotent transformation matrix (NSITM), is designed using the principle of the null space of the ITM to separate the unknown sources. Simulation results show that the method successfully separates speech, Gaussian, and ECG signals from their mixtures. The algorithm has also been used to estimate the average fetal ECG (FECG) heart rate; the results indicate considerable improvement in peak estimation over the other algorithms used in this work.
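All three algorithms build on whitened observations; the transformation matrices themselves are specific to the dissertation and are not reproduced here, but the common whitening step can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mixture of two zero-mean, non-Gaussian sources.
n = 5000
S = np.vstack([rng.laplace(size=n),
               np.sign(rng.standard_normal(n))])
A = np.array([[0.9, 0.5],
              [0.3, 1.1]])
X = A @ S

def whiten(X):
    """Centre the observations and transform them so the covariance of
    the result is the identity (the standard first step of a BSS/BSE
    pipeline)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    C = np.cov(Xc)
    d, E = np.linalg.eigh(C)
    W = E @ np.diag(d ** -0.5) @ E.T     # C^{-1/2}
    return W @ Xc, W

Z, W = whiten(X)
print(np.round(np.cov(Z), 6))   # approximately the identity matrix
```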

    Adaptive signal processing algorithms for noncircular complex data

    The complex domain provides a natural processing framework for a large class of signals encountered in communications, radar, biomedical engineering and renewable energy. Statistical signal processing in C has traditionally been viewed as a straightforward extension of the corresponding algorithms in the real domain R; however, recent developments in augmented complex statistics show that, in general, this leads to under-modelling. This direct treatment of complex-valued signals has led to advances in so-called widely linear modelling and the introduction of a generalised framework for the differentiability of both analytic and non-analytic complex and quaternion functions. In this thesis, supervised and blind complex adaptive algorithms capable of processing the generality of complex and quaternion signals (both circular and noncircular) in noise-free and noisy environments are developed; their usefulness in real-world applications is demonstrated through case studies. The focus of this thesis is on the use of augmented statistics and widely linear modelling. The standard complex least mean square (CLMS) algorithm is extended to perform optimally for the generality of complex-valued signals, and the resulting widely linear algorithm is shown to outperform CLMS. Next, the extraction of latent complex-valued signals from large mixtures is addressed. This is achieved by developing several classes of complex blind source extraction algorithms based on fundamental signal properties such as smoothness, predictability and degree of Gaussianity, with an analysis of the existence and uniqueness of the solutions also provided. These algorithms are shown to facilitate real-time applications, such as those in brain-computer interfacing (BCI). Owing to their modified cost functions and the widely linear mixing model, this class of algorithms performs well in both noise-free and noisy environments. Next, based on a widely linear quaternion model, the FastICA algorithm is extended to the quaternion domain to provide separation of the generality of quaternion signals. The enhanced performance of the widely linear algorithms is illustrated in renewable energy and biomedical applications, in particular for the prediction of wind profiles and the extraction of artifacts from EEG recordings.
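The widely linear extension of CLMS (often called ACLMS) is compact enough to sketch. In the toy experiment below, both the improper (noncircular) input and the widely linear teacher system are assumptions made for illustration; they show why the strictly linear update under-models such data while the augmented update does not:

```python
import numpy as np

rng = np.random.default_rng(2)
n, L, mu = 5000, 4, 0.01

# Improper (noncircular) input: real and imaginary parts are correlated.
u = rng.standard_normal(n)
v = rng.standard_normal(n)
x = u + 1j * (0.5 * u + 0.8 * v)

# The teacher is widely linear, so a strictly linear filter under-models it.
h_o = rng.standard_normal(L) + 1j * rng.standard_normal(L)
g_o = rng.standard_normal(L) + 1j * rng.standard_normal(L)

def run_lms(widely_linear):
    h = np.zeros(L, complex)
    g = np.zeros(L, complex)
    err = []
    for k in range(L, n):
        xk = x[k - L:k]
        d = h_o @ xk + g_o @ xk.conj()              # desired (widely linear) output
        y = h @ xk + (g @ xk.conj() if widely_linear else 0.0)
        e = d - y
        h += mu * e * xk.conj()                     # standard CLMS update
        if widely_linear:
            g += mu * e * xk                        # augmented (conjugate) update
        err.append(abs(e) ** 2)
    return np.mean(err[-1000:])                     # steady-state squared error

mse_clms = run_lms(False)
mse_aclms = run_lms(True)
print(mse_aclms < mse_clms)
```

The augmented filter can represent the conjugate-dependent part of the teacher, so its steady-state error is far lower on this improper input.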

    Brain signal analysis in space-time-frequency domain: an application to brain computer interfacing

    In this dissertation, advanced methods for electroencephalogram (EEG) signal analysis in the space-time-frequency (STF) domain, with applications to eye-blink (EB) artifact removal and brain-computer interfacing (BCI), are developed. Two methods for EB artifact removal from EEGs are presented, which respectively incorporate the estimated spatial signatures of the EB artifacts into a signal extraction framework and a robust beamforming framework. In the developed signal extraction algorithm, the EB artifacts are extracted as uncorrelated signals from the EEGs; the algorithm uses the spatial signatures of the EB artifacts as a priori knowledge in the signal extraction stage, and the spatial distributions are identified using the STF model of the EEGs. In the robust beamforming approach, a novel space-time-frequency/time-segment (STF-TS) model for EEGs is first introduced, and the estimated spatial signatures of the EBs are then taken into account to restore the artifact-contaminated EEG measurements. Both algorithms are evaluated using simulated and real EEGs and are shown to produce results comparable to those of conventional approaches. Finally, an effective paradigm for BCI is introduced, in which prior physiological knowledge of spectrally band-limited steady-state movement-related potentials is exploited. The results consolidate the method. (EThOS - Electronic Theses Online Service, United Kingdom)
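The geometric core of signature-based artifact removal can be sketched as an orthogonal projection. Note this is only the underlying idea, not the thesis's STF signature estimation or robust beamformer, and the spatial signature below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multichannel "EEG": 4 channels of background activity plus one
# eye-blink component with a fixed spatial signature a (channel weights).
n_ch, n = 4, 1000
brain = rng.standard_normal((n_ch, n))
a = np.array([1.0, 0.7, 0.3, 0.1])          # hypothetical EB spatial signature
blink = np.zeros(n)
blink[400:420] = 8.0                        # blink burst
X = brain + np.outer(a, blink)

# Given the (estimated) spatial signature, project it out of the recording.
P = np.eye(n_ch) - np.outer(a, a) / (a @ a)  # projector onto a's orthogonal complement
X_clean = P @ X

print(np.linalg.norm(a @ X_clean))   # ~0: the EB direction is removed
```

Projection removes everything along the artifact direction, including some brain activity; the thesis's beamforming formulation is designed to be more selective than this bare sketch.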

    Single channel signal separation using pseudo-stereo model and time-frequency masking

    PhD Thesis. In many practical applications, only one sensor is available to record a mixture of a number of signals. Single-channel blind signal separation (SCBSS) addresses the problem of recovering the original signals from the observed mixture with little or no prior knowledge of the signals. Given a single mixture, a new pseudo-stereo mixing model is developed. A "pseudo-stereo" mixture is formulated by weighting and time-shifting the original single-channel mixture. This creates an artificial resemblance to a stereo signal recorded at one location, which results in the same time delay but different attenuations of the source signals. The pseudo-stereo mixing model relaxes the underdetermined ill-conditioning associated with monaural source separation and exploits the relationship between the readily observed mixture and the pseudo-stereo mixture. This research proposes three novel algorithms based on the pseudo-stereo mixing model and the binary time-frequency (TF) mask. Firstly, the proposed SCBSS algorithm estimates the signals' weighting coefficients from a ratio of the pseudo-stereo mixing model and then constructs a binary maximum-likelihood TF mask for separating the observed mixture. Secondly, a mixture in a noisy background environment is considered: a mixture enhancement algorithm is developed, and the proposed SCBSS algorithm is reformulated using an adaptive coefficient estimator, which computes the signal characteristics for each time frame. This property is desirable for both speech and audio signals, as they are aptly characterized as non-stationary AR processes. Finally, a multiple-time-delay (MTD) pseudo-stereo mixture is developed. The MTD mixture enhances the flexibility as well as the separability of the originally proposed pseudo-stereo mixing model. The separation algorithm for the MTD mixture is also derived, and a comparative analysis between the MTD mixture and the pseudo-stereo mixture is provided. All algorithms have been demonstrated on synthesized and real audio signals. Separation performance has been assessed by measuring the distortion between each original source and its estimate according to the signal-to-distortion ratio (SDR). Results show that all proposed SCBSS algorithms yield significantly better separation performance, with an average SDR improvement ranging from 2.4 dB to 5 dB per source, and that they are computationally faster than the benchmarked algorithms. (Payap University)
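The binary TF-mask ingredient can be sketched with a simple level-ratio (DUET-style) mask on a toy two-channel signal whose sources have different attenuations, which is the property the pseudo-stereo construction is designed to provide. The thesis's maximum-likelihood mask and coefficient estimators are more involved, and time-domain reconstruction is omitted here:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                     # one second of signal
s1 = np.sin(2 * np.pi * 440 * t)           # source 1: 440 Hz tone
s2 = np.sin(2 * np.pi * 1500 * t)          # source 2: 1500 Hz tone

# Two channels in which the sources have different attenuations.
x1 = s1 + s2
x2 = 0.2 * s1 + 0.9 * s2

win, hop = 512, 256
w = np.hanning(win)

def stft(x):
    frames = [w * x[i:i + win] for i in range(0, len(x) - win, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

X1, X2 = stft(x1), stft(x2)

# Binary mask: TF bins where channel 2 is relatively weak belong to the
# weakly coupled source (s1); the remaining bins belong to s2.
ratio = np.abs(X2) / (np.abs(X1) + 1e-12)
mask1 = ratio < 0.5
S1_est, S2_est = X1 * mask1, X1 * (~mask1)

# The energy of each masked spectrogram peaks near the matching tone.
f = np.fft.rfftfreq(win, 1 / fs)
print(f[np.argmax(np.mean(np.abs(S1_est), axis=0))],
      f[np.argmax(np.mean(np.abs(S2_est), axis=0))])
```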

    Acoustic event detection and localization using distributed microphone arrays

    Automatic acoustic scene analysis is a complex task that involves several functionalities: detection (time), localization (space), separation, recognition, etc. This thesis focuses on both acoustic event detection (AED) and acoustic source localization (ASL) when several sources may be simultaneously present in a room. In particular, the experimental work is carried out in a meeting-room scenario. Unlike previous works, which either employed models of all possible sound combinations or additionally used video signals, this thesis tackles the time-overlapping sound problem by exploiting the signal diversity that results from the use of multiple microphone-array beamformers. The core of this work is a rather computationally efficient approach that consists of three processing stages. In the first stage, a set of (null-)steering beamformers is used to carry out diverse partial signal separations, using multiple arbitrarily located linear microphone arrays, each composed of a small number of microphones. In the second stage, each beamformer output goes through a classification step, which uses models for all the targeted sound classes (HMM-GMM in the experiments). In the third stage, the classifier scores, whether intra- or inter-array, are combined using a probabilistic criterion (such as MAP) or a machine-learning fusion technique (the fuzzy integral (FI) in the experiments). This processing scheme is applied to a set of problems of increasing complexity, defined by the assumptions made regarding the identities (plus time endpoints) and/or positions of the sounds. The thesis starts with the problem of unambiguously mapping identities to positions, continues with AED (positions assumed) and ASL (identities assumed), and ends with the integration of AED and ASL in a single system, which needs no assumption about identities or positions. The evaluation experiments are carried out in a meeting-room scenario where two sources are temporally overlapped; one of them is always speech and the other is an acoustic event from a pre-defined set. Two different databases are used: one produced by merging signals actually recorded in the UPC's department smart-room, and another consisting of overlapping sound signals directly recorded in the same room in a rather spontaneous way. The experimental results with a single array show that the proposed detection system performs better than either the model-based system or a blind-source-separation-based system. Moreover, the product-rule-based combination and the FI-based fusion of the scores from the multiple arrays improve the accuracies further, and the posterior position assignment is performed with a very small error rate. Regarding ASL, and assuming an accurate AED output, the one-source localization performance of the proposed system is slightly better than that of the widely used SRP-PHAT system working in an event-based mode, and it performs significantly better than the latter in the more complex two-source scenario. Finally, although the joint system suffers a slight degradation in classification accuracy with respect to the case where the source positions are known, it shows the advantage of carrying out both tasks, recognition and localization, with a single system, and it allows the inclusion of information about the prior probabilities of the source positions. It is also worth noting that, although the acoustic scenario used for experimentation is rather limited, the approach and its formalism were developed for a general case in which the number and identities of the sources are not constrained.
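On the localization side, the PHAT-weighted correlation underlying SRP-PHAT reduces, for a single microphone pair, to GCC-PHAT time-delay estimation, which can be sketched as follows (synthetic broadband source and an integer-sample delay, both assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

s = rng.standard_normal(4096)          # broadband source signal
delay = 12                             # inter-microphone delay in samples
x1 = s
x2 = np.roll(s, delay)                 # delayed copy at the second microphone

def gcc_phat(a, b):
    """GCC-PHAT time-delay estimate for one microphone pair (the
    correlation measure that SRP-PHAT accumulates over pairs and
    candidate positions)."""
    n = len(a) + len(b)
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12             # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:                   # map circular lags to negative delays
        lag -= n
    return lag

print(gcc_phat(x2, x1))   # estimated delay in samples (should equal `delay`)
```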

    Sparse representations of signals for information recovery from incomplete data

    Mathematical modeling of inverse problems in imaging, such as inpainting, deblurring and denoising, results in ill-posed, i.e. underdetermined, linear systems. A sparseness constraint is often used to regularize these problems. That is because many classes of discrete signals (e.g. natural images), when expressed as vectors in a high-dimensional space, are sparse in some predefined basis or frame (fixed or learned). An efficient approach to basis/frame learning is formulated using independent component analysis (ICA) and a biologically inspired linear model of sparse coding. In the learned basis, the inverse problem of data recovery and removal of impulsive noise reduces to solving a sparseness-constrained underdetermined linear system of equations. The same situation occurs in bioinformatics data analysis when a novel type of linear mixture model with a reference sample is employed for feature extraction. The extracted features can be used for disease prediction and biomarker identification.
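Sparseness-constrained underdetermined systems of this kind are commonly solved by iterative shrinkage-thresholding (ISTA); the sketch below works on a synthetic random system rather than a learned ICA basis:

```python
import numpy as np

rng = np.random.default_rng(6)

# Underdetermined linear system y = A x with a sparse x, the setting that
# arises after expressing the signal in a sparsifying basis or frame.
m, n, k = 50, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = 3.0 * rng.standard_normal(k)
y = A @ x_true

def ista(A, y, lam=0.01, n_iter=3000):
    """Iterative shrinkage-thresholding for the l1-regularized
    least-squares problem min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```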

    General Interference Suppression Technique For Diversity Wireless Receivers

    The area of wireless transceiver design is becoming increasingly important due to the rapid growth of the wireless communications market as well as diversified design specifications. Research efforts in this area concentrate on schemes that are capable of increasing system capacity, providing reconfigurability/reprogrammability and reducing hardware complexity. Emerging topics related to these goals include software-defined radio, multiple-input multiple-output (MIMO) systems, code-division multiple access, ultra-wideband systems, etc. This research adopts space diversity and statistical signal processing for digital interference suppression in wireless receivers. The technique simplifies the analog front-end by eliminating the anti-aliasing filters and relaxing the requirements on the IF bandpass filters and A/D converters. As in MIMO systems, multiple antenna elements are used for increased frequency reuse. The suppression of both the image signal and co-channel interference (CCI) is performed simultaneously in the DSP. The signal-processing algorithm used is independent component analysis (ICA). Specifically, the fixed-point FastICA is adopted in the case of static or slowly time-varying channel conditions. For the highly dynamic environments typically encountered in cellular mobile communications, a novel ICA algorithm, OBAI-ICA, is developed, which outperforms FastICA for both linear and abrupt time variations. Several practical implementation issues are also considered, such as the effect of finite-precision arithmetic and the possibility of reducing the number of antennas.
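The fixed-point FastICA iteration adopted for the static case can be sketched in its one-unit, kurtosis-based form on a toy real-valued mixture (the OBAI-ICA update is specific to this work and not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent non-Gaussian sources, instantaneously mixed.
n = 20000
S = np.vstack([np.sign(rng.standard_normal(n)),     # binary (sub-Gaussian)
               rng.laplace(size=n)])                # Laplacian (super-Gaussian)
A = np.array([[0.8, 0.6],
              [0.2, 1.0]])
X = A @ S

# Whitening, which FastICA requires.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

def fastica_one(Z, n_iter=100):
    """One-unit fixed-point FastICA with the cubic nonlinearity g(u) = u^3:
    w <- E{z g(w'z)} - E{g'(w'z)} w, then renormalize."""
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wu = w @ Z
        w_new = (Z * wu ** 3).mean(axis=1) - 3.0 * w
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-10
        w = w_new
        if converged:
            break
    return w

w = fastica_one(Z)
y = w @ Z
# y should match one source up to sign and scale: check correlation.
corr = max(abs(np.corrcoef(y, S[0])[0, 1]), abs(np.corrcoef(y, S[1])[0, 1]))
print(corr > 0.99)
```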