
    Blind image separation based on exponentiated transmuted Weibull distribution

    Blind image separation has been widely investigated in recent years, and a number of feature extraction algorithms that operate directly on such image structures have been developed. A typical application is the separation of mixed fingerprints recovered from a crime scene, where a mixture of two or more overlapping prints must be separated before identification. In this paper, we propose a new technique for separating multiple mixed images based on the exponentiated transmuted Weibull distribution. The parameters of the corresponding score functions are estimated adaptively by an efficient method combining maximum likelihood with a genetic algorithm. We also evaluate the accuracy of the proposed distribution and compare the algorithmic performance of this approach against previously used generalized distributions. The numerical results show that the proposed distribution is flexible and efficient. Comment: 14 pages, 12 figures, 4 tables. International Journal of Computer Science and Information Security (IJCSIS), Vol. 14, No. 3, March 2016, pp. 423-433.

    Non Linear Blind Source Separation Using Different Optimization Techniques

    The Independent Component Analysis (ICA) technique has been used for blind source separation of nonlinear mixtures. This project performs blind source separation of a nonlinear mixture of signals using mutual independence as the evaluation criterion. The linear mixing stage is modeled by the FastICA algorithm, while the nonlinear mixing stage is modeled by an odd polynomial function whose parameters are updated by four separate optimization techniques: Particle Swarm Optimization, real-coded Genetic Algorithm, binary Genetic Algorithm and Bacterial Foraging Optimization. The separated outputs in each case were studied and the mean square errors compared, giving an indication of the effectiveness of each optimization technique
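All four optimizers play the same role here: searching the parameter space of the odd polynomial nonlinearity. As a hedged illustration (our own minimal implementation, not the project's code; the inertia and acceleration constants are common textbook defaults), a bare-bones Particle Swarm Optimization can recover the coefficient of an odd cubic mixing function when the cost is known:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimization. Inertia 0.7 and
    cognitive/social weights 1.5 are illustrative defaults."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()                          # per-particle best position
    pval = np.array([cost(p) for p in x])     # per-particle best cost
    gbest = pbest[pval.argmin()].copy()       # swarm-wide best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = np.array([cost(p) for p in x])
        improved = val < pval
        pbest[improved] = x[improved]
        pval[improved] = val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest

# Toy demo: fit the coefficient of an odd cubic g(s) = s + a*s**3.
# The target is known here for testability; a blind criterion such as
# mutual independence would replace this squared error in practice.
s = np.random.default_rng(1).normal(size=2000)
mixed = s + 0.3 * s**3
fit = pso(lambda p: np.mean((s + p[0] * s**3 - mixed) ** 2), dim=1)
```

The real-coded GA, binary GA and bacterial foraging variants compared in the abstract would slot in as drop-in replacements for `pso` with the same cost function.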

    An efficient optimized independent component analysis method based on genetic algorithm

    Three simulation experiments are designed to evaluate and compare three common independent component analysis (ICA) algorithms: FastICA, JADE and extended Infomax. The results show that none of the three can precisely separate mixtures of super-Gaussian and sub-Gaussian sources, and that FastICA fails to recover weak source signals from mixed signals. To address this, an ICA algorithm is proposed that applies a genetic algorithm to minimize the difference between the joint probability and the product of the marginal probabilities of the separated signals. The computation procedure, especially the fitness evaluation when the signals are in discrete form, is discussed in detail. The validity of the proposed algorithm is demonstrated by simulation tests, and the results indicate that it significantly outperforms the three common algorithms. Finally, the proposed algorithm is applied to separate a mixture of rolling-bearing sound and electromotor signals, with satisfactory results
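The fitness criterion described above, the gap between the joint probability and the product of marginals of the separated signals, can be sketched for discrete (histogram-binned) signals as follows. This is our own minimal reading of the criterion; the bin count and the L1 distance are illustrative choices, not the paper's:

```python
import numpy as np

def independence_fitness(y1, y2, bins=16):
    """L1 distance between the joint histogram of two separated signals
    and the product of their marginals. Zero means empirically
    independent; a GA would minimize this value."""
    joint, _, _ = np.histogram2d(y1, y2, bins=bins)
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of y1
    py = joint.sum(axis=0, keepdims=True)   # marginal of y2
    return np.abs(joint - px * py).sum()

rng = np.random.default_rng(0)
a, b = rng.normal(size=5000), rng.normal(size=5000)
f_indep = independence_fitness(a, b)          # small: independent pair
f_dep = independence_fitness(a, a + 0.1 * b)  # larger: dependent pair
```

A GA individual would encode a candidate demixing matrix, and this function (applied to the candidate's outputs) would supply its fitness.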

    BMICA-independent component analysis based on B-spline mutual information estimator

    The information-theoretic concept of mutual information provides a general framework for evaluating dependencies between variables. Its estimation using B-splines, however, has not previously been used to build an approach to Independent Component Analysis. In this paper we present a B-spline estimator of mutual information for finding the independent components in mixed signals. Tested on electroencephalography (EEG) signals, the resulting BMICA (B-Spline Mutual Information Independent Component Analysis) exhibits better performance than the standard Independent Component Analysis algorithms FastICA, JADE, SOBI and EFICA in similar simulations. BMICA was also found to be more reliable than the renowned FastICA
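The core idea of a B-spline mutual information estimator is to replace hard histogram bins with smooth fractional bin memberships before computing mutual information. A minimal sketch using order-2 (linear) B-spline weights, our own simplification of the approach with an illustrative bin count:

```python
import numpy as np

def bspline_weights(x, nbins):
    """Order-2 (linear) B-spline membership: each sample contributes
    fractionally to its two nearest bins instead of one hard bin."""
    u = (x - x.min()) / (x.max() - x.min() + 1e-12) * (nbins - 1)
    lo = np.floor(u).astype(int)
    frac = u - lo
    w = np.zeros((len(x), nbins))
    w[np.arange(len(x)), lo] = 1.0 - frac
    w[np.arange(len(x)), np.minimum(lo + 1, nbins - 1)] += frac
    return w

def bspline_mi(x, y, nbins=10):
    """Mutual information (nats) from soft joint and marginal
    distributions built with B-spline bin memberships."""
    wx, wy = bspline_weights(x, nbins), bspline_weights(y, nbins)
    pxy = wx.T @ wy / len(x)                 # soft joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
x, y = rng.normal(size=4000), rng.normal(size=4000)
mi_self, mi_indep = bspline_mi(x, x), bspline_mi(x, y)
```

An ICA search would then rotate the demixing matrix to minimize this estimate between output pairs.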

    Leakage discharge separation in multi-leaks pipe networks based on improved Independent Component Analysis with Reference (ICA-R) algorithm

    Existing leakage assessment methods are neither accurate nor timely enough to meet the needs of water companies. In this paper, a methodology based on the Independent Component Analysis with Reference (ICA-R) algorithm is proposed to give a more accurate estimate of leakage discharge in multi-leak water distribution networks (WDNs) without considering the specific individuality of any single leak. The algorithm is improved to prevent erroneous convergence in multi-leak pipe networks. An example EPANET model and a physical experimental platform were then built to simulate and evaluate flow in multi-leak WDNs, and the leakage flow rate was calculated with both the improved ICA-R algorithm and the FastICA algorithm. The simulation results show that the improved ICA-R algorithm has better performance

    Blind source separation and feature extraction in concurrent control charts pattern recognition: Novel analyses and a comparison of different methods

    Control charts are among the main tools in statistical process control (SPC) and have been used extensively for monitoring industrial processes. Currently, besides single control charts, there is interest in concurrent ones, characterized by the simultaneous presence of two or more single control charts. As a consequence, the individual patterns may be mixed, hindering the identification of a non-random pattern acting in the process; this phenomenon is referred to as concurrent charts. In view of this problem, our first goal is to investigate the importance of an efficient separation step for pattern recognition. We then compare the efficiency of different Blind Source Separation (BSS) methods in the task of unmixing concurrent control charts. Furthermore, these BSS methods are combined with shape and statistical features in order to verify the performance of each in pattern classification. In addition, the robustness of the best-performing approach is tested in scenarios with different non-randomness levels and with imbalanced datasets provided to the classifier. After simulating different patterns and applying several separation methods, the results show that the recognition rate is strongly influenced by the separation and feature extraction steps, and that selecting efficient separation methods is fundamental to achieving high classification rates

    A Novel Blind Separation Method in Magnetic Resonance Images

    A novel method based on a global search algorithm is proposed in this paper to separate MR images blindly. The key point of the method is the formulation of a new matrix that forms a generalized permutation of the original mixing matrix. Since the lowest entropy is closely associated with the smoothness of the source images, blind image separation can be formulated as an entropy minimization problem, exploiting the property that most neighboring pixels are smooth. A new dataset is obtained by multiplying the mixed-signal matrix by the inverse of the new matrix, and the search technique is then used to find the lowest entropy values of the new data. Accordingly, the separation weight vector associated with the lowest entropy values can be obtained. Compared with conventional independent component analysis (ICA), the proposed algorithm does not require the original signals to be independent. Simulation results on MR images further demonstrate the advantages of the proposed method
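The entropy-minimization idea can be illustrated on a toy 1-D analogue of two mixed images (a hedged sketch with synthetic signals, not MR data; the grid search here merely stands in for the paper's global search algorithm):

```python
import numpy as np

def hist_entropy(signal, bins=64):
    """Shannon entropy (nats) of a signal's histogram; a cleanly
    separated, smooth source yields a lower value than a mixture."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Two synthetic sources and their linear mixtures.
t = np.linspace(0.0, 1.0, 4096)
s1, s2 = np.sin(8 * np.pi * t), np.sign(np.sin(3 * np.pi * t))
x1, x2 = 0.7 * s1 + 0.3 * s2, 0.4 * s1 + 0.6 * s2

# Search for the separation weight whose output has the lowest entropy;
# at the minimum, y = x1 - w*x2 cancels one of the two sources.
weights = np.linspace(-2.0, 2.0, 401)
best_w = min(weights, key=lambda w: hist_entropy(x1 - w * x2))
```

In the paper's setting the candidate weight vectors come from the generalized-permutation construction rather than a fixed grid, but the selection rule, keeping the weight with the lowest output entropy, is the same.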

    A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    Background: Bioinformatics data analysis often uses a linear mixture model that represents samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using the mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. In contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample, representing the control and/or case (disease) group, and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control-specific, case-specific or not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of the decomposition, the strength of expression of each feature can vary across samples, yet features are still allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, the case- and control-specific components can be used for classification, which is not the case with standard factorization methods.
Moreover, the component selected by the proposed method as disease-specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. Unlike standard matrix factorization methods, this can be achieved on a sample-(experiment-)by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease- and control-specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally increases prediction accuracy