20 research outputs found

    Image Restoration Using Noisy ICA, PCA Compression and Code Shrinkage Technique

    The research reported in the paper aims at developing methodologies for noise removal in image restoration. In real life, there is always some kind of noise present in the observed data; therefore, it has been proposed that the ICA model used in image restoration should include a noise term as well. Different methods have been developed for estimating the ICA model when noise is present. In noisy ICA, we have to deal with the problem of estimating the noise-free realization of the independent components. The noisy ICA model can be used to develop a denoising method, namely sparse code shrinkage [10]. The final part of the paper presents an LMS-optimal PCA compression/decompression scheme, where the noise is annihilated in the feature space. In order to draw conclusions concerning the correlations between the dimensionality reduction and the resulting quality of the restored images, as well as the effect of using both the LMS-optimal compression/decompression technique and the PCA-based noise removal method, several tests were performed on the same set of data. The tests proved that the proposed restoration technique yields high-quality restored images in both cases: when the CSPCA algorithm was applied directly to the initial image, and when it was applied in the reduced feature space, respectively.
    Keywords: ICA, noisy ICA, feature extraction, PCA, image processing, data restoration, noise removal, shrinkage function
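The shrinkage step mentioned in the abstract admits a compact sketch. The soft-shrinkage nonlinearity below is a standard form of the technique; the sparse toy signal, noise level, and threshold are illustrative assumptions, not the paper's actual data or code:

```python
import numpy as np

def soft_shrink(y, t):
    """Soft-shrinkage nonlinearity: pulls small (noise-dominated)
    coefficients to zero while keeping large ones almost intact."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(0)
s = np.zeros(100)
s[[5, 40, 77]] = [4.0, -3.5, 5.0]            # sparse component, toy values
noisy = s + 0.3 * rng.standard_normal(100)   # observed noisy realization
denoised = soft_shrink(noisy, t=0.9)         # shrink in the sparse domain
```

The large spikes survive the shrinkage while almost all noise-only coefficients are set exactly to zero.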

    Supervised and Unsupervised Classification for Pattern Recognition Purposes

    A cluster analysis task has to identify the grouping trends of data, to decide on sound clusters, and to somehow validate the resulting structure. Identifying the grouping tendency existing in a data collection assumes the selection of a framework stated in terms of a mathematical model that allows expressing the degree of similarity between pairs of particular objects, together with quasi-metrics expressing the similarity between an object and a cluster and between clusters, respectively. In supervised classification, we are provided with a collection of preclassified patterns, and the problem is to label a newly encountered pattern. Typically, the given training patterns are used to learn descriptions of the classes, which in turn are used to label a new pattern. The final section of the paper presents a new methodology for supervised learning based on PCA, where the classes are represented in the measurement/feature space by continuous repartitions.
    Keywords: clustering, supervised classification, pattern recognition, dissimilarity measures, PCA (principal component analysis)
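One common way PCA supports supervised classification of the kind discussed above is to represent each class by its mean and leading principal directions, then label a new pattern by the class with the smallest reconstruction error. This is a minimal sketch on hypothetical synthetic data, not necessarily the paper's exact methodology:

```python
import numpy as np

def class_model(X, k):
    """Mean and top-k principal directions of one class (rows = patterns)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]          # rows of Vt = eigenvectors of the covariance

def reconstruction_error(x, model):
    """Distance from pattern x to the class's affine PCA subspace."""
    mu, V = model
    r = x - mu
    return np.linalg.norm(r - V.T @ (V @ r))

rng = np.random.default_rng(1)
# Two synthetic classes concentrated along different directions.
A = np.column_stack([3.0 * rng.standard_normal(60), 0.2 * rng.standard_normal(60)])
B = np.column_stack([0.2 * rng.standard_normal(60), 3.0 * rng.standard_normal(60)]) + 5.0
x = np.array([2.5, 0.1])       # new pattern to label
label = "A" if reconstruction_error(x, class_model(A, 1)) < \
               reconstruction_error(x, class_model(B, 1)) else "B"
```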

    The Use of Features Extracted from Noisy Samples for Image Restoration Purposes

    An important feature of neural networks is their ability to learn from their environment and, through learning, to improve their performance in some sense. In the following, we restrict the development to feature-extracting unsupervised neural networks derived on the basis of the biologically motivated Hebbian self-organizing principle, which is conjectured to govern natural neural assemblies, and the classical principal component analysis (PCA) method used by statisticians for almost a century for multivariate data analysis and feature extraction. The research work reported in the paper aims to propose a new image reconstruction method based on the features extracted from the noise, given by the principal components of the noise covariance matrix.
    Keywords: feature extraction, PCA, Generalized Hebbian Algorithm, image restoration, wavelet transform, multiresolution support set
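The Hebbian feature-extraction principle named in the keywords is commonly realized via Sanger's Generalized Hebbian Algorithm, whose rows converge to the leading principal directions of the input stream. A minimal sketch on synthetic data; the learning rate, dimensions, and input distribution are illustrative choices:

```python
import numpy as np

def gha_step(W, x, lr):
    """One update of Sanger's Generalized Hebbian Algorithm."""
    y = W @ x
    # The lower-triangular term performs an implicit Gram-Schmidt ordering,
    # so row k learns the k-th principal direction.
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(2)
# Stream of inputs whose variance is dominated by the first coordinate.
data = rng.standard_normal((5000, 3)) * np.array([3.0, 1.0, 0.3])
W = 0.1 * rng.standard_normal((2, 3))        # two feature-extracting units
for x in data:
    W = gha_step(W, x, lr=0.002)
```

After the pass over the stream, the first row of W aligns with the dominant input direction, the unsupervised analogue of the first principal component.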

    A Survey on Potential of the Support Vector Machines in Solving Classification and Regression Problems

    Kernel methods and support vector machines have become among the most popular learning-from-examples paradigms. Several application areas make use of SVM approaches, for instance handwritten character recognition, text categorization, face detection, pharmaceutical data analysis, and drug design. Adapted SVMs have also been proposed for time series forecasting and, in computational neuroscience, as a tool for the detection of symmetry when eye movement is connected with attention and visual perception. The aim of the paper is to investigate the potential of SVMs in solving classification and regression tasks, as well as to analyze the computational complexity corresponding to the different methodologies aimed at solving the series of arising sub-problems.
    Keywords: Support Vector Machines, Kernel-Based Methods, Supervised Learning, Regression, Classification
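The kernel idea underlying the SVM approaches surveyed above can be illustrated by computing an RBF Gram matrix directly, the implicit inner product the classifier optimizes over. The data points and gamma value below are arbitrary illustrative choices:

```python
import numpy as np

def rbf_gram(X, gamma):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2): the implicit
    inner product an RBF-kernel SVM uses instead of an explicit feature map."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    return np.exp(-gamma * np.maximum(d2, 0.0))      # clip tiny negatives

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_gram(X, gamma=0.5)
```

The resulting matrix is symmetric positive semi-definite with unit diagonal, which is exactly what makes the dual QP of an SVM well posed.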

    Mathematics behind a Class of Image Restoration Algorithms

    Restoration techniques are usually oriented toward modeling the type of degradation in order to infer the inverse process for recovering the given image. This approach usually involves choosing a criterion to numerically evaluate the quality of the resulting image, and consequently the restoration process can be expressed in terms of an optimization problem. Most approaches are essentially based on additional hypotheses concerning the statistical properties of images. However, in real-life applications, there is not enough information to support a certain particular image model, and consequently model-free developments have to be used instead. In our approaches, the problem of image denoising/restoration is viewed as an information transmission/processing system, where the signal representing a certain clean image is transmitted through a noisy channel and only a noise-corrupted version is available. The aim is to recover the signal as much as possible by using different noise removal techniques, that is, to build an accurate approximation of the initial image. Unfortunately, a series of image qualities, for instance clarity, brightness, and contrast, are affected by the noise removal techniques, and consequently there is a need to partially restore them on the basis of information extracted exclusively from the data. Following a brief description of the image restoration framework provided in the introductory part, a PCA-based methodology is presented in the second section of the paper. The basics of a new information-based development for image restoration purposes and scatter-matrix-based methods are given in the next two sections. The final section contains concluding remarks and suggestions for further work.
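The PCA-based view of denoising can be sketched as compression/decompression: keep only the leading components, so noise spread across all coordinates is largely annihilated. The low-dimensional synthetic signal, dimensions, and noise level below are illustrative assumptions, not the paper's images:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "image rows": a clean signal confined to a 2-D subspace of R^8,
# observed through additive Gaussian noise.
basis = np.linalg.qr(rng.standard_normal((8, 2)))[0]     # orthonormal columns
clean = 3.0 * rng.standard_normal((500, 2)) @ basis.T
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

# PCA compression/decompression: keep only the k leading components.
mu = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - mu, full_matrices=False)
V = Vt[:2]                                               # k = 2
restored = mu + (noisy - mu) @ V.T @ V
```

Only the noise lying inside the retained subspace survives; the rest is removed along with the discarded components.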

    Partially Supervised Approach in Signal Recognition

    The paper focuses on the potential of principal-direction-based approaches in signal classification and recognition. In probabilistic models, the classes are represented in terms of multivariate density functions, and an object coming from a certain class is modeled as a random vector whose repartition has the density function corresponding to that class. In cases where there is no statistical information concerning the set of density functions corresponding to the classes involved in the recognition process, estimates based on the information extracted from the available data are usually used instead. In the proposed methodology, the characteristics of a class are given by a set of eigenvectors of the sample covariance matrix. The overall dissimilarity of an object X with a given class C is computed as the disturbance of the structure of C when X is allotted to C. A series of tests concerning the behavior of the proposed recognition algorithm is reported in the final section of the paper.
    Keywords: signal processing, classification, pattern recognition, compression/decompression
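The disturbance-based dissimilarity described above can be sketched by measuring how much allotting X to C rotates the leading eigenvector of C's sample covariance. This is a minimal one-eigenvector illustration on synthetic data, not the paper's full algorithm:

```python
import numpy as np

def leading_direction(X):
    """First eigenvector of the sample covariance of the rows of X."""
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    return V[:, -1]                      # eigh sorts eigenvalues ascending

def disturbance(C_samples, x):
    """Dissimilarity of object x to class C, measured as how much
    allotting x to C rotates the class's leading principal direction."""
    v0 = leading_direction(C_samples)
    v1 = leading_direction(np.vstack([C_samples, x]))
    return 1.0 - abs(v0 @ v1)            # 0 when the structure is unchanged

rng = np.random.default_rng(4)
# Class C: elongated cloud along the first axis.
C = np.column_stack([3.0 * rng.standard_normal(100),
                     0.2 * rng.standard_normal(100)])
inlier = np.array([2.0, 0.1])    # consistent with C's shape
outlier = np.array([0.0, 40.0])  # perpendicular to C's main direction
```

An object that fits the class leaves the eigenstructure essentially unchanged, while an atypical object rotates it sharply.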

    Denoising Techniques Based on the Multiresolution Representation

    So far, considerable research effort has been invested in the area of using statistical methods for image processing purposes, yielding a significant number of models that aim to improve as much as possible the existing and currently used processing techniques, some of them based on wavelet representations of images. Among them, the simplest and most attractive ones use the Gaussian assumption about the distribution of the wavelet coefficients. This model has been successfully used in image denoising and restoration. Its limitation comes from the fact that only the first-order statistics of the wavelet coefficients are taken into account, while the higher-order ones are ignored. The dependencies between wavelet coefficients can be formulated explicitly or implicitly. The multiresolution representation is used to develop a class of algorithms for noise removal in the case of normal models. The multiresolution algorithms perform the restoration tasks by combining, at each resolution level and according to a certain rule, the pixels of a binary support image. The values of the support image pixels are either 1 or 0, depending on their degree of significance. At each resolution level, the contiguous areas of the support image corresponding to 1-valued pixels are taken as possible objects of the image. Our work reports two attempts at using the multiresolution-based algorithms for restoration purposes in the case of normally distributed noise. Several results obtained using our new restoration algorithm are presented in the final sections of the paper.
    Keywords: multiresolution support, wavelet transform, filtering techniques, statistically significant wavelet coefficients
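The binary significance support described above can be sketched with a one-level Haar transform: detail coefficients are marked significant (support = 1) when they exceed a threshold tied to the noise level, here assumed known. A minimal 1-D illustration on synthetic data, not the paper's algorithm:

```python
import numpy as np

def haar_level(s):
    """One level of the (orthonormal) Haar wavelet transform."""
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # coarse / approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def inverse_haar_level(a, d):
    s = np.empty(2 * a.size)
    s[0::2] = (a + d) / np.sqrt(2)
    s[1::2] = (a - d) / np.sqrt(2)
    return s

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 256)
clean = np.where(t > 0.5, 2.0, 0.0)             # piecewise-constant "image row"
sigma = 0.2                                      # noise level, assumed known here
noisy = clean + sigma * rng.standard_normal(256)

a, d = haar_level(noisy)
support = np.abs(d) > 3 * sigma                  # binary significance support
denoised = inverse_haar_level(a, np.where(support, d, 0.0))
```

Only statistically significant detail coefficients are kept; the rest are treated as noise and zeroed before the inverse transform.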

    Extensions of the SVM Method to the Non-Linearly Separable Data

    The main aim of the paper is to briefly investigate the most significant topics of the currently used methodologies for solving and implementing SVM-based classifiers. Following a brief introductory part, the basics of the linear and non-linear SVM models are exposed in the next two sections. The problem of the soft-margin SVM is exposed in the fourth section of the paper. The currently used methods for solving the resulting QP problem require access to all labeled samples at once, and the computation of an optimal solution is of complexity O(N²). Several approaches have been proposed to reduce the computational complexity, such as interior point (IP) methods, decomposition methods such as Sequential Minimal Optimization (SMO), and gradient-based methods for solving the primal SVM problem. Several approaches based on genetic search for the more general problem of identifying the optimal type of kernel from a pre-specified set of kernel types (linear, polynomial, RBF, Gaussian, Fourier, B-spline, spline, sigmoid) have recently been proposed. The fifth section of the paper is a brief survey of the most outstanding new techniques reported so far in this respect.
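A gradient-based primal solver of the kind mentioned above can be sketched with the Pegasos-style stochastic sub-gradient method, one known alternative to solving the O(N²) dual QP. The dataset and hyperparameters below are illustrative assumptions:

```python
import numpy as np

def pegasos(X, y, lam=0.01, epochs=200, seed=0):
    """Stochastic sub-gradient descent on the primal soft-margin objective
    lam/2 * ||w||^2 + mean hinge loss."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)          # standard Pegasos step size
            margin = y[i] * (X[i] @ w)
            w = (1 - eta * lam) * w        # regularization gradient (shrink)
            if margin < 1:                 # hinge active: take its sub-gradient
                w = w + eta * y[i] * X[i]
    return w

rng = np.random.default_rng(1)
pos = rng.standard_normal((20, 2)) + 3.0   # class +1 around (3, 3)
neg = rng.standard_normal((20, 2)) - 3.0   # class -1 around (-3, -3)
X = np.hstack([np.vstack([pos, neg]), np.ones((40, 1))])  # bias feature
y = np.r_[np.ones(20), -np.ones(20)]
w = pegasos(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

Each step touches a single sample, which is precisely why primal gradient methods avoid the all-samples-at-once requirement of the dual QP solvers.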

    New Approaches of NARX-Based Forecasting Model. A Case Study on CHF-RON Exchange Rate

    The work reported in the paper focuses on the prediction of the Swiss Franc-Romanian Leu exchange rate against the US Dollar-Romanian Leu exchange rate using the NARX model. We propose two new forecasting methods based on the NARX model, considering both additional testing and network retraining in order to improve the generalization capacity of the trained neural network. The forecasting accuracy of the two methods is evaluated in terms of one of the most popular quality measures, namely the weighted RMSE. The comparative analysis, together with experimental results and concluding remarks, is reported in the final part of the paper. The performance of the proposed methodologies is evaluated by a long series of tests, the results being very encouraging as compared to similar developments. Based on the conducted experiments, we conclude that both resulting algorithms perform better than the classical one. Moreover, the retraining method in which the network is conserved over time outperforms the one in which only additional testing is used.
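A NARX model regresses the target series on lagged values of both the series itself and an exogenous input. The sketch below uses a linear least-squares readout as a simple stand-in for the neural network, on synthetic data (not the CHF-RON/USD-RON series); lag depth and coefficients are illustrative assumptions:

```python
import numpy as np

def narx_design(y, x, d):
    """Design matrix of lagged values: the row for time t holds
    y[t-1..t-d] and x[t-1..t-d]; the target is y[t]."""
    rows = [np.r_[y[t - d:t][::-1], x[t - d:t][::-1]] for t in range(d, len(y))]
    return np.array(rows), y[d:]

rng = np.random.default_rng(7)
n, d = 400, 3
x = rng.standard_normal(n)                   # exogenous input series
y = np.zeros(n)                              # endogenous series to forecast
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.3 * x[t - 1] + 0.05 * rng.standard_normal()

A, b = narx_design(y, x, d)
coef, *_ = np.linalg.lstsq(A, b, rcond=None) # linear stand-in for the network
rmse = np.sqrt(np.mean((A @ coef - b) ** 2))
```

Because the generating process really is a function of one lag of y and x, the lagged regression recovers it almost down to the injected noise level.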