
    A Bayes risk minimization machine for example-dependent cost classification

    A new method for example-dependent cost (EDC) classification is proposed. The method extends a recently introduced training algorithm for neural networks. The surrogate cost function is an estimate of the Bayes risk, where the conditional probabilities for each class are defined in terms of a 1-D Parzen window estimator of the output of (discriminative) neural networks. This probability density is modeled so as to allow easy minimization of a sampled version of the Bayes risk. The conditional probabilities that appear in the definition of the risk are never explicitly estimated; instead, the risk is minimized directly by a gradient-descent algorithm. The proposed method has been evaluated using linear classifiers and neural networks with both shallow (a single hidden layer) and deep (multiple hidden layers) architectures. The experimental results show the potential and flexibility of the proposed method, which can handle EDC classification under the imbalanced-data situations that commonly appear in this kind of problem. This work has been partly supported by grants CASI-CAM-CM (S2013/ICE-2845, Madrid C/ FEDER, EUSF) and MacroADOBE (TEC2015-67719-P, MINECO/FEDER, UE).
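
    As a rough illustration of the idea (not the authors' exact algorithm), the sketch below minimizes, by gradient descent, a sampled cost-weighted risk in which the hard decision step is smoothed by a 1-D Gaussian Parzen window applied to the discriminant output; the linear model, toy data, costs, and window width h are illustrative assumptions.

        import numpy as np
        from math import erf

        rng = np.random.default_rng(0)
        n = 200
        X = np.r_[rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))]
        y = np.r_[-np.ones(n), np.ones(n)]          # labels in {-1, +1}
        c = rng.uniform(0.5, 2.0, 2 * n)            # example-dependent misclassification costs

        Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0))))  # Gaussian CDF

        w, b = np.zeros(2), 0.0
        h, lr = 0.5, 0.1                            # Parzen window width, learning rate
        for _ in range(500):
            o = X @ w + b                           # scalar discriminant output
            t = -y * o / h                          # smoothed "wrong side" argument
            pdf = np.exp(-t**2 / 2.0) / np.sqrt(2.0 * np.pi)
            g = c * pdf * (-y / h) / len(c)         # d/do of c * Phi(-y o / h), averaged
            w -= lr * (X.T @ g)
            b -= lr * g.sum()

        risk = np.mean(c * Phi(-y * (X @ w + b) / h))
        err = np.mean(np.sign(X @ w + b) != y)
        print(f"smoothed cost-weighted risk: {risk:.3f}, error rate: {err:.3f}")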

    A new boosting design of Support Vector Machine classifiers

    Boosting algorithms pay attention to the particular structure of the training data when learning, by iteratively emphasizing the importance of the training samples according to how difficult they are to classify correctly. If common kernel Support Vector Machines (SVMs) are used as base learners to construct a Real AdaBoost ensemble, the resulting ensemble can easily be compacted into a monolithic architecture by simply combining the weights that correspond to the same kernels when they appear in different learners, so the operational computational effort does not grow. In this way, the performance advantage that boosting provides can be obtained for monolithic SVMs, i.e., without paying a classification-time computational penalty for using many learners. However, SVMs are both stable and strong, and using them for boosting requires destabilizing and weakening them; previous attempts in this direction have shown only moderate success. In this paper, we propose the combination of a new, appropriately designed subsampling process and an SVM algorithm that permits sparsity control, in order to overcome the difficulties of boosting SVMs and obtain improved designs. Experimental results support the effectiveness of the approach, not only in performance but also in the compactness of the resulting classifiers, and show that combining both design ideas is needed to arrive at these advantageous designs. This work was supported in part by the Spanish MICINN under Grants TEC 2011-22480 and TIN 2011-24533.
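
    A minimal sketch of the general scheme the abstract describes, assuming toy data and hyper-parameters: weak SVMs are trained on boosted subsamples, and their kernel expansions are folded into a single monolithic expansion over the training set. The discrete-AdaBoost-style weighting, RBF kernel width, and subsampling rate are illustrative assumptions, not the paper's exact design.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(1)
        n = 300
        X = rng.normal(size=(n, 2))
        y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)     # XOR-like toy problem

        gamma, M = 1.0, 10
        D = np.full(n, 1.0 / n)                        # boosting emphasis over samples
        beta = np.zeros(n)                             # accumulated monolithic kernel weights
        bias = 0.0

        for m in range(M):
            idx = rng.choice(n, size=n // 2, replace=False)   # subsampling step
            svm = SVC(kernel="rbf", gamma=gamma, C=1.0)
            svm.fit(X[idx], y[idx], sample_weight=D[idx] * n)
            f = svm.decision_function(X)
            err = np.clip(D[np.sign(f) != y].sum(), 1e-6, 1 - 1e-6)
            alpha = 0.5 * np.log((1 - err) / err)      # learner weight (discrete AdaBoost form)
            # fold this learner's kernel expansion into the single monolithic expansion
            beta_m = np.zeros(n)
            beta_m[idx[svm.support_]] = svm.dual_coef_.ravel()
            beta += alpha * beta_m
            bias += alpha * svm.intercept_[0]
            D *= np.exp(-alpha * y * np.sign(f))       # re-emphasize hard samples
            D /= D.sum()

        F = rbf_kernel(X, X, gamma=gamma) @ beta + bias   # one compact machine
        print("compacted ensemble training accuracy:", np.mean(np.sign(F) == y))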

    Plant identification via adaptive combination of transversal filters

    In least mean-square (LMS) filtering applications, it is important to improve the trade-off between convergence speed and residual error that is imposed by the choice of a fixed step size. In this paper, we propose a mixture approach that adaptively combines two independent LMS filters, one with a large and one with a small step size, to obtain fast convergence together with low misadjustment during stationary periods. Plant identification simulation examples show the effectiveness of our method compared with previous variable step-size approaches. The combination approach can be straightforwardly extended to other kinds of filters, as illustrated with a convex combination of recursive least-squares (RLS) filters.
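
    The core update rules can be sketched compactly. Below, two LMS filters with different step sizes run in parallel and their outputs are mixed by a sigmoid-parameterized weight adapted by stochastic gradient descent on the combined error; the plant, noise level, and step sizes are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        h = np.array([0.7, -0.3, 0.2, 0.1])            # unknown plant to identify
        N, L = 5000, len(h)
        u = rng.normal(size=N)                         # white input
        d = np.convolve(u, h)[:N] + 0.01 * rng.normal(size=N)

        w1, w2 = np.zeros(L), np.zeros(L)              # fast and slow LMS filters
        mu1, mu2, mu_a, a = 0.1, 0.005, 100.0, 0.0
        for n in range(L, N):
            x = u[n - L + 1:n + 1][::-1]               # regressor (most recent first)
            y1, y2 = w1 @ x, w2 @ x
            lam = 1.0 / (1.0 + np.exp(-a))             # mixing weight in (0, 1)
            e = d[n] - (lam * y1 + (1 - lam) * y2)     # combined error
            e1, e2 = d[n] - y1, d[n] - y2
            w1 += mu1 * e1 * x                         # independent LMS updates
            w2 += mu2 * e2 * x
            a += mu_a * e * (y1 - y2) * lam * (1 - lam)  # gradient step on the mixing weight
            a = np.clip(a, -4, 4)                      # keep the sigmoid from saturating

        wc = lam * w1 + (1 - lam) * w2
        print("final weight error (dB):", 10 * np.log10(np.sum((wc - h)**2)))

    At each stage the combination behaves close to the better of the two filters: the fast one during convergence, the slow one at steady state.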

    Some new results in sampling deterministic signals

    Whittaker's (or Shannon's) Sampling Theorem is a well-known interpolation formula that has been extended in many directions. In this paper, we introduce two new formulations:
    - The first follows Papoulis' Generalized Sampling Expansion for reconstructing a signal from regular samples of N (linear, time-invariant) functionals of it, taking the samples at 1/N of the Nyquist rate, but generalizes it to include linear T-periodically time-varying systems. This formulation is closely related to works that extend sampling in other directions.
    - The second generalizes Linden's proof of Kohlenberg's sampling theorem for a bandpass signal, in order to maintain the minimum (average) sampling rate and to obtain a separate interpolation of the in-phase and quadrature components of the signal. This follows Grace-Pitt-Brown's theory of bandpass sampling.
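
    For reference, the classical formula both results extend, together with the generic shape of a Papoulis-type expansion, in standard textbook notation (lightly adapted; the symbols are illustrative):

        % Whittaker-Shannon: a signal bandlimited to sigma rad/s, sampled at the
        % Nyquist interval T = pi/sigma, is reconstructed by sinc interpolation.
        x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
               \frac{\sin\bigl(\pi (t - nT)/T\bigr)}{\pi (t - nT)/T}

        % Papoulis-type generalized expansion: the outputs f_k = h_k * x of N LTI
        % systems, each sampled at 1/N of the Nyquist rate (interval NT), jointly
        % determine x through N interpolation functions y_k derived from the
        % frequency responses H_k.
        x(t) = \sum_{k=1}^{N} \sum_{n=-\infty}^{\infty} f_k(nNT)\, y_k(t - nNT)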

    Improving deep learning performance with missing values via deletion and compensation

    Proceedings of: International Work-Conference on the Interplay between Natural and Artificial Computation (IWINAC 2015).
    Missing values in a dataset are one of the most common difficulties in real applications. Many machine-learning techniques have been proposed in the literature to face this problem. In this work, the great representation capability of stacked denoising auto-encoders is used to obtain a new method for imputing missing values, based on two ideas: deletion and compensation. The method improves imputation performance by artificially deleting values in the input features and using them as targets in the training process. However, although this deletion is shown to be very effective, it may cause an imbalance between the distributions of the training and test sets. To solve this issue, a compensation mechanism is proposed, based on a slight modification of the error function to be optimized. Experiments over several datasets show that deletion and compensation yield improvements not only in imputation but also in classification, in comparison with other classical techniques. The work of A. R. Figueiras-Vidal has been partly supported by Grant Macro-ADOBE (TEC 2015-67719-P, MINECO/FEDER&FSE). The work of J. L. Sancho-Gómez has been partly supported by Grant AES 2017 (PI17/00771, MINECO/FEDER).
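
    A minimal sketch of the deletion-and-compensation idea, assuming a toy dataset, a single-hidden-layer auto-encoder trained by hand, and a simple per-entry loss weight w_del as a stand-in for the paper's modified error function:

        import numpy as np

        rng = np.random.default_rng(3)
        n, d, hdim = 500, 8, 16
        X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated toy features

        W1 = rng.normal(0, 0.1, (d, hdim)); b1 = np.zeros(hdim)
        W2 = rng.normal(0, 0.1, (hdim, d)); b2 = np.zeros(d)
        lr, p_del, w_del = 0.01, 0.25, 2.0

        for _ in range(300):
            M = rng.random((n, d)) < p_del             # artificially deleted entries
            Xc = np.where(M, 0.0, X)                   # corrupted input
            H = np.tanh(Xc @ W1 + b1)                  # encoder
            R = H @ W2 + b2                            # reconstruction
            Wgt = np.where(M, w_del, 1.0)              # compensation: emphasize deleted cells
            G = Wgt * (R - X) / n                      # gradient of the weighted MSE
            gW2 = H.T @ G; gb2 = G.sum(0)
            GH = (G @ W2.T) * (1 - H**2)
            gW1 = Xc.T @ GH; gb1 = GH.sum(0)
            for P, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
                P -= lr * g

        # impute: feed a vector with its truly missing entries zeroed, read them back
        x = X[0].copy(); x[[1, 4]] = 0.0
        print("imputed values:", (np.tanh(x @ W1 + b1) @ W2 + b2)[[1, 4]])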

    Sample selection via clustering to construct support vector-like classifiers

    This paper explores the possibility of constructing RBF classifiers that, somewhat like support vector machines, use a reduced number of samples as centroids, selected in a direct way. Because direct sample selection is a computationally hard problem, the selection is carried out after a preliminary vector quantization; in this way, other similar machines are also obtained whose centroids are selected from those learned in a supervised manner. Several ways of designing these machines are considered, in particular with respect to sample selection, as well as different criteria for training them. Simulation results on well-known classification problems show very good performance of the corresponding designs, improving on that of support vector machines while substantially reducing the number of units. This confirms that our interest in selecting samples (or centroids) in an efficient manner is justified. These experiments and discussions open many new research avenues, as suggested in our conclusions.
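
    A minimal sketch of this design pipeline, assuming toy data, k-means for the vector quantization step, and a least-squares output layer (the paper considers several selection and training criteria):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        n = 200
        X = np.r_[rng.normal(-1, 0.8, (n, 2)), rng.normal(1, 0.8, (n, 2))]
        y = np.r_[-np.ones(n), np.ones(n)]

        k, sigma = 12, 1.0
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        # select the training sample closest to each k-means centroid
        sel = np.array([np.argmin(((X - c)**2).sum(1)) for c in km.cluster_centers_])
        C = X[sel]

        def design(Z):                                 # Gaussian RBF design matrix
            D2 = ((Z[:, None, :] - C[None, :, :])**2).sum(-1)
            return np.exp(-D2 / (2 * sigma**2))

        w, *_ = np.linalg.lstsq(design(X), y, rcond=None)  # least-squares output layer
        print("training accuracy with", k, "units:",
              np.mean(np.sign(design(X) @ w) == y))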

    Support vector method for robust ARMA system identification

    This paper presents a new approach to auto-regressive and moving average (ARMA) modeling based on the support vector method (SVM) for identification applications. A statistical analysis of the characteristics of the proposed method is carried out. An analytical relationship between the residuals and the SVM-ARMA coefficients links the fundamentals of the SVM with several classical system identification methods. Additionally, the effect of outliers can be cancelled. Application examples show the performance of the SVM-ARMA algorithm when compared with other system identification methods.
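
    A simplified reading of the idea, assuming a toy second-order system, impulsive outliers, and epsilon-insensitive linear support vector regression on the regressor of past outputs and inputs; this is a sketch in the spirit of the method, not the paper's exact algorithm.

        import numpy as np
        from sklearn.svm import LinearSVR

        rng = np.random.default_rng(5)
        N, a, b = 1000, [0.6, -0.2], [1.0, 0.4]
        x = rng.normal(size=N)
        y = np.zeros(N)
        for n in range(2, N):                           # y[n] = a1 y[n-1] + a2 y[n-2] + b0 x[n] + b1 x[n-1]
            y[n] = a[0]*y[n-1] + a[1]*y[n-2] + b[0]*x[n] + b[1]*x[n-1] + 0.05*rng.normal()
        out = rng.random(N) < 0.02                      # sparse impulsive outliers
        y_obs = y + out * rng.normal(0, 5, N)

        # regression matrix: [y[n-1], y[n-2], x[n], x[n-1]] -> y[n]
        R = np.c_[y_obs[1:-1], y_obs[:-2], x[2:], x[1:-1]]
        t = y_obs[2:]
        svr = LinearSVR(epsilon=0.05, C=10.0, fit_intercept=False, max_iter=10000).fit(R, t)
        print("estimated [a1, a2, b0, b1]:", svr.coef_)  # should be near [0.6, -0.2, 1.0, 0.4]

    The epsilon-insensitive, linearly-growing loss is what bounds the influence of the impulsive outliers on the coefficient estimates, in contrast with ordinary least squares.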

    Sparse deconvolution using support vector machines

    Sparse deconvolution is a classical subject in digital signal processing with many practical applications. Support vector machine (SVM) algorithms exhibit a number of characteristics, such as sparse solutions and implicit regularization, that make them attractive for sparse deconvolution problems. Here, a sparse deconvolution algorithm based on the SVM framework for signal processing is presented and analyzed, including comparative evaluations of its estimation and detection capabilities and of its robustness to non-Gaussian additive noise.
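
    One loose reading of how the SVM framework can yield sparse deconvolution: time instants act as "samples", a precomputed kernel is built from the impulse response, and the epsilon-insensitive loss drives most dual coefficients to zero; those coefficients are read here as the sparse input estimate. The signal, kernel construction, and hyper-parameters are illustrative assumptions, not the paper's exact formulation.

        import numpy as np
        from scipy.linalg import toeplitz
        from sklearn.svm import SVR

        rng = np.random.default_rng(6)
        N = 120
        x = np.zeros(N); x[[15, 40, 41, 90]] = [1.5, -1.0, 0.8, 2.0]   # sparse input
        h = np.array([1.0, 0.7, 0.3, 0.1])                             # channel impulse response
        y = np.convolve(x, h)[:N] + 0.05 * rng.normal(size=N)

        H = toeplitz(np.r_[h, np.zeros(N - len(h))], np.zeros(N))      # y ~ H @ x
        K = H @ H.T                                                    # precomputed kernel
        svr = SVR(kernel="precomputed", C=10.0, epsilon=0.1).fit(K, y)

        eta = np.zeros(N)
        eta[svr.support_] = svr.dual_coef_.ravel()                     # sparse dual solution
        print("non-zeros in dual estimate:", np.count_nonzero(eta), "of", N)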

    Application of importance sampling to the evaluation of digital transmissions

    Importance sampling is a technique for reducing the computational effort of Monte Carlo simulations when estimating the relative frequency of an event with very low probability. While the method is well known in general Operations Research [1]-[3] and Radar [4]-[10], it has not been considered in digital communication problems. This paper aims to introduce the importance sampling concept in this communication context.
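
    A self-contained sketch of the idea in the simplest digital communication setting, assuming binary antipodal signaling in Gaussian noise: the noise distribution is biased toward the decision boundary and each simulated error is reweighted by the likelihood ratio.

        import numpy as np
        from math import erfc, sqrt

        rng = np.random.default_rng(7)
        A, sigma, K = 1.0, 0.25, 10_000                 # amplitude, noise std, trials
        exact = 0.5 * erfc(A / (sigma * sqrt(2)))       # exact BER: Q(A/sigma)

        # plain Monte Carlo: errors are too rare to observe reliably at this K
        n = rng.normal(0, sigma, K)
        mc = np.mean(A + n < 0)

        # importance sampling: draw noise with mean shifted to the boundary (-A),
        # weight each sample by the ratio f/g of true to biased noise densities:
        # exp(((n + A)^2 - n^2) / (2 sigma^2))
        nb = rng.normal(-A, sigma, K)
        w = np.exp((nb + A)**2 / (2 * sigma**2) - nb**2 / (2 * sigma**2))
        iw = np.mean((A + nb < 0) * w)

        print(f"exact {exact:.3e}  plain MC {mc:.3e}  importance sampling {iw:.3e}")

    With the biasing centered on the decision boundary, roughly half of the biased trials produce errors, so the weighted estimator resolves probabilities that plain Monte Carlo would need millions of runs to observe.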