6 research outputs found

    Decentralized detection for censored binary observations with statistical dependence

    This paper analyzes the problem of distributed detection in a sensor network of binary sensors. In particular, statistical dependence between the local decisions (at the binary sensors) is assumed, and two complementary energy-saving methods are considered: censoring, which avoids some transmissions from the sensors to the fusion center, and a random sleep-and-wake-up schedule at the local sensors. The effect of possible transmission failures is also included, by considering the probability of a successful transmission from a sensor to the fusion center. In this scenario, the necessary statistical information is identified, the optimal decision rule at the fusion center is derived, and several examples are used to analyze the effect of statistical dependence in a simple network with two sensors.
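As a rough illustration of the ingredients of such a fusion rule, the sketch below computes a log-likelihood ratio at the fusion center from censored binary messages, assuming conditional independence across sensors — the baseline that the paper generalizes by modeling dependence. All names and probability values are illustrative assumptions, not taken from the paper.

```python
import math

def fusion_llr(messages, p1, p0, q1, q0):
    """Log-likelihood ratio at the fusion center for censored binary sensors.

    messages: per-sensor observation: 1, 0, or None (no message received,
    whether due to censoring, sleeping, or a transmission failure).
    p1, p0: P(sensor sends 1 | H1) and P(sensor sends 1 | H0).
    q1, q0: P(no message | H1) and P(no message | H0).
    Assumes conditional independence across sensors; the paper's point is
    precisely to relax this assumption.
    """
    llr = 0.0
    for m in messages:
        if m is None:
            # absence of a message is itself informative
            llr += math.log(q1 / q0)
        elif m == 1:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1 - q1) / (1 - p0 - q0))
    return llr

def decide(messages, p1, p0, q1, q0, threshold=0.0):
    """Declare H1 when the accumulated evidence exceeds the threshold."""
    return 1 if fusion_llr(messages, p1, p0, q1, q0) > threshold else 0
```

With symmetric censoring probabilities (q1 = q0), a missing message contributes nothing to the ratio, while received decisions shift it toward one hypothesis or the other.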

    Neural network for ordinal classification of imbalanced data by minimizing a Bayesian cost

    Ordinal classification of imbalanced data is a challenging problem that appears in many real-world applications. The challenge is to simultaneously consider the order of the classes and the class imbalance, which can notably improve the performance metrics. The Bayesian formulation makes it possible to deal with these two characteristics jointly: it takes into account the prior probability of each class and the decision costs, which can be used to include the imbalance and the ordinal information, respectively. We propose to use the Bayesian formulation to train neural networks, which have shown excellent results in many classification tasks. A loss function is proposed to train networks with a single neuron in the output layer and a threshold-based decision rule. The loss is an estimate of the Bayesian classification cost, based on the Parzen windows estimator, which is fitted for a thresholded decision. Experiments with several real datasets show that the proposed method provides competitive results in different scenarios, owing to its high flexibility in specifying the relative importance of the errors in the classification of patterns of different classes, considering the order and independently of the probability of each class. This work was partially supported by the Spanish Ministry of Science and Innovation through the Thematic Network "MAPAS" (TIN2017-90567-REDT) and by the BBVA Foundation through the "2-BARBAS" research grant. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
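A minimal sketch of the kind of loss the abstract describes, under assumed details: a single real-valued output z, ordered decision thresholds, a Gaussian Parzen window of width h, and a user-supplied cost matrix. The function names, the cost matrix, and h are illustrative assumptions, not the paper's exact formulation.

```python
import math

def gauss_cdf(x):
    """CDF of the standard Gaussian (the integrated Parzen window)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordinal_bayes_loss(z, true_class, thresholds, cost, h=0.5):
    """Smoothed Bayesian cost for one sample in ordinal classification.

    Class k is decided when thresholds[k-1] < z <= thresholds[k], with
    -inf and +inf padding the ends. cost[j][k] is the cost of deciding
    class k when the true class is j; letting it grow with |j - k|
    encodes the ordinal structure, and scaling rows per class encodes
    the imbalance.
    """
    edges = [float("-inf")] + list(thresholds) + [float("inf")]
    loss = 0.0
    for k in range(len(edges) - 1):
        # Parzen-smoothed probability that z falls in decision region k
        p_k = gauss_cdf((edges[k + 1] - z) / h) - gauss_cdf((edges[k] - z) / h)
        loss += cost[true_class][k] * p_k
    return loss
```

Because the loss is differentiable in z, it can be minimized by backpropagation through the network that produces z.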

    A Bayes risk minimization machine for example-dependent cost classification

    A new method for example-dependent cost (EDC) classification is proposed. The method constitutes an extension of a recently introduced training algorithm for neural networks. The surrogate cost function is an estimate of the Bayesian risk, where the estimates of the conditional probabilities for each class are defined in terms of a 1-D Parzen window estimator of the output of (discriminative) neural networks. This probability density is modeled with the objective of allowing an easy minimization of a sampled version of the Bayes risk. The conditional probabilities included in the definition of the risk are not explicitly estimated; instead, the risk is minimized by a gradient-descent algorithm. The proposed method has been evaluated using linear classifiers and neural networks, with both shallow (a single hidden layer) and deep (multiple hidden layers) architectures. The experimental results show the potential and flexibility of the proposed method, which can handle EDC classification under the imbalanced-data situations that commonly appear in this kind of problem. This work has been partly supported by grants CASI-CAM-CM (S2013/ICE-2845, Madrid C/ FEDER, EUSF) and MacroADOBE (TEC2015-67719-P, MINECO/FEDER, UE).
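A toy sketch of the idea for a 1-D linear classifier, under assumed details: each example carries its own misclassification cost, the error indicator is smoothed by an integrated Gaussian Parzen window, and the smoothed risk is minimized by gradient descent (finite differences here, standing in for the backpropagation used with networks). All names and hyperparameters are illustrative assumptions.

```python
import math

def gauss_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def edc_risk(w, b, xs, ys, costs, h=0.5):
    """Parzen-smoothed example-dependent-cost risk for the score
    z = w*x + b with the rule: decide class 1 when z > 0."""
    risk = 0.0
    for x, y, c in zip(xs, ys, costs):
        z = w * x + b
        margin = z if y == 1 else -z        # positive margin = correct side
        risk += c * gauss_cdf(-margin / h)  # smoothed 0/1 error, weighted by c
    return risk / len(xs)

def train(xs, ys, costs, lr=0.5, steps=200, h=0.5):
    """Minimize the smoothed risk by finite-difference gradient descent."""
    w, b, eps = 0.0, 0.0, 1e-4
    for _ in range(steps):
        gw = (edc_risk(w + eps, b, xs, ys, costs, h)
              - edc_risk(w - eps, b, xs, ys, costs, h)) / (2 * eps)
        gb = (edc_risk(w, b + eps, xs, ys, costs, h)
              - edc_risk(w, b - eps, xs, ys, costs, h)) / (2 * eps)
        w, b = w - lr * gw, b - lr * gb
    return w, b
```

Raising the cost of a single example pulls the decision boundary away from it, which is exactly the per-example control that uniform-cost losses lack.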

    Modeling nonlinear power amplifiers in OFDM systems from subsampled data: a comparative study using real measurements

    A comparative study among several nonlinear high-power amplifier (HPA) models using real measurements is carried out. The analysis is focused on specific models for wideband OFDM signals, which are known to be very sensitive to nonlinear distortion. Moreover, unlike conventional techniques, which typically use a single-tone test signal and power measurements, in this study the models are fitted using subsampled time-domain data. The in-band and out-of-band (spectral regrowth) performances of the following models are evaluated and compared: Saleh’s model, the envelope polynomial model (EPM), the Volterra model, the multilayer perceptron (MLP) model, and the smoothed piecewise-linear (SPWL) model. The study shows that the SPWL model provides the best in-band characterization of the HPA, while the Volterra model provides a good trade-off between model complexity (number of parameters) and performance.
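Of the models compared, Saleh's is the simplest to state: a memoryless AM/AM and AM/PM pair acting on the signal envelope. The sketch below applies it to a complex baseband sample; the parameter values are illustrative defaults, not coefficients fitted to the paper's measurements.

```python
import cmath

def saleh_amplifier(x, alpha_a=2.0, beta_a=1.0, alpha_p=2.0, beta_p=1.0):
    """Saleh's memoryless HPA model for one complex baseband sample x.

    AM/AM (amplitude): A(r) = alpha_a * r / (1 + beta_a * r^2)
    AM/PM (phase shift): P(r) = alpha_p * r^2 / (1 + beta_p * r^2)
    Parameter values here are illustrative, not fitted to any measurement.
    """
    r = abs(x)
    phase = cmath.phase(x)
    a = alpha_a * r / (1.0 + beta_a * r * r)         # compressed amplitude
    p = alpha_p * r * r / (1.0 + beta_p * r * r)     # amplitude-dependent phase
    return a * cmath.exp(1j * (phase + p))
```

The AM/AM curve saturates at input amplitude r = 1/sqrt(beta_a); beyond it the gain falls, which is the compression that makes OFDM signals, with their high peak-to-average power ratio, so sensitive to the HPA nonlinearity.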

    Training neural network classifiers through Bayes risk minimization applying unidimensional Parzen windows

    A new training algorithm for neural networks in binary classification problems is presented. It is based on the minimization of an estimate of the Bayes risk by using Parzen windows applied to the final one-dimensional nonlinear transformation of the samples to estimate the probability of classification error. This leads to a very general approach to error minimization and training, where the risk to be minimized is defined in terms of integrated one-dimensional Parzen windows, and the gradient-descent algorithm used to minimize this risk is a function of the window that is used. By relaxing the constraints that are typically applied to Parzen windows when used for probability density function estimation, for example by allowing them to be non-symmetric or possibly infinite in duration, an entirely new set of training algorithms emerges. In particular, different Parzen windows lead to different cost functions, and some interesting relationships with classical training methods are discovered. Experiments with synthetic and real benchmark datasets show that with the appropriate choice of window, fitted to the specific problem, it is possible to improve the performance of neural network classifiers over those that are trained using classical methods. (C) 2017 Elsevier Ltd. All rights reserved. This work was partly supported by Grant TEC-2015-67719-P "Macro-ADOBE" (Spain MINECO/EU FSE, FEDER), and network TIN 2015-70808-REDT, "DAMA" (MINECO) (M. Lázaro and A.R. Figueiras-Vidal), and by Prof. Monson Hayes' Banco de Santander-UC3M Chair of Excellence, 2015.
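The central observation, that different Parzen windows induce different surrogate costs, can be sketched with two windows applied to the signed margin z of a sample (positive z means a correct decision). The integrated window estimates the probability of error; function names and the window width h are illustrative assumptions.

```python
import math

def gaussian_window_cost(z, h=1.0):
    """Integrated Gaussian window: a smooth step penalizing negative margins."""
    return 0.5 * (1.0 + math.erf(-z / (h * math.sqrt(2.0))))

def logistic_window_cost(z, h=1.0):
    """Integrated logistic window: a sigmoid-shaped penalty on the margin."""
    return 1.0 / (1.0 + math.exp(z / h))
```

Both tend to the 0/1 error indicator as h shrinks, but they weight confidently classified samples differently: the Gaussian window's cost vanishes much faster for large positive margins than the logistic one, so the choice of window changes which samples dominate the gradient during training.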

    A new technique for the simultaneous interpolation of a function and its derivatives

    This communication presents a new technique for the simultaneous interpolation of a function and its derivatives. In a first stage, the method performs the discrete interpolation of the sequences of the function and its derivatives using specific filters. In a second stage, the analog reconstruction of the function is carried out. The final result is equivalent to a spline interpolation in which additional break points have been introduced. The method is local and computationally very efficient.
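As a point of reference for what "interpolating a function and its first derivative" means, the sketch below is the standard cubic Hermite interpolant on one interval, matching values and derivatives at both endpoints. This is the textbook baseline, not the paper's filter-based method, and the function name is an assumption.

```python
def hermite_interp(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1].

    Matches the function values (f0, f1) and first derivatives (d0, d1)
    at the two endpoints; the result is local, depending only on the
    data at the interval's ends.
    """
    h = x1 - x0
    t = (x - x0) / h                     # normalized position in [0, 1]
    h00 = (1 + 2 * t) * (1 - t) ** 2     # Hermite basis functions
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * f0 + h * h10 * d0 + h01 * f1 + h * h11 * d1
```

The interpolant reproduces any cubic exactly; for f(x) = x² on [0, 2] it returns 1 at x = 1. Extending to higher derivatives raises the polynomial degree, which is where filter-based schemes with extra break points become attractive.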