
    A comparison of different Bayesian design criteria to compute efficient conjoint choice experiments.

    Bayesian design theory applied to nonlinear models is a promising route to cope with the problem of design dependence on the unknown parameters. The traditional Bayesian design criterion, which is most often used in the literature, is derived from the second derivatives of the log-likelihood function. However, other design criteria are possible, for example criteria based on the second derivative of the log posterior density, on the expected posterior covariance matrix, or on the amount of information provided by the experiment. Not much is known in general about how well these criteria perform in constructing efficient designs and which criterion yields robust designs that are efficient for various parameter values. In this study, we apply these Bayesian design criteria to conjoint choice experimental designs and investigate how robust the resulting Bayesian optimal designs are with respect to the other design criteria for which they were not optimized. We also examine the sensitivity of each design criterion to the prior distribution. Finally, we try to find out which design criterion is most appealing in a non-Bayesian framework, where it is accepted that prior information must be used for design but should not be used in the analysis, and which one is most appealing in a Bayesian framework, where the prior distribution is taken into account both for design and for analysis.
    Keywords: Bayesian design criterion; Posterior density; Expected posterior covariance matrix; Conjoint choice design; Laplace approximation; Fisher information
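    To make the traditional criterion concrete, the following is a minimal sketch (not the authors' code) of the Bayesian D-error computation that underlies Fisher-information-based criteria for conjoint choice designs: the design's information matrix is evaluated under a multinomial logit choice model and averaged over draws from the prior. The model, prior, and all names below are illustrative assumptions.

        import numpy as np

        def mnl_information(choice_sets, beta):
            # Fisher information of a choice design under the multinomial logit model;
            # choice_sets is a list of (alternatives x attributes) design matrices.
            p = choice_sets[0].shape[1]
            info = np.zeros((p, p))
            for Xs in choice_sets:
                u = Xs @ beta
                probs = np.exp(u - u.max())
                probs /= probs.sum()
                info += Xs.T @ (np.diag(probs) - np.outer(probs, probs)) @ Xs
            return info

        def db_error(choice_sets, prior_draws):
            # Monte Carlo approximation of the Bayesian D-error:
            # the prior expectation of det(information)^(-1/p); smaller is better.
            p = choice_sets[0].shape[1]
            vals = [np.linalg.det(mnl_information(choice_sets, b)) ** (-1.0 / p)
                    for b in prior_draws]
            return float(np.mean(vals))

        # Illustrative usage: 6 choice sets of 3 alternatives, 2 attributes, N(0, I) prior.
        rng = np.random.default_rng(0)
        design = [rng.standard_normal((3, 2)) for _ in range(6)]
        print(db_error(design, rng.standard_normal((500, 2))))

    A design optimizer would search over candidate choice sets to minimize this prior-averaged D-error; the other criteria discussed in the abstract replace the Fisher information with posterior-based quantities.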

    One-class classifiers based on entropic spanning graphs

    One-class classifiers offer valuable tools to assess the presence of outliers in data. In this paper, we propose a design methodology for one-class classifiers based on entropic spanning graphs. Our approach also takes into account the possibility of processing non-numeric data by means of an embedding procedure. The spanning graph is learned on the embedded input data, and the resulting partition of vertices defines the classifier. The final partition is derived by exploiting a criterion based on mutual information minimization. Here, we compute the mutual information by using a convenient formulation provided in terms of the α-Jensen difference. Once training is completed, a graph-based fuzzy model is constructed in order to associate a confidence level with the classifier decision. The fuzzification process is based only on topological information about the vertices of the entropic spanning graph. As such, the proposed one-class classifier is also suitable for data characterized by complex geometric structures. We provide experiments on well-known benchmarks containing both feature vectors and labeled graphs. In addition, we apply the method to the protein solubility recognition problem by considering several representations for the input samples. Experimental results demonstrate the effectiveness and versatility of the proposed method with respect to other state-of-the-art approaches.
    Comment: Extended and revised version of the paper "One-Class Classification Through Mutual Information Minimization" presented at the 2016 IEEE IJCNN, Vancouver, Canada
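    As a rough illustration of the entropic-spanning-graph ingredient (a sketch only, not the paper's classifier): the total edge length of a Euclidean minimum spanning tree over a sample yields a consistent estimate of the Rényi α-entropy of the underlying distribution, which is the kind of quantity that α-Jensen-difference estimates of mutual information build on. The normalizing constant and parameter choices below are illustrative assumptions.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        def renyi_entropy_mst(X, alpha=0.5, beta_const=1.0):
            # BHH-style Renyi alpha-entropy estimate from the total length of the
            # Euclidean minimum spanning tree over the sample X (n x d), with edge
            # weights |x_i - x_j|^gamma and gamma = d * (1 - alpha).
            n, d = X.shape
            gamma = d * (1.0 - alpha)
            weights = squareform(pdist(X)) ** gamma
            length = minimum_spanning_tree(weights).sum()
            return (np.log(length / n ** alpha) - np.log(beta_const)) / (1.0 - alpha)

        # Illustrative usage on a two-dimensional Gaussian sample.
        rng = np.random.default_rng(0)
        print(renyi_entropy_mst(rng.standard_normal((200, 2))))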

    Near-Optimal Noisy Group Testing via Separate Decoding of Items

    The group testing problem consists of determining a small set of defective items from a larger set of items based on a number of tests, and is relevant in applications such as medical testing, communication protocols, pattern matching, and more. In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the special cases of no noise and symmetric noise, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low sparsity levels, and that with a small fraction of allowed incorrectly decoded items, this guarantee extends to all sublinear sparsity levels. In addition, we provide a converse bound showing that if one tries to move slightly beyond our low-sparsity achievability threshold using separate decoding of items and i.i.d. randomized testing, the average number of items decoded incorrectly approaches that of a trivial decoder.
    Comment: Submitted to IEEE Journal of Selected Topics in Signal Processing
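    To illustrate what separate decoding of items means in this setting, here is a hedged sketch (not the paper's implementation) for a symmetric-noise model with i.i.d. Bernoulli test designs: each item is accepted or rejected on its own by thresholding an information-density statistic computed from that item's column of the test matrix and the outcomes, without reference to the other items' decisions. The noise model, threshold, and names are assumptions for illustration.

        import numpy as np

        def separate_decoding(X, y, k, rho, threshold):
            # X: (T x n) binary test matrix with i.i.d. Bernoulli(q) entries.
            # y: length-T outcomes (OR of the defective items' bits, flipped w.p. rho).
            # k: assumed number of defectives; threshold: decision level.
            T, n = X.shape
            q = X.mean()
            # Outcome probabilities under the hypothesis "this item is defective",
            # with the remaining k-1 defectives placed i.i.d. at rate q.
            p1_given_in = 1.0 - rho
            p_other = 1.0 - (1.0 - q) ** (k - 1)
            p1_given_out = (1.0 - rho) * p_other + rho * (1.0 - p_other)
            p1_marginal = q * p1_given_in + (1.0 - q) * p1_given_out

            def logp(p_pos, outcome):
                return np.log(p_pos) if outcome == 1 else np.log(1.0 - p_pos)

            decoded = []
            for j in range(n):
                stat = sum(
                    logp(p1_given_in if X[t, j] else p1_given_out, y[t])
                    - logp(p1_marginal, y[t])
                    for t in range(T)
                )
                if stat > threshold:   # each item thresholded independently of the rest
                    decoded.append(j)
            return decoded

        # Illustrative usage: n=100 items, k=5 defectives, T=200 tests, noise rho=0.1.
        rng = np.random.default_rng(0)
        n, k, T, rho = 100, 5, 200, 0.1
        X = (rng.random((T, n)) < np.log(2) / k).astype(int)
        defectives = rng.choice(n, size=k, replace=False)
        clean = (X[:, defectives].sum(axis=1) > 0).astype(int)
        y = np.where(rng.random(T) < rho, 1 - clean, clean)
        print(separate_decoding(X, y, k, rho, threshold=np.log(n)))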

    Coding theorems for turbo code ensembles

    This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are "good" in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number γ₀ such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than γ₀, the average maximum-likelihood (ML) decoder block error probability approaches zero, at least as fast as n^{-β}, where β is the "interleaver gain" exponent defined by Benedetto et al. in 1996.
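    Stated in symbols, the coding theorem summarized above takes roughly the following form (a paraphrase of the result as described in the abstract, with $C$ an unspecified ensemble-dependent constant and $n$ the block length):

        \mathbb{E}\big[ P_{B}^{\mathrm{ML}}(n) \big] \;\le\; C \, n^{-\beta}
        \qquad \text{whenever } \gamma < \gamma_0,

    where $\gamma$ is the channel's Bhattacharyya noise parameter and $\beta$ is the interleaver-gain exponent.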