12 research outputs found

    Symmetric Equilibrium Learning of VAEs

    Full text link
    We view variational autoencoders (VAEs) as decoder-encoder pairs that map distributions in the data space to distributions in the latent space and vice versa. The standard learning approach for VAEs, maximisation of the evidence lower bound (ELBO), is asymmetric: it aims at learning a latent variable model while using the encoder only as an auxiliary means. Moreover, it requires a closed-form a priori latent distribution. This limits its applicability in more complex scenarios, such as general semi-supervised learning and employing complex generative models as priors. We propose a Nash equilibrium learning approach that is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling. The flexibility and simplicity of this approach allow its application to a wide range of learning scenarios and downstream tasks. Comment: 13 pages, 6 figures, accepted for AISTATS 202
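    To make the asymmetry concrete: the standard ELBO combines a reconstruction term with a KL divergence to a closed-form prior, which is exactly the requirement the abstract points out. Below is a minimal sketch (function names are ours, not from the paper) of the ELBO for a diagonal-Gaussian encoder and a standard-normal prior:

    ```python
    import numpy as np

    def gaussian_kl(mu, logvar):
        # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
        # This term is what forces the prior to have a closed form.
        return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

    def elbo(x, x_recon, mu, logvar, noise_var=1.0):
        # Gaussian reconstruction log-likelihood (up to an additive
        # constant) minus the KL term:
        #   ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))
        recon_ll = -0.5 * np.sum((x - x_recon) ** 2) / noise_var
        return recon_ll - gaussian_kl(mu, logvar)
    ```

    A sampling-only prior has no such closed-form KL, which is the situation the proposed equilibrium approach is designed for.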

    A probabilistic segmentation scheme

    Get PDF
    Abstract. We propose a probabilistic segmentation scheme that is applicable to a wide range of tasks. Besides the segmentation itself, our model incorporates object-specific shading. Depending on the application, the latter is interpreted either as a perturbation or as a meaningful object characteristic. We discuss the recognition task for segmentation, learning tasks for parameter estimation, as well as different formulations of shading estimation tasks.

    Modelling composite shapes by Gibbs random fields

    Full text link
    We analyse the potential of Gibbs random fields (GRFs) for shape prior modelling. We show that the expressive power of second-order GRFs is already sufficient to express spatial relations between shape parts and simple shapes simultaneously. This allows us to model and recognise complex shapes as spatial compositions of simpler parts.

    Enhancing Fairness of Visual Attribute Predictors

    Full text link
    The performance of deep neural networks for image recognition tasks, such as predicting a smiling face, is known to degrade on under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. Experiments on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC melanoma classification challenge show the effectiveness of our proposed fairness losses for bias mitigation: they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors. Our code is available at https://github.com/nish03/FVAP. Comment: Camera Ready, ACCV 202
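    As a rough illustration of what a "batch estimate of Demographic Parity" means (our own sketch, not the paper's implementation), the fairness gap can be estimated as the difference in mean predicted positive rate between two sensitive groups within a batch:

    ```python
    import numpy as np

    def demographic_parity_gap(pred_probs, group):
        # Absolute difference in mean predicted positive rate between
        # sensitive groups 0 and 1, estimated from a single batch.
        pred_probs = np.asarray(pred_probs, dtype=float)
        group = np.asarray(group)
        p0 = pred_probs[group == 0].mean()
        p1 = pred_probs[group == 1].mean()
        return abs(p0 - p1)
    ```

    In training, a term like this would be computed on soft predictions per batch and added to the classification loss as a regularizer.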

    M-Best-Diverse Labelings for Submodular Energies and Beyond

    Get PDF
    Abstract We consider the problem of finding the M best diverse solutions of energy minimization problems for graphical models. Contrary to the sequential method of Batra et al., which greedily finds one solution after another, we infer all M solutions jointly. It was shown recently that such jointly inferred labelings not only have smaller total energy but also qualitatively outperform the sequentially obtained ones. The only obstacle to using this new technique is the complexity of the corresponding inference problem, whose algorithm is considerably slower than the method of Batra et al. In this work we show that the joint inference of the M best diverse solutions can be formulated as a submodular energy minimization if the original MAP-inference problem is submodular, hence fast inference techniques can be used. In addition to the theoretical results, we provide practical algorithms that outperform the current state of the art and can be used in both the submodular and non-submodular cases.
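    To illustrate the joint formulation on a toy scale (this is a brute-force sketch with our own names and a Hamming diversity term, not the paper's algorithm), one can minimise the sum of the M labelings' energies minus a diversity reward over all M-tuples of labelings of a small binary chain:

    ```python
    import itertools

    def energy(labeling, unary, pairwise):
        # Chain MRF energy: unary costs plus a Potts penalty on
        # disagreeing neighbours.
        e = sum(unary[i][l] for i, l in enumerate(labeling))
        e += sum(pairwise * (labeling[i] != labeling[i + 1])
                 for i in range(len(labeling) - 1))
        return e

    def m_best_diverse(unary, pairwise, M=2, gamma=1.0):
        # Joint inference by exhaustive search: minimise the total
        # energy of all M labelings minus gamma times their pairwise
        # Hamming diversity. Feasible only for tiny problems.
        n = len(unary)
        best, best_val = None, float('inf')
        labelings = list(itertools.product([0, 1], repeat=n))
        for combo in itertools.product(labelings, repeat=M):
            total = sum(energy(l, unary, pairwise) for l in combo)
            div = sum(sum(a != b for a, b in zip(combo[i], combo[j]))
                      for i in range(M) for j in range(i + 1, M))
            val = total - gamma * div
            if val < best_val:
                best, best_val = combo, val
        return best
    ```

    The paper's contribution is showing that for submodular MAP problems this joint objective can itself be minimised as a submodular energy, avoiding the exhaustive search above.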

    Transforming an arbitrary minsum problem into a binary one

    No full text
    ABSTRACT. In this report we show that an arbitrary MinSum problem (i.e. a MinSum problem with an arbitrary finite set of states) can be adequately transformed into a binary one (i.e. into a MinSum problem with only two states). Consequently, all known results for binary MinSum problems can be easily extended to the general case. For instance, it makes it possible to solve submodular MinSum problems with more than two states exactly, using MinCut/MaxFlow-based techniques.
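    One common way to encode a multi-state variable with binary ones (shown here for illustration; the report's particular transformation may differ) is the ordered "staircase" encoding, where state k of a K-state variable becomes K-1 binary variables with the first k set to 1:

    ```python
    def encode(state, num_states):
        # State k of a K-state variable becomes K-1 ordered binary
        # variables: the first k are 1, the rest 0 (a monotone
        # "staircase"); hard pairwise terms forbid non-monotone
        # configurations in the transformed problem.
        return [1 if i < state else 0 for i in range(num_states - 1)]

    def decode(bits):
        # The original state is recovered as the number of set bits.
        return sum(bits)
    ```

    Under such an encoding, the multi-state unary and pairwise costs are re-expressed over the binary variables, so binary machinery such as MinCut/MaxFlow applies directly.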