
    A review of domain adaptation without target labels

    Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based, and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain, while inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research. Comment: 20 pages, 5 figures
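
    As a minimal sketch of the sample-based idea summarized above, the snippet below estimates importance weights with a logistic-regression domain discriminator and trains a weighted source classifier. The estimator, classifier choice, and function names are illustrative assumptions, not the review's prescription.

import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate p_target(x) / p_source(x) with a source-vs-target discriminator."""
    X = np.vstack([X_source, X_target])
    domain = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    disc = LogisticRegression(max_iter=1000).fit(X, domain)
    p_target = disc.predict_proba(X_source)[:, 1]
    # The ratio of the discriminator's posteriors approximates the density ratio.
    return p_target / np.clip(1.0 - p_target, 1e-6, None)

def fit_weighted_source_classifier(X_source, y_source, X_target):
    # Source samples that look target-like receive larger weight during training.
    weights = importance_weights(X_source, X_target)
    return LogisticRegression(max_iter=1000).fit(X_source, y_source, sample_weight=weights)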

    MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition

    Although significant progress has been made in face recognition, demographic bias still exists in face recognition systems. For instance, the recognition performance for a certain demographic group is often lower than for others. In this paper, we propose the MixFairFace framework to improve fairness in face recognition models. First, we argue that the commonly used attribute-based fairness metric is not appropriate for face recognition: a face recognition system can only be considered fair when every person receives similarly good performance. Hence, we propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches. Unlike previous approaches that require sensitive attribute labels such as race and gender to reduce demographic bias, we aim to address the identity bias in face representations, i.e., the performance inconsistency between different identities, without the need for sensitive attribute labels. To this end, we propose the MixFair Adapter to determine and reduce the identity bias of training samples. Our extensive experiments demonstrate that our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets. Comment: Accepted in AAAI-23; Code: https://github.com/fuenwang/MixFairFac
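
    As a rough illustration of the identity-level evaluation idea described above (not the paper's exact protocol), the sketch below measures how uniform verification performance is across identities; the TPR-at-threshold metric and the standard-deviation aggregation are assumptions.

import numpy as np

def per_identity_tpr(scores, labels, identities, threshold):
    """Per-identity true-positive rate over genuine pairs.

    scores, labels (1 = genuine pair), and identities are aligned 1-D arrays;
    identities holds the probe identity of each pair."""
    tprs = []
    for ident in np.unique(identities):
        genuine = (identities == ident) & (labels == 1)
        if genuine.any():
            tprs.append((scores[genuine] >= threshold).mean())
    return np.array(tprs)

def identity_consistency(scores, labels, identities, threshold=0.5):
    tprs = per_identity_tpr(np.asarray(scores), np.asarray(labels),
                            np.asarray(identities), threshold)
    # A smaller spread across identities indicates more consistent (fairer) performance.
    return {"mean_tpr": tprs.mean(), "tpr_std": tprs.std()}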

    Implication of Manifold Assumption in Deep Learning Models for Computer Vision Applications

    Deep neural networks (DNNs) have become the main contributor to the field of machine learning (ML). In computer vision (CV) in particular, DNNs have demonstrated remarkable progress on applications such as image and video classification, object detection and tracking, instance segmentation, visual question answering, and image and video generation. To achieve the best performance, DNNs usually require a large number of labeled samples, and finding the optimal solution for such complex models with millions of parameters is a challenging task. It is known that the data are not uniformly distributed over the sample space; rather, they reside on a low-dimensional manifold embedded in the ambient space. In this dissertation, we investigate the effect of the manifold assumption on various applications in computer vision. First, we propose a novel loss-sensitive adversarial learning (LSAL) paradigm for training the GAN framework, built on the assumption that natural images lie on a smooth manifold. It benefits from the geodesic distance between samples, in addition to their distance in the ambient space, to differentiate between real and generated samples. We also show that the discriminator of a GAN trained with the LSAL paradigm is successful in semi-supervised classification of images when the number of labeled images is limited. Then we propose a novel Capsule Projection Network (CapProNet) that models the manifold of data through a union of subspace capsules in the last layer of a CNN image classifier. The CapProNet idea is further extended to the general framework of the Subspace Capsule Network, which models not only the deformation of objects but also parts of objects through a hierarchy of subspace-capsule layers. We apply the Subspace Capsule Network to the tasks of (semi-)supervised image classification and high-resolution image generation. Finally, we verify the reliability of DNN models by investigating the intrinsic properties of the models around the data manifold to detect maliciously trained Trojan models.
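
    The capsule-projection idea can be sketched as follows, under the assumption that each class owns a low-dimensional subspace and an input feature is scored by the norm of its orthogonal projection onto that subspace; the dimensions and initialization below are illustrative, not the dissertation's exact configuration.

import torch
import torch.nn as nn

class CapsuleProjectionHead(nn.Module):
    """Last-layer head that projects a feature vector onto one learned subspace per class."""

    def __init__(self, feat_dim=128, n_classes=10, capsule_dim=4):
        super().__init__()
        # One basis matrix per class, spanning that class's capsule subspace.
        self.W = nn.Parameter(0.1 * torch.randn(n_classes, feat_dim, capsule_dim))

    def forward(self, x):  # x: (batch, feat_dim)
        # Orthogonal projection onto each subspace: P_c = W_c (W_c^T W_c)^{-1} W_c^T.
        WtW = self.W.transpose(1, 2) @ self.W                          # (C, k, k)
        eye = torch.eye(self.W.size(-1), device=x.device)
        P = self.W @ torch.linalg.inv(WtW + 1e-5 * eye) @ self.W.transpose(1, 2)  # (C, d, d)
        proj = torch.einsum('cde,be->bcd', P, x)                       # (batch, C, d)
        return proj.norm(dim=-1)                                       # class scores: (batch, C)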