13 research outputs found

    Discriminately Decreasing Discriminability with Learned Image Filters

    Full text link
    In machine learning and computer vision, input images are often filtered to increase data discriminability. In some situations, however, one may wish to purposely decrease the discriminability of one classification task (a "distractor" task) while simultaneously preserving information relevant to another (the task-of-interest). For example, it may be important to mask the identity of persons in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) for labeling of certain facial attributes. Another example is inter-dataset generalization: when training on a dataset with a particular covariance structure among multiple attributes, it may be useful to suppress one attribute while preserving another so that a trained classifier does not learn spurious correlations between attributes. In this paper we present an algorithm that finds optimal filters that give high discriminability to one task while simultaneously giving low discriminability to a distractor task. We present results showing the effectiveness of the proposed technique on both simulated data and natural face images.
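    The paper's exact filter objective is not reproduced in this listing; as a toy analogue (the names, the Fisher-ratio objective, and the optimizer choice are all assumptions), one can search for a single unit-norm linear filter whose 1-D output separates the task-of-interest well while separating the distractor task poorly:

```python
import numpy as np
from scipy.optimize import minimize

def fisher_ratio(z, labels):
    """Between-class over within-class variance of the 1-D projection z."""
    groups = [z[labels == c] for c in np.unique(labels)]
    between = sum((g.mean() - z.mean()) ** 2 for g in groups)
    within = sum(g.var() for g in groups) + 1e-8
    return between / within

def learn_filter(X, y_task, y_distractor, lam=1.0):
    """Find a unit-norm filter w so that X @ w separates y_task well
    while separating y_distractor poorly; lam trades the two off."""
    def loss(w):
        z = X @ (w / np.linalg.norm(w))
        return -fisher_ratio(z, y_task) + lam * fisher_ratio(z, y_distractor)
    w0 = np.random.default_rng(0).normal(size=X.shape[1])
    res = minimize(loss, w0, method="Nelder-Mead")  # derivative-free; fine at toy scale
    return res.x / np.linalg.norm(res.x)
```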

    Models for automatic learner engagement estimation

    Get PDF
    Automatic estimation of student engagement can help computer-based learning systems adapt to individual learners. Linear models trained on Gabor features established state-of-the-art, yet sub-human, accuracy on this task, while convolutional neural networks (CNNs) heavily overfit to the dataset's few subjects. We found that transfer learning enabled linear ridge regression to leverage CNN features learned for image recognition and face re-identification tasks. Our best model achieved a four-fold cross-validated correlation of r=0.581, significantly outperforming the state-of-the-art r=0.522. Our information strength metric correlated with model accuracy (FaceNet, r=0.755; ImageNet, r=0.077), inviting further study of feature utility prediction.
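    A minimal sketch of the evaluation described above, assuming the CNN features (e.g., FaceNet or ImageNet embeddings) are precomputed and that folds are grouped by subject, which addresses the subject-overfitting issue the abstract mentions (the grouping choice is an assumption):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold

def cross_validated_r(features, engagement, subject_ids, alpha=1.0):
    """Four-fold subject-grouped CV: ridge regression on pretrained CNN
    features, scored by Pearson correlation over held-out predictions."""
    preds = np.zeros_like(engagement, dtype=float)
    for train, test in GroupKFold(n_splits=4).split(features, engagement, subject_ids):
        model = Ridge(alpha=alpha).fit(features[train], engagement[train])
        preds[test] = model.predict(features[test])
    return pearsonr(preds, engagement)[0]
```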

    CVPR Report

    Get PDF
    Presentation slides, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), June 29, 2012.

    Learning to Identify While Failing to Discriminate

    Get PDF
    Privacy and fairness are critical in computer vision applications, in particular when dealing with human identification. Achieving a universally secure, private, and fair system is practically impossible, as the exploitation of additional data can reveal private information in the original data. Faced with this challenge, we propose a new line of research in which privacy is learned and used in a closed environment. The goal is to ensure that a given entity, trusted to infer certain information from our data, is blocked from inferring protected information from it. We design a system that learns to succeed on the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases in which the positive task (face verification) is harder than the negative one (gender classification). The framework opens the door to privacy and fairness in important closed scenarios, ranging from private data-accumulation companies to law enforcement and hospitals.
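    One standard way to realize "succeed at the positive task while failing at the negative one" is a gradient-reversal layer (the DANN trick). The abstract does not state the paper's actual mechanism, so the sketch below is an illustrative stand-in, with identity classification substituted for pairwise face verification for brevity and all sizes hypothetical:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward
    pass, so the encoder learns features that defeat the head above it."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

n_ids = 100                      # hypothetical number of identities
encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
id_head = nn.Linear(64, n_ids)   # positive task: identify the person
gender_head = nn.Linear(64, 2)   # negative task: to be blocked

params = (list(encoder.parameters()) + list(id_head.parameters())
          + list(gender_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, identity, gender):
    """x: (B, 512) face descriptors; identity, gender: (B,) int labels."""
    z = encoder(x)
    # the gender head itself learns to predict gender, but the reversed
    # gradient pushes the encoder to make that prediction impossible
    loss = ce(id_head(z), identity) + ce(gender_head(GradReverse.apply(z)), gender)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```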

    Low-resolution facial expression recognition: A filter learning perspective

    Get PDF
    Automatic facial expression recognition has attracted increasing attention for a variety of applications. However, low resolution generally degrades the performance of facial expression recognition methods in real-life environments. In this paper, we propose to perform low-resolution facial expression recognition from the filter learning perspective. More specifically, a novel image filter based subspace learning (IFSL) method is developed to derive an effective facial image representation. The proposed IFSL method includes three main steps. Firstly, we embed image filter learning into the optimization process of linear discriminant analysis (LDA): by optimizing the cost function of LDA, a set of discriminative image filters (DIFs) corresponding to different facial expressions is learned. Secondly, the images filtered by the learned DIFs are summed to generate combined images. Finally, a regression learning technique is leveraged for subspace learning, where an expression-aware transformation matrix is obtained from the combined images. Based on this transformation matrix, IFSL effectively removes irrelevant information while preserving useful information in the facial images. Experimental results on several facial expression datasets, including CK+, MMI, JAFFE, SFEW and RAF-DB, show the superior performance of the proposed IFSL method for low-resolution facial expression recognition compared with several state-of-the-art methods.
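    Steps 2 and 3 of the pipeline are straightforward to sketch; step 1 (learning the DIFs inside the LDA optimization) is the paper's contribution and is omitted here, so the filters below are assumed to be given. Ridge regression stands in for the unspecified regression technique, and all names are hypothetical:

```python
import numpy as np
from scipy.signal import convolve2d

def apply_difs(images, filters):
    """Step 2: filter each image with every learned DIF and sum the
    responses, yielding one combined image per input."""
    combined = []
    for img in images:
        responses = [convolve2d(img, f, mode="same") for f in filters]
        combined.append(np.sum(responses, axis=0))
    return np.stack(combined)

def learn_transform(combined, targets, reg=1e-2):
    """Step 3: ridge regression from flattened combined images to
    expression targets (e.g., one-hot labels), giving the
    expression-aware transformation matrix W. Toy scale only:
    the normal equations are solved densely."""
    X = combined.reshape(len(combined), -1)
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ targets)
    return W
```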

    Censored and Fair Universal Representations using Generative Adversarial Models

    Full text link
    We present a data-driven framework for learning censored and fair universal representations (CFUR) that ensure statistical fairness guarantees for all downstream learning tasks that may not be known a priori. Our framework leverages recent advancements in adversarial learning to allow a data holder to learn censored and fair representations that decouple a set of sensitive attributes from the rest of the dataset. The resulting problem of finding the optimal randomizing mechanism with specific fairness/censoring guarantees is formulated as a constrained minimax game between an encoder and an adversary, where the constraint ensures a measure of usefulness (utility) of the representation. We show that for appropriately chosen adversarial loss functions, our framework enables defining demographic parity for fair representations and also clarifies the optimal adversarial strategy against strong information-theoretic adversaries. We evaluate the performance of our proposed framework on multi-dimensional Gaussian mixture models and publicly available datasets including the UCI Census, GENKI, Human Activity Recognition (HAR), and UTKFace. Our experimental results show that multiple sensitive features can be effectively censored while ensuring accuracy for several a priori unknown downstream tasks. Finally, our results also make precise the tradeoff between censoring and fidelity for the representation, as well as the fairness-utility tradeoffs for downstream tasks.
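    A minimal alternating-update sketch of such an encoder-adversary game, with a mean-squared distortion penalty standing in (as a Lagrangian relaxation) for the utility constraint; the architecture, sizes, and choice of penalty are all assumptions:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))  # randomizing encoder
adv = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))   # infers sensitive bit

opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def minimax_step(x, sensitive, utility_weight=5.0):
    """x: (B, 16) features; sensitive: (B,) int labels of the attribute."""
    # adversary step: learn to infer the sensitive attribute from the
    # (frozen) representation
    rep = enc(x).detach()
    adv_loss = ce(adv(rep), sensitive)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # encoder step: confuse the adversary while keeping the representation
    # close to the data (the MSE penalty relaxes the utility constraint)
    rep = enc(x)
    enc_loss = -ce(adv(rep), sensitive) + utility_weight * mse(rep, x)
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
    return adv_loss.item(), enc_loss.item()
```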

    Data-Driven and Game-Theoretic Approaches for Privacy

    Get PDF
    In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect a large amount of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive consumer-retailer/service-provider interactions under different scenarios, and then focuses on a unified framework for various information-theoretic privacy notions and privacy mechanisms that can be learned directly from data. Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer for privacy-sensitive consumers with alternative energy sources; and (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers.

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data.
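    As a formulation sketch (the notation is mine, reconstructed from the description above), the GAP game can be written as a distortion-constrained minimax problem; a training-loop analogue appears in the CFUR sketch earlier in this listing:

```latex
\min_{g \in \mathcal{G}} \; \max_{h \in \mathcal{H}} \;
  -\,\mathbb{E}_{X,S}\big[\ell\big(h(g(X)),\, S\big)\big]
  \quad \text{subject to} \quad \mathbb{E}\big[d\big(g(X),\, X\big)\big] \le D
```

    Here g is the privatizer, h the adversary's inference rule, S the private attribute, ℓ the adversarial loss, d a distortion measure, and D the allowed distortion budget.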

    EFFECTS OF RESTING STATE ON PERCEPTUAL LEARNING

    Get PDF
    Psychophysical experiments in humans have demonstrated that improvements in perceptual learning tasks occur following daytime rests. The neural correlates of how rest influences subsequent sensory processing during these tasks remain unclear. One possible neural mechanism that may underlie this behavioral improvement is reactivation: previously evoked network activity reoccurs, or reactivates, in the absence of further stimulation. Reactivation was initially discovered in the hippocampus but has now been found in several brain areas, including cortex. This phenomenon has been implicated as a general mechanism by which neural networks learn and store sensory information. However, whether reactivation occurs in areas relevant for perceptual learning is unknown. To investigate how sleep affects perceptual learning at the level of single neurons and networks, an experimental paradigm was designed to simultaneously perform extracellular recordings in visual cortical area V4 along with sleep classification in monkeys. V4 is a midlevel visual area that responds to shapes, textures, and colors; additionally, V4 is important for perceptual learning and shows significant attentional effects. In this experiment, two monkeys were trained to perform a delayed match-to-sample task before and after a 20-minute rest in a dark, quiet room. Whether monkeys exhibit the same improvements in perceptual learning previously shown in humans was unknown; here, the monkeys did improve task performance following the 20-minute rest. Additionally, whether neural networks in V4 could reactivate was explored in a passive fixation task. A reactivation of previously evoked sequential activity was observed in V4 networks following stimulus exposure, in the absence of visual stimulation. This reactivation was time-locked to when the stimulus was expected to occur after a cue that indicated to the monkeys that the trial was starting. Finally, whether the activity evoked by the delayed match-to-sample task was spontaneously reactivated during the 20-minute rest period was tested; no evidence was found that reactivation occurs during this time. Considered alongside the earlier reactivation results, this suggests the cue is necessary to initiate reactivation. In summary, this work investigates the neural correlates that underlie behavioral performance improvements following daytime rest. The results can provide a better understanding of how daytime naps improve perceptual learning.
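    The abstract does not specify the reactivation analysis; one common approach, offered here purely as an illustrative assumption, is template matching: correlate the stimulus-evoked population pattern with sliding windows of spontaneous activity and look for time-locked peaks:

```python
import numpy as np

def reactivation_strength(template, spontaneous):
    """Correlate an evoked population template (neurons x time bins) with
    every same-length window of spontaneous activity; time-locked peaks
    in the returned trace suggest reactivation."""
    n_neurons, width = template.shape
    flat = template.ravel()
    return np.array([
        np.corrcoef(flat, spontaneous[:, t:t + width].ravel())[0, 1]
        for t in range(spontaneous.shape[1] - width + 1)
    ])
```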