7 research outputs found

    MIME: Minority Inclusion for Majority Group Enhancement of AI Performance

    Full text link
    Several papers have rightly included minority groups in artificial intelligence (AI) training data to improve test inference for minority groups and/or society-at-large. A society-at-large consists of both minority and majority stakeholders. A common misconception is that minority inclusion does not improve performance when measured on the majority group alone. In this paper, we make the surprising finding that including minority samples can reduce test error for the majority group. In other words, minority group inclusion leads to majority group enhancements (MIME) in performance. A theoretical existence proof of the MIME effect is presented and found to be consistent with experimental results on six different datasets. Project webpage: https://visual.ee.ucla.edu/mime.htm
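
    As a rough illustration of the kind of comparison the MIME finding describes (not the paper's datasets or protocol), the toy sketch below trains a classifier with and without a small synthetic minority group and reports test error on the majority group alone; the group distributions, sample sizes, and the sample_group helper are assumptions made for this example only.

```python
# Hypothetical toy comparison in the spirit of MIME (not the paper's protocol):
# train a classifier on majority-only data vs. majority + minority data,
# then compare test error measured on the majority group alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_group(n, shift):
    """Draw a 2-class Gaussian problem; `shift` moves the group's feature distribution."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=shift + 1.5 * y[:, None], scale=1.0, size=(n, 2))
    return X, y

# Majority and (smaller) minority training sets, plus a majority-only test set.
X_maj, y_maj = sample_group(500, shift=0.0)
X_min, y_min = sample_group(50, shift=0.7)
X_test, y_test = sample_group(2000, shift=0.0)

def majority_test_error(X_train, y_train):
    """Fit a classifier and return its error on the majority-group test set."""
    clf = LogisticRegression().fit(X_train, y_train)
    return 1.0 - clf.score(X_test, y_test)

err_majority_only = majority_test_error(X_maj, y_maj)
err_with_minority = majority_test_error(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)
print(f"majority-only training:       majority test error = {err_majority_only:.3f}")
print(f"majority + minority training: majority test error = {err_with_minority:.3f}")
```

    Whether the second error comes out lower here depends entirely on the synthetic distributions chosen; the paper's theoretical existence proof and six-dataset experiments are what establish the effect.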

    On Hybrid Methods that Blend Computer Vision and Physics

    No full text
    Deep learning has exhibited remarkable performance on various computer vision tasks. However, these models usually suffer from generalization issues when the training sets are not sufficiently large or diverse. Human intelligence, on the other hand, is capable of learning from only a few samples. One potential reason is that humans draw on prior knowledge to generalize to new environments and unseen data, rather than learning everything from the provided training sets. We aim to equip machines with this capability. More specifically, we focus on integrating different types of prior physical knowledge and inductive biases into neural networks for various computer vision applications. The core idea is to exploit physical models as inductive biases and design specific strategies to blend them with the neural network learning process. This problem is difficult because we must account for both the fidelity of the prior knowledge and the quality of the training samples. To validate the effectiveness of the proposed blending strategies, extensive experiments have been conducted on multiple computer vision tasks, such as Shape from Polarization (SfP), remote photoplethysmography (rPPG), and single-image rain removal.
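
    One way to read the blending idea described above is residual correction: a fixed physics model provides a first estimate and a small network learns only a correction on top of it. The sketch below illustrates that pattern under assumptions; the physics_model stand-in, channel counts, and tensor shapes are placeholders, not the thesis's actual architecture.

```python
# A minimal sketch of one physics/learning blending strategy (illustrative only):
# a physics model produces a first estimate inside the forward pass, and a small
# network predicts a residual correction on top of it.
import torch
import torch.nn as nn

def physics_model(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for an idealized physical forward/inverse model (assumption)."""
    return x.mean(dim=1, keepdim=True).expand(-1, 3, -1, -1)

class ResidualBlend(nn.Module):
    """Blends a physics-based estimate with a learned residual correction."""
    def __init__(self):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Conv2d(3 + 3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        prior = physics_model(x)                        # physics-based estimate
        residual = self.correction(torch.cat([x, prior], dim=1))
        return prior + residual                         # blended output

model = ResidualBlend()
dummy = torch.rand(2, 3, 64, 64)                        # batch of RGB-like inputs
print(model(dummy).shape)                               # torch.Size([2, 3, 64, 64])
```

    Keeping the physics estimate inside the forward pass means the network only has to model where the idealized physics breaks down, which is one common motivation for such hybrids.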

    Combining Physics with Machine Learning: Case Study of Shape from Polarization

    No full text
    Shape from Polarization (SfP) recovers an object's shape from polarized photographs of the scene. Previous SfP algorithms use idealized physical equations to recover shape, and these approaches are error-prone when real-world conditions deviate from the idealized physics. In this thesis, we propose a physics-based neural network to address the SfP problem. Our algorithm fuses deep learning with synthetic renderings (derived from physics) to exceed the quality of all previous SfP methods. A two-stage encoder is used to resolve longstanding ambiguity problems. Our surface normal recovery results improve upon methods that rely on physics-based solutions alone.
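
    The fusion described above can be sketched, very loosely, as a network that consumes polarization images together with physics-derived candidate normals and refines them into a single normal map. The layer sizes, the number of candidates, and the two conv_block stages below are illustrative assumptions, not the thesis's actual two-stage encoder.

```python
# Illustrative sketch only (assumptions throughout, not the thesis's network):
# concatenate polarization images with physics-derived candidate normals and
# pass them through two encoder stages before regressing per-pixel normals.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    """A single conv + ReLU stage (placeholder for a real encoder stage)."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class SfPFusionNet(nn.Module):
    def __init__(self, num_pol_images=4, num_candidates=2):
        super().__init__()
        c_in = num_pol_images + 3 * num_candidates   # images + candidate normal maps
        self.stage1 = conv_block(c_in, 32)           # first encoder stage
        self.stage2 = conv_block(32, 64)             # second encoder stage
        self.head = nn.Conv2d(64, 3, 3, padding=1)   # per-pixel surface normal

    def forward(self, pol_images, candidate_normals):
        x = torch.cat([pol_images, candidate_normals], dim=1)
        x = self.stage2(self.stage1(x))
        return F.normalize(self.head(x), dim=1)      # unit-length normals

# Dummy inputs: 4 polarization images and 2 physics-derived normal candidates.
pol = torch.rand(1, 4, 32, 32)
cands = torch.rand(1, 6, 32, 32)
print(SfPFusionNet()(pol, cands).shape)              # torch.Size([1, 3, 32, 32])
```

    Feeding the physics-derived candidates in as extra input channels is one simple way to let a network select among ambiguous solutions rather than regress normals from scratch.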