
    Solving Computer Vision Challenges with Synthetic Data

    Computer vision researchers have spent a great deal of effort creating large datasets, yet much information remains difficult to label. Detailed annotations such as part segmentation and dense keypoints are expensive to produce, and 3D information requires extra hardware to capture. Beyond labeling cost, an image dataset also cannot let an intelligent agent interact with the world; as humans, we learn through interaction rather than from per-pixel labeled images. To fill these gaps in existing datasets, we propose building virtual worlds with computer graphics and using the generated synthetic data to address these challenges. In this dissertation, I demonstrate cases where computer vision challenges can be solved with synthetic data. The first part describes our engineering effort in building a simulation pipeline. The second and third parts describe using synthetic data to train better models and to diagnose trained models. The major challenge in using synthetic data is the domain gap between real and synthetic images. In the model training part, I present two cases with different characteristics in terms of domain gap, and propose a domain adaptation method for each. Synthetic data saves enormous labeling effort by providing detailed ground truth. In the model diagnosis part, I show how to control nuisance factors to analyze model robustness. Finally, I summarize future research directions that can benefit from synthetic data.
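    The model-diagnosis part hinges on controlling nuisance factors in a renderer and measuring how performance varies with each factor. As a rough illustration of that idea (not the dissertation's actual pipeline), the sketch below sweeps a single nuisance factor and reports accuracy per setting; `render_image` and `model_predict` are hypothetical placeholders standing in for a graphics engine and a trained classifier.

```python
# Minimal sketch (assumed, not the dissertation's pipeline): sweep one
# nuisance factor over synthetic renders and measure accuracy per setting.
from collections import defaultdict
import random

def render_image(object_id, azimuth_deg):
    """Placeholder renderer: returns (image, true_label)."""
    return {"object": object_id, "azimuth": azimuth_deg}, object_id

def model_predict(image):
    """Placeholder trained model: returns a predicted label."""
    return image["object"] if random.random() > 0.1 else "wrong"

accuracy_by_azimuth = defaultdict(list)
objects = ["car", "chair", "sofa"]
for azimuth in range(0, 360, 30):          # the controlled nuisance factor
    for obj in objects:
        image, label = render_image(obj, azimuth)
        accuracy_by_azimuth[azimuth].append(model_predict(image) == label)

for azimuth, hits in sorted(accuracy_by_azimuth.items()):
    print(f"azimuth {azimuth:3d} deg: accuracy {sum(hits)/len(hits):.2f}")
```

    Because the renderer supplies the ground truth for free, the same sweep can be repeated for any factor (lighting, occlusion, texture) without additional labeling.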

    Identifying Model Weakness with Adversarial Examiner

    Machine learning models are usually evaluated by their average-case performance on a test set. However, this is not always ideal, because in some sensitive domains (e.g. autonomous driving) it is the worst-case performance that matters more. In this paper, we are interested in systematically exploring the input data space to identify the weaknesses of the model under evaluation. We propose to use an adversarial examiner in the testing stage. Unlike the existing strategy of always handing out the same (distribution of) test data, the adversarial examiner dynamically selects the next test data based on the testing history so far, with the goal of undermining the model's performance. This sequence of test data not only helps us understand the current model, but also serves as constructive feedback for improving the model in the next iteration. We conduct experiments on ShapeNet object classification and show that our adversarial examiner can successfully put more emphasis on the weaknesses of the model, preventing performance estimates from being overly optimistic.
    Comment: To appear in AAAI-2
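    As a rough sketch of the examiner loop (not the paper's actual algorithm, which the abstract does not spell out), the snippet below uses an epsilon-greedy selector over a small discretized space of test conditions, repeatedly handing out the condition where the model has performed worst so far; `evaluate_model` is a hypothetical placeholder for rendering a ShapeNet object under the given condition and scoring the classifier on it.

```python
# Minimal sketch of the adversarial-examiner idea (assumed, not the paper's
# method): an epsilon-greedy examiner that keeps handing out the test
# condition where the model has failed most often so far.
import random

def evaluate_model(condition):
    """Placeholder: return 1.0 if the model is correct under this condition."""
    hard = {"azimuth": 90, "occlusion": 0.5}          # pretend weakness
    return 0.0 if condition == hard else float(random.random() > 0.2)

conditions = [{"azimuth": a, "occlusion": o}
              for a in (0, 90, 180, 270) for o in (0.0, 0.5)]
history = {i: [] for i in range(len(conditions))}     # testing history so far

for step in range(200):
    if random.random() < 0.2 or all(not h for h in history.values()):
        idx = random.randrange(len(conditions))       # explore a random condition
    else:
        # exploit: pick the condition with the lowest observed accuracy
        idx = min((i for i in history if history[i]),
                  key=lambda i: sum(history[i]) / len(history[i]))
    history[idx].append(evaluate_model(conditions[idx]))

worst = min((i for i in history if history[i]),
            key=lambda i: sum(history[i]) / len(history[i]))
print("weakest condition found:", conditions[worst])
```

    A real examiner would replace the epsilon-greedy rule with a smarter search over the input space, but the history-driven selection of the next test case is the part the abstract emphasizes.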