
    Batch Bayesian active learning for feasible region identification by local penalization

    Identifying all designs that satisfy a set of constraints is an important part of the engineering design process. With physics-based simulation codes, evaluating the constraints becomes considerably expensive. Active learning provides an elegant approach to efficiently characterize the feasible region, i.e., the set of feasible designs. Although active learning strategies have been proposed for this task, most of them add just one sample per iteration rather than selecting multiple samples per iteration, also known as batch active learning. While adding a single sample is efficient with respect to the amount of information gained per iteration, it neglects available computational resources. We propose a batch Bayesian active learning technique for feasible region identification by assuming that the constraint function is Lipschitz continuous. In addition, we extend current state-of-the-art batch methods to also handle feasible region identification. Experiments show better performance of the proposed method than the extended batch methods.
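
    The sketch below shows one way batch selection by local penalization could look in code. It assumes a Gaussian process surrogate of a constraint g(x) <= 0, an entropy-style acquisition over the probability of feasibility, and a user-supplied Lipschitz constant; the function names (feasibility_entropy, local_penalty, select_batch), the toy constraint, and the penalty form are illustrative assumptions, not details taken from the paper.

    # Minimal sketch: batch selection by local penalization for feasible region
    # identification under a Lipschitz assumption (illustrative, not the paper's code).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def feasibility_entropy(mu, sigma):
        # Probability that g(x) <= 0 under the GP posterior, turned into a
        # binary-entropy acquisition: largest where feasibility is most uncertain.
        p = norm.cdf(-mu / np.maximum(sigma, 1e-9))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    def local_penalty(X_cand, x_sel, g_sel, L):
        # Lipschitz continuity implies g cannot change sign within radius |g|/L
        # of a selected point, so candidates inside that ball are down-weighted.
        r = abs(g_sel) / L
        d = np.linalg.norm(X_cand - x_sel, axis=1)
        return np.clip(d / np.maximum(r, 1e-9), 0.0, 1.0)

    def select_batch(gp, X_cand, batch_size, L):
        mu, sigma = gp.predict(X_cand, return_std=True)
        acq = feasibility_entropy(mu, sigma)
        batch = []
        for _ in range(batch_size):
            i = int(np.argmax(acq))
            batch.append(X_cand[i])
            # Penalize the acquisition around the newly selected point so the
            # remaining batch members spread over the design space.
            acq = acq * local_penalty(X_cand, X_cand[i], mu[i], L)
        return np.array(batch)

    # Toy usage: constraint g(x) = x1^2 + x2^2 - 1 (feasible inside the unit disk).
    rng = np.random.default_rng(0)
    X_train = rng.uniform(-2, 2, size=(20, 2))
    y_train = X_train[:, 0] ** 2 + X_train[:, 1] ** 2 - 1.0
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
    X_cand = rng.uniform(-2, 2, size=(2000, 2))
    print(select_batch(gp, X_cand, batch_size=4, L=4.0))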

    Active Mini-Batch Sampling using Repulsive Point Processes

    The convergence speed of stochastic gradient descent (SGD) can be improved by actively selecting mini-batches. We explore sampling schemes where similar data points are less likely to be selected in the same mini-batch. In particular, we prove that such repulsive sampling schemes lower the variance of the gradient estimator. This generalizes recent work on using Determinantal Point Processes (DPPs) for mini-batch diversification (Zhang et al., 2017) to the broader class of repulsive point processes. We first show that the phenomenon of variance reduction by diversified sampling generalizes in particular to non-stationary point processes. We then show that other point processes may be computationally much more efficient than DPPs. In particular, we propose and investigate Poisson Disk sampling---frequently encountered in the computer graphics community---for this task. We show empirically that our approach improves over standard SGD both in terms of convergence speed and final model performance.
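
    As a rough illustration of the idea, the sketch below forms a mini-batch by dart-throwing Poisson disk sampling in raw feature space, rejecting candidates that fall within a fixed radius of an already accepted point so near-duplicate examples rarely share a batch. The radius, the use of raw features as the similarity space, and the helper poisson_disk_minibatch are assumptions for illustration rather than details from the paper.

    # Minimal sketch: Poisson-disk mini-batch selection by dart throwing
    # (illustrative; not the authors' implementation).
    import numpy as np

    def poisson_disk_minibatch(X, batch_size, radius, rng, max_tries=10000):
        accepted = []
        tries = 0
        while len(accepted) < batch_size and tries < max_tries:
            tries += 1
            i = rng.integers(len(X))
            # Accept the candidate only if it keeps the minimum pairwise distance.
            if all(np.linalg.norm(X[i] - X[j]) >= radius for j in accepted):
                accepted.append(i)
        # Fall back to uniform sampling if the radius is too large to fill the batch.
        while len(accepted) < batch_size:
            accepted.append(rng.integers(len(X)))
        return np.array(accepted)

    # Toy usage: draw one diverse mini-batch of indices from a synthetic dataset,
    # as would be done at each SGD step.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    idx = poisson_disk_minibatch(X, batch_size=32, radius=1.5, rng=rng)
    print(idx[:10])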