The distribution of the objective function values obtained by DiDo.
Comparison between the objective function values on the initial points, i.e., the training samples used for learning the DNN surrogate, and those on the final candidates of optimal parameters. The minimum objective function value among the training samples is approximately −0.1, whereas the objective function values of the candidates of optimal parameters concentrate around −0.98, very close to the true minimum −1 of this problem.
Training trajectory of DNN loss functions.
Trajectory of training loss and test loss during training of the DNN surrogate that fits the objective function of the 100-dimensional toy optimization problem. At the end of training, the test loss is not significantly larger than the training loss, indicating that the DNN training is close to convergence.
Illustration of the "mean distance" for the stopping criterion.
Illustration of the relation between prediction accuracy and the "mean distance". Intuitively, the prediction accuracy on perturbed points quantifies the quality of the classifier. (a) For a red point on the surrogate boundary whose distance to the true boundary is larger than ϵ, the prediction accuracy is roughly 50%; (b) for a red point on the surrogate boundary whose distance to the true boundary is much smaller than ϵ, the prediction accuracy is close to 1.
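A minimal NumPy sketch of this idea, assuming a generic `classifier` that returns feasibility probabilities and a ground-truth oracle `true_label_fn`; these names, and the toy unit-disk example, are hypothetical and only illustrate how perturbing boundary points with noise of scale ϵ turns the surrogate/true boundary gap into an accuracy reading:

```python
import numpy as np

def perturbation_accuracy(classifier, boundary_points, true_label_fn, eps, n_draws=1000, seed=0):
    """Accuracy of the surrogate classifier on Gaussian perturbations (scale eps)
    of points lying on the surrogate boundary.

    If the surrogate boundary is farther than eps from the true boundary, the
    perturbed points all carry the same true label while the surrogate splits
    them roughly 50/50, so the accuracy stays near 0.5; if the two boundaries
    nearly coincide relative to eps, the labels mostly agree and the accuracy
    approaches 1.
    """
    rng = np.random.default_rng(seed)
    x = np.repeat(boundary_points, n_draws, axis=0)
    x_pert = x + rng.normal(scale=eps, size=x.shape)
    pred = classifier(x_pert) > 0.5        # surrogate feasibility call (probability threshold 0.5)
    truth = true_label_fn(x_pert)          # ground-truth feasibility (expensive oracle)
    return np.mean(pred == truth)

# Toy check: the true feasible region is the unit disk; the surrogate boundary sits at r = 1.2.
true_fn = lambda x: np.linalg.norm(x, axis=1) <= 1.0
surrogate = lambda x: (np.linalg.norm(x, axis=1) <= 1.2).astype(float)
pts = np.array([[1.2, 0.0]])               # a point on the surrogate boundary
print(perturbation_accuracy(surrogate, pts, true_fn, eps=0.05))  # roughly 0.5: boundary gap (0.2) >> eps
print(perturbation_accuracy(surrogate, pts, true_fn, eps=2.0))   # close to 1: eps >> boundary gap
```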
The improvement of DNN classifier through adaptive fitting.
(a) Classification accuracy of the DNN classifier on perturbed points during the adaptive iterations. Note that, at each iteration t, we apply an extra constraint to the points sampled by LMC. In the two figures, label accuracy means classification accuracy after perturbation. As we add more data, the perturbation magnitude at which the classifier accuracy on perturbed points reaches 100% becomes smaller, which means the performance of the classifier is better. (b) Classification accuracy of the DNN classifier at a fixed standard deviation of the perturbation, with variance σ² = 0.1. The classification accuracy improves as we update the DNN classifier.
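As a rough illustration of how this diagnostic behaves across adaptive iterations, the sketch below sweeps the perturbation standard deviation and records the smallest value at which the accuracy on perturbed points exceeds a threshold; a better classifier reaches that threshold at a smaller perturbation scale. The stand-in classifiers for "iteration 1" and "iteration 2" and the unit-disk feasible set are hypothetical, not taken from the paper:

```python
import numpy as np

def accuracy_under_noise(classifier, boundary_points, true_label_fn, sigma, n_draws=2000, seed=0):
    # Accuracy of the surrogate on Gaussian perturbations (std sigma) of boundary points.
    rng = np.random.default_rng(seed)
    x = np.repeat(boundary_points, n_draws, axis=0)
    x_pert = x + rng.normal(scale=sigma, size=x.shape)
    return np.mean((classifier(x_pert) > 0.5) == true_label_fn(x_pert))

def smallest_scale_reaching(classifier, boundary_points, true_label_fn, sigmas, acc_target=0.95):
    # Smallest perturbation scale at which the accuracy exceeds acc_target; a proxy
    # for the distance between the surrogate boundary and the true boundary.
    for s in sigmas:
        if accuracy_under_noise(classifier, boundary_points, true_label_fn, s) >= acc_target:
            return s
    return None

# Hypothetical stand-ins: the true feasible set is the unit disk; the classifier
# after iteration 1 puts its boundary at r = 1.2, after iteration 2 at r = 1.02.
true_fn = lambda x: np.linalg.norm(x, axis=1) <= 1.0
clf_iter1 = lambda x: (np.linalg.norm(x, axis=1) <= 1.2).astype(float)
clf_iter2 = lambda x: (np.linalg.norm(x, axis=1) <= 1.02).astype(float)
sigmas = np.linspace(0.05, 3.0, 60)
print(smallest_scale_reaching(clf_iter1, np.array([[1.2, 0.0]]), true_fn, sigmas))
print(smallest_scale_reaching(clf_iter2, np.array([[1.02, 0.0]]), true_fn, sigmas))  # much smaller scale
```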
Property of candidates of optimal parameters for rotor profile design.
The classifier value and the actual flow predicted by the DNN surrogate on these candidates of optimal parameters. The red solid line corresponds to the probability 0.5. Both blue and yellow dots are predicted feasible by the DNN, i.e., both lie above the solid red line. However, the yellow points lie outside the true boundary. Therefore, the candidates of optimal parameters are close to the boundary of the true feasible region, signifying the importance of a highly accurate surrogate of the feasible region, as obtained by our DNN-based adaptive fitting approach.
The improvement of DNN classifier through adaptive fitting.
(a) Classification accuracy of the DNN classifier on perturbed points during the adaptive iterations. Note that not all iterations are shown, and at each iteration t we apply an extra constraint to the points sampled by LMC. In the two figures, label accuracy means classification accuracy after perturbation. As we add more data, the perturbation magnitude at which the classifier accuracy rises sharply from 50% becomes smaller, which means the distance between the true boundary and the surrogate boundary gets smaller, i.e., the performance of the classifier is better. (b) The classifier values on points uniformly distributed along the radial direction. As the iterations proceed, the classifier gets closer to the true classification function I(r ≤ 1).
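The radial check in panel (b) can be mimicked on a toy classifier as below; the sigmoid-based surrogate, its sharpness values (stand-ins for successive iterations), and the radius grid are all hypothetical, and serve only to show how classifier values sampled uniformly along a radial direction are compared against the target indicator I(r ≤ 1):

```python
import numpy as np

def radial_profile(classifier, direction, radii):
    """Classifier values on points placed uniformly along one radial direction."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    points = radii[:, None] * direction[None, :]
    return classifier(points)

# Hypothetical smooth surrogate: a sigmoid in the radius, sharper after more adaptive iterations.
def make_clf(sharpness, radius=1.0):
    return lambda x: 1.0 / (1.0 + np.exp(sharpness * (np.linalg.norm(x, axis=1) - radius)))

radii = np.linspace(0.0, 2.0, 20)
target = (radii <= 1.0).astype(float)                   # the true indicator I(r <= 1)
for sharpness in (5.0, 20.0, 80.0):                     # stand-ins for successive iterations
    values = radial_profile(make_clf(sharpness), [1.0, 0.0], radii)
    print(sharpness, np.mean(np.abs(values - target)))  # average deviation from I(r <= 1) shrinks
```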
Training trajectory of DNN loss functions.
Trajectory of training loss and test loss during training of the DNN surrogate that fits the objective function of the optimal rotor profile problem. At the end of training, the test loss is not significantly larger than the training loss, indicating that the DNN training is close to convergence.