39 research outputs found
Using statistical tests for improving state-of-the-art heuristics for the probabilistic traveling salesman problem with deadlines
The Probabilistic Traveling Salesman Problem with Deadlines (PTSPD) is a Stochastic Vehicle Routing Problem with a computationally demanding objective function. Currently, heuristics that approximate the objective function via Monte Carlo sampling are the state-of-the-art methods for the PTSPD. We show that these heuristics can be significantly improved by combining the sampling-based evaluation of solutions with statistical tests for the pairwise comparison of solutions.
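The core idea above can be sketched as follows: evaluate two candidate tours by Monte Carlo sampling and only accept a move when a statistical test says the observed cost difference is significant. This is a minimal illustration, not the paper's method; `sample_cost` is a hypothetical stand-in for the PTSPD objective, and the test is a simple large-sample Welch-style z-test rather than whichever test the authors use.

```python
import random
import statistics

def sample_cost(solution, n_samples, rng):
    # Hypothetical stand-in for a Monte Carlo evaluation of the PTSPD
    # objective: each sample would simulate one realization of customer
    # presence and deadline violations and return the resulting cost.
    base = sum(solution)  # placeholder deterministic part of the cost
    return [base + rng.gauss(0, 1.0) for _ in range(n_samples)]

def significantly_better(costs_a, costs_b, z_crit=1.96):
    """Welch-style z-test on sampled costs: True if A's mean cost is
    significantly lower than B's at roughly the 5% level (large samples)."""
    ma, mb = statistics.fmean(costs_a), statistics.fmean(costs_b)
    va = statistics.variance(costs_a) / len(costs_a)
    vb = statistics.variance(costs_b) / len(costs_b)
    z = (mb - ma) / (va + vb) ** 0.5
    return z > z_crit

rng = random.Random(42)
a = sample_cost([1, 2, 3], 200, rng)  # candidate solution (lower base cost)
b = sample_cost([4, 5, 6], 200, rng)  # incumbent solution
print(significantly_better(a, b))     # candidate accepted only if significant
```

The point of the test is to avoid committing to a move on the basis of sampling noise alone, which is exactly the failure mode of a naive sample-mean comparison.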
An enhanced ant colony system for the probabilistic traveling salesman problem
In this work we present an Enhanced Ant Colony System algorithm for the Probabilistic Traveling Salesman Problem. More specifically, we identify drawbacks of the well-known Ant Colony System metaheuristic when applied to this problem and propose enhancements to overcome them. Comprehensive computational studies on common benchmark instances demonstrate the efficiency of the new approach: the Enhanced Ant Colony System clearly outperforms the original Ant Colony System metaheuristic, and improvements over the best-known results for the Probabilistic Traveling Salesman Problem were obtained for many instances.
Effect of aluminum doping on the structural, morphological, electrical and optical properties of ZnO thin films prepared by sol-gel dip coating
Aluminum-doped zinc oxide (AZO) thin films with 0-5 at.% aluminum content have been prepared by the sol-gel dip coating technique. The thickness of the films has been measured using the alpha-step method. The structural and morphological properties have been studied using X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. The highest-intensity zinc oxide (ZnO) (002) peak has been observed in the 1 at.% aluminum-doped film annealed at 450 °C. The grains are most densely packed in the films doped with 1 at.% aluminum, and grain size tends to decrease as the aluminum content increases. Electrical resistivity measurements reveal that electrical resistance decreases with increasing film thickness; the lowest resistivity of the AZO thin films is 3.2×10⁻² Ω. The optical properties of the AZO thin films have been tested by UV-visible spectroscopy; among all the films, the 0.5 at.% aluminum-doped film shows more than 90% transmittance.
The unreasonable effectiveness of early discarding after one epoch in neural network hyperparameter optimization
To reach high performance with deep learning, hyperparameter optimization (HPO) is essential. This process is usually time-consuming due to the costly evaluation of neural networks. Early-discarding techniques limit the resources granted to unpromising candidates by observing the empirical learning curves and canceling neural network training as soon as a candidate's lack of competitiveness becomes evident. Despite two decades of research, little is understood about the trade-off between the aggressiveness of discarding and the loss of predictive performance. Our paper studies this trade-off for several commonly used discarding techniques, such as successive halving and learning curve extrapolation. Our surprising finding is that these techniques offer minimal to no added value over the simple strategy of discarding after a constant number of epochs of training, where the chosen number of epochs depends mostly on the available compute budget. We call this approach i-Epoch (i being the constant number of epochs for which neural networks are trained) and suggest assessing the quality of early-discarding techniques by comparing how their Pareto front (in consumed training epochs and predictive performance) complements the Pareto front of i-Epoch.
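The i-Epoch strategy described above is simple enough to sketch directly: train every candidate configuration for exactly i epochs, then keep only the best-scoring one. This is a toy illustration with a hypothetical `train_one_epoch` simulator standing in for real neural network training; it is not the authors' implementation.

```python
import random

def train_one_epoch(config, state, rng):
    # Hypothetical stand-in for one epoch of training: the validation
    # score moves halfway toward a config-dependent ceiling, plus noise.
    return state + (config["ceiling"] - state) * 0.5 + rng.gauss(0, 0.01)

def i_epoch_hpo(configs, i, rng):
    """i-Epoch early discarding: train every candidate for exactly i
    epochs, then keep only the best-scoring one for full training."""
    scores = {}
    for name, cfg in configs.items():
        score = 0.0
        for _ in range(i):
            score = train_one_epoch(cfg, score, rng)
        scores[name] = score
    return max(scores, key=scores.get)

rng = random.Random(0)
configs = {  # illustrative candidate hyperparameter settings
    "lr=0.1":  {"ceiling": 0.80},
    "lr=0.01": {"ceiling": 0.92},
    "lr=1.0":  {"ceiling": 0.55},
}
best = i_epoch_hpo(configs, i=1, rng=rng)  # discard after a single epoch
print(best)
```

The only tunable here is i itself, which, per the abstract, should be chosen from the available compute budget rather than from learning-curve heuristics.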
Exploiting historical data: Pruning autotuning spaces and estimating the number of tuning steps
Autotuning, the practice of automatically tuning applications to provide performance portability, has received increased attention in the research community, especially in high performance computing. Ensuring high performance on a variety of hardware usually requires modifications to the code, often via different values of a selected set of parameters, such as tiling size, loop unrolling factor, or data layout. However, the search space of all possible combinations of these parameters can be large, which can result in cases where the benefits of autotuning are outweighed by its cost, especially with dynamic tuning. Therefore, estimating the tuning time in advance, or shortening it, is very important for dynamic tuning applications. We have found that certain properties of tuning spaces do not vary much when the hardware is changed. In this paper, we demonstrate that it is possible to use historical data to reliably predict the number of tuning steps necessary to find a well-performing configuration, and to reduce the size of the tuning space. We evaluate our hypotheses on a number of HPC benchmarks written in CUDA and OpenCL, using several generations of GPUs and CPUs.
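One way to picture the space-pruning idea above: if the structure of the tuning space is roughly stable across hardware, configurations that ranked well on an older device are good candidates for a newer one. The sketch below is an assumed, simplified form of such pruning (keep the historical top fraction); the paper's actual method and the parameter names (`tile_size`, `unroll_factor`) are illustrative, not taken from it.

```python
def prune_space(space, historical_times, keep_fraction=0.25):
    """Hypothetical pruning sketch: keep the configurations that ranked
    in the top `keep_fraction` by measured time on previous hardware,
    assuming tuning-space structure is stable across devices."""
    ranked = sorted(space, key=lambda cfg: historical_times[cfg])
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]

# Toy tuning space: (tile_size, unroll_factor) pairs with kernel times
# (ms) measured on an older GPU.
space = [(16, 1), (16, 2), (32, 1), (32, 2), (64, 1), (64, 2)]
history = {(16, 1): 9.1, (16, 2): 7.8, (32, 1): 5.2,
           (32, 2): 4.9, (64, 1): 6.3, (64, 2): 8.4}

pruned = prune_space(space, history, keep_fraction=0.34)
print(pruned)  # the tuner now searches only these configurations
```

The tuner then explores only the pruned set on the new hardware, trading a small risk of discarding the true optimum for a much shorter tuning time.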