Parameters Optimization of Deep Learning Models using Particle Swarm Optimization
Deep learning has been successfully applied in several fields such as machine
translation, manufacturing, and pattern recognition. However, successful
application of deep learning depends upon appropriately setting its parameters
to achieve high-quality results. The number of hidden layers and the number of
neurons in each layer of a deep learning network are two key parameters, which
have a major influence on the performance of the algorithm.
Manual parameter setting and grid search approaches somewhat ease the user's
task of setting these important parameters. Nonetheless, these two techniques
can be very time-consuming. In this paper, we show that the particle swarm
optimization (PSO) technique holds great potential to optimize parameter
settings and thus save valuable computational resources during the tuning
process of deep learning models. Specifically, we use a dataset collected from
a Wi-Fi campus network to train deep learning models to predict the number of
occupants and their locations. Our preliminary experiments indicate that PSO
provides an efficient approach for tuning the optimal number of hidden layers
and the number of neurons in each layer of the deep learning algorithm when
compared to the grid search method. Our experiments illustrate that the
exploration process of the landscape of configurations to find the optimal
parameters is decreased by 77%-85%. In fact, PSO yields even better accuracy
results.
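The tuning loop described above can be sketched as follows. This is a minimal, self-contained illustration of PSO searching over the two parameters the abstract names (number of hidden layers, neurons per layer); the `objective` function here is a hypothetical stand-in for the validation loss that would be obtained by training a model on the Wi-Fi occupancy dataset, and the bounds, swarm size, and coefficients are illustrative assumptions, not values from the paper.

```python
import random

def objective(num_layers, num_neurons):
    # Hypothetical surrogate for validation loss after training a model with
    # this configuration; assumes (for illustration) the best configuration
    # is 3 hidden layers of 64 neurons each.
    return (num_layers - 3) ** 2 + ((num_neurons - 64) / 16.0) ** 2

def pso_tune(n_particles=10, n_iters=30, seed=0):
    rng = random.Random(seed)
    # Assumed search bounds: 1-10 hidden layers, 8-256 neurons per layer.
    bounds = [(1, 10), (8, 256)]
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(round(p[0]), round(p[1])) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social coefficients

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + pull toward the
                # particle's own best and the swarm's best configuration.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(round(pos[i][0]), round(pos[i][1]))
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return round(gbest[0]), round(gbest[1])
```

With 10 particles and 30 iterations this evaluates 300 configurations at most, versus the full Cartesian product a grid search would sweep, which is the source of the resource savings the abstract reports.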