
    DEVELOPMENT OF THE R PACKAGE gradDescent 3.0 FOR IMPLEMENTING GRADIENT DESCENT-BASED METHODS: Case study: the gas compressibility factor

    Previous research developed the R package gradDescent 2.0, which provides 10 algorithm variants. However, because the Gradient Descent (GD) family keeps evolving, many new variants of the method have appeared. This study extends the previous package with four further methods, Stochastic Variance Reduced Gradient (SVRG), Semi-Stochastic Gradient Descent (SSGD), Stochastic Recursive Gradient Algorithm (SARAH), and SARAH+, for prediction on regression tasks. To test the R package, experiments and simulations were carried out to predict the compressibility factor of CO2 gas from the measured pressure and temperature parameters. Based on the results, gradDescent 3.0 was developed successfully. The experiments and simulations on the CO2 compressibility-factor case study produced an average RMSE of 0.168138 and an average execution time of 1.347653 seconds.
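    As a concrete illustration of one of the added methods, the sketch below implements SVRG for least-squares regression in R. This is a minimal sketch of the published SVRG update, not the package's internal code; the function name svrg_lm and all parameter defaults are assumptions made for this example.

    # Minimal SVRG sketch for least-squares regression (illustrative only,
    # not the gradDescent package's internal code).
    # X: n x d design matrix, y: length-n response vector.
    svrg_lm <- function(X, y, eta = 0.01, epochs = 20, m = nrow(X), seed = 1) {
      set.seed(seed)
      n <- nrow(X); d <- ncol(X)
      w_tilde <- rep(0, d)                      # snapshot iterate
      grad_i <- function(w, i) X[i, ] * (sum(X[i, ] * w) - y[i])
      for (s in seq_len(epochs)) {
        # Full gradient at the snapshot: the variance-reduction anchor.
        mu <- as.vector(crossprod(X, X %*% w_tilde - y)) / n
        w <- w_tilde
        for (t in seq_len(m)) {
          i <- sample.int(n, 1)
          # Variance-reduced stochastic gradient.
          g <- grad_i(w, i) - grad_i(w_tilde, i) + mu
          w <- w - eta * g
        }
        w_tilde <- w                            # take the last iterate as the new snapshot
      }
      w_tilde
    }

    # Example use on synthetic data:
    set.seed(42)
    X <- cbind(1, matrix(rnorm(200), ncol = 2))            # 100 rows: intercept + 2 features
    y <- as.vector(X %*% c(1, 2, -1) + rnorm(100, sd = 0.1))
    w_hat <- svrg_lm(X, y, eta = 0.05, epochs = 30)        # should approach c(1, 2, -1)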

    Meta-descent for Online, Continual Prediction

    This paper investigates different vector step-size adaptation approaches for non-stationary online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update: a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems, but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even those with accelerations, such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems, and is competitive with the state-of-the-art method on a large-scale, time-series prediction problem on real data from a mobile robot.
    Comment: AAAI Conference on Artificial Intelligence 2019. v2: Correction to Baird's counterexample. A bug in the code led to results being reported for AMSGrad in this experiment, when they were actually results for Ada
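    To make the meta-descent idea concrete, the sketch below shows a classic meta-gradient step-size adaptation rule in the style of IDBD (Sutton, 1992) for online linear prediction: each weight gets its own step-size, itself adapted by gradient descent on the prediction error. This is a sketch of the general family the abstract describes, not the paper's AdaGain algorithm; the function name idbd and the defaults are illustrative assumptions.

    # IDBD-style meta-descent sketch for online linear prediction (illustrative
    # only, not AdaGain). Each weight w[i] has its own step-size
    # alpha[i] = exp(beta[i]); the log step-sizes beta are adapted by a meta
    # step-size theta to reduce prediction error.
    idbd <- function(X, y, theta = 0.01, beta0 = log(0.05)) {
      d <- ncol(X)
      w <- rep(0, d); beta <- rep(beta0, d); h <- rep(0, d)
      preds <- numeric(nrow(X))
      for (t in seq_len(nrow(X))) {
        x <- X[t, ]
        preds[t] <- sum(w * x)                  # online prediction
        delta <- y[t] - preds[t]                # prediction error
        beta <- beta + theta * delta * x * h    # meta-gradient step on log step-sizes
        alpha <- exp(beta)                      # per-weight step-size vector
        w <- w + alpha * delta * x              # base LMS update with vector step-sizes
        h <- h * pmax(0, 1 - alpha * x^2) + alpha * delta * x  # decaying gradient trace
      }
      list(w = w, alpha = exp(beta), preds = preds)
    }

    # Example use on a simple streaming regression problem:
    set.seed(1)
    X <- matrix(rnorm(500 * 3), ncol = 3)
    y <- as.vector(X %*% c(0.5, -1, 2) + rnorm(500, sd = 0.1))
    fit <- idbd(X, y)                           # fit$alpha shows the adapted step-sizes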