Average Contrastive Divergence for Training Restricted Boltzmann Machines
This paper studies the contrastive divergence (CD) learning algorithm and proposes a new algorithm for training restricted Boltzmann machines (RBMs). We show that CD is a biased estimator of the log-likelihood gradient and analyze this bias. We then propose a new learning algorithm, average contrastive divergence (ACD), for training RBMs; it improves on CD and differs from the traditional CD algorithm. Experimental results show that the new algorithm approximates the log-likelihood gradient more closely and outperforms the traditional CD algorithm.
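Since the paper is positioned as an improvement on the standard CD estimator, a minimal sketch of plain CD-1 for a binary RBM may help fix ideas. The NumPy array names and the single Gibbs step below are illustrative assumptions; this is not the paper's ACD procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradients(v0, W, b, c):
    """One CD-1 gradient estimate for a binary RBM (illustrative sketch).

    v0 : (batch, n_visible) data batch
    W  : (n_visible, n_hidden) weights
    b  : (n_visible,) visible biases
    c  : (n_hidden,) hidden biases
    """
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # One Gibbs step: reconstruct visibles, then recompute hidden probabilities.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # Difference of data-driven and reconstruction-driven statistics:
    # a biased approximation of the log-likelihood gradient.
    n = v0.shape[0]
    dW = (v0.T @ ph0 - v1.T @ ph1) / n
    db = (v0 - v1).mean(axis=0)
    dc = (ph0 - ph1).mean(axis=0)
    return dW, db, dc
```

Running the Gibbs chain for more than one step (CD-k) reduces, but does not remove, the bias that the paper analyzes.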
Some New Results of Mitrinović–Cusa’s and Related Inequalities Based on the Interpolation and Approximation Method
In this paper, new refinements and improvements of Mitrinović–Cusa’s and related inequalities are presented. First, we give new polynomial bounds for the sinc(x) and cos(x) functions using the interpolation and approximation method. Based on these results, we establish new bounds for Mitrinović–Cusa’s, Wilker’s, Huygens’, Wu–Srivastava’s, and Neuman–Sándor’s inequalities. The analysis shows that our bounds are tighter than those obtained by previous methods.
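For orientation, the classical inequalities that such refinements sharpen can be stated as follows; these are the standard textbook forms on (0, π/2), not the new polynomial bounds derived in the paper.

```latex
% Classical inequalities on (0, \pi/2), stated for reference only.
% Cusa--Huygens:
\frac{\sin x}{x} < \frac{2 + \cos x}{3},
% Wilker:
\left(\frac{\sin x}{x}\right)^{2} + \frac{\tan x}{x} > 2,
% Huygens:
2\,\frac{\sin x}{x} + \frac{\tan x}{x} > 3 .
```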
Convergence Analysis of Contrastive Divergence Algorithm Based on Gradient Method with Errors
Contrastive Divergence (CD) has become a common way to train Restricted Boltzmann Machines (RBMs); however, its convergence has not yet been fully characterized. This paper studies the convergence of the CD algorithm. We relate CD to the gradient method with errors and derive convergence conditions for CD using the convergence theorem of the gradient method with errors. We give specific convergence conditions of the CD learning algorithm for RBMs in which both the visible and hidden units take only a finite number of values. Two new convergence conditions are obtained by specifying the learning rate. Finally, we give specific conditions that the number of Gibbs sampling steps must satisfy in order to guarantee convergence of the CD algorithm.
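For context, the generic gradient-method-with-errors scheme referred to here can be sketched as follows; the exact form of the error term and the specific conditions derived in the paper are only illustrated, not reproduced.

```latex
% Generic gradient ascent with errors (illustrative form only):
\theta_{k+1} = \theta_k + \gamma_k \bigl( \nabla \log L(\theta_k) + e_k \bigr),
% where e_k is the perturbation introduced by truncating the Gibbs chain,
% and the learning rates \gamma_k are typically required to satisfy
% Robbins--Monro-type conditions such as
\sum_{k=0}^{\infty} \gamma_k = \infty, \qquad \sum_{k=0}^{\infty} \gamma_k^{2} < \infty .
```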
New Refinements and Improvements of Jordan’s Inequality
The polynomial bounds of Jordan’s inequality, especially the cubic and quartic polynomial bounds, have been studied and improved extensively in the literature; the linear and quadratic polynomial bounds, however, cannot be improved very much. In this paper, new refinements and improvements of Jordan’s inequality are given. We present new lower and upper bounds for the strengthened Jordan’s inequality using polynomials of degrees 1 and 2. Our bounds are tighter than previous results based on polynomials of degrees 1 and 2. More importantly, we give new improvements of Jordan’s inequality using polynomials of degree 5, which achieve much tighter bounds than previous methods.
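For reference, the classical Jordan inequality that these polynomial bounds refine is the standard statement below, not one of the paper's new bounds.

```latex
% Classical Jordan inequality on (0, \pi/2], stated for reference only:
\frac{2}{\pi} \le \frac{\sin x}{x} < 1 .
```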
New Polynomial Bounds for Jordan’s and Kober’s Inequalities Based on the Interpolation and Approximation Method
In this paper, new refinements and improvements of Jordan’s and Kober’s inequalities are presented. We give new polynomial bounds for the sinc(x) and cos(x) functions based on the interpolation and approximation method. The results show that our bounds are tighter than those obtained by previous methods.
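As a small illustration, the classical Jordan and Kober bounds that such polynomial refinements start from can be checked numerically; the grid and tolerance below are arbitrary choices, and the bound expressions are the standard classical forms rather than the paper's new bounds.

```python
import numpy as np

# Numerical sanity check of the *classical* Jordan and Kober bounds on (0, pi/2);
# the paper's sharper polynomial bounds are not reproduced here.
x = np.linspace(1e-6, np.pi / 2, 10_000)

jordan_lower = 2 / np.pi              # Jordan: 2/pi <= sin(x)/x < 1
kober_lower = 1 - 2 * x / np.pi       # Kober:  cos(x) >= 1 - 2x/pi

assert np.all(np.sin(x) / x >= jordan_lower - 1e-12)
assert np.all(np.sin(x) / x < 1)
assert np.all(np.cos(x) >= kober_lower - 1e-12)
print("classical Jordan and Kober bounds hold on the sampled grid")
```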