Friction performance of electroless Ni-P coatings in alkaline medium and optimization of coating parameters
The present paper studies the friction performance of electroless Ni-P coatings in an alkaline medium (10% NaOH solution), and the coating process parameters are optimized for minimum friction using the Taguchi method based on an L27 orthogonal array. The study is carried out using different combinations of four coating process parameters, namely, concentration of nickel source (A), concentration of reducing agent (B), bath temperature (C) and annealing temperature (D). The friction tests are conducted with a pin-on-disk tribometer. The optimum combination of process parameters for minimum friction is obtained. Also, analysis of variance (ANOVA) is performed to determine the significant contribution of each coating process parameter and their interactions. ANOVA reveals that bath temperature has the maximum contribution in controlling the friction behaviour of the Ni-P coating. The surface morphology and composition of the coatings are studied with the help of scanning electron microscopy (SEM), energy dispersive X-ray (EDX) analysis and X-ray diffraction (XRD) analysis. It is found that the Ni-P coating is amorphous in the as-deposited condition but gradually turns crystalline with heat treatment.
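The Taguchi optimization described above ranks factor levels by a signal-to-noise (S/N) ratio; for a "smaller-the-better" response such as friction, the standard formula is S/N = -10·log10(mean(y²)). A minimal sketch of that computation, using purely illustrative friction values (the paper's actual measurements are not reproduced here):

```python
import math

def sn_smaller_the_better(measurements):
    """Taguchi S/N ratio for a 'smaller-the-better' response such as
    friction coefficient: S/N = -10 * log10(mean(y^2))."""
    mean_sq = sum(y * y for y in measurements) / len(measurements)
    return -10.0 * math.log10(mean_sq)

# Hypothetical friction coefficients from three repeated runs at each
# level of one factor (e.g. bath temperature); values are illustrative only.
runs_by_level = {
    "level_1": [0.42, 0.45, 0.44],
    "level_2": [0.31, 0.33, 0.30],
    "level_3": [0.38, 0.36, 0.39],
}

# The level with the highest S/N ratio is preferred: lowest friction
# combined with least scatter across repeated runs.
sn_by_level = {lvl: sn_smaller_the_better(y) for lvl, y in runs_by_level.items()}
best = max(sn_by_level, key=sn_by_level.get)
print(best, round(sn_by_level[best], 2))
```

In a full L27 study the same main-effect averaging is repeated for every factor and level, and ANOVA then apportions the variance among factors, as the abstract describes.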
Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case
The analysis in Part I revealed interesting properties for subgradient
learning algorithms in the context of stochastic optimization when gradient
noise is present. These algorithms are used when the risk functions are
non-smooth and involve non-differentiable components. They have been long
recognized as being slow converging methods. However, it was revealed in Part I
that the rate of convergence becomes linear for stochastic optimization
problems, with the error iterate converging at an exponential rate O(α^i)
to within an O(µ)-neighborhood of the optimizer, for some α ∈ (0,1) and small
step-size µ. The conclusion was established under weaker
assumptions than the prior literature and, moreover, several important problems
(such as LASSO, SVM, and Total Variation) were shown to satisfy these weaker
assumptions automatically (but not the previously used conditions from the
literature). These results revealed that sub-gradient learning methods have
more favorable behavior than originally thought when used to enable continuous
adaptation and learning. The results of Part I were exclusive to single-agent
adaptation. The purpose of the current Part II is to examine the implications
of these discoveries when a collection of networked agents employs subgradient
learning as their cooperative mechanism. The analysis will show that, despite
the coupled dynamics that arises in a networked scenario, the agents are still
able to attain linear convergence in the stochastic case; they are also able to
reach agreement within O(µ) of the optimizer.
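The cooperative mechanism the abstract refers to can be illustrated with a simple diffusion-style adapt-then-combine scheme (a generic sketch under assumed data, not the authors' exact algorithm): each agent takes a stochastic subgradient step on its own streaming data and then averages with its neighbors. Here three agents minimize the aggregate non-smooth risk Σ_k E|w − d_k| with d_k ~ N(θ_k, 1) and θ = (−1, 0, 1), whose aggregate minimizer is w* = 0 by symmetry; with a small constant step-size the iterates agree and hover in a small neighborhood of w*:

```python
import random

random.seed(0)

# Hypothetical setup: each agent k streams data d_k ~ N(theta_k, 1) and the
# network cooperatively minimizes sum_k E|w - d_k| (a non-smooth risk).
theta = [-1.0, 0.0, 1.0]

# Doubly-stochastic combination matrix for a fully connected 3-agent network.
A = [[1 / 3, 1 / 3, 1 / 3]] * 3

mu = 0.01             # small constant step-size: sets the O(mu) neighborhood
w = [5.0, 5.0, 5.0]   # common remote initialization

for _ in range(50_000):
    # Adapt: each agent takes a stochastic subgradient step on its own sample.
    psi = []
    for k in range(3):
        d = random.gauss(theta[k], 1.0)
        sub = 1.0 if w[k] > d else -1.0   # subgradient of |w - d| at w[k]
        psi.append(w[k] - mu * sub)
    # Combine: each agent averages the intermediate iterates of its neighbors.
    w = [sum(A[k][l] * psi[l] for l in range(3)) for k in range(3)]

print([round(x, 3) for x in w])  # agents agree, close to w* = 0
```

Shrinking µ tightens the neighborhood around w* but slows the initial transient, which is the step-size trade-off the analysis quantifies.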
Continuous-time Proportional-Integral Distributed Optimization for Networked Systems
In this paper we explore the relationship between dual decomposition and the
consensus-based method for distributed optimization. The relationship is
developed by examining the similarities between the two approaches and their
relationship to gradient-based constrained optimization. By formulating each
algorithm in continuous-time, it is seen that both approaches use a gradient
method for optimization with one using a proportional control term and the
other using an integral control term to drive the system to the constraint set.
Therefore, a significant contribution of this paper is to combine these methods
to develop a continuous-time proportional-integral distributed optimization
method. Furthermore, we establish convergence using Lyapunov stability
techniques and utilizing properties from the network structure of the
multi-agent system.
Comment: 23 pages, submitted to the Journal of Control and Decision, under
review. Takes comments from the previous review process into account. Reasons
for a continuous-time approach are given and minor technical details are
remedied. The largest revision is reformatting for the Journal of Control and
Decision.
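The proportional-integral structure described above can be sketched with an Euler discretization of one common form of these dynamics (a minimal illustration, not the paper's exact formulation): ẋ = −∇f(x) − Lx − z, ż = Lx, where Lx is the proportional disagreement term and z integrates the disagreement. With quadratic local costs f_i(x) = ½(x − a_i)², whose aggregate minimizer is the average of the a_i, the states reach consensus at that optimum:

```python
# Hypothetical local data: agent i holds a_i and the cost f_i(x) = 0.5*(x - a_i)^2,
# so the global optimum of sum_i f_i is mean(a) = 2.5.
a = [1.0, 2.0, 3.0, 4.0]
L = [[ 1, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  1]]   # Laplacian of a 4-node path graph

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [0.0] * 4            # local estimates of the optimizer
z = [0.0] * 4            # integral states (accumulate disagreement)
dt = 0.01                # Euler step for the continuous-time dynamics

for _ in range(100_000):
    Lx = matvec(L, x)
    grad = [x[i] - a[i] for i in range(4)]            # grad of f_i at x_i
    dx = [-grad[i] - Lx[i] - z[i] for i in range(4)]  # proportional action
    dz = Lx                                           # integral action
    x = [x[i] + dt * dx[i] for i in range(4)]
    z = [z[i] + dt * dz[i] for i in range(4)]

print([round(v, 3) for v in x])  # each agent approaches the optimum 2.5
```

At equilibrium ż = 0 forces Lx = 0 (consensus), and summing the ẋ equation over agents (with Σz_i conserved at zero) forces Σ∇f_i = 0, i.e. optimality; this is the complementary role of the proportional and integral terms the paper combines.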