Accelerating Stochastic Recursive and Semi-stochastic Gradient Methods with Adaptive Barzilai-Borwein Step Sizes
The mini-batch versions of the StochAstic Recursive grAdient algoritHm and
the Semi-Stochastic Gradient Descent method, equipped with random
Barzilai-Borwein step sizes (abbreviated as MB-SARAH-RBB and mS2GD-RBB), have
gained prominence through their timely step-size sequences. Inspired by modern
adaptors and variance reduction techniques, in this paper we propose two new
step-size rules, referred to as RHBB and RHBB+, leading to four algorithms:
MB-SARAH-RHBB, MB-SARAH-RHBB+, mS2GD-RHBB and mS2GD-RHBB+. RHBB+ is an
enhanced version that additionally incorporates the importance sampling
technique. The resulting methods are aggressive in their updates, robust in
performance, and self-adaptive across iterations. We analyze the flexible
convergence structures and the corresponding complexity bounds in the strongly
convex case, and provide comprehensive, theoretically grounded tuning guidance
for practical implementations.
Experiments show that the proposed methods consistently outperform the original
and various state-of-the-art methods on frequently tested data sets. In
particular, tests of RHBB+ verify the efficacy of applying importance sampling
at the step-size level. Numerous explorations demonstrate the promising
scalability of our iterative adaptors.
Comment: 44 pages, 33 figures
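The abstract does not spell out the RHBB/RHBB+ rules, but the underlying random Barzilai-Borwein (RBB) idea they build on can be sketched: evaluate the classical BB1 quotient on a freshly sampled mini-batch. Below is a minimal Python sketch on an illustrative least-squares objective; all names, the batch size, and the curvature guard are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10)

def batch_grad(x, idx):
    # Mini-batch least-squares gradient (illustrative objective only).
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def rbb_step(x_prev, x_curr, idx):
    # Classical BB1 quotient evaluated on a freshly sampled mini-batch.
    # The paper's RHBB/RHBB+ rules refine this quotient, so this shows
    # only the plain RBB idea, not the proposed method.
    s = x_curr - x_prev
    y = batch_grad(x_curr, idx) - batch_grad(x_prev, idx)
    return (s @ s) / max(s @ y, 1e-12)  # guard against tiny curvature

x_prev, x = np.zeros(10), 0.01 * rng.standard_normal(10)
for _ in range(50):
    idx = rng.choice(len(b), size=16, replace=False)
    x_prev, x = x, x - rbb_step(x_prev, x, idx) * batch_grad(x, idx)
```

In SARAH- or S2GD-type methods the same quotient would be computed once per outer loop and reused across the inner recursive-gradient updates.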
Less but Better: Generalization Enhancement of Ordinal Embedding via Distributional Margin
In the absence of prior knowledge, ordinal embedding methods obtain new
representations for items in a low-dimensional Euclidean space via a set of
quadruple-wise comparisons. These ordinal comparisons often come from human
annotators, and classical approaches succeed when sufficiently many comparisons
are available. However, collecting a large amount of labeled data is known to
be hard, and most existing work pays little attention to generalization when
samples are insufficient. Meanwhile, recent progress in large-margin theory
discloses that, rather than the minimum margin alone, both the margin mean and
variance, which characterize the margin distribution, are crucial to the
overall generalization performance. To address the issue of insufficient
training samples, we propose a margin distribution learning paradigm for
ordinal embedding, entitled Distributional Margin based Ordinal Embedding
(\textit{DMOE}). Specifically, we first define the margin for the ordinal
embedding problem. Secondly, we formulate a concise objective function that
avoids maximizing the margin mean and minimizing the margin variance directly
yet exhibits a similar effect. Moreover, an Augmented Lagrange Multiplier based
algorithm is customized to seek the optimal solution of \textit{DMOE}
effectively. Experimental studies on both simulated and real-world datasets
show the effectiveness of the proposed algorithm.
Comment: Accepted by AAAI 201
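The abstract states neither the margin definition nor the concise objective explicitly. A minimal sketch of the margin-distribution idea, under the common convention that a quadruple (i, j, k, l) asserts pair (i, j) is more similar than pair (k, l), might look as follows; the squared-distance margin, the penalty weight `lam`, and the direct mean-variance form are assumptions (DMOE achieves a similar effect indirectly, and its ALM solver is omitted here).

```python
import numpy as np

def margins(X, quads):
    # For a quadruple (i, j, k, l) meaning "pair (i, j) is more similar
    # than pair (k, l)", take the squared-distance gap as the margin.
    i, j, k, l = quads.T
    d_ij = np.sum((X[i] - X[j]) ** 2, axis=1)
    d_kl = np.sum((X[k] - X[l]) ** 2, axis=1)
    return d_kl - d_ij

def margin_distribution_objective(X, quads, lam=1.0):
    # Sketch of a margin-distribution criterion: reward the margin mean,
    # penalize the margin variance. This is the large-margin-theory idea
    # the abstract cites, not DMOE's exact objective.
    m = margins(X, quads)
    return -m.mean() + lam * m.var()

# Toy usage: 5 items embedded in 2-D, two quadruple comparisons.
X = np.random.default_rng(1).standard_normal((5, 2))
quads = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
print(margin_distribution_objective(X, quads))
```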
Robust Ordinal Embedding from Contaminated Relative Comparisons
Existing ordinal embedding methods usually follow a two-stage routine:
outlier detection is first employed to pick out the inconsistent comparisons;
then an embedding is learned from the clean data. However, learning in a
multi-stage manner is well-known to suffer from sub-optimal solutions. In this
paper, we propose a unified framework to jointly identify the contaminated
comparisons and derive reliable embeddings. The merits of our method are
three-fold: (1) By virtue of the proposed unified framework, the sub-optimality
of traditional methods is largely alleviated; (2) The proposed method is aware
of global inconsistency by minimizing a corresponding cost, while traditional
methods only involve local inconsistency; (3) Instead of resorting to
nuclear-norm heuristics, we adopt an exact treatment of the rank equality
constraint. Our studies are supported by experiments on both simulated
examples and real-world data. The proposed framework provides a promising
tool for robust ordinal embedding from contaminated comparisons.
Comment: Accepted by AAAI 201
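The abstract says only that an exact rank step replaces the nuclear-norm heuristic. One standard way to realize such a step is a truncated eigendecomposition of the Gram matrix, and L1 shrinkage is a typical device for flagging a sparse set of contaminated comparisons; the sketch below shows these two generic building blocks, not the paper's actual joint updates.

```python
import numpy as np

def project_rank_psd(G, r):
    # Exact projection of a symmetric matrix onto PSD matrices of rank
    # at most r: keep the r largest nonnegative eigenvalues.
    w, V = np.linalg.eigh(G)
    w = np.clip(w, 0.0, None)
    keep = np.argsort(w)[-r:]
    return (V[:, keep] * w[keep]) @ V[:, keep].T

def soft_threshold(E, tau):
    # L1 shrinkage: a common way to model a sparse set of contaminated
    # comparisons inside a single-stage formulation.
    return np.sign(E) * np.maximum(np.abs(E) - tau, 0.0)
```

A joint scheme would alternate steps like these within one optimization, rather than running outlier detection and embedding as separate stages.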
MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications
Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
Generalized averaged Gaussian quadrature and applications
A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
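As a concrete illustration, the simplest member of this family is Laurie's averaged rule: the mean of the n-point Gauss rule and the (n+1)-point anti-Gauss rule, whose gap to the Gauss value serves as an error estimate. Below is a minimal Python sketch for the Legendre weight on [-1, 1], built via the Golub-Welsch eigenvalue method; the generalized and truncated variants discussed in the abstract modify this construction and are not reproduced here.

```python
import numpy as np

def legendre_jacobi(n, anti=False):
    # Jacobi matrix for the n-point Gauss-Legendre rule; for Laurie's
    # anti-Gauss rule the trailing recurrence coefficient is doubled.
    k = np.arange(1, n)
    beta = k**2 / (4.0 * k**2 - 1.0)  # beta_1 .. beta_{n-1}; alphas are 0
    if anti:
        beta[-1] *= 2.0               # anti-Gauss modification
    J = np.diag(np.sqrt(beta), 1)
    return J + J.T

def golub_welsch(J, mu0=2.0):
    # Nodes are the eigenvalues of the Jacobi matrix; weights come from
    # the first components of the eigenvectors (mu0 = integral of w(x)).
    nodes, vecs = np.linalg.eigh(J)
    return nodes, mu0 * vecs[0, :]**2

def averaged_gauss(f, n):
    # Laurie's averaged rule: mean of the n-point Gauss rule G and the
    # (n+1)-point anti-Gauss rule A; |avg - G| estimates the error of G.
    xg, wg = golub_welsch(legendre_jacobi(n))
    xa, wa = golub_welsch(legendre_jacobi(n + 1, anti=True))
    G, Aval = wg @ f(xg), wa @ f(xa)
    avg = 0.5 * (G + Aval)
    return avg, abs(avg - G)

val, err_est = averaged_gauss(np.exp, 5)
print(val, err_est)  # ~ e - 1/e, with a small estimated error
```

The "internal" question the abstract raises is whether all nodes of such averaged rules stay inside the integration interval, which is what makes them safe drop-in error estimators.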