57 research outputs found
Adaptive Mirror Descent for the Network Utility Maximization Problem
Network utility maximization is one of the most important problems in
network traffic management. Given the growth of modern communication
networks, we consider the utility maximization problem in a network with a
large number of connections (links) used by a huge number of users. To
solve this problem, an adaptive mirror descent algorithm that handles a
large number of constraints is proposed. The key feature of the algorithm
is its dimension-free convergence rate. The convergence of the proposed
scheme is proved theoretically, and the theoretical analysis is verified
with numerical simulations. We compare the algorithm with another approach
that applies the ellipsoid method (EM) to the dual problem. Numerical
experiments show that the proposed algorithm significantly outperforms EM
in large networks and when very high solution accuracy is not required.
Our approach can be used in many network design paradigms, in particular
in software-defined networks.
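As a concrete reference point, the sketch below shows the classical dual-decomposition baseline for network utility maximization: link prices are updated by a projected dual (sub)gradient step and users respond with utility-maximizing rates. It is only an illustration of the problem setting; the weighted-log utilities, the random routing matrix C, and all step-size and iteration choices are assumptions made here, and this is neither the paper's adaptive mirror descent algorithm nor its ellipsoid-method baseline.

import numpy as np

# Toy NUM instance: maximize sum_k w_k*log(x_k)  s.t.  C @ x <= b,  x >= 0,
# where C is a (synthetic) route-link incidence matrix.
rng = np.random.default_rng(0)
n_links, n_users = 5, 8
C = rng.integers(0, 2, size=(n_links, n_users)).astype(float)
C[:, C.sum(axis=0) == 0] = 1.0              # ensure every user crosses some link
b = np.ones(n_links)                        # link capacities
w = rng.uniform(0.5, 1.5, size=n_users)     # utility weights

lam = np.ones(n_links)                      # dual variables (link prices)
eta = 0.05                                  # fixed step size (the paper's method adapts it)
for _ in range(2000):
    price = C.T @ lam                       # aggregate path price seen by each user
    x = w / np.maximum(price, 1e-12)        # rate maximizing w_k*log(x_k) - price_k*x_k
    lam = np.maximum(lam + eta * (C @ x - b), 0.0)  # projected dual (sub)gradient step

print("rates:", np.round(x, 3))
# With a fixed step size a small residual capacity violation is expected.
print("worst capacity violation:", float((C @ x - b).max()))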
Near-optimal tensor methods for minimizing the gradient norm of convex function
Motivated by convex problems with linear constraints and, in particular, by
entropy-regularized optimal transport, we consider the problem of finding
$\varepsilon$-approximate stationary points, i.e. points at which the norm
of the objective gradient is less than $\varepsilon$, of convex functions
with Lipschitz $p$-th order derivatives. Lower complexity bounds for this
problem were recently proposed in [Grapiglia and Nesterov,
arXiv:1907.07053]. However, the methods presented in the same paper do not
have optimal complexity bounds. We propose two methods that are optimal up
to logarithmic factors, with complexity bounds stated in terms of the
initial objective residual and of the distance between the starting point
and the solution, respectively.
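Written out, the goal described in this abstract is to find a point with a small gradient for a smooth convex function; the symbols $\varepsilon$, $p$ and $L_p$ below are the standard notation for this setting and are assumed here rather than quoted from the paper:

\[
\text{find } \hat{x} \ \text{such that} \ \|\nabla f(\hat{x})\| \le \varepsilon,
\qquad \text{where } f \text{ is convex and }
\|\nabla^{p} f(x) - \nabla^{p} f(y)\| \le L_{p}\,\|x - y\| \ \ \forall\, x, y.
\]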
Near-optimal tensor methods for minimizing gradient norm
Motivated by convex problems with linear constraints and, in particular, by entropy-regularized optimal transport, we consider the problem of finding approximate stationary points, i.e. points at which the norm of the objective gradient is below a small error tolerance, of convex functions with Lipschitz $p$-th order derivatives. Lower complexity bounds for this problem were recently proposed in [Grapiglia and Nesterov, arXiv:1907.07053]. However, the methods presented in the same paper do not have optimal complexity bounds. We propose two methods that are optimal up to logarithmic factors, with complexity bounds stated in terms of the initial objective residual and of the distance between the starting point and the solution, respectively.
Ethical aspects of the publishing activities of a university library
The paper recounts the history of the creation of the digital library of St. Petersburg State Polytechnical University. For the first time, the ethical aspects of a university library's activities in publishing the results of the intellectual work of teachers and students on its website are considered.
Oracle complexity separation in convex optimization
Regularized empirical risk minimization problems, ubiquitous in machine learning, are often composed of several blocks that can be treated using different types of oracles, e.g., full gradient, stochastic gradient, or coordinate derivative. The optimal oracle complexity is known and achievable separately for the full gradient case, the stochastic gradient case, etc. We propose a generic framework that combines optimal algorithms for different types of oracles so as to achieve a separate optimal oracle complexity for each block, i.e., for each block the corresponding oracle is called the optimal number of times for a given accuracy. As a particular example, we demonstrate that for a combination of a full gradient oracle and either a stochastic gradient oracle or a coordinate descent oracle, our approach leads to the optimal number of oracle calls separately for the full gradient part and for the stochastic/coordinate descent part.
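As a rough illustration of what separating oracle calls means in practice, the toy sketch below (not an algorithm from the paper; the quadratic data, the prox parameter gamma, the step size alpha and the loop counts are all illustrative assumptions) minimizes a sum f + g by querying the full gradient of f once per outer iteration while g is accessed only through a cheap stochastic oracle in an inner loop, so the two oracle-call counters grow at very different rates.

import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 200
A = rng.normal(size=(50, n)); y = rng.normal(size=50)   # data for the "expensive" block f
B = rng.normal(size=(m, n));  c = rng.normal(size=m)    # data for the finite-sum block g

def grad_f(x):                      # full gradient oracle: f(x) = 0.5*mean((A x - y)^2)
    return A.T @ (A @ x - y) / A.shape[0]

def stoch_grad_g(x, i):             # stochastic oracle: one term of g(x) = 0.5*mean((B x - c)^2)
    return (B[i] @ x - c[i]) * B[i]

x = np.zeros(n)
calls_f = calls_g = 0
gamma, alpha = 0.2, 0.01            # prox parameter and inner step size (ad hoc choices)
for t in range(50):                 # each outer iteration uses a single full gradient of f
    gf = grad_f(x); calls_f += 1
    z = x.copy()
    for s in range(200):            # inner loop: cheap stochastic steps on the model
        i = rng.integers(m)         #   <gf, z> + g(z) + ||z - x||^2 / (2*gamma)
        z -= alpha * (gf + stoch_grad_g(z, i) + (z - x) / gamma)
        calls_g += 1
    x = z

print("full-gradient calls to f:", calls_f)        # 50
print("stochastic calls to g:   ", calls_g)        # 50 * 200 = 10000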
Oracle Complexity Separation in Convex Optimization
Many convex optimization problems have a structured objective function
written as a sum of functions with different types of oracles (full
gradient, coordinate derivative, stochastic gradient) and different
evaluation complexities of these oracles. In the strongly convex case these
functions also have different condition numbers, which eventually define
the iteration complexity of first-order methods and the number of oracle
calls required to achieve a given accuracy. Motivated by the desire to call
the more expensive oracle fewer times, in this paper we consider
minimization of a sum of two functions and propose a generic algorithmic
framework to separate the oracle complexities for each component in the
sum. As a specific example, for the $\mu$-strongly convex problem
$\min_x f(x) + g(x)$ with $L_f$-smooth function $f$ and $L_g$-smooth
function $g$, a special case of our algorithm requires, up to a logarithmic
factor, $O(\sqrt{L_f/\mu})$ first-order oracle calls for $f$ and
$O(\sqrt{L_g/\mu})$ first-order oracle calls for $g$. Our general framework
also covers the setting of strongly convex objectives, the setting when one
of the functions is given by a coordinate derivative oracle, and the
setting when it has a finite-sum structure and is available through a
stochastic gradient oracle. In the latter two cases we obtain,
respectively, accelerated random coordinate descent and accelerated
variance reduction methods with oracle complexity separation.
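One way to read the bounds above: an accelerated method applied to f + g as a single black box calls both oracles roughly the same number of times, governed by the sum of the smoothness constants, whereas the separated scheme charges each function only for its own condition number. The display below states this contrast; it is a standard comparison under the assumptions of the abstract, not a formula quoted from the paper.

\[
\text{joint: } N_f = N_g = \tilde O\!\left(\sqrt{\frac{L_f + L_g}{\mu}}\right)
\qquad \text{vs.} \qquad
\text{separated: } N_f = \tilde O\!\left(\sqrt{\frac{L_f}{\mu}}\right),\quad
N_g = \tilde O\!\left(\sqrt{\frac{L_g}{\mu}}\right).
\]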
- …