On the number of zeros of Melnikov functions
We provide an effective uniform upper bound for the number of zeros of the
first non-vanishing Melnikov function of a polynomial perturbation of a planar
polynomial Hamiltonian vector field. The bound depends on the degrees of the field
and of the perturbation, and on the order of the Melnikov function. The
generic case was considered by Binyamini, Novikov and Yakovenko
(\cite{BNY-Inf16}). The bound follows from an effective construction of the
Gauss-Manin connection for iterated integrals.
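For context, a sketch of the standard setup in which these Melnikov functions arise (the notation below is ours; the abstract fixes no symbols). For a perturbed Hamiltonian foliation $dH + \epsilon\,\omega = 0$, with $H$ a polynomial Hamiltonian and $\omega$ a polynomial 1-form, the displacement function along a continuous family of ovals $\gamma_t \subset \{H = t\}$ expands as
\[
\Delta(t,\epsilon) = \epsilon^{\mu} M_{\mu}(t) + O(\epsilon^{\mu+1}), \qquad M_{\mu} \not\equiv 0,
\]
where $M_{\mu}$ is the first non-vanishing Melnikov function; up to sign conventions, the Poincar\'e-Pontryagin formula gives $M_{1}(t) = -\oint_{\gamma_t} \omega$. Isolated zeros of $M_{\mu}$ bound the number of limit cycles born from the ovals, which is why an effective bound on them matters.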
Multiplicities of Noetherian deformations
The \emph{Noetherian class} is a wide class of functions defined in terms of
polynomial partial differential equations. It includes functions appearing
naturally in various branches of mathematics (exponential, elliptic, modular,
etc.). A conjecture by Khovanskii states that the \emph{local} geometry of sets
defined using Noetherian equations admits effective estimates analogous to the
effective \emph{global} bounds of algebraic geometry.
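For concreteness, a sketch of the standard definition (the notation below is ours, not the abstract's): a \emph{Noetherian chain} is a tuple of analytic functions $f_1,\dots,f_\ell$ on a domain in $\mathbb{R}^n$ satisfying
\[
\frac{\partial f_i}{\partial x_j} = P_{ij}(x_1,\dots,x_n,f_1,\dots,f_\ell), \qquad 1 \le i \le \ell,\ 1 \le j \le n,
\]
for some polynomials $P_{ij}$, and a \emph{Noetherian function} is a polynomial in the coordinates and the chain. For example, $f_1 = \sin x$ and $f_2 = \cos x$ form a Noetherian chain, since $f_1' = f_2$ and $f_2' = -f_1$.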
We make a major step in the development of the theory of Noetherian functions
by providing an effective upper bound for the local number of isolated
solutions of a Noetherian system of equations depending on a parameter
$\epsilon$, which remains valid even when the system degenerates at
$\epsilon = 0$. An estimate of this sort has played the key role in the
development of the theory of Pfaffian functions, and is expected to lead to
similar results in the Noetherian setting. We illustrate this by deducing from
our main result an effective form of the Łojasiewicz inequality for Noetherian
functions.
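For comparison, the classical (non-effective) Łojasiewicz inequality states that for an analytic function $f$ on a neighborhood of a compact set $K$ there exist $C > 0$ and an exponent $\nu$ with
\[
|f(x)| \ge C \cdot \operatorname{dist}\bigl(x, \{f = 0\}\bigr)^{\nu} \qquad \text{for all } x \in K;
\]
an effective form, as in the abstract, bounds $\nu$ explicitly in terms of the data defining $f$. (The formulation above is a standard one, not quoted from the paper.)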
Tensorizing Neural Networks
Deep neural networks currently demonstrate state-of-the-art performance in
several domains. At the same time, models of this class are very demanding in
terms of computational resources. In particular, a large amount of memory is
required by commonly used fully-connected layers, making it hard to deploy the
models on low-end devices and preventing further growth in model size.
In this paper we convert the dense weight matrices of the fully-connected
layers to the Tensor Train format, so that the number of parameters is reduced
by several orders of magnitude while the expressive power of the layer is
preserved. In particular, for the Very Deep VGG networks we report a
compression factor of up to 200000 for the dense weight matrix of a
fully-connected layer, leading to a compression factor of up to 7 for the
whole network.
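A minimal NumPy sketch of the Tensor Train (TT) matrix format behind this compression; the mode sizes, ranks, and names below are illustrative assumptions, not the paper's exact configuration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Factor a 256x256 weight matrix with modes 256 = 4*4*4*4 on each side.
    out_modes = [4, 4, 4, 4]
    in_modes = [4, 4, 4, 4]
    ranks = [1, 8, 8, 8, 1]  # TT-ranks; r_0 = r_d = 1 by convention

    # One 4-way core per mode pair: core k has shape (r_{k-1}, m_k, n_k, r_k).
    cores = [
        rng.standard_normal((ranks[k], out_modes[k], in_modes[k], ranks[k + 1]))
        for k in range(4)
    ]

    def tt_to_dense(cores):
        """Contract TT cores back into the dense matrix they represent."""
        W = np.ones((1, 1, 1))  # accumulated (rows, cols, rank); r_0 = 1
        for G in cores:
            _, m, n, r1 = G.shape
            # Sum over the shared rank axis 'a'; row mode k and column mode l
            # enlarge the matrix in Kronecker (big-endian) order.
            W = np.einsum('ija,aklb->ikjlb', W, G)
            W = W.reshape(W.shape[0] * W.shape[1], W.shape[2] * W.shape[3], -1)
        return W[:, :, 0]  # r_d = 1, so drop the trailing rank axis

    W_dense = tt_to_dense(cores)            # shape (256, 256)
    x = rng.standard_normal(256)
    y = W_dense @ x                         # forward pass of the layer

    # Parameter count: the TT cores vs. the dense matrix they encode.
    tt_params = sum(G.size for G in cores)  # 128 + 1024 + 1024 + 128 = 2304
    print(tt_params, W_dense.size)          # 2304 vs 65536: ~28x fewer

Storing only the four small cores keeps 2304 numbers in place of 65536 here; in practice the matrix-vector product is computed directly from the cores rather than by reconstructing the dense matrix, which is where the memory savings are realized.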
- …