14,136 research outputs found
Coordinate method for localizing the value of a linear function defined on permutations
The problem of localizing a linear function on permutations is considered. We propose a coordinate method for solving it, which is new and the best among known methods.
On the problem of localizing a linear function on permutations
We describe a method for solving the problem of localizing a linear target function on a set of permutations. The task is to find the locally admissible permutations in the set on which the linear function takes a given value; in the general case, such a problem may have no solution. We propose a newly developed method that obtains a solution (whenever one exists) by a goal-oriented search for locally admissible permutations with a minimal amount of enumeration, much smaller than the total number of variants.
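The goal-oriented search described above can be sketched as a generic branch-and-bound enumeration (a minimal illustration with names of our choosing, not the paper's coordinate method): partial assignments are pruned whenever the target value lies outside the range still attainable by the remaining coefficients and values, which the rearrangement inequality bounds.

```python
def localize(c, a, target):
    """Find all permutations p of the (distinct) values `a` such that
    sum(c[i] * p[i]) == target, pruning with the rearrangement inequality:
    the minimal completion pairs sorted coefficients with reverse-sorted
    values, the maximal pairs them in the same order."""
    n = len(c)
    results = []

    def bounds(cs, vs):
        cs, vs = sorted(cs), sorted(vs)
        lo = sum(ci * vi for ci, vi in zip(cs, reversed(vs)))
        hi = sum(ci * vi for ci, vi in zip(cs, vs))
        return lo, hi

    def rec(i, used, acc, perm):
        rem_c = c[i:]
        rem_v = [v for v in a if v not in used]
        lo, hi = bounds(rem_c, rem_v)
        if not (acc + lo <= target <= acc + hi):
            return  # no completion of this partial assignment can hit target
        if i == n:
            results.append(perm[:])  # acc == target is guaranteed by the bound
            return
        for v in rem_v:
            used.add(v)
            perm.append(v)
            rec(i + 1, used, acc + c[i] * v, perm)
            perm.pop()
            used.remove(v)

    rec(0, set(), 0, [])
    return results
```

For example, `localize([1, 2, 3], [1, 2, 3], 13)` returns the two permutations `(1, 3, 2)` and `(2, 1, 3)` of the values, while exploring far fewer than all 3! arrangements once pruning kicks in.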
Tensor and Matrix Inversions with Applications
Higher-order tensor inversion is possible for even order. We have shown that
a tensor group endowed with the Einstein (contracted) product is isomorphic to
the general linear group of degree . With the isomorphic group structures,
we derived new tensor decompositions, which we have shown to be related to the
well-known canonical polyadic decomposition and the multilinear SVD. Moreover,
within this group-structure framework, multilinear systems are derived,
specifically for solving high-dimensional PDEs and large discrete quantum
models. We also address, in the least-squares sense, multilinear systems that
do not fit the framework, that is, when the tensor has an odd number of modes
or distinct dimensions in its modes. With the notion of tensor inversion,
multilinear systems are solvable. Numerically, we solve multilinear systems
using iterative techniques, namely the biconjugate gradient and Jacobi methods
in tensor format.
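The isomorphism underlying even-order tensor inversion can be illustrated with a small NumPy sketch (function names are ours, not the paper's): an order-4 tensor with mode structure (i1, i2, i1, i2) is unfolded into a square matrix, inverted there, and folded back, so that the Einstein (contracted) product of the tensor with its inverse yields the identity tensor.

```python
import numpy as np

def einstein_product(A, B, k):
    # Einstein (contracted) product: contract the last k modes of A
    # with the first k modes of B.
    return np.tensordot(A, B, axes=k)

def tensor_inverse(A):
    # A has mode structure (i1..ik, i1..ik); unfold to a square matrix,
    # invert, and refold -- this is the group isomorphism in action.
    k = A.ndim // 2
    rows = int(np.prod(A.shape[:k]))
    return np.linalg.inv(A.reshape(rows, rows)).reshape(A.shape)

# Demo: a well-conditioned random order-4 tensor (diagonally shifted so the
# unfolding is invertible) times its tensor inverse gives the identity.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 2, 3)) + 5 * np.eye(6).reshape(2, 3, 2, 3)
Ainv = tensor_inverse(A)
I = einstein_product(A, Ainv, 2)  # identity tensor of shape (2, 3, 2, 3)
```

The same unfolding turns a multilinear system A * X = B (with the Einstein product) into an ordinary linear system, which is why iterative matrix solvers such as biconjugate gradient or Jacobi carry over to the tensor format.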
Metal-insulator transition in a weakly interacting many-electron system with localized single-particle states
We consider the low-temperature behavior of weakly interacting electrons in
disordered conductors in the regime when all single-particle eigenstates are
localized by the quenched disorder. We prove that, in the absence of coupling
of the electrons to any external bath, the dc electrical conductivity exactly
vanishes as long as the temperature does not exceed some finite value. At the
same time, it can also be proven that at high enough temperatures the
conductivity is finite. These two statements imply that the system undergoes a
finite-temperature metal-to-insulator transition, which can be viewed as
Anderson-like localization of many-body wave functions in Fock space. The
metallic and insulating states are not distinguished from each other by any
spatial or discrete symmetries. We formulate the effective Hamiltonian
description of the system at low energies (of the order of the level spacing
in the single-particle localization volume). In the metallic phase the quantum
Boltzmann equation is valid, allowing one to find the kinetic coefficients. In
the insulating phase, we use the Feynman diagram technique to determine the
probability distribution function for the quantum-mechanical transition rates.
The probability of the escape rate from a given quantum state being finite
turns out to vanish in every order of perturbation theory in the
electron-electron interaction. Thus, the electron-electron interaction alone
is unable to cause relaxation and establish thermal equilibrium. As soon as
some weak coupling to a bath is turned on, the conductivity becomes finite
even in the insulating phase.
An Exactly Solvable Model for the Integrability-Chaos Transition in Rough Quantum Billiards
A central question of dynamics, largely open in the quantum case, is to what
extent it erases a system's memory of its initial properties. Here we present a
simple statistically solvable quantum model describing this memory loss across
an integrability-chaos transition under a perturbation obeying no selection
rules. From the perspective of quantum localization-delocalization on the
lattice of quantum numbers, we are dealing with a situation where every lattice
site is coupled to every other site with the same strength, on average. The
model also rigorously justifies a similar set of relationships recently
proposed in the context of two short-range-interacting ultracold atoms in a
harmonic waveguide. Application of our model to an ensemble of uncorrelated
impurities on a rectangular lattice gives good agreement with ab initio
numerics.
Comment: 29 pages, 5 figures
DeepPermNet: Visual Permutation Learning
We present a principled approach to uncover the structure of visual data by
solving a novel deep learning task coined visual permutation learning. The goal
of this task is to find the permutation that recovers the structure of data
from shuffled versions of it. In the case of natural images, this task boils
down to recovering the original image from patches shuffled by an unknown
permutation matrix. Unfortunately, permutation matrices are discrete, thereby
posing difficulties for gradient-based methods. To this end, we resort to a
continuous approximation of these matrices using doubly-stochastic matrices
which we generate from standard CNN predictions using Sinkhorn iterations.
Unrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet,
an end-to-end CNN model for this task. The utility of DeepPermNet is
demonstrated on two challenging computer vision problems, namely, (i) relative
attributes learning and (ii) self-supervised representation learning. Our
results show state-of-the-art performance on the Public Figures and OSR
benchmarks for (i) and on the classification and segmentation tasks on the
PASCAL VOC dataset for (ii).
Comment: Accepted at the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 201
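The Sinkhorn step that turns raw network scores into a (near) doubly-stochastic relaxation of a permutation matrix can be sketched in plain NumPy (an illustration with assumed names and a temperature parameter `tau`, not the authors' implementation): exponentiate the scores, then alternately normalize rows and columns.

```python
import numpy as np

def sinkhorn(scores, n_iters=50, tau=1.0):
    """Map a real score matrix to a near doubly-stochastic matrix.

    Exponentiation makes every entry positive; repeated row/column
    normalization (Sinkhorn iterations) then converges to a matrix whose
    rows and columns all sum to 1 -- a continuous relaxation of a
    permutation matrix, differentiable with respect to `scores`."""
    M = np.exp(scores / tau)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M
```

Lowering `tau` sharpens the output toward a hard permutation matrix, while keeping it continuous enough for gradient-based training, which is the role the Sinkhorn layer plays inside the end-to-end CNN described above.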