Geometric Linearization of Ordinary Differential Equations
The linearizability of differential equations was first considered by Lie for
scalar second order semi-linear ordinary differential equations. Since then
there has been considerable work done on the algebraic classification of
linearizable equations and even on systems of equations. However, little has
been done in the way of providing explicit criteria to determine their
linearizability. Using the connection between isometries and symmetries of the
system of geodesic equations, criteria were established for second order
quadratically and cubically semi-linear equations and for systems of equations.
The connection was proved for maximally symmetric spaces and a conjecture was
put forward for other cases. Here the criteria are briefly reviewed and the
conjecture is proved.
Comment: This is a contribution to the Proc. of the Seventh International
Conference ''Symmetry in Nonlinear Mathematical Physics'' (June 24-30, 2007,
Kyiv, Ukraine), published in SIGMA (Symmetry, Integrability and Geometry:
Methods and Applications) at http://www.emis.de/journals/SIGMA
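For orientation, Lie's classical result, which the criteria above build on, can be stated as follows: a scalar second order ODE is linearizable by a point transformation only if it is at most cubically semi-linear in the first derivative,
\[
y'' = a(x,y)\,{y'}^3 + b(x,y)\,{y'}^2 + c(x,y)\,y' + d(x,y),
\]
with the coefficients $a$, $b$, $c$, $d$ further constrained by two compatibility conditions. The geodesic equations of a metric, projected to a single dependent variable, take exactly this cubically semi-linear form (with coefficients built from the Christoffel symbols), which is what ties linearizability to isometries of an underlying space.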
Tensor Networks for Medical Image Classification
With the increasing adoption of machine learning tools like neural networks
across several domains, interesting connections and comparisons to concepts
from other domains are coming to light. In this work, we focus on the class of
Tensor Networks, which has been a workhorse for physicists over the last two
decades for analysing quantum many-body systems. Building on the recent interest
in tensor networks for machine learning, we extend the Matrix Product State
tensor networks (which can be interpreted as linear classifiers operating in
exponentially high dimensional spaces) to be useful in medical image analysis
tasks. We focus on classification problems as a first step where we motivate
the use of tensor networks and propose adaptations for 2D images using classical
image domain concepts such as local orderlessness of images. With the proposed
locally orderless tensor network model (LoTeNet), we show that tensor networks
are capable of attaining performance that is comparable to state-of-the-art
deep learning methods. We evaluate the model on two publicly available medical
imaging datasets and show performance improvements with fewer model
hyperparameters and lower computational cost than the relevant baseline
methods.
Comment: Accepted for publication at International Conference on Medical
Imaging with Deep Learning (MIDL), 2020. Reviews on Openreview here:
https://openreview.net/forum?id=jjk6bxk07
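The key idea, an MPS contracted site by site so that a classifier acting on an exponentially large feature space stays tractable, can be sketched in a few lines. This is a minimal illustration with random cores; the bond dimension, feature map, and shapes are generic assumptions, not the paper's LoTeNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(pixels):
    # Embed each pixel value in [0, 1] as a 2-dim local feature vector,
    # a common choice in tensor-network machine learning.
    return np.stack([np.cos(np.pi * pixels / 2),
                     np.sin(np.pi * pixels / 2)], axis=-1)  # shape (N, 2)

def mps_classify(pixels, cores, label_core):
    """Contract the MPS with the local features, left to right."""
    left = np.ones(1)  # trivial boundary bond
    for f, core in zip(feature_map(pixels), cores):
        # core has shape (D_left, 2, D_right); absorbing one site at a
        # time keeps the cost linear in N even though the joint feature
        # space has dimension 2^N.
        left = np.einsum('i,p,ipj->j', left, f, core)
    return left @ label_core  # class scores, shape (n_classes,)

N, D, n_classes = 16, 4, 2
cores = [rng.normal(scale=0.5, size=(1 if i == 0 else D, 2, D))
         for i in range(N)]
label_core = rng.normal(size=(D, n_classes))
scores = mps_classify(rng.random(N), cores, label_core)
```

In a trained model the cores and label tensor would be optimized by gradient descent; the sketch only shows why the contraction acts as a linear classifier in the exponentially high dimensional feature space.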
Active classification with comparison queries
We study an extension of active learning in which the learning algorithm may
ask the annotator to compare the distances of two examples from the boundary of
their label-class. For example, in a recommendation system application (say for
restaurants), the annotator may be asked whether she liked or disliked a
specific restaurant (a label query), or which of two restaurants she liked
more (a comparison query).
We focus on the class of half spaces, and show that under natural
assumptions, such as large margin or bounded bit-description of the input
examples, it is possible to reveal all the labels of a sample of size $n$ using
approximately $O(\log n)$ queries. This implies an exponential improvement over
classical active learning, where only label queries are allowed. We complement
these results by showing that if any of these assumptions is removed then, in
the worst case, $\Omega(n)$ queries are required.
Our results follow from a new general framework of active learning with
additional queries. We identify a combinatorial dimension, called the
\emph{inference dimension}, that captures the query complexity when each
additional query is determined by $t$ examples (such as comparison queries,
each of which is determined by the two compared examples). Our results for half
spaces follow by bounding the inference dimension in the cases discussed above.
Comment: 23 pages (not including references), 1 figure. The new version
contains a minor fix in the proof of Lemma 4.
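The two query types in the restaurant example can be made concrete with toy oracles for a halfspace $\mathrm{sign}(w \cdot x)$. The annotator model and the margin-sorting step below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from functools import cmp_to_key

rng = np.random.default_rng(1)
w = rng.normal(size=3)  # hidden halfspace normal

def label_query(x):
    # "Did you like this restaurant?" -- the class label of x.
    return 1 if w @ x >= 0 else -1

def comparison_query(x, y):
    # "Which of the two is closer to the boundary?" -- compares the
    # distances |w.x| and |w.y|; returns -1 if x is closer, else 1.
    return -1 if abs(w @ x) < abs(w @ y) else 1

# Comparisons alone recover the margin ordering of a sample without
# spending any label queries; label queries are then needed only at a
# few carefully chosen points.
sample = [rng.normal(size=3) for _ in range(8)]
by_margin = sorted(sample, key=cmp_to_key(comparison_query))
labels = [label_query(x) for x in sample]
```

The point of the sketch is only the interface: each comparison is determined by the two compared examples, which is the structure the inference dimension of the paper abstracts over.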