Twisted hyperkähler symmetries and hyperholomorphic line bundles
In this paper we propose and investigate in full generality new notions of (continuous, non-isometric) symmetry on hyperkähler spaces. These can be grouped into two categories, corresponding to the two basic types of continuous hyperkähler isometries which they deform: tri-Hamiltonian isometries, on one hand, and rotational isometries, on the other. The first category of deformations gives rise to Killing spinors and generates what are known as hidden hyperkähler symmetries. The second category gives rise to hyperholomorphic line bundles over the hyperkähler manifolds on which they are defined and, by way of the Atiyah-Ward correspondence, to holomorphic line bundles over their twistor spaces endowed with meromorphic connections, generalizing similar structures found in the purely rotational case by Haydys and Hitchin. Examples of hyperkähler metrics with this type of symmetry include the c-map metrics on cotangent bundles of affine special Kähler manifolds with generic prepotential function, and the hyperkähler constructions on the total spaces of certain integrable systems proposed by Gaiotto, Moore and Neitzke in connection with the wall-crossing formulas of Kontsevich and Soibelman, to which our investigations add a new layer of geometric understanding.
Comment: 95 pages. v3: With an extended introduction. To appear in the Journal of Geometry and Physics
Human in the Loop: Interactive Passive Automata Learning via Evidence-Driven State-Merging Algorithms
We present an interactive version of an evidence-driven state-merging (EDSM) algorithm for learning variants of finite state automata. Learning these automata often amounts to recovering or reverse engineering the model that generated the data, despite noisy, incomplete, or imperfectly sampled data sources, rather than optimizing a purely numeric target function. Domain expertise and human knowledge about the target domain can guide this process, and is typically captured in parameter settings. Often, however, this expertise is tacit and not expressed explicitly. Directly interacting with the learning algorithm makes it easier to utilize this knowledge effectively.
Comment: 4 pages, presented at the Human in the Loop workshop at ICML 201
