158 research outputs found
$L_\infty$-Algebras of Classical Field Theories and the Batalin-Vilkovisky Formalism
We review in detail the Batalin-Vilkovisky formalism for Lagrangian field
theories and its mathematical foundations with an emphasis on higher algebraic
structures and classical field theories. In particular, we show how a field
theory gives rise to an $L_\infty$-algebra and how quasi-isomorphisms between
$L_\infty$-algebras correspond to classical equivalences of field theories. A
few experts may be familiar with parts of our discussion; however, the material
is presented from the perspective of a very general notion of a gauge theory.
We also make a number of new observations and present some new results. Most
importantly, we discuss in great detail higher (categorified) Chern-Simons
theories and give some useful shortcuts in usually rather involved
computations. Comment: v3: 131 pages, minor improvements, published version
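As an illustrative aside to the abstract above (standard background, not quoted from the paper): the statement that a field theory gives rise to an $L_\infty$-algebra is usually encoded in the homotopy Maurer-Cartan equation, whose solutions play the role of classical field configurations. In one common convention, for a degree-1 element $a$ with higher brackets $\mu_n$ it reads:

```latex
% Homotopy Maurer-Cartan equation for a degree-1 element a of an
% L-infinity algebra with n-ary brackets mu_n (signs depend on conventions):
\[
  \sum_{n \ge 1} \frac{1}{n!}\,\mu_n(a, \dots, a)
  \;=\; \mu_1(a) + \tfrac{1}{2}\,\mu_2(a, a) + \tfrac{1}{3!}\,\mu_3(a, a, a) + \dots
  \;=\; 0 .
\]
```

Roughly speaking, $\mu_1$ carries the linearized equations of motion and the higher brackets carry the interaction vertices of the field theory.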
Six-Dimensional (1,0) Superconformal Models and Higher Gauge Theory
We analyze the gauge structure of a recently proposed superconformal field
theory in six dimensions. We find that this structure amounts to a weak
Courant-Dorfman algebra, which, in turn, can be interpreted as a strong
homotopy Lie algebra. This suggests that the superconformal field theory is
closely related to higher gauge theory, describing the parallel transport of
extended objects. Indeed we find that, under certain restrictions, the field
content and gauge transformations reduce to those of higher gauge theory. We
also present a number of interesting examples of admissible gauge structures
such as the structure Lie 2-algebra of an abelian gerbe, differential crossed
modules, the 3-algebras of M2-brane models and string Lie 2-algebras. Comment: 31+1 pages, presentation slightly improved, version published in JM
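For orientation (standard higher-gauge-theory formulas for a strict Lie 2-algebra, quoted as background rather than as the specific model of the paper): with a differential crossed module $\mathfrak{h} \xrightarrow{t} \mathfrak{g}$, the field content is a $\mathfrak{g}$-valued 1-form $A$ and an $\mathfrak{h}$-valued 2-form $B$, with curvatures

```latex
% Curvatures of strict higher gauge theory for a crossed module (h -> g);
% the action of g on h is written with a triangle, and the "fake curvature"
% F is usually required to vanish.
\[
  \mathcal{F} \;=\; \mathrm{d}A + \tfrac{1}{2}[A, A] - t(B),
  \qquad
  H \;=\; \mathrm{d}B + A \triangleright B .
\]
```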
Synaptic Sampling of Neural Networks
Probabilistic artificial neural networks offer intriguing prospects for
enabling the uncertainty of artificial intelligence methods to be described
explicitly in their function; however, the development of techniques that
quantify uncertainty by well-understood methods such as Monte Carlo sampling
has been limited by the high costs of stochastic sampling on deterministic
computing hardware. Emerging computing systems that are amenable to
hardware-level probabilistic computing, such as those that leverage stochastic
devices, may make probabilistic neural networks more feasible in the
not-too-distant future. This paper describes the scANN technique --
\textit{sampling (by coinflips) artificial neural networks} -- which enables
neural networks to be sampled directly by treating the weights as Bernoulli
coin flips. This method is natively well suited for probabilistic computing
techniques that focus on tunable stochastic devices, and it nearly matches fully
deterministic performance while also describing the uncertainty of correct and
incorrect neural network outputs. Comment: 9 pages, accepted to 2023 IEEE
International Conference on Rebooting Computing
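A minimal sketch of the general idea (illustrative only; the probability/magnitude parameterization and the toy network below are assumptions, not the paper's scANN implementation): each weight is treated as a Bernoulli coin flip, and repeated sampled forward passes yield both a mean prediction and an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_forward(x, probs, scales, n_samples=100):
    """Tiny feed-forward pass with weights treated as Bernoulli coin flips.

    probs[k]  -- probability that each weight in layer k is 'on'
    scales[k] -- signed magnitude a weight contributes when it is on
    Repeated sampling yields a distribution over outputs; its spread is a
    simple uncertainty estimate (illustrative only).
    """
    outputs = []
    for _ in range(n_samples):
        h = x
        for p, s in zip(probs, scales):
            w = rng.binomial(1, p) * s        # one coin flip per weight
            h = np.maximum(w.T @ h, 0.0)      # ReLU activation
        outputs.append(h)
    outputs = np.stack(outputs)
    return outputs.mean(axis=0), outputs.std(axis=0)

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs
probs  = [rng.uniform(size=(3, 4)), rng.uniform(size=(4, 2))]
scales = [rng.normal(size=(3, 4)),  rng.normal(size=(4, 2))]
mean, std = sample_forward(np.array([1.0, 0.5, -0.2]), probs, scales)
print("mean prediction:", mean)
print("uncertainty (std):", std)
```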
Induced Distributions from Generalized Unfair Dice
In this paper we analyze the probability distributions associated with
rolling (possibly unfair) dice infinitely often. Specifically, given an
$m$-sided die, if $X_k$ denotes the outcome of the $k$-th
toss, we study the distribution function $F$ of the random variable determined
by the infinite sequence of tosses $(X_k)$. We show that $F$ is singular and
establish a piecewise linear, iterative construction for it. We investigate two
ways of comparing $F$ to the fair distribution -- one using supremum norms and
another using arclength. For coin flips, we also address the case
where each independent flip could come from a different distribution. In part,
this work aims to address outstanding claims in the literature on Bernoulli
schemes. The results herein are motivated by emerging needs, desires, and
opportunities in computation to leverage physical stochasticity in
microelectronic devices for random number generation. Comment: 18 pages, 1 figure
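An illustrative sketch (the base-$m$ digit encoding below is an assumed construction chosen to match the standard way infinite roll sequences induce a distribution on $[0,1]$; the paper's exact definitions may differ): simulate a possibly unfair die, map truncated roll sequences to points of the unit interval, and compare the empirical distribution function to the fair one in supremum norm.

```python
import numpy as np

rng = np.random.default_rng(1)

def induced_samples(p, n_rolls=40, n_samples=100_000):
    """Sample the random variable induced by repeatedly rolling a die.

    p        -- face probabilities of an m-sided die; face k in {0,...,m-1}
                is assumed to contribute k * m**(-roll index) (an assumed
                encoding, used here only for illustration)
    n_rolls  -- truncation depth of the infinite roll sequence
    """
    m = len(p)
    digits = rng.choice(m, size=(n_samples, n_rolls), p=p)
    weights = float(m) ** -np.arange(1, n_rolls + 1)
    return digits @ weights

# Empirical distribution functions on a grid: unfair vs. fair 3-sided die
grid = np.linspace(0.0, 1.0, 501)
unfair = np.sort(induced_samples([0.6, 0.3, 0.1]))
fair   = np.sort(induced_samples([1/3, 1/3, 1/3]))
F_unfair = np.searchsorted(unfair, grid, side="right") / unfair.size
F_fair   = np.searchsorted(fair,   grid, side="right") / fair.size
print("empirical sup-norm distance:", np.abs(F_unfair - F_fair).max())
```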
Neurogenesis Deep Learning
Neural machine learning methods, such as deep neural networks (DNN), have
achieved remarkable success in a number of complex data processing tasks. These
methods have arguably had their strongest impact on tasks such as image and
audio processing - data processing domains in which humans have long held clear
advantages over conventional algorithms. In contrast to biological neural
systems, which are capable of learning continuously, deep artificial networks
have a limited ability for incorporating new information in an already trained
network. As a result, methods for continuous learning are potentially highly
impactful in enabling the application of deep networks to dynamic data sets.
Here, inspired by the process of adult neurogenesis in the hippocampus, we
explore the potential for adding new neurons to deep layers of artificial
neural networks in order to facilitate their acquisition of novel information
while preserving previously trained data representations. Our results on the
MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes
lower and upper case letters and digits, demonstrate that neurogenesis is well
suited for addressing the stability-plasticity dilemma that has long challenged
adaptive machine learning algorithms. Comment: 8 pages, 8 figures, Accepted to 2017 International Joint Conference
on Neural Networks (IJCNN 2017)
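A minimal sketch of the layer-growing step (an assumption-laden illustration, not the authors' algorithm): new hidden units are appended with small incoming weights and zero outgoing weights, so the network's existing input-output behaviour is preserved while the new units remain free to learn from novel data.

```python
import numpy as np

rng = np.random.default_rng(2)

def add_neurons(W_in, W_out, n_new, init_scale=0.01):
    """Append n_new hidden units to a layer while preserving behaviour.

    W_in  -- (d_in, d_hidden) incoming weights of the hidden layer
    W_out -- (d_hidden, d_out) outgoing weights of the hidden layer
    New units get small random incoming weights and zero outgoing weights,
    so the existing input-output mapping is unchanged until the new units
    are trained (illustrative sketch only).
    """
    d_in, _ = W_in.shape
    _, d_out = W_out.shape
    new_in  = init_scale * rng.normal(size=(d_in, n_new))
    new_out = np.zeros((n_new, d_out))
    return np.hstack([W_in, new_in]), np.vstack([W_out, new_out])

# Grow the hidden layer of a 784 -> 100 -> 10 network by 20 neurons
W_in, W_out = rng.normal(size=(784, 100)), rng.normal(size=(100, 10))
W_in, W_out = add_neurons(W_in, W_out, n_new=20)
print(W_in.shape, W_out.shape)   # (784, 120) (120, 10)
```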
Modular classes of skew algebroid relations
A skew algebroid is a natural generalization of the concept of a Lie algebroid.
In this paper, for a skew algebroid E, its modular class mod(E) is defined in
the classical as well as in the supergeometric formulation. It is proved that
there is a homogeneous nowhere-vanishing 1-density on E* which is invariant
with respect to all Hamiltonian vector fields if and only if E is modular, i.e.
mod(E)=0. Further, relative modular class of a subalgebroid is introduced and
studied together with its application to holonomy, as well as modular class of
a skew algebroid relation. These notions provide, in particular, a unified
approach to the concepts of a modular class of a Lie algebroid morphism and
that of a Poisson map. Comment: 20 pages
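For orientation, the classical Poisson-manifold analogue of the invariant-density statement above (standard material, not specific to skew algebroids): given a volume form $\mu$, the modular vector field sends a function $f$ to the divergence of its Hamiltonian vector field,

```latex
% Modular vector field X_mu of a Poisson manifold with volume form mu:
% it measures how Hamiltonian flows fail to preserve the volume.
\[
  X_\mu(f) \;=\; \operatorname{div}_\mu\!\big(X_f\big),
  \qquad
  \mathcal{L}_{X_f}\,\mu \;=\; X_\mu(f)\,\mu .
\]
```

Its class in Poisson cohomology is independent of the choice of $\mu$ and vanishes exactly when some volume form is invariant under all Hamiltonian flows, mirroring the mod(E)=0 criterion quoted in the abstract.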
Topological Field Theories and Geometry of Batalin-Vilkovisky Algebras
The algebraic and geometric structures of deformations are analyzed
concerning topological field theories of Schwarz type by means of the
Batalin-Vilkovisky formalism. Deformations of the Chern-Simons-BF theory in
three dimensions induce the Courant algebroid structure on the target space as
a sigma model. Deformations of BF theories in $n$ dimensions are also analyzed.
Two dimensional deformed BF theory induces the Poisson structure and three
dimensional deformed BF theory induces the Courant algebroid structure on the
target space as a sigma model. The deformations of BF theories in $n \geq 4$
dimensions induce the structures of Batalin-Vilkovisky algebras on the target
space. Comment: 25 pages
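As a concrete anchor for the two-dimensional case mentioned above (the standard form of the Poisson sigma model, quoted for orientation rather than from the paper): the deformed 2d BF action with a target Poisson bivector $\pi^{ij}(X)$ is

```latex
% Poisson sigma model: X^i are maps from the surface Sigma to the target,
% A_i are 1-forms on Sigma, and pi^{ij}(X) is the target Poisson bivector.
\[
  S \;=\; \int_\Sigma \Big( A_i \wedge \mathrm{d}X^i
          \;+\; \tfrac{1}{2}\,\pi^{ij}(X)\, A_i \wedge A_j \Big) .
\]
```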
Classical BV theories on manifolds with boundary
In this paper we extend the classical BV framework to gauge theories on
spacetime manifolds with boundary. In particular, we connect the BV
construction in the bulk with the BFV construction on the boundary and we
develop its extension to strata of higher codimension in the case of manifolds
with corners. We present several examples including electrodynamics, Yang-Mills
theory and topological field theories coming from the AKSZ construction, in
particular, the Chern-Simons theory, the $BF$ theory, and the Poisson sigma
model. This paper is the first step towards developing the perturbative
quantization of such theories on manifolds with boundary in a way consistent
with gluing. Comment: The second version has many typos corrected, references added. Some
typos are probably still there, in particular, signs in examples. In the
third version more typos are corrected and the exposition is slightly
changed
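For orientation (the standard closed-manifold ingredients that the paper extends by boundary/BFV data; quoted as background, not from the paper): the BV antibracket of functionals of fields $\phi^i$ and antifields $\phi^\dagger_i$, and the classical master equation satisfied by the BV action $S$, are

```latex
% BV antibracket and classical master equation on a closed manifold M;
% on manifolds with boundary, as in the paper, the construction is
% supplemented by boundary (BFV) data.
\[
  (F, G) \;=\; \int_M \left(
      \frac{\delta_r F}{\delta \phi^i}\,\frac{\delta_l G}{\delta \phi^\dagger_i}
      \;-\;
      \frac{\delta_r F}{\delta \phi^\dagger_i}\,\frac{\delta_l G}{\delta \phi^i}
    \right),
  \qquad
  (S, S) \;=\; 0 .
\]
```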