1,186 research outputs found
Effective mesonic theory for the 't Hooft model on the lattice
We apply a method recently proposed to obtain an effective mesonic action
starting from the fundamental, fermionic one to a lattice version of the
't Hooft model, QCD in two space-time dimensions at large number of colours.
The idea is to pass from a canonical, operatorial representation, where the
low-energy states have a direct physical interpretation in terms of a
Bogoliubov vacuum and its corresponding quasiparticle excitations, to a
functional, path-integral representation, via the formalism of the transfer
matrix. In this way we obtain a lattice effective theory for mesons in a
self-consistent setting. We also verify that well-known results from other,
different approaches are reproduced in the continuum limit.
Comment: 21 pages, 2 figures
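As an illustrative aside (a generic sketch with assumed notation, not the
paper's specific parametrization): a fermionic Bogoliubov transformation
mixes quark and antiquark operators through a momentum-dependent angle,
schematically

    b_p = \cos\theta_p \, a_p - \sin\theta_p \, c^\dagger_{-p},
    \qquad
    d^\dagger_{-p} = \sin\theta_p \, a_p + \cos\theta_p \, c^\dagger_{-p},

with the Bogoliubov vacuum |\Theta\rangle defined by
b_p|\Theta\rangle = d_p|\Theta\rangle = 0. Mesons then appear as
quark-antiquark quasiparticle excitations above |\Theta\rangle, and the
angle \theta_p is fixed variationally by minimizing the vacuum energy.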
Lattice QCD effective action with Bogoliubov transformations
In Wilson's lattice formulation of QCD, a fermionic Fock space of states
can be explicitly built at each time slice using canonical creation and
annihilation operators. The partition function is then represented as the
trace of the transfer matrix, and its usual functional representation as a
path integral can be recovered in a standard way. However, by applying a
Bogoliubov transformation to the canonical operators before passing to the
functional formalism, we can isolate a vacuum contribution in the resulting
action which depends only on the parameters of the transformation and fixes
them via a variational principle. Then, by inserting in the trace an
operator projecting onto the meson subspace at each time slice and making
the physical assumption that the true partition function is well
approximated by the projected one, we can also write an effective quadratic
action for mesons. We tested the method on the renowned 't Hooft model,
namely QCD in two spacetime dimensions at large number of colours, in
Coulomb gauge.
Comment: 8 pages, 1 figure. Proceedings of XIII Quark Confinement and the
Hadron Spectrum - Confinement2018, 31 July - 6 August 2018, Maynooth
University, Ireland
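Schematically (a sketch of the construction described above, with notation
assumed here rather than taken from the paper): if \mathcal{T} is the
transfer matrix and N_t the number of time slices, the partition function
and its meson-projected approximation read

    Z = \mathrm{Tr}\,\mathcal{T}^{N_t}
    \;\approx\;
    Z_{\mathrm{proj}} = \mathrm{Tr}\,(\mathcal{P}\,\mathcal{T})^{N_t},

where \mathcal{P} projects onto the meson subspace at each time slice, and
the Bogoliubov parameters are fixed by minimizing the vacuum contribution
isolated in the resulting action.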
Beyond the storage capacity: data driven satisfiability transition
Data structure has a dramatic impact on the properties of neural networks,
yet its significance in the established theoretical frameworks is poorly
understood. Here we compute the Vapnik-Chervonenkis entropy of a kernel machine
operating on data grouped into equally labelled subsets. At variance with the
unstructured scenario, entropy is non-monotonic in the size of the training
set, and displays an additional critical point besides the storage capacity.
Remarkably, the same behavior occurs in margin classifiers even with randomly
labelled data, as is elucidated by identifying the synaptic volume encoding the
transition. These findings reveal aspects of expressivity lying beyond the
condensed description provided by the storage capacity, and they indicate the
path towards more realistic bounds for the generalization error of neural
networks.
Comment: 5 pages, 2 figures
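A toy numerical probe of the satisfiability transition in this setting can
be sketched in a few lines (an illustration with hypothetical parameters,
not the authors' analytical VC-entropy computation): group the data into
equally labelled subsets and measure how often a linear margin classifier
can fit them perfectly as the number of groups grows.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def separable_fraction(n_groups, k, d, trials=20):
        """Fraction of random structured datasets that a linear margin
        classifier fits perfectly: a numerical proxy for the
        satisfiability transition."""
        hits = 0
        for _ in range(trials):
            centers = rng.standard_normal((n_groups, d))
            # each group: k noisy copies of a center, sharing one random label
            X = np.repeat(centers, k, axis=0) \
                + 0.1 * rng.standard_normal((n_groups * k, d))
            y = np.repeat(rng.choice([-1, 1], size=n_groups), k)
            clf = SVC(kernel="linear", C=1e8)  # ~ hard-margin classifier
            clf.fit(X, y)
            hits += clf.score(X, y) == 1.0
        return hits / trials

    for n in (20, 60, 100, 140):
        print(n, separable_fraction(n_groups=n, k=2, d=50))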
An Exploratory Study of Field Failures
Field failures, that is, failures caused by faults that escape the testing
phase and lead to failures in the field, are unavoidable. Improving
verification and validation activities before deployment can identify and
promptly remove many, but not all, faults, and users may still experience a
number of annoying problems while using their software systems. This paper
investigates the nature of field failures, to understand to what extent
further improving in-house verification and validation activities can
reduce the number of failures in the field, and frames the need for new
approaches that operate in the field. We report the results of the analysis
of the bug reports of five applications belonging to three different
ecosystems, propose a taxonomy of field failures, and discuss the reasons
why failures belonging to the identified classes cannot be detected at
design time but must be addressed at runtime. We observe that many faults
(70%) are intrinsically hard to detect at design time.
Random features and polynomial rules
Random features models play a distinguished role in the theory of deep
learning, describing the behavior of neural networks close to their
infinite-width limit. In this work, we present a thorough analysis of the
generalization performance of random features models for generic supervised
learning problems with Gaussian data. Our approach, built with tools from
the statistical mechanics of disordered systems, maps the random features
model to an equivalent polynomial model, and allows us to plot average
generalization curves as functions of the two main control parameters of
the problem: the number of random features and the size of the training
set, both assumed to scale as powers of the input dimension. Our results
extend the case of proportional scaling between these parameters and the
input dimension. They are in accordance with rigorous bounds known for
certain particular learning tasks and are in quantitative agreement with
numerical experiments performed over many orders of magnitude of the number
of features and the training set size. We find good agreement also far from
the asymptotic limits where the input dimension diverges and at least one
of the other two parameters remains finite.
Comment: 11 pages + appendix, 4 figures. Comments are welcome
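For readers unfamiliar with the model class, a minimal random features
sketch (the teacher rule, nonlinearity, and sizes below are illustrative
assumptions, not the paper's choices): a fixed random projection followed
by a nonlinearity, with only the linear readout trained.

    import numpy as np

    rng = np.random.default_rng(0)
    D, N_feat, P = 100, 400, 600  # input dim, random features, train size

    # teacher: a noiseless linear rule, standing in for a generic task
    w_teacher = rng.standard_normal(D)
    X = rng.standard_normal((P, D)) / np.sqrt(D)
    y = np.sign(X @ w_teacher)

    # fixed random features; only the readout 'a' is learned
    F = rng.standard_normal((N_feat, D)) / np.sqrt(D)
    Phi = np.tanh(X @ F.T)                       # shape (P, N_feat)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares readout

    # generalization error on fresh data
    X_te = rng.standard_normal((2000, D)) / np.sqrt(D)
    y_te = np.sign(X_te @ w_teacher)
    err = np.mean(np.sign(np.tanh(X_te @ F.T) @ a) != y_te)
    print(f"test error: {err:.3f}")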
Restoring balance: principled under/oversampling of data for optimal classification
Class imbalance in real-world data poses a common bottleneck for machine
learning tasks, since achieving good generalization on under-represented
examples is often challenging. Mitigation strategies, such as undersampling
or oversampling the data depending on their abundances, are routinely
proposed and tested empirically, but how they should adapt to the data
statistics remains poorly understood. In this work, we determine exact
analytical expressions for the generalization curves in the
high-dimensional regime for linear classifiers (Support Vector Machines).
We also provide a sharp prediction of the effects of under/oversampling
strategies depending on class imbalance, the first and second moments of
the data, and the metrics of performance considered. We show that mixed
strategies involving both undersampling and oversampling of data lead to
performance improvements. Through numerical experiments, we show the
relevance of our theoretical predictions on real datasets, on deeper
architectures, and with sampling strategies based on unsupervised
probabilistic models.
Comment: 9 pages + appendix, 3 figures
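A minimal empirical counterpart of this setting (a sketch with made-up
Gaussian data and sizes; the paper's results are analytical) compares no
resampling, undersampling of the majority class, and oversampling of the
minority class for a linear SVM, scored by balanced accuracy.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import balanced_accuracy_score

    rng = np.random.default_rng(0)
    d = 20

    def sample(n, mu):
        X = rng.standard_normal((n, d))
        X[:, 0] += mu  # classes differ in the mean of one coordinate
        return X

    def score(X, y, X_te, y_te):
        clf = LinearSVC(C=1.0).fit(X, y)
        return balanced_accuracy_score(y_te, clf.predict(X_te))

    # imbalanced training set: 500 majority (-1) vs 25 minority (+1)
    X = np.vstack([sample(500, -1.0), sample(25, +1.0)])
    y = np.array([-1] * 500 + [+1] * 25)
    X_te = np.vstack([sample(1000, -1.0), sample(1000, +1.0)])
    y_te = np.array([-1] * 1000 + [+1] * 1000)

    idx_maj, idx_min = np.where(y == -1)[0], np.where(y == +1)[0]
    under = np.concatenate([rng.choice(idx_maj, 25, replace=False), idx_min])
    over = np.concatenate([idx_maj, rng.choice(idx_min, 500, replace=True)])

    for name, idx in [("none", np.arange(len(y))),
                      ("undersample", under), ("oversample", over)]:
        print(name, round(score(X[idx], y[idx], X_te, y_te), 3))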
Achieving Cost-Effective Software Reliability Through Self-Healing
Heterogeneity, mobility, complexity and new application domains raise new software reliability issues that cannot be addressed cost-effectively with classic software engineering approaches alone. Self-healing systems can successfully address these problems, thus increasing software reliability while reducing maintenance costs. Self-healing systems must be able to automatically identify runtime failures, locate faults, and find a way to bring the system back to an acceptable behavior. This paper discusses the challenges underlying the construction of self-healing systems, with particular focus on functional failures, and presents a set of techniques to build software systems that can automatically heal such failures. It introduces techniques to automatically derive assertions to effectively detect functional failures, locate the faults underlying the failures, and identify sequences of actions alternative to the failing sequence to bring the system back to an acceptable behavior.
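As a toy illustration of the general idea (assertion-based failure
detection plus alternative action sequences; the wrapper below is a
hypothetical sketch, not the paper's technique):

    def self_healing(check, alternatives):
        """Toy self-healing wrapper: run the primary operation, check the
        result against a runtime assertion, and on failure fall back to
        functionally equivalent alternative sequences (workarounds)."""
        def decorate(primary):
            def run(*args, **kwargs):
                for op in (primary, *alternatives):
                    try:
                        result = op(*args, **kwargs)
                        if check(result):  # assertion-based detection
                            return result
                    except Exception:
                        pass               # treat crashes as failures too
                raise RuntimeError("no alternative restored acceptable behavior")
            return run
        return decorate

    # hypothetical usage: two ways to obtain the same sorted list
    @self_healing(check=lambda xs: xs == sorted(xs),
                  alternatives=[lambda data: sorted(data)])
    def buggy_sort(data):
        return data[::-1]            # faulty primary implementation

    print(buggy_sort([3, 1, 2]))     # heals to [1, 2, 3]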
Towards a metabolomic approach to investigate iron-sulfur cluster biogenesis
Iron-sulfur clusters are prosthetic groups that are assembled on their acceptor proteins through a complex machinery centered on a desulfurase enzyme and a transient scaffold protein. Studies to establish the mechanism of cluster formation have so far used either in vitro or in vivo methods, which have often produced contrasting or non-comparable results. We suggest here an alternative approach to study the enzymatic reaction, based on the combination of genetically engineered bacterial strains depleted of specific components with the detection of the enzymatic kinetics in cellular extracts through metabolomics. Our data prove that this ex vivo approach closely reproduces the in vitro results while retaining the full complexity of the system. We demonstrate that the co-presence of bacterial frataxin and iron is necessary to observe an inhibitory effect on the enzymatic activity. Our approach provides a new powerful tool for the study of iron-sulfur cluster biogenesis.
- …