302 research outputs found
Non-liftable Calabi-Yau spaces
We construct many new non-liftable three-dimensional Calabi-Yau spaces in
positive characteristic. The technique relies on lifting a nodal model to a
smooth rigid Calabi-Yau space over some number field as introduced by the first
author and D. van Straten.
Comment: 16 pages, 5 tables; v2: minor corrections and additions
SchNet - a deep learning architecture for molecules and materials
Deep learning has led to a paradigm shift in artificial intelligence,
including web, text and image search, speech recognition, as well as
bioinformatics, with growing impact in chemical physics. Machine learning in
general, and deep learning in particular, is ideally suited to representing
quantum-mechanical interactions, making it possible to model nonlinear
potential-energy surfaces and to enhance the exploration of chemical compound
space. Here we
present the deep learning architecture SchNet that is specifically designed to
model atomistic systems by making use of continuous-filter convolutional
layers. We demonstrate the capabilities of SchNet by accurately predicting a
range of properties across chemical space for molecules and materials
where our model learns chemically plausible embeddings of atom types across the
periodic table. Finally, we employ SchNet to predict potential-energy surfaces
and energy-conserving force fields for molecular dynamics simulations of small
molecules and perform an exemplary study of the quantum-mechanical properties
of C20-fullerene that would have been infeasible with regular ab initio
molecular dynamics.
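To make the continuous-filter convolution idea concrete, here is a minimal NumPy sketch, not the authors' implementation: interatomic distances are expanded in Gaussian radial basis functions, a small filter-generating network maps each expansion to a per-feature filter, and each atom aggregates its neighbours' embeddings weighted element-wise by those filters. The function names, network sizes, and basis parameters are illustrative assumptions.

import numpy as np

def gaussian_expansion(d, centers, gamma=10.0):
    """Expand a scalar distance d into Gaussian radial basis features."""
    return np.exp(-gamma * (d - centers) ** 2)

def cfconv(positions, features, centers, w1, w2):
    """Single continuous-filter convolution (illustrative): each atom sums
    neighbour features weighted element-wise by a filter generated from the
    interatomic distance."""
    out = np.zeros_like(features)
    n_atoms = features.shape[0]
    for i in range(n_atoms):
        for j in range(n_atoms):
            if i == j:
                continue
            d = np.linalg.norm(positions[i] - positions[j])
            rbf = gaussian_expansion(d, centers)   # distance -> basis features
            filt = np.tanh(rbf @ w1) @ w2          # filter-generating network
            out[i] += features[j] * filt           # element-wise filtering
    return out

# Toy usage: 4 atoms with random positions and 8-dimensional embeddings.
rng = np.random.default_rng(0)
positions = rng.normal(size=(4, 3))
features = rng.normal(size=(4, 8))
centers = np.linspace(0.0, 3.0, 20)
w1 = 0.1 * rng.normal(size=(20, 16))
w2 = 0.1 * rng.normal(size=(16, 8))
print(cfconv(positions, features, centers, w1, w2).shape)  # (4, 8)

In SchNet several such interaction blocks are stacked and trained end to end; this untrained single layer only illustrates the mechanism.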
Building nonparametric n-body force fields using Gaussian process regression
Constructing a classical potential suited to simulate a given atomic system
is a remarkably difficult task. This chapter presents a framework under which
this problem can be tackled, based on the Bayesian construction of
nonparametric force fields of a given order using Gaussian process (GP) priors.
The formalism of GP regression is first reviewed, particularly in relation to
its application in learning local atomic energies and forces. For accurate
regression it is fundamental to incorporate prior knowledge into the GP kernel
function. To this end, this chapter details how properties of smoothness,
invariance and interaction order of a force field can be encoded into
corresponding kernel properties. A range of kernels is then proposed,
possessing all the required properties and an adjustable parameter
governing the interaction order modelled. The order best suited to describe
a given system can be found automatically within the Bayesian framework by
maximisation of the marginal likelihood. The procedure is first tested on a toy
model of known interaction and later applied to two real materials described at
the DFT level of accuracy. The models automatically selected for the two
materials were found to be in agreement with physical intuition. More
generally, it was found that lower-order (simpler) models should be chosen when
the data are not sufficient to resolve more complex interactions. Low-n GPs can
be further sped up by orders of magnitude by constructing the corresponding
tabulated force field, here named "MFF".
Comment: 31 pages, 11 figures, book chapter
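As an illustration of the marginal-likelihood selection step described above, the sketch below fits a zero-mean GP to energies of a toy one-dimensional pair potential and compares the log marginal likelihood across kernel hyperparameters. The generic squared-exponential kernel and Lennard-Jones-style toy data are stand-ins chosen for this sketch, not the symmetry-adapted n-body kernels or DFT data used in the chapter.

import numpy as np

def rbf_kernel(X1, X2, lengthscale, variance=1.0):
    """Squared-exponential kernel; a generic stand-in for the chapter's
    symmetry-adapted n-body kernels."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def log_marginal_likelihood(X, y, lengthscale, noise=1e-2):
    """log p(y | X, kernel) for a zero-mean GP with Gaussian observation noise."""
    K = rbf_kernel(X, X, lengthscale) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

# Toy data: noisy energies of a 1D Lennard-Jones-like pair potential.
rng = np.random.default_rng(1)
X = np.linspace(0.95, 3.0, 25)
y = 4.0 * ((1.0 / X) ** 12 - (1.0 / X) ** 6) + 0.01 * rng.normal(size=X.size)

# Bayesian model selection: prefer the hyperparameter with the highest
# marginal likelihood, mirroring how the chapter selects the interaction order.
for ell in (0.1, 0.5, 2.0):
    print(ell, log_marginal_likelihood(X, y, ell))

In the chapter's setting the same comparison runs over kernels of different interaction order n, with the highest marginal likelihood indicating the order best supported by the data.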
The H1 Forward Proton Spectrometer at HERA
The forward proton spectrometer is part of the H1 detector at the HERA
collider. Protons with energies above 500 GeV and polar angles below 1 mrad can
be detected by this spectrometer. The main detector components are
scintillating fiber detectors read out by position-sensitive photo-multipliers.
These detectors are housed in so-called Roman Pots which allow them to be moved
close to the circulating proton beam. Four Roman Pot stations are located at
distances between 60 m and 90 m from the interaction point.
Comment: 20 pages, 10 figures, submitted to Nucl. Instr. and Methods
Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics
Dozens of new models on fixation prediction are published every year and
compared on open benchmarks such as MIT300 and LSUN. However, progress in the
field can be difficult to judge because models are compared using a variety of
inconsistent metrics. Here we show that no single saliency map can perform well
under all metrics. Instead, we propose a principled approach to solve the
benchmarking problem by separating the notions of saliency models, maps and
metrics. Inspired by Bayesian decision theory, we define a saliency model to be
a probabilistic model of fixation density prediction and a saliency map to be a
metric-specific prediction derived from the model density which maximizes the
expected performance on that metric given the model density. We derive these
optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC,
NSS, CC, SIM, KL-Div) and show that they can be computed analytically or
approximated with high precision. We show that this leads to consistent
rankings in all metrics and avoids the penalties of using one saliency map for
all metrics. Our method allows researchers to have their model compete with
the state of the art on many different metrics: "good" models will perform
well in all of them.
Comment: published at ECCV 2018
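A minimal sketch of the model/map separation, under the assumption that the predicted fixation density itself serves as the NSS map (NSS is invariant to affine rescaling of the map, so no further transform is applied for this metric); the grid size, Gaussian density, and fixation coordinates below are made up for illustration, and the maps for the other metrics follow the analytic derivations in the paper.

import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the map, average at fixations."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return s[rows, cols].mean()

# The "model": a predicted fixation density on a 64x64 grid (here a single
# Gaussian bump, purely for illustration), normalized to sum to one.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
density = np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / (2 * 8.0 ** 2))
density /= density.sum()

# The metric-specific "map": for NSS the density itself is used here, since
# NSS only depends on the map up to an affine transform.
nss_map = density

fixations = [(19, 41), (22, 38), (50, 10)]  # hypothetical ground-truth fixations
print(nss(nss_map, fixations))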