Prototyping Formal System Models with Active Objects
We propose active object languages as a development tool for formal system
models of distributed systems. In addition to a formalization based on a term
rewriting system, we use established software engineering concepts, including
software product lines and object orientation, which come with extensive tool
support. We illustrate our modeling approach by prototyping a weak memory
model. The resulting executable model is modular and has clear interfaces
between communicating participants through object-oriented modeling.
Relaxations of the basic memory model are expressed as self-contained variants
of a software product line. As a modeling language we use the formal active
object language ABS, which comes with an extensive tool set. This permits rapid
formalization of core ideas, early validity checks in terms of formal invariant
proofs, and debugging support by executing test runs. Hence, our approach
supports the prototyping of formal system models with early feedback.
Comment: In Proceedings ICE 2018, arXiv:1810.0205
Bayesian Methods for Exoplanet Science
Exoplanet research is carried out at the limits of the capabilities of
current telescopes and instruments. The studied signals are weak, and often
embedded in complex systematics from instrumental, telluric, and astrophysical
sources. Combining repeated observations of periodic events, simultaneous
observations with multiple telescopes, different observation techniques, and
existing information from theory and prior research can help to disentangle the
systematics from the planetary signals, and offers synergistic advantages over
analysing observations separately. Bayesian inference provides a
self-consistent statistical framework that addresses both the necessity for
complex systematics models, and the need to combine prior information and
heterogeneous observations. This chapter offers a brief introduction to
Bayesian inference in the context of exoplanet research, with a focus on time
series analysis, and finishes with an overview of a set of freely available
programming libraries.
Comment: Invited review
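As a rough illustration of how Bayesian inference combines prior information with heterogeneous observations, the following numpy sketch jointly analyses two simulated time series of the same periodic signal over a grid of candidate periods. The sinusoidal signal model, noise levels, and prior range are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
true_period = 3.7

def signal(t, period):
    return np.sin(2 * np.pi * t / period)

# Two "instruments" with different cadences and noise levels.
t1 = np.linspace(0, 20, 80); sigma1 = 0.3
t2 = np.linspace(0, 35, 50); sigma2 = 0.6
y1 = signal(t1, true_period) + rng.normal(0, sigma1, t1.size)
y2 = signal(t2, true_period) + rng.normal(0, sigma2, t2.size)

def log_likelihood(y, t, sigma, period):
    resid = y - signal(t, period)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Grid over the period with a flat prior on [2, 6].
periods = np.linspace(2.0, 6.0, 2000)
log_prior = np.zeros_like(periods)
log_like_1 = np.array([log_likelihood(y1, t1, sigma1, p) for p in periods])
log_like_2 = np.array([log_likelihood(y2, t2, sigma2, p) for p in periods])

# Joint posterior: prior plus both likelihoods (datasets analysed together).
log_post = log_prior + log_like_1 + log_like_2
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mode:", periods[np.argmax(post)])
```

The joint posterior is typically narrower than either single-dataset posterior, which is the synergistic advantage of combining observations mentioned above.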
Learning to detect chest radiographs containing lung nodules using visual attention networks
Machine learning approaches hold great potential for the automated detection
of lung nodules in chest radiographs, but training the algorithms requires very
large amounts of manually annotated images, which are difficult to obtain. Weak
labels indicating whether a radiograph is likely to contain pulmonary nodules
are typically easier to obtain at scale by parsing historical free-text
radiological reports associated with the radiographs. Using a repository of
over 700,000 chest radiographs, in this study we demonstrate that promising
nodule detection performance can be achieved using weak labels through
convolutional neural networks for radiograph classification. We propose two
network architectures for the classification of images likely to contain
pulmonary nodules using both weak labels and manually delineated bounding
boxes, when these are available. Annotated nodules are used at training time to
deliver a visual attention mechanism informing the model about its localisation
performance. The first architecture extracts saliency maps from high-level
convolutional layers and compares the estimated position of a nodule against
the ground truth, when this is available. A corresponding localisation error is
then back-propagated along with the softmax classification error. The second
approach consists of a recurrent attention model that learns to observe a short
sequence of smaller image portions through reinforcement learning. When a
nodule annotation is available at training time, the reward function is
modified accordingly so that exploring portions of the radiographs away from a
nodule incurs a larger penalty. Our empirical results demonstrate the potential
advantages of these architectures in comparison to competing methodologies.
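A hedged PyTorch sketch of the first architecture's general idea follows: an image-level classifier trained on weak labels, with an auxiliary localisation loss added only for images that carry a nodule annotation. The tiny network, the saliency definition (mean activation of the last convolutional block), and the soft-argmax position estimate are simplifying assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, 2)   # nodule present / absent

    def forward(self, x):
        fmap = self.features(x)                       # (B, 32, H/4, W/4)
        logits = self.classifier(fmap.mean(dim=(2, 3)))
        saliency = fmap.mean(dim=1)                   # (B, H/4, W/4)
        return logits, saliency

def soft_argmax(saliency):
    """Differentiable estimate of the most salient (row, col), in [0, 1]."""
    b, h, w = saliency.shape
    probs = F.softmax(saliency.view(b, -1), dim=1).view(b, h, w)
    rows = torch.linspace(0, 1, h, device=saliency.device)
    cols = torch.linspace(0, 1, w, device=saliency.device)
    row = (probs.sum(dim=2) * rows).sum(dim=1)
    col = (probs.sum(dim=1) * cols).sum(dim=1)
    return torch.stack([row, col], dim=1)             # (B, 2)

model = WeaklySupervisedClassifier()
images = torch.randn(4, 1, 64, 64)                    # toy batch
labels = torch.tensor([1, 0, 1, 0])                   # weak image-level labels
# Normalised nodule centres; NaN rows mean "no annotation for this image".
centres = torch.tensor([[0.3, 0.6], [float("nan")] * 2,
                        [0.7, 0.2], [float("nan")] * 2])

logits, saliency = model(images)
loss = F.cross_entropy(logits, labels)                # weak-label classification loss
annotated = ~torch.isnan(centres[:, 0])
if annotated.any():                                   # auxiliary localisation loss
    pred = soft_argmax(saliency[annotated])
    loss = loss + F.mse_loss(pred, centres[annotated])
loss.backward()
```

The second architecture (recurrent attention trained with reinforcement learning) would instead shape the reward so that glimpses far from an annotated nodule are penalised; it is omitted here for brevity.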
Toward Optimal Run Racing: Application to Deep Learning Calibration
This paper aims at one-shot learning of deep neural nets, where a highly
parallel setting is considered to address the algorithm calibration problem:
selecting the best neural architecture and learning hyper-parameter values
depending on the dataset at hand. The notoriously expensive calibration problem
is optimally reduced by detecting and early-stopping non-optimal runs. The
theoretical contribution concerns the optimality guarantees within the multiple
hypothesis testing framework. Experiments on the Cifar10, PTB and Wiki
benchmarks demonstrate the relevance of the approach, yielding a principled and
consistent improvement on the state of the art with no extra hyper-parameter.
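The racing idea can be sketched in numpy as follows: several calibration runs proceed in parallel, and after each evaluation step a run is stopped once its observed performance falls significantly below the current leader's. The simulated scores and the Hoeffding-style confidence bound are illustrative assumptions, not the paper's exact statistical test.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_epochs, delta = 6, 30, 0.05
# Hypothetical per-run asymptotic accuracies; scores are noisy observations.
true_levels = rng.uniform(0.6, 0.9, n_runs)

active = list(range(n_runs))
scores = {r: [] for r in range(n_runs)}

for epoch in range(1, n_epochs + 1):
    for r in active:
        obs = np.clip(true_levels[r] + rng.normal(0, 0.05), 0, 1)
        scores[r].append(obs)

    # Hoeffding-style bound for the mean of `epoch` observations in [0, 1].
    bound = np.sqrt(np.log(2 / delta) / (2 * epoch))
    means = {r: np.mean(scores[r]) for r in active}
    best = max(means.values())

    # Stop runs whose upper confidence bound falls below the leader's lower bound.
    for r in list(active):
        if means[r] + bound < best - bound:
            active.remove(r)
            print(f"epoch {epoch}: stopped run {r} (mean {means[r]:.3f})")

print("surviving runs:", active, "best true level:", int(true_levels.argmax()))
```

Compute that would have been spent on the stopped runs is freed for the surviving candidates, which is what makes the highly parallel calibration setting affordable.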