Checking Computations of Formal Method Tools - A Secondary Toolchain for ProB
We present the implementation of pyB, a predicate- and expression-checker
for the B language. The tool is intended for use in a secondary toolchain for data
validation and data generation, with ProB being used in the primary toolchain.
Indeed, pyB is an independent cleanroom-implementation which is used to
double-check solutions generated by ProB, an animator and model-checker for B
specifications. One of the major goals is to use ProB together with pyB to
generate reliable outputs for high-integrity safety critical applications.
Although pyB is still work in progress, the ProB/pyB toolchain has already been
successfully tested on various industrial B machines and data validation tasks.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578
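A minimal Python sketch of the double-checking idea described above: a candidate solution is only accepted when two independently implemented evaluators agree on it. The evaluator functions and the string/dictionary representation of predicates and bindings are illustrative stand-ins, not the actual ProB or pyB interfaces.

```python
# Sketch of a primary/secondary toolchain cross-check. The two evaluators
# below are toy stand-ins for ProB (primary) and pyB (secondary, cleanroom).

def primary_eval(predicate, bindings):
    # Stand-in for the primary toolchain's evaluation of a predicate.
    return eval(predicate, {"__builtins__": {}}, dict(bindings))

def secondary_eval(predicate, bindings):
    # Stand-in for the independent cleanroom re-evaluation.
    return eval(predicate, {"__builtins__": {}}, dict(bindings))

def cross_check(predicate, bindings):
    """Return a result only if both toolchains agree; otherwise raise."""
    a = primary_eval(predicate, bindings)
    b = secondary_eval(predicate, bindings)
    if a != b:
        raise RuntimeError(f"toolchains disagree on {predicate!r}: {a} vs {b}")
    return a

if __name__ == "__main__":
    # Double-check a generated binding against a simple constraint.
    print(cross_check("x > 0 and x < 10", {"x": 3}))  # True
```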
BEAT: An Open-Source Web-Based Open-Science Platform
With the increased interest in computational sciences, machine learning (ML),
pattern recognition (PR) and big data, governmental agencies, academia and
manufacturers are overwhelmed by the constant influx of new algorithms and
techniques promising improved performance, generalization and robustness.
Sadly, result reproducibility is often an overlooked feature accompanying
original research publications, competitions and benchmark evaluations. The
main reasons behind such a gap arise from natural complications in research and
development in this area: the distribution of data may be a sensitive issue;
software frameworks are difficult to install and maintain; test protocols may
involve a potentially large set of intricate steps which are difficult to
handle. Given the rising complexity of research challenges and the constant
increase in data volume, the conditions for achieving reproducible research in
the domain are also increasingly difficult to meet.
To bridge this gap, we built an open platform for research in computational
sciences related to pattern recognition and machine learning, to support the
development, reproducibility and certification of results obtained in the
field. By making use of such a system, academic, governmental or industrial
organizations enable users to easily and socially develop processing
toolchains, re-use data, algorithms and workflows, and compare results from
distinct algorithms and/or parameterizations with minimal effort. This article
presents such a platform and discusses some of its key features, uses and
limitations. We overview a currently operational prototype and provide design
insights.
Comment: References to papers published on the platform incorporate
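As a purely illustrative sketch (not the platform's actual API), the toolchain concept described in the abstract can be pictured as reusable algorithm blocks chained into a workflow, so that the same declared chain can be re-run on shared data to reproduce a result. The block names and data are hypothetical.

```python
# Toy illustration of a processing toolchain built from reusable blocks.
from typing import Callable, List

Block = Callable[[object], object]

def make_toolchain(blocks: List[Block]) -> Block:
    """Compose algorithm blocks into a single processing toolchain."""
    def run(sample):
        for block in blocks:
            sample = block(sample)
        return sample
    return run

# Two toy blocks standing in for user-contributed algorithms.
def normalize(x: float) -> float:
    return x / 10.0

def threshold(x: float) -> int:
    return int(x >= 0.5)

if __name__ == "__main__":
    toolchain = make_toolchain([normalize, threshold])
    data = [1.0, 4.0, 7.0, 9.0]
    print([toolchain(x) for x in data])  # [0, 0, 1, 1]
```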
An Iterative and Toolchain-Based Approach to Automate Scanning and Mapping Computer Networks
As today's organizational computer networks are ever evolving and becoming
more and more complex, finding potential vulnerabilities and conducting
security audits has become a crucial element in securing these networks. The
first step in auditing a network is reconnaissance by mapping it to get a
comprehensive overview over its structure. The growing complexity, however,
makes this task increasingly laborious, all the more so because mapping (as
opposed to plain scanning) still involves a great deal of manual work. Therefore, the
concept proposed in this paper automates the scanning and mapping of unknown
and non-cooperative computer networks in order to find security weaknesses or
verify access controls. It further supports audits by allowing documented
networks to be compared with actual ones and unauthorized network devices to be
found, as well as by evaluating access control methods through delta scans. It
uses a novel approach of augmenting data from iteratively chained existing
scanning tools with context, using dedicated analytics modules that allow a
network's topology to be assessed instead of merely generating a list of scanned
devices. It further contains a visualization model that provides a clear, lucid
topology map and a special graph for comparative analysis. The goal is to
provide maximum insight with a minimum of a priori knowledge.
Comment: 7 pages, 6 figure
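A hedged sketch of the delta-scan idea mentioned above: compare a documented network inventory with the hosts actually discovered by a scan, and report unauthorized, missing or changed devices. The data structures are illustrative assumptions, not the output format of the paper's tools.

```python
# Compare a documented inventory against discovered hosts (toy data model:
# IP address -> attribute dictionary).

def delta_scan(documented: dict, discovered: dict) -> dict:
    """Return hosts that are unauthorized, missing, or changed."""
    documented_ips = set(documented)
    discovered_ips = set(discovered)
    return {
        "unauthorized": sorted(discovered_ips - documented_ips),
        "missing": sorted(documented_ips - discovered_ips),
        "changed": sorted(
            ip for ip in documented_ips & discovered_ips
            if documented[ip] != discovered[ip]
        ),
    }

if __name__ == "__main__":
    documented = {"10.0.0.1": {"open_ports": [22]}, "10.0.0.2": {"open_ports": [80]}}
    discovered = {"10.0.0.1": {"open_ports": [22, 443]}, "10.0.0.9": {"open_ports": [23]}}
    print(delta_scan(documented, discovered))
    # {'unauthorized': ['10.0.0.9'], 'missing': ['10.0.0.2'], 'changed': ['10.0.0.1']}
```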
A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs
A number of novel programming languages and libraries have been proposed that
offer simpler-to-use models of concurrency than threads. It is challenging,
however, to devise execution models that successfully realise their
abstractions without forfeiting performance or introducing unintended
behaviours. This is exemplified by SCOOP---a concurrent object-oriented
message-passing language---which has seen multiple semantics proposed and
implemented over its evolution. We propose a "semantics workbench" with fully
and semi-automatic tools for SCOOP, which can be used to analyse and compare
programs with respect to different execution models. We demonstrate its use in
checking the consistency of semantics by applying it to a set of representative
programs, and highlighting a deadlock-related discrepancy between the principal
execution models of the language. Our workbench is based on a modular and
parameterisable graph transformation semantics implemented in the GROOVE tool.
We discuss how graph transformations are leveraged to atomically model
intricate language abstractions, and how the visual yet algebraic nature of the
model can be used to ascertain soundness.
Comment: Accepted for publication in the proceedings of FASE 2016 (to appear
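The consistency check described above can be pictured, in a deliberately simplified Python sketch, as running the same set of programs under two execution-model analyses and flagging any program on which they disagree (e.g. one reports a possible deadlock and the other does not). The analysis functions and program names are hypothetical, not GROOVE's or the workbench's API.

```python
from typing import Callable, Dict, Iterable, Tuple

Analysis = Callable[[str], bool]  # program -> "can it deadlock?"

def compare_models(programs: Iterable[str],
                   model_a: Analysis,
                   model_b: Analysis) -> Dict[str, Tuple[bool, bool]]:
    """Return the programs on which the two execution models disagree."""
    discrepancies = {}
    for program in programs:
        a, b = model_a(program), model_b(program)
        if a != b:
            discrepancies[program] = (a, b)
    return discrepancies

if __name__ == "__main__":
    # Toy stand-ins: pretend one semantics misses a deadlock in "dining.scoop".
    optimistic = lambda p: False
    conservative = lambda p: p == "dining.scoop"
    print(compare_models(["hello.scoop", "dining.scoop"], optimistic, conservative))
    # {'dining.scoop': (False, True)}
```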
Towards Automatic Learning of Heuristics for Mechanical Transformations of Procedural Code
The current trend in next-generation exascale systems is towards
integrating a wide range of specialized (co-)processors into traditional
supercomputers. However, the integration of different specialized devices
increases the degree of heterogeneity and the complexity of programming such
systems. Given the efficiency of heterogeneous systems in terms of
Watts and FLOPS per surface unit, opening up access to heterogeneous platforms
for a wider range of users is an important problem to tackle. In order to
bridge the gap between heterogeneous systems and programmers, in this paper we
propose a machine learning-based approach to learn heuristics for defining
transformation strategies of a program transformation system. Our approach
proposes a novel combination of reinforcement learning and classification
methods to efficiently tackle the problems inherent to this type of system.
Preliminary results demonstrate the suitability of the approach for easing the
programmability of heterogeneous systems.
Comment: Part of the Program Transformation for Programmability in
Heterogeneous Architectures (PROHA) workshop, Barcelona, Spain, 12th March
2016, 9 pages, LaTe
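A hedged Python sketch of the kind of learning loop the abstract describes: a reinforcement-learning agent picks which code transformation to apply next and updates its value estimates from an observed reward (for example, a measured speed-up). The transformation names, the program state, and the simulated rewards are illustrative assumptions, not the paper's actual heuristics.

```python
import random
from collections import defaultdict

TRANSFORMATIONS = ["loop_fusion", "tiling", "offload_to_gpu", "no_change"]

def choose(q_values, state, epsilon=0.1):
    """Epsilon-greedy choice of the next transformation for a program state."""
    if random.random() < epsilon:
        return random.choice(TRANSFORMATIONS)
    return max(TRANSFORMATIONS, key=lambda t: q_values[(state, t)])

def update(q_values, state, action, reward, alpha=0.5):
    """Move the value estimate for (state, action) toward the observed reward."""
    key = (state, action)
    q_values[key] += alpha * (reward - q_values[key])

if __name__ == "__main__":
    q_values = defaultdict(float)
    # Toy environment: pretend offloading this kernel gives the best speed-up.
    simulated_reward = {"offload_to_gpu": 2.0, "tiling": 1.2,
                        "loop_fusion": 1.1, "no_change": 1.0}
    for _ in range(200):
        action = choose(q_values, state="stencil_kernel")
        update(q_values, "stencil_kernel", action, simulated_reward[action])
    print(choose(q_values, "stencil_kernel", epsilon=0.0))  # likely "offload_to_gpu"
```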
Smart technologies for effective reconfiguration: the FASTER approach
Current and future computing systems increasingly require that their functionality stays flexible after the system is operational, in order to cope with changing user requirements and improvements in system features, e.g. changing protocols and data-coding standards, evolving demands for the support of different user applications, and newly emerging applications in communication, computing and consumer electronics. Therefore, extending the functionality and the lifetime of products requires the addition of new functionality to track and satisfy customers' needs as well as market and technology trends. Many contemporary products incorporate, alongside the software part, hardware accelerators for reasons of performance and power efficiency. While adaptivity of software is straightforward, adapting the hardware to changing requirements constitutes a challenging problem that requires delicate solutions.

The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) project aims at introducing a complete methodology that allows designers to easily implement a system specification on a platform comprising a general purpose processor combined with multiple accelerators running on an FPGA, taking a high-level description as input and fully exploiting, both at design time and at run time, the capabilities of partial dynamic reconfiguration. The goal is that, for selected application domains, the FASTER toolchain will be able to reduce the design and verification time of complex reconfigurable systems, providing additional novel verification features that are not available in existing tool flows.
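As a purely illustrative sketch (not the FASTER toolchain itself), one run-time decision a partially reconfigurable system has to make is whether reconfiguring a free FPGA region with an accelerator bitstream pays off compared with simply running the task in software. All numbers and names below are hypothetical.

```python
# Toy run-time policy: reconfigure only when the hardware path, including the
# reconfiguration overhead, beats the software execution time.

def should_reconfigure(sw_time_ms: float,
                       hw_time_ms: float,
                       reconfig_time_ms: float,
                       region_free: bool) -> bool:
    """Reconfigure only if a region is free and the total hardware path wins."""
    return region_free and (reconfig_time_ms + hw_time_ms) < sw_time_ms

if __name__ == "__main__":
    # A long-running kernel amortizes the reconfiguration overhead...
    print(should_reconfigure(sw_time_ms=500, hw_time_ms=40,
                             reconfig_time_ms=120, region_free=True))  # True
    # ...while a short one does not.
    print(should_reconfigure(sw_time_ms=100, hw_time_ms=40,
                             reconfig_time_ms=120, region_free=True))  # False
```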
