64 research outputs found
Refactorings of Design Defects using Relational Concept Analysis
Software engineers often need to identify and correct design defects, i.e., recurring design problems that hinder development and maintenance by making programs harder to comprehend and/or evolve. While detection of design defects is an actively researched area, their correction, mainly a manual and time-consuming activity, is yet to be extensively investigated for automation. In this paper, we propose an automated approach for suggesting defect-correcting refactorings using relational concept analysis (RCA). The added value of RCA consists in exploiting the links between formal objects which abound in a software re-engineering context. We validated our approach on instances of the Blob design defect taken from four different open-source programs.
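For readers unfamiliar with the underlying formalism, the following minimal Python sketch computes the formal concepts of a tiny object-attribute context; relational concept analysis extends this plain FCA setting with relations between objects of different contexts. The context, class names, and attributes below are purely hypothetical and are not taken from the paper.

from itertools import combinations

# Toy object-attribute context: classes of a hypothetical program and
# design-smell-related attributes, purely for illustration.
context = {
    "DataClass":  {"many_fields", "few_methods"},
    "GodClass":   {"many_fields", "many_methods", "uses_data_class"},
    "Controller": {"many_methods", "uses_data_class"},
}
attributes = set().union(*context.values())

def common_attributes(objs):
    """Attributes shared by all objects in objs (derivation operator on extents)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def objects_having(attrs):
    """Objects possessing all attributes in attrs (derivation operator on intents)."""
    return {o for o, a in context.items() if attrs <= a}

def formal_concepts():
    """Enumerate all (extent, intent) pairs closed under the two derivations."""
    concepts = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = common_attributes(set(subset))
            extent = objects_having(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "->", sorted(intent))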
A Semantic Framework for the Security Analysis of Ethereum Smart Contracts
Smart contracts are programs running on cryptocurrency (e.g., Ethereum)
blockchains, whose popularity stems from the possibility of performing financial
transactions, such as payments and auctions, in a distributed environment
without the need for any trusted third party. Given their financial nature, bugs or
vulnerabilities in these programs may lead to catastrophic consequences, as
witnessed by recent attacks. Unfortunately, programming smart contracts is a
delicate task that requires strong expertise: Ethereum smart contracts are
written in Solidity, a dedicated language resembling JavaScript, and shipped
over the blockchain in the EVM bytecode format. In order to rigorously verify
the security of smart contracts, it is of paramount importance to formalize
their semantics as well as the security properties of interest, in particular
at the level of the bytecode being executed.
In this paper, we present the first complete small-step semantics of EVM
bytecode, which we formalize in the F* proof assistant, obtaining executable
code that we successfully validate against the official Ethereum test suite.
Furthermore, we formally define for the first time a number of central security
properties for smart contracts, such as call integrity, atomicity, and
independence from miner controlled parameters. This formalization relies on a
combination of hyper- and safety properties. In the course of this work, we identified
various mistakes and imprecisions in existing semantics and verification tools
for Ethereum smart contracts, thereby demonstrating once more the importance of
rigorous semantic foundations for the design of security verification
techniques. (EAPLS Best Paper Award at ETAPS.)
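To give a flavor of what a small-step semantics looks like, here is a toy Python sketch of single-step execution for a miniature stack machine. It is vastly simpler than the EVM and is not the F* formalization described above; the opcodes and state shape are illustrative only.

from dataclasses import dataclass, replace
from typing import List, Tuple

Instr = Tuple[str, int]          # e.g. ("PUSH", 3), ("ADD", 0), ("STOP", 0)

@dataclass(frozen=True)
class State:
    pc: int                      # program counter
    stack: Tuple[int, ...]       # operand stack
    halted: bool = False

def step(code: List[Instr], s: State) -> State:
    """One small-step transition: execute a single instruction."""
    if s.halted or s.pc >= len(code):
        return replace(s, halted=True)
    op, arg = code[s.pc]
    if op == "PUSH":
        return State(s.pc + 1, s.stack + (arg,))
    if op == "ADD":
        if len(s.stack) < 2:                 # stack underflow: exceptional halt
            return replace(s, halted=True)
        *rest, a, b = s.stack
        return State(s.pc + 1, tuple(rest) + (a + b,))
    if op == "STOP":
        return replace(s, halted=True)
    raise ValueError(f"unknown opcode {op}")

def run(code: List[Instr]) -> State:
    s = State(pc=0, stack=())
    while not s.halted:
        s = step(code, s)
    return s

print(run([("PUSH", 2), ("PUSH", 3), ("ADD", 0), ("STOP", 0)]).stack)  # (5,)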
Invasive Computing in HPC with X10
High performance computing with thousands of cores relies on distributed
memory due to memory consistency reasons. The resource
management on such systems usually relies on static assignment of
resources at the start of each application. Such static scheduling cannot start an application whose required resources are currently used by others, because resources assigned to running applications cannot be reduced without stopping them. This lack of dynamic adaptive scheduling leaves resources idle until the remaining requested resources become available. Additionally, applications with changing resource requirements lead to idle or less efficiently used resources. The invasive computing paradigm suggests
dynamic resource scheduling and applications able to dynamically
adapt to changing resource requirements.
As a case study, we developed an invasive resource manager as
well as a multigrid with dynamically changing resource demands.
Such a multigrid has changing scalability behavior during its execution and, on distributed memory systems, requires data migration upon reallocation.
To counteract the complexity introduced by the additional interfaces, e.g., for data migration, we use the X10 programming language for improved programmability. Our results show improved application throughput and dynamic adaptivity. In addition, we show our extension of X10's distributed arrays to support data migration.
This work was supported by the German Research Foundation (DFG) as part of the Transregional Collaborative Research Centre "Invasive Computing" (SFB/TR 89).
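The following Python sketch (not X10, and not the paper's resource manager) illustrates the invasive idea of applications whose core assignment is rebalanced at runtime as their demands change, with a hook where data migration would occur. All class and method names are hypothetical.

class App:
    def __init__(self, name, demand):
        self.name, self.demand, self.cores = name, demand, 0

    def resize(self, cores):
        if cores != self.cores:
            print(f"{self.name}: migrating data from {self.cores} to {cores} cores")
            self.cores = cores            # data migration would happen here

class ResourceManager:
    def __init__(self, total_cores):
        self.total = total_cores
        self.apps = []

    def register(self, app):
        self.apps.append(app)
        self.rebalance()

    def update_demand(self, app, demand):
        app.demand = demand
        self.rebalance()

    def rebalance(self):
        """Share cores proportionally to the applications' current demands."""
        total_demand = sum(a.demand for a in self.apps) or 1
        for a in self.apps:
            a.resize(max(1, self.total * a.demand // total_demand))

rm = ResourceManager(total_cores=64)
multigrid = App("multigrid", demand=8)
solver = App("solver", demand=8)
rm.register(multigrid)
rm.register(solver)
rm.update_demand(multigrid, 24)   # the multigrid's scalability behavior changes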
Using theorem provers to increase the precision of dependence analysis for information flow control
Information flow control (IFC) is a category of techniques for enforcing information flow properties. In this paper we present the Combined Approach, a novel IFC technique that combines a scalable system-dependence-graph-based (SDG-based) approach with a precise logic-based approach built on a theorem prover. The Combined Approach offers increased precision compared with the SDG-based approach on its own, without sacrificing its scalability. For every potential illegal information flow reported by the SDG-based approach, the Combined Approach automatically generates proof obligations that, if valid, prove that there is no program path for which the reported information flow can happen. These proof obligations are then relayed to the logic-based approach. We also show how the SDG-based approach can provide additional information to the theorem prover that helps decrease the verification effort. Moreover, we present a prototypical implementation of the Combined Approach that uses the tools JOANA and KeY as the SDG-based and logic-based approaches, respectively.
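The control flow of such a combination can be pictured with the following hypothetical Python sketch: an over-approximating dependence analysis reports potential flows, a prover is asked to discharge one obligation per flow, and only the flows it cannot rule out remain. This is a schematic illustration, not the JOANA/KeY interface.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PotentialFlow:
    source: str          # secret input location reported by the SDG analysis
    sink: str            # public output location
    path_condition: str  # context extracted from the SDG, e.g. a path condition

def combined_check(
    sdg_report: List[PotentialFlow],
    prove_obligation: Callable[[PotentialFlow], bool],
) -> List[PotentialFlow]:
    """Return only the potential flows the prover could NOT rule out."""
    remaining = []
    for flow in sdg_report:
        # Obligation: no execution satisfying path_condition leaks source to sink.
        if not prove_obligation(flow):
            remaining.append(flow)       # still a potential violation
    return remaining

# Toy usage: pretend the prover discharges every obligation whose path
# condition is trivially unsatisfiable.
flows = [PotentialFlow("secret_pin", "log_output", "x > 0 and x < 0"),
         PotentialFlow("secret_pin", "network_send", "amount > limit")]
unproven = combined_check(flows, lambda f: f.path_condition == "x > 0 and x < 0")
print([f.sink for f in unproven])        # ['network_send']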
Integration of Static and Dynamic Analysis Techniques for Checking Noninterference
In this article, we present an overview of recent combinations of deductive program verification and automatic test generation on the one hand and static analysis on the other hand, with the goal of checking noninterference. Noninterference is the non-functional property that certain confidential information cannot leak to certain public output, i.e., the confidentiality of that information is always preserved.
We define the noninterference properties that are checked, along with the individual approaches that we use in different combinations. In one use case, our framework for checking noninterference employs deductive verification to automatically generate tests for noninterference violations with improved test coverage. In another use case, the framework provides two combinations of deductive verification with static analysis based on system dependence graphs to prove noninterference, thereby reducing the effort for deductive verification.
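As a reminder of what is being checked, the sketch below illustrates noninterference as a two-run property in Python: runs that agree on public (low) inputs must agree on public outputs, whatever the secret (high) inputs are. It is a test-style illustration of the property only, not the deductive or SDG-based machinery used in the article.

def program(low: int, high: int) -> int:
    # Example program: the public output must not depend on `high`.
    return low * 2

def violates_noninterference(prog, low, high1, high2) -> bool:
    """True if two runs with the same low input but different secrets are distinguishable."""
    return prog(low, high1) != prog(low, high2)

assert not violates_noninterference(program, low=3, high1=0, high2=99)

def leaky(low: int, high: int) -> int:
    return low + (1 if high > 0 else 0)   # public output depends on the secret

assert violates_noninterference(leaky, low=3, high1=0, high2=1)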
Introduction to Milestones in Interactive Theorem Proving
On March 8, 2018, Tobias Nipkow celebrated his sixtieth birthday. In anticipation of the occasion, in January 2016, two of his former students, Gerwin Klein and Jasmin Blanchette, and one of his former postdocs, Andrei Popescu, approached the editorial board of the Journal of Automated Reasoning with a proposal to publish a surprise Festschrift issue in his honor. The e-mail was sent to twenty-six members of the board, leaving out one, for reasons that will become clear in a moment. It is a sign of the love and respect that Tobias commands from his colleagues that within two days every recipient of the e-mail had responded favorably and enthusiastically to the proposal
Using Relational Verification for Program Slicing
Program slicing is the process of removing statements from a program such that defined aspects of its behavior are retained. For producing precise slices, i.e., slices that are minimal in size, the program's semantics must be considered. Existing approaches that go beyond a syntactical analysis and do take the semantics into account are not fully automatic and require auxiliary specifications from the user. In this paper, we adapt relational verification to check whether a slice candidate obtained by removing some instructions from a program is indeed a valid slice. Based on this, we propose a framework for precise and automatic program slicing. As part of this framework, we present three strategies for the generation of slice candidates, and we show how dynamic slicing approaches, which interweave generating and checking slice candidates, can be used for this purpose. The framework can easily be extended with other strategies for generating slice candidates. We discuss the strengths and weaknesses of slicing approaches that use our framework.
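The generate-and-check loop described above can be sketched as follows in Python, with a simple test-based equivalence check standing in for relational verification; all names and the toy program are illustrative.

from itertools import combinations
from typing import List, Sequence

def run(statements: Sequence[str], x: int) -> int:
    """Tiny straight-line 'program': each statement updates the variables."""
    env = {"x": x, "y": 0}
    for stmt in statements:
        exec(stmt, {}, env)
    return env["y"]                       # slicing criterion: final value of y

def is_valid_slice(program, candidate, inputs) -> bool:
    return all(run(program, x) == run(candidate, x) for x in inputs)

def smallest_slice(program: List[str], inputs: List[int]) -> List[str]:
    """Try candidates from smallest to largest; return the first valid one."""
    for size in range(len(program) + 1):
        for kept in combinations(range(len(program)), size):
            candidate = [program[i] for i in kept]
            if is_valid_slice(program, candidate, inputs):
                return candidate
    return program

prog = ["y = x + 1", "z = 42", "y = y * 2"]        # 'z = 42' is irrelevant to y
print(smallest_slice(prog, inputs=[0, 1, 5]))      # ['y = x + 1', 'y = y * 2']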