The MPI BUGS INITIATIVE: a Framework for MPI Verification Tools Evaluation
Ensuring the correctness of MPI programs is becoming as challenging and important as achieving the best performance. Many tools have been proposed in the literature to detect incorrect usages of MPI in a given program. However, the limited set of code samples each tool provides and the lack of metadata stating the intent of each test make it difficult to assess the strengths and limitations of these tools. In this paper, we present the MPI BUGS INITIATIVE, a complete collection of MPI codes to assess the status of MPI verification tools. We introduce a classification of MPI errors and provide correct and incorrect codes covering many MPI features and our categorization of errors. The resulting suite comprises 1,668 codes, each coming with a well-formatted header that clarifies the intent of the code and specifies how to execute and evaluate it. We evaluated the completeness of the MPI BUGS INITIATIVE against eight state-of-the-art MPI verification tools.
Proceedings of The Rust-Edu Workshop
The 2022 Rust-Edu Workshop was an experiment. We wanted to gather together as many thought leaders as we could attract in the area of Rust education, with an emphasis on academic-facing ideas. We hoped that productive discussions and future collaborations would result. Given the quick preparation and the difficulties of an international remote event, I am very happy to report a grand success. We had more than 27 participants from timezones around the globe. We had eight talks, four refereed papers and statements from 15 participants. Everyone seemed to have a good time, and I can say that I learned a ton. These proceedings are loosely organized: they represent a mere compilation of the excellent submitted work. I hope you'll find this material as pleasant and useful as I have. Bart Massey, 30 August 202
Cyber Security of Critical Infrastructures
Critical infrastructures are vital assets for public safety, economic welfare, and the national security of countries. The vulnerabilities of critical infrastructures have increased with the widespread use of information technologies. As Critical National Infrastructures are becoming more vulnerable to cyber-attacks, their protection becomes a significant issue for organizations as well as nations. The risks to continued operations, from failing to upgrade aging infrastructure or not meeting mandated regulatory regimes, are considered highly significant, given the demonstrable impact of such circumstances. Due to the rapid increase of sophisticated cyber threats targeting critical infrastructures with significant destructive effects, the cybersecurity of critical infrastructures has become an agenda item for academics, practitioners, and policy makers. A holistic view which covers technical, policy, human, and behavioural aspects is essential to handle cyber security of critical infrastructures effectively. Moreover, the ability to attribute crimes to criminals is a vital element of avoiding impunity in cyberspace. In this book, both research and practical aspects of cyber security considerations in critical infrastructures are presented. Aligned with the interdisciplinary nature of cyber security, authors from academia, government, and industry have contributed 13 chapters. The issues that are discussed and analysed include cybersecurity training, maturity assessment frameworks, malware analysis techniques, ransomware attacks, security solutions for industrial control systems, and privacy preservation methods
OSCAR. A Noise Injection Framework for Testing Concurrent Software
"Moore's Law" is a well-known observable phenomenon in computer science that describes a consistent yearly pattern in the growth of transistor density on a processor die. Even though it has held true for the last 57 years, thermal limitations on how far a processor's core frequencies can be increased have imposed physical limits on performance scaling. The industry has since shifted towards multicore architectures, which offer much better and more scalable performance, while in turn forcing programmers to adopt the concurrent programming paradigm when designing new software if they wish to make use of this added performance. The use of this paradigm comes with an unfortunate downside: a plethora of additional errors appears in programs, stemming directly from the (often poor) use of concurrency techniques.
Furthermore, concurrent programs are notoriously hard to design and to verify for correctness, and researchers continuously develop new, more effective and efficient methods of doing so. Noise injection, the theme of this dissertation, is one such method. It relies on the "probe effect": the observable shift in the behaviour of concurrent programs upon the introduction of noise into their routines. The abandonment of ConTest, a popular proprietary and closed-source noise injection framework for testing concurrent software written in the Java programming language, has left a void in the availability of noise injection frameworks for this language.
To mitigate this void, this dissertation proposes OSCAR, a novel open-source noise injection framework for the Java programming language that relies on static bytecode instrumentation to inject noise. OSCAR provides a free and well-documented noise injection tool for research, pedagogical, and industry use. Additionally, we propose a novel taxonomy for categorizing new and existing noise injection heuristics, together with a new method for generating and analysing concurrent software traces based on string comparison metrics.
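As a minimal illustration of the probe effect and of comparing execution traces with a string metric, here is a Python sketch (OSCAR itself instruments Java bytecode; the workload, noise points, and metric below are all invented for illustration):

```python
# Sketch: inject sleep-based noise between shared-memory operations and
# measure how far the resulting interleaving drifts from a baseline trace.
import difflib
import random
import threading
import time

def run_workload(noise: bool, seed: int = 0) -> str:
    """Run two threads appending to a shared trace; optionally inject noise."""
    rng = random.Random(seed)
    trace = []
    lock = threading.Lock()

    def worker(label: str) -> None:
        for i in range(5):
            if noise:
                time.sleep(rng.random() * 0.001)  # injected noise point
            with lock:
                trace.append(f"{label}{i}")

    threads = [threading.Thread(target=worker, args=(c,)) for c in "AB"]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return " ".join(trace)

base = run_workload(noise=False)
noisy = run_workload(noise=True)
# A string-comparison metric (here difflib's ratio, in [0, 1]) quantifies how
# much the noisy interleaving differs from the baseline trace.
similarity = difflib.SequenceMatcher(None, base, noisy).ratio()
print(similarity)
```

Both runs perform the same operations, so the traces contain the same events; only their order (and hence the similarity score) varies with the injected noise.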
After noising programs from the IBM Concurrent Benchmark with different heuristics, we observed that OSCAR is highly effective in increasing the coverage of the interleaving space, and that the different heuristics provide diverse trade-offs between the cost and benefit (time versus coverage) of the noise injection process.
Gestion de la Sécurité pour le Cyber-Espace - Du Monitorage Intelligent à la Configuration Automatique (Security Management for Cyberspace: From Smart Monitoring to Automatic Configuration)
The Internet has become a great integration platform capable of efficiently interconnecting billions of entities, from simple sensors to large data centers. This platform provides access to multiple hardware and virtualized resources (servers, networking, storage, applications, connected objects) ranging from cloud computing to Internet-of-Things infrastructures. From these resources, which may be hosted and distributed amongst different providers and tenants, the building and operation of complex and value-added networked systems is enabled. These systems are, however, exposed to a large variety of security attacks that are also gaining in sophistication and coordination. In that context, the objective of my research work is to support security management for the cyberspace, with the elaboration of new monitoring and configuration solutions for these systems. A first axis of this work has focused on the investigation of smart monitoring methods capable of coping with low-resource networks. In particular, we have proposed a lightweight monitoring architecture for detecting security attacks in low-power and lossy networks, by exploiting different features provided by a routing protocol specifically developed for them. A second axis has concerned the assessment and remediation of vulnerabilities that may occur when changes are operated on system configurations. Using standardized vulnerability descriptions, we have designed and implemented dedicated strategies for improving the coverage and efficiency of vulnerability assessment activities based on versioning and probabilistic techniques, and for preventing the occurrence of new configuration vulnerabilities during remediation operations. A third axis has been dedicated to the automated configuration of virtualized resources to support security management.
In particular, we have introduced a software-defined security approach for configuring cloud infrastructures, and have analyzed to what extent programmability facilities can contribute to their protection at the earliest stage, through the dynamic generation of specialized system images that are characterized by low attack surfaces. Complementarily, we have worked on building and verification techniques for supporting the orchestration of security chains, which are composed of virtualized network functions such as firewalls or intrusion detection systems. Finally, several research perspectives on security automation are pointed out with respect to ensemble methods, composite services, and verified artificial intelligence.
Computer Aided Verification
The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented together with 18 tool papers and 4 case studies were carefully reviewed and selected from 240 submissions. The papers are organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.
Polynomial Timed Reductions to Solve Computer Security Problems in Access Control, Ethereum Smart Contract, Cloud VM Scheduling, and Logic Locking.
This thesis addresses computer security problems in Access Control, Ethereum Smart Contracts, Cloud VM Scheduling, and Logic Locking. These problems are solved using polynomial-time reductions to 2 complexity classes: PSPACE-Complete and NP-Complete. This thesis is divided into 2 parts, by reduction target: Model Checking (PSPACE-Complete) and Integer Linear Programming (ILP) (NP-Complete). The PSPACE-Complete problems are: Safety Analysis of Administrative Temporal Role Based Access Control (ATRBAC) Policies, and Safety Analysis of Ethereum Smart Contracts. The NP-Complete problems are: Minimizing Information Leakage in Virtual Machine (VM) Cloud Environments using VM Migrations, and Attacking Logic Locked Circuits using a Reduction to Integer Linear Programming (ILP).
In Chapter 3, I create the Cree Administrative Temporal Role Based Access Control (ATRBAC)-Safety solver, which reduces ATRBAC-Safety to Model Checking. I also create 4 general performance techniques which can be utilized in any ATRBAC-Safety solver:
1. Polynomial Time Solving, which is able to solve specific archetypes of ATRBAC-Safety policies using a polynomial timed algorithm.
2. Static Pruning, which includes 2 methods for reducing the size of the policy without affecting the result of the safety query.
3. Abstraction Refinement, which can increase the speed for reachable safety queries by only solving a subset of the original policy.
4. Bound Estimation, which computes a bound on the number of steps from the initial state within which a satisfying state must exist, if one exists. This is used directly by the model checker's bounded model checking mode, but can be utilized by any solver with a bound-limiting parameter.
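To illustrate how a step bound interacts with reachability-style safety queries, here is a Python sketch of a bounded search over a toy role-based policy. The rules, roles, and state encoding are invented for illustration and are not the thesis's ATRBAC encoding or Cree's algorithm:

```python
from collections import deque

# Hypothetical toy policy: each rule is (precondition_role, operation,
# target_role). A state is the set of roles a user currently holds.
RULES = [
    ("employee", "add", "developer"),
    ("developer", "add", "admin"),
    ("admin", "remove", "employee"),
]

def reachable_within(start, goal_role, bound):
    """Bounded BFS: is a state containing goal_role reachable in <= bound
    steps? The bound plays the role of the estimated step limit handed to a
    bounded model checker."""
    frontier = deque([(frozenset(start), 0)])
    seen = {frozenset(start)}
    while frontier:
        state, depth = frontier.popleft()
        if goal_role in state:
            return True
        if depth == bound:
            continue  # do not expand past the bound
        for pre, op, role in RULES:
            if pre in state:
                nxt = state | {role} if op == "add" else state - {role}
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

print(reachable_within({"employee"}, "admin", bound=2))  # True: two rule steps
print(reachable_within({"employee"}, "admin", bound=1))  # False: needs two
```

A sound bound estimate makes the second kind of answer trustworthy: if the bound provably covers all reachable depths, an unreachable result within the bound settles the safety query.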
In Chapter 4, I analyze ATRBAC-Safety policies to identify some of the ``sources of complexity'' that make ATRBAC-Safety policies difficult to solve. I provide analysis of the sources of complexity present in the previously published datasets [128,90,54], and evaluate Cree's performance techniques on those datasets. I create 2 new datasets, which are shown to be hard instances of ATRBAC-Safety, and I analyze them to show how they achieve this hardness and how they differ from each other and from the previous datasets.
In Chapter 5, I create a novel reduction from a Reduced-Solidity Smart Contract (a subset of available Solidity features) to Model Checking. The reduction first translates a Reduced-Solidity Smart Contract into a Finite State Machine and then into an instance of a Model Checking problem. This provides the ability to test smart contracts published on the Ethereum blockchain for bugs or malicious code. I perform empirical analysis on selected smart contracts.
In Chapter 6, I create 2 methods for deploying ATRBAC policies as Solidity Smart Contracts. The first method is the Generic ATRBAC Smart Contract, which requires no modification before deployment. After deployment, the owner is able to create and maintain the policy using special access functions; these functions are automated with code that converts an ATRBAC policy into a series of transactions the owner can run. The second method is the Baked ATRBAC Smart Contract, which takes an ATRBAC policy and reduces it to a Smart Contract instance with no special access functions. The smart contract can then be deployed by anyone, and the deployer retains no special access. I perform an empirical analysis of the setup costs, transaction costs, and security each method provides.
In Chapter 7, I create a new reduction from Minimizing Information Leakage via Virtual Machine (VM) Migrations to Integer Linear Programming (ILP). I compare a polynomial algorithm by Moon et al. [71], my ILP reduction, and a reduction to CNF-SAT that is not included in this thesis. The polynomial method is faster, but the problem is NP-Complete, so that solution must have sacrificed something to obtain its polynomial running time (unless P = NP). I show instances in which the polynomial time algorithm does not produce the minimum total information leakage, while the ILP and CNF-SAT reductions do. In addition, I show that the Total Information Leakage model has a security vulnerability whenever information leakage is non-zero. I propose an alternative objective, called Max Client-to-Client Information Leakage, which removes the vulnerability at the cost of increased total information leakage.
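The objective such an ILP encodes can be sketched in Python with a toy brute-force search. The VMs, clients, hosts, and capacity below are hypothetical; a real instance would encode 0/1 placement variables and capacity constraints in an ILP solver rather than enumerating placements:

```python
from itertools import product

# Toy instance: each VM belongs to a client; two VMs of different clients
# placed on the same host leak one unit of information.
VMS = {"vm1": "A", "vm2": "A", "vm3": "B", "vm4": "B"}
HOSTS = ["h1", "h2"]
CAPACITY = 2  # maximum VMs per host

def leakage(placement):
    """Total information leakage: count cross-client VM pairs sharing a host."""
    total = 0
    for host in HOSTS:
        clients = [VMS[v] for v, h in placement.items() if h == host]
        total += sum(1 for i in range(len(clients))
                       for j in range(i + 1, len(clients))
                       if clients[i] != clients[j])
    return total

def min_leakage():
    """Exhaustive search over feasible placements; the ILP minimizes the same
    objective subject to the same capacity constraints."""
    best = None
    for hs in product(HOSTS, repeat=len(VMS)):
        placement = dict(zip(VMS, hs))
        if all(list(placement.values()).count(h) <= CAPACITY for h in HOSTS):
            cost = leakage(placement)
            if best is None or cost < best:
                best = cost
    return best

print(min_leakage())  # 0: each client's VMs can share a host exclusively
```

On this toy instance the optimum is zero because the two clients can be isolated on separate hosts; the interesting cases arise when capacities force cross-client colocation.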
In Chapter 8, I create a reduction from the Key Recovery Attack on Logic Locked Circuits to Integer Linear Programming (ILP). This is a recreation of the ``SAT Attack'' using ILP. I provide an empirical analysis of the ILP attack and compare it to the SAT-Attack. I show that the ``ILP Attack'' is a viable attack; thus future claims of ``SAT-Attack Resistant'' logic locking techniques need to also show resistance to all potential NP-Complete attacks.
Tools and Algorithms for the Construction and Analysis of Systems
This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 - April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.
Tools and Algorithms for the Construction and Analysis of Systems
This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-Comp and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.
Tools and Experiments for Software Security
The computer security problems that we face begin in computer programs that we write.
The exploitation of vulnerabilities that leads to the theft of private
information and other nefarious activities often begins with a vulnerability
accidentally created in a computer program by that program's author. What
are the factors that lead to the creation of these vulnerabilities? Software
development and programming is in part a synthetic activity that we can
control with technology, i.e. different programming languages and software
development tools. Does changing the technology used to program software help
programmers write more secure code? Can we create technology that will
help programmers make fewer mistakes?
This dissertation examines these questions. We start with the Build It
Break It Fix It project, a security-focused programming competition. This project
provides data on software security problems by allowing contestants
to write security-focused software in any programming language. We discover that
using C leads to memory safety issues that can compromise security.
Next, we consider making C safer. We develop and examine the Checked C
programming language, a strict superset of C that adds types for spatial safety.
We also introduce an automatic rewriting tool that can convert C code into Checked C
code. We evaluate the approach overall on benchmarks used by prior work on making
C safer.
We then consider static analysis. After examining different parameters of
numeric static analyzers, we develop a disjunctive abstract domain that uses a
novel merge heuristic based on a notion of volumetric difference, either approximated
via MCMC sampling or computed precisely via conical decomposition. This domain is
implemented in a static analyzer for C programs and evaluated.
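A plain Monte Carlo rejection sketch in Python conveys the idea of approximating a volumetric difference by sampling (a simplification of the MCMC approach; the boxes, bounds, and sample count below are made up for illustration):

```python
import random

def in_box(point, box):
    """Membership test for an axis-aligned box given as (lo, hi) per axis."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, box))

def volumetric_difference(box_a, box_b, bounds, samples=100_000, seed=42):
    """Estimate the volume of the symmetric difference of two boxes by
    uniform sampling inside a bounding box and counting points that land in
    exactly one of the two regions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        if in_box(p, box_a) != in_box(p, box_b):
            hits += 1
    bound_vol = 1.0
    for lo, hi in bounds:
        bound_vol *= hi - lo
    return bound_vol * hits / samples

# Two overlapping 2x2 boxes: the exact symmetric difference is two 1x2
# strips, i.e. volume 4.0; the estimate should land close to that.
est = volumetric_difference(
    box_a=[(0, 2), (0, 2)], box_b=[(1, 3), (0, 2)], bounds=[(0, 3), (0, 3)])
print(est)
```

A merge heuristic can use such an estimate to decide whether two disjuncts are similar enough in shape to be joined without losing much precision.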
After static analysis, we consider fuzzing. We consider what it takes to perform
a good evaluation of a fuzzing technique with our own experiments and a review of
recent fuzzing papers. We develop a checklist for conducting new fuzzing research
and a general strategy for identifying root causes of failure found during fuzzing.
We evaluate new root cause analysis approaches using coverage information as
inputs to statistical clustering algorithms.
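One way coverage information can feed a clustering step is sketched below in Python. The failing inputs and coverage sets are hypothetical, and a simple Jaccard-based single-pass grouping stands in for the statistical clustering algorithms evaluated in the dissertation:

```python
# Premise: failing inputs whose coverage sets are similar likely share a
# root cause, so grouping by coverage similarity approximates triage.
def jaccard(a, b):
    """Similarity of two coverage sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster_failures(coverage, threshold=0.5):
    """Greedy single-pass grouping: join a failure to the first cluster
    containing a member whose coverage is similar enough, else start a new
    cluster."""
    clusters = []
    for name, cov in coverage.items():
        for cluster in clusters:
            if any(jaccard(cov, coverage[m]) >= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

failures = {
    "crash1": {"parse", "lex", "overflow"},
    "crash2": {"parse", "lex", "overflow", "log"},
    "crash3": {"net", "dns"},
}
print(cluster_failures(failures))  # [['crash1', 'crash2'], ['crash3']]
```

Here the first two crashes exercise nearly the same branches and are grouped, while the third follows a disjoint path and is kept apart, mirroring how coverage vectors separate distinct root causes.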