Slicing of Concurrent Programs and its Application to Information Flow Control
This thesis presents a practical technique for information flow control for concurrent programs with threads and shared-memory communication. The technique guarantees confidentiality of information with respect to a reasonable attacker model and utilizes program dependence graphs (PDGs), a language-independent representation of information flow in a program.
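The confidentiality check above reduces to slicing: a secret statement may influence a public output only if it lies in the output's backward slice of the PDG. A minimal sketch of backward slicing over a dependence graph, with a hypothetical two-statement example (not JOANA's actual graph representation):

```python
from collections import deque

def backward_slice(pdg, criterion):
    """Return every node the criterion transitively depends on (data or control)."""
    # pdg maps each node to the set of nodes it directly depends on.
    seen = {criterion}
    work = deque([criterion])
    while work:
        node = work.popleft()
        for dep in pdg.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

# Hypothetical PDG: the public print depends only on 'l', the secret one on 'h'.
pdg = {
    "print_l": {"l = 0"},
    "print_h": {"h = secret()"},
}
slice_h = backward_slice(pdg, "print_h")
```

Confidentiality of `print_l` follows here because `"h = secret()"` is absent from its backward slice; the thesis's technique applies this idea to whole concurrent programs.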
Guided Testing of Concurrent Programs Using Value Schedules
Testing concurrent programs remains a difficult task due to the non-deterministic nature of concurrent execution. Many approaches have been proposed to tackle the complexity of uncovering potential concurrency bugs. Static analysis tackles the problem by analyzing a concurrent program for situations or patterns that might lead to errors during execution; in general, it cannot precisely locate all possible concurrency errors. Dynamic testing examines and controls a program during its execution, likewise looking for potential errors; in general, it needs to examine all possible execution paths to detect all errors, which is intractable.
Motivated by these observations, a new testing technique is developed that uses a collaboration between static analysis and dynamic testing to find the first potential error using less time and space. In this collaboration scheme, static analysis and dynamic testing interact iteratively throughout the testing process. Static analysis provides coarse-grained flow information to guide the dynamic testing through the relevant search space, while dynamic testing collects concrete runtime information during the guided exploration. The concrete runtime information provides feedback to the static analysis to refine its results, which are then fed forward to provide more precise guidance of the dynamic testing. The new collaborative technique is able to uncover the first concurrency-related bug in a program faster and using less storage than the state-of-the-art dynamic testing tool Java PathFinder. The implementation of the collaborative technique consists of a static-analysis module based on Soot and a dynamic-analysis module based on Java PathFinder.
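The iterative collaboration can be sketched as a narrowing loop: static analysis proposes candidate schedules, and dynamic runs prune infeasible ones and report the first real bug. The candidate set and the two predicates below are hypothetical stand-ins, not the thesis's Soot and Java PathFinder modules:

```python
def find_first_bug(candidates, is_feasible, has_bug):
    """Narrow a statically computed candidate set using dynamic runs."""
    while candidates:
        schedule = candidates.pop()
        if not is_feasible(schedule):   # runtime feedback: prune an infeasible path
            continue
        if has_bug(schedule):           # dynamic testing confirms the first real error
            return schedule
    return None

# Toy instantiation: schedules are tuples of thread steps; exactly one
# feasible schedule exhibits the (hypothetical) bug.
candidates = {("t1", "t2"), ("t2", "t1"), ("t1", "t1")}
result = find_first_bug(
    candidates,
    is_feasible=lambda s: s != ("t1", "t1"),
    has_bug=lambda s: s == ("t2", "t1"),
)
```

The search stops at the first confirmed bug rather than enumerating the whole schedule space, which is the source of the time and space savings claimed above.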
Timing Sensitive Dependency Analysis and its Application to Software Security
I present new techniques for the static analysis of timing-sensitive information flow control in software systems. I apply these techniques to the analysis of concurrent Java programs, as well as to the analysis of timing side channels in implementations of cryptographic primitives.
Information flow control methods aim to restrict the flow of information (e.g., between the different external interfaces of a software component) according to explicit policies. Such methods can therefore be used to enforce both confidentiality and integrity. The goal of sound static program analyses in this setting is to prove that all executions of a given program comply with the associated policies. Such a proof requires a security criterion that formalizes the conditions under which this is the case.
Every formal security criterion implicitly corresponds to a program and attacker model. The simplest noninterference criteria, for example, describe only non-interactive programs: programs that accept inputs and produce outputs only at the beginning and end of execution. In the corresponding attacker model, the attacker knows the program but only observes or provides certain (public) inputs and outputs. A program is noninterferent if the attacker cannot draw any conclusions from these observations about the secret inputs and outputs of terminating executions. From non-terminating executions, however, the attacker is permitted in this model to infer secret inputs.
Side channels arise when an attacker can draw conclusions about confidential information from observations of real systems that would be impossible in the formal model. Typical side channels (i.e., channels left unmodeled by many formal security criteria) include, besides non-termination, power consumption and the execution time of programs. If the execution time depends on secret inputs, an attacker can infer the input (e.g., the value of individual secret parameters) from the observed execution time.
In my dissertation, I present new dependency analyses that also account for non-termination and timing channels. With regard to non-termination channels, I present new techniques for computing program dependencies. To this end, I develop a unifying framework in which both termination-sensitive and termination-insensitive dependencies arise from mutually dual notions of postdominance. For timing channels, I develop new notions of dependency and corresponding techniques for their computation. In two applications, I substantiate the thesis: timing-sensitive dependencies enable sound static information flow analysis that accounts for timing channels.
Based on timing-sensitive dependencies, I design new analyses for concurrent programs. There, timing-sensitive dependencies are relevant even for timing-insensitive attacker models, since internal timing channels between different threads may be externally observable. My implementation for concurrent Java programs is based on the program analysis system JOANA.
I also present new analyses for timing channels caused by micro-architectural dependencies. As a case study, I examine implementations of AES256 block encryption. In some implementations, data caches cause the execution time to depend on the key and ciphertext, so that both can be inferred from the execution time. For other implementations, my automatic static analysis proves (assuming a simple concrete cache micro-architecture) the absence of such channels.
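The unifying framework described above derives control dependence from notions of postdominance. As background, here is the textbook iterative computation of postdominator sets on a toy control-flow graph; the graph is hypothetical, and this is the classic baseline rather than the thesis's termination-sensitive generalization:

```python
def postdominators(succ, exit_node):
    """Iterative dataflow: pdom[n] = {n} ∪ ⋂ pdom[s] over successors s of n."""
    nodes = set(succ) | {exit_node}
    pdom = {n: set(nodes) for n in nodes}   # start from the full set (top)
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            # A node is postdominated by itself plus everything that
            # postdominates all of its successors.
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

# Diamond CFG: entry branches to a or b, both rejoin at exit.
succ = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"]}
pdom = postdominators(succ, "exit")
```

Neither branch postdominates `entry` here, which is exactly why the branch condition controls what happens on each arm; termination-sensitive dependence arises from a dual variant of this fixpoint.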
FLAVERS: a Finite State Verification Technique for Software Systems
Software systems are increasing in size and complexity and, consequently, are becoming ever more difficult to validate. Finite State Verification (FSV) has been gaining credibility and attention as an alternative to testing and to formal verification approaches based on theorem proving. There has recently been a great deal of excitement about the potential for FSV approaches to prove properties about hardware descriptions but, for the most part, these approaches do not scale adequately to handle the complexity usually found in software. In this paper, we describe an FSV approach that creates a compact and conservative, but imprecise, model of the system being analyzed, and then assists the analyst in adding details as guided by previous analysis results. This paper describes the approach and a prototype implementation, called FLAVERS, presents a detailed example, and then provides some experimental results demonstrating scalability.
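At its core, a finite state verification check like the one described above explores the product of a program model and a property automaton and asks whether a violating state is reachable. A minimal sketch with hypothetical toy models (not FLAVERS's trace flow graphs or its dataflow formulation):

```python
from collections import deque

def violates(program, start, prop, prop_start, bad):
    """BFS over (program state, property state) pairs; True if 'bad' is reachable."""
    seen = {(start, prop_start)}
    work = deque(seen)
    while work:
        p, q = work.popleft()
        for event, p2 in program.get(p, []):
            q2 = prop.get((q, event), q)   # property automaton tracks events
            if q2 == bad:
                return True
            if (p2, q2) not in seen:
                seen.add((p2, q2))
                work.append((p2, q2))
    return False

# Hypothetical property: "close" must never occur before "open".
prop = {("init", "open"): "opened", ("init", "close"): "err"}
buggy = {0: [("close", 1)], 1: []}              # closes before opening
ok = {0: [("open", 1)], 1: [("close", 2)], 2: []}
```

A conservative model over-approximates the program's event sequences, so `violates(...) == False` is a proof; a `True` answer may be a real bug or an artifact of imprecision, which is where FLAVERS's analyst-guided refinement comes in.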
Digital television applications
Studying development of interactive services for digital television is a leading edge area of work as there is minimal research or precedent to guide their design. Published research is limited and therefore this thesis aims at establishing a set of computing methods using Java and XML technology for future set-top box interactive services. The main issues include middleware architecture, a Java user interface for digital television, content representation and return channel communications.
The middleware architecture used was made up of an Application Manager, an Application Programming Interface (API), a Java Virtual Machine, etc., which were arranged in a layered model to ensure interoperability. The application manager was designed to control the lifecycle of Xlets, manage set-top box resources and remote control keys, and adapt the graphical device environment. The architecture of both the application manager and the Xlet forms the basic framework for running multiple interactive services simultaneously in future set-top box designs.
User interface development is more complex for this type of platform (when compared to that for a desktop computer) as many constraints are set on the look and feel (e.g., TV-like and limited buttons). Various aspects of Java user interfaces were studied and my research in this area focused on creating a remote control event model and lightweight drawing components using the Java Abstract Window Toolkit (AWT) and Java Media Framework (JMF) together with Extensible Markup Language (XML).
Applications were designed to study the data structure and efficiency of XML for defining interactive content. Content parsing was designed as a lightweight software module based around two parsers (i.e., SAX parsing and DOM parsing). Still content (i.e., text, images, and graphics) and dynamic content (i.e., hyperlinked text, animations, and forms) can then be modeled and processed efficiently.
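The SAX/DOM trade-off above can be illustrated on the same tiny (hypothetical) content document, here using Python's standard-library parsers in place of the thesis's Java ones: SAX streams events with low memory overhead, which suits set-top box constraints, while DOM builds the whole tree for convenient random access.

```python
import xml.sax
from xml.dom.minidom import parseString

XML = '<page><text>Hello</text><text>World</text></page>'

# SAX: event-driven; collect character data inside <text> elements.
class TextCollector(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.texts = []
        self._in_text = False
    def startElement(self, name, attrs):
        self._in_text = (name == "text")
    def characters(self, content):
        if self._in_text:
            self.texts.append(content)
    def endElement(self, name):
        self._in_text = False

handler = TextCollector()
xml.sax.parseString(XML.encode(), handler)

# DOM: whole tree in memory; query by tag name.
dom = parseString(XML)
dom_texts = [n.firstChild.data for n in dom.getElementsByTagName("text")]
```

Both approaches recover the same content; the choice is driven by memory budget and access pattern, which is exactly the efficiency question the applications were designed to study.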
This thesis also studies interactivity methods using Java APIs via a return channel. Various communication models are also discussed that meet the interactivity requirements for different interactive services. They include URL, Socket, Datagram, and SOAP models which applications can choose to use in order to establish a connection with the service or broadcaster in order to transfer data.
This thesis is presented in two parts: the first section gives a general summary of the research and acts as a complement to the second section, which contains a series of related publications.
Efficient optimization of memory accesses in parallel programs
The power, frequency, and memory wall problems have caused a major shift in mainstream computing by introducing processors that contain multiple low power cores. As multi-core processors are becoming ubiquitous, software trends in both parallel programming languages and dynamic compilation have added new challenges to program compilation for multi-core processors. This thesis proposes a combination of high-level and low-level compiler optimizations to address these challenges.
The high-level optimizations introduced in this thesis include new approaches to May-Happen-in-Parallel analysis and Side-Effect analysis for parallel programs, and a novel parallelism-aware Scalar Replacement for Load Elimination transformation. A new Isolation Consistency (IC) memory model is described that permits more scalar replacement opportunities than many existing memory models.
The low-level optimizations include a novel approach to register allocation that retains the compile time and space efficiency of Linear Scan, while delivering runtime performance superior to both Linear Scan and Graph Coloring. The allocation phase is modeled as an optimization problem on a Bipartite Liveness Graph (BLG) data structure. The assignment phase focuses on reducing the number of spill instructions by using register-to-register move and exchange instructions wherever possible.
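The BLG allocator above is positioned against Linear Scan, so the baseline is worth sketching. Below is the classic linear-scan allocation over live intervals, with hypothetical intervals and two registers; this illustrates the baseline algorithm, not the thesis's BLG formulation:

```python
def linear_scan(intervals, num_regs):
    """intervals: list of (name, start, end). Returns name -> register or 'spill'."""
    free = [f"r{i}" for i in range(num_regs)]
    active = []                        # (end, name) pairs, kept sorted by end
    assignment = {}
    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        # Expire intervals that ended before this one starts, freeing registers.
        for e, n in list(active):
            if e < start:
                active.remove((e, n))
                free.append(assignment[n])
        if free:
            assignment[name] = free.pop()
            active.append((end, name))
            active.sort()
        else:
            # No register free: spill whichever live interval ends furthest away.
            active.sort()
            last_end, last_name = active[-1]
            if last_end > end:
                assignment[name] = assignment[last_name]
                assignment[last_name] = "spill"
                active[-1] = (end, name)
            else:
                assignment[name] = "spill"
    return assignment

# Hypothetical intervals: b expires before c starts, so two registers suffice.
alloc = linear_scan([("a", 0, 4), ("b", 1, 2), ("c", 3, 6)], 2)
```

Linear Scan's appeal is its single pass over sorted intervals; the thesis's contribution is matching that compile-time profile while recovering allocation quality closer to Graph Coloring.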
Experimental evaluations of our scalar replacement for load elimination transformation in the Jikes RVM dynamic compiler show decreases in dynamic counts for getfield operations of up to 99.99%, and performance improvements of up to 1.76x on 1 core and 1.39x on 16 cores, when compared with the load elimination algorithm available in Jikes RVM. A prototype implementation of our BLG register allocator in Jikes RVM demonstrates runtime performance improvements of up to 3.52x relative to Linear Scan on an x86 processor. When compared to the Graph Coloring register allocator in the GCC compiler framework, our allocator resulted in an execution time improvement of up to 5.8%, with an average improvement of 2.3%, on a POWER5 processor.
With the experimental evaluations combined with the foundations presented in this thesis, we believe that the proposed high-level and low-level optimizations are useful in addressing some of the new challenges emerging in the optimization of parallel programs for multi-core architectures
Static analysis of concurrent and distributed systems: concurrent objects and Ethereum Bytecode
Thesis of the Universidad Complutense de Madrid, Facultad de Informática, defended on 23-01-2020.
Nowadays concurrency and distribution have become a fundamental part of the software development process. The Internet and the increasingly widespread use of multicore processors have influenced the type of applications being developed, leading to the creation of several concurrency models. In particular, a concurrency model that is gaining popularity is the actor model, the basis for concurrent objects. In this model, the objects (actors) are the concurrent units. Each object has its own processor and a local state, and communication between them is carried out using message passing. In response to receiving a message, an actor can update its local state, send messages, or create new objects. Developing correct concurrent programs is known to be harder than writing sequential ones because of inherent aspects of concurrency such as data races and deadlocks. To ensure the correct behavior of concurrent programs, static analyses and verification techniques have been developed for the diverse existing concurrency models...
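The actor discipline described above can be sketched with Python threads and queues: each actor owns a mailbox and a local state, and processes one message at a time, so its state is never shared. This is an illustrative toy, not the concurrent-objects language analyzed in the thesis:

```python
import queue
import threading

class Counter:
    """An actor whose only state is a local running count."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)          # asynchronous message passing

    def _run(self):
        while True:
            msg = self.mailbox.get()   # one message at a time: no data races
            if msg == "stop":
                break
            self.count += msg          # local state update, never shared

    def join(self):
        self._thread.join()

actor = Counter()
for i in (1, 2, 3):
    actor.send(i)
actor.send("stop")
actor.join()
```

Because all mutation happens on the actor's own thread, the usual shared-memory race conditions cannot arise; what remains to analyze statically are message-level properties such as deadlock between waiting actors.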