FineIBT: Fine-grain Control-flow Enforcement with Indirect Branch Tracking
We present the design, implementation, and evaluation of FineIBT: a CFI
enforcement mechanism that improves the precision of hardware-assisted CFI
solutions, like Intel IBT and ARM BTI, by instrumenting program code to reduce
the valid/allowed targets of indirect forward-edge transfers. We study the
design of FineIBT on the x86-64 architecture, and implement and evaluate it on
Linux and the LLVM toolchain. We designed FineIBT's instrumentation to be
compact, incurring low runtime and memory overheads, and generic, so as to
support a plethora of different CFI policies. Our prototype implementation
incurs negligible runtime slowdowns (0%-1.94% in SPEC CPU2017 and
0%-1.92% in real-world applications), outperforming Clang-CFI. Lastly,
we investigate the effectiveness/security and compatibility of FineIBT using
the ConFIRM CFI benchmarking suite, demonstrating that our nimble
instrumentation provides complete coverage in the presence of modern software
features, while supporting a wide range of CFI policies (coarse- vs. fine- vs.
finer-grain) with the same, predictable performance.
Penetrating Shields: A Systematic Analysis of Memory Corruption Mitigations in the Spectre Era
This paper provides the first systematic analysis of a synergistic threat
model encompassing memory corruption vulnerabilities and microarchitectural
side-channel vulnerabilities. We study speculative shield bypass attacks that
leverage speculative execution attacks to leak secrets that are critical to the
security of memory corruption mitigations (i.e., the shields), and then use the
leaked secrets to bypass the mitigation mechanisms and successfully conduct
memory corruption exploits, such as control-flow hijacking. We start by
systematizing a taxonomy of the state-of-the-art memory corruption mitigations
focusing on hardware-software co-design solutions. The taxonomy helps us to
identify 10 likely vulnerable defense schemes out of 20 schemes that we
analyze. Next, we develop a graph-based model to analyze the 10 likely
vulnerable defenses and reason about possible countermeasures. Finally, we
present three proof-of-concept attacks targeting an already-deployed mitigation
mechanism and two state-of-the-art academic proposals.
OSS architecture for mixed-criticality systems – a dual view from a software and system engineering perspective
Computer-based automation in industrial appliances led to a growing number of
logically dependent, but physically separated embedded control units per
appliance. Many of those components are safety-critical systems, and require
adherence to safety standards, which is at odds with the relentless demand
for features in those appliances. Features lead to a growing number of control
units per appliance, and to an increasing complexity of the overall software
stack, both unfavourable for safety certifications. Modern CPUs provide means
to revise the traditional separation-of-concerns design principle: the consolidation
of systems, which yields new engineering challenges that concern the entire
software and system stack.
Multi-core CPUs favour the economic consolidation of formerly separated
systems into a single efficient hardware unit. Nonetheless, the system
architecture must provide means to guarantee freedom from interference
between domains of different criticality. System consolidation demands
architectural and engineering strategies to fulfil requirements (e.g., real-time
or certifiability criteria) in safety-critical environments.
In parallel, there is an ongoing trend to replace ordinary proprietary base-platform
software components with mature OSS variants for economic and
engineering reasons. However, the development processes of OSS and proprietary
software differ fundamentally in their processual properties. Using OSS in
safety-critical systems requires development-process assessment techniques to
build an evidence-based foundation for certification efforts upon
empirical software engineering methods.
In this thesis, I approach the problem from both sides: the software and the
system engineering perspective. In the first part of this thesis, I focus on the
assessment of OSS components: I develop software engineering techniques
that quantify characteristics of distributed OSS development
processes. I show that ex-post analyses of software development processes can
serve as a foundation for certification efforts, as required
for safety-critical systems.
In the second part of this thesis, I present a system architecture based on
OSS components that allows for the consolidation of mixed-criticality systems
on a single platform. To this end, I exploit the virtualisation extensions of modern
CPUs to strictly isolate domains of different criticality. The proposed
architecture is designed to eliminate any remaining hypervisor activity in order
to preserve the real-time capabilities of the hardware by design, while
guaranteeing strict isolation across domains.
Computer-based automation of industrial systems leads to a growing number of
logically dependent but physically separated control units per system. Many of
these individual units are safety-critical systems that must comply with safety
standards, which is complicated by the relentless demand for features. This
demand leads to a growing total number of control units, accompanied by a
growing complexity of the overall software corpus, which impedes certification
efforts. Modern processors provide means to renew the traditional
separation-of-concerns design principle: system consolidation. It poses new
engineering challenges that concern the entire software and system stack.
Multi-core processors favour the economic consolidation of formerly separated
systems into a single efficient hardware unit. Suitable system architectures
must, however, ensure freedom from interference between domains of different
criticality. Consolidation requires architectural as well as engineering
strategies to fulfil the requirements (e.g., real-time or certifiability
criteria) of safety-critical environments.
Increasingly, conventional proprietary base-platform components are being
replaced by mature OSS alternatives for economic and technical reasons.
However, fundamental differences in the processual properties of OSS
development processes hinder their use in safety-critical systems. This
requires techniques for assessing the development processes in order to
provide an evidence-based foundation for certification efforts based on
empirical software engineering methods.
In this thesis, I approach the problem from both sides: software engineering
and system architecture. In the first part, I address the assessment of OSS
components: I develop software-analysis techniques that quantify processual
characteristics of distributed OSS development projects. I show that
retrospective analyses of the development process can serve as a basis for
software certification efforts.
In the second part of this thesis, I turn to the system architecture. I
present an OSS-based system architecture that enables the consolidation of
mixed-criticality systems on a standalone platform. To this end, I exploit
the virtualisation extensions of modern processors to partition the hardware
into strictly isolated compute domains of different criticality. The proposed
architecture is designed to eliminate any operational interference from the
hypervisor in order to preserve the real-time capabilities of the hardware by
design, while strict isolation between domains is guaranteed at all times.
Identifying Code Injection and Reuse Payloads In Memory Error Exploits
Today's most widely exploited applications are the web browsers and document readers we use every day. The immediate goal of these attacks is to compromise target systems by executing a snippet of malicious code in the context of the exploited application. Technical tactics used to achieve this can be classified as either code injection, wherein malicious instructions are directly injected into the vulnerable program, or code reuse, where bits of existing program code are pieced together to form malicious logic. In this thesis, I present a new code reuse strategy that bypasses existing and up-and-coming mitigations, and two methods for detecting attacks by identifying the presence of code injection or reuse payloads.
Fine-grained address space layout randomization efficiently scrambles program code, limiting one's ability to predict the location of useful instructions from which to construct a code reuse payload. To expose the inadequacy of this exploit mitigation, a technique for "just-in-time" exploitation is developed. This new technique maps memory on-the-fly and compiles a code reuse payload at runtime to ensure it works in a randomized application. The attack also works in the face of all other widely deployed mitigations, as demonstrated with a proof-of-concept attack against Internet Explorer 10 in Windows 8. This motivates the need for detection of such exploits rather than solely relying on prevention.
Two new techniques are presented for detecting attacks by identifying the presence of a payload. Code reuse payloads are identified by first taking a memory snapshot of the target application, then statically profiling the memory for chains of code pointers that reuse code to implement malicious logic. Code injection payloads are identified with runtime heuristics by leveraging hardware virtualization for efficient sandboxed execution of all buffers in memory.
Employing both detection methods together to scan program memory takes about a second and produces negligible false positives and false negatives, provided that the given exploit is functional and triggered in the target application version. Compared to other strategies, such as the use of signatures, this approach requires relatively little maintenance effort over time and is capable of detecting never-before-seen attacks. Moving forward, one could use these contributions to form the basis of a unique and effective network intrusion detection system (NIDS) to augment existing systems.
Enabling Usable and Performant Trusted Execution
A plethora of major security incidents---in which personal identifiers belonging to hundreds of millions of users were stolen---demonstrate the importance of improving the security of cloud systems. To increase security in the cloud environment, where resource sharing is the norm, we need to rethink existing approaches from the ground-up. This thesis analyzes the feasibility and security of trusted execution technologies as the cornerstone of secure software systems, to better protect users' data and privacy.
Trusted Execution Environments (TEEs), such as Intel SGX, have the potential to minimize the Trusted Computing Base (TCB), but they also introduce many challenges to adoption. Among these challenges are TEEs' significant impact on applications' performance and the non-trivial effort required to migrate legacy systems to these secure execution technologies. Other challenges include managing a trustworthy state across a distributed system and ensuring that the individual machines are resilient to micro-architectural attacks.
In this thesis, I first characterize the performance bottlenecks imposed by SGX and suggest optimization strategies. I then address two main adoption challenges for existing applications: managing permissions across a distributed system and scaling SGX's mechanism for proving authenticity and integrity.
I then analyze the resilience of trusted execution technologies to speculative-execution micro-architectural attacks, which put cloud infrastructure at risk. This analysis revealed a devastating security flaw in Intel's processors, known as Foreshadow/L1TF. Finally, I propose a new architectural design for out-of-order processors which defeats all known speculative execution attacks.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155139/1/oweisse_1.pd
Software-Based Techniques for Protecting Return Addresses
Protecting computing systems against cyberattacks should be put high on the
agenda. For example, Colonial Pipeline, an American oil pipeline system, suffered
a cyberattack that impacted its computerized equipment managing the pipeline,
leading to a state of emergency declared by President Joe Biden in May 2021.
As reported by the Microsoft Security Response Center, attackers are unanimously
corrupting the stack, and most Control Flow Guard (CFG) improvements will provide
little value-add until stack protection lands. Shadow stacks play an important
role in protecting backward edges (return addresses on the call stack) to mitigate
Return-Oriented Programming (ROP) attacks. Control-Flow Integrity (CFI) techniques
often focus on protecting forward edges (indirect calls via function pointers
and virtual calls) and assume that backward edges are protected by shadow stacks.
However, the reality is that shadow stacks are still not widely deployed due
to compatibility, performance, or security deficiencies. In this thesis, we propose
three novel techniques for protecting return addresses.
First, by adding one level of indirection, we introduce BarRA, the first shadow
stack mechanism that applies continuous runtime re-randomization to abstract return
addresses for protecting their corresponding concrete return addresses (also
protected by CFI) for single-threaded programs, thus avoiding expensive pointer
tracking. As a nice side-effect, BarRA naturally combines the shadow stack, CFI
and runtime re-randomization in the same framework.
Second, without reserving any dedicated register, we propose a novel thread-local
storage mechanism, STK-TLS, that is both efficient and free of compatibility
issues. We also present a new microsecond-level runtime re-randomization technique
(without relying on information hiding or MMU), STK-MSR, to mitigate
information disclosure attacks and protect the shadow stack with 64-bit entropy.
Based on STK-TLS and STK-MSR, we have implemented a novel stack layout
(referred to as Bustk) that is highly performant, compatible with existing code,
and provides meaningful security for single- and multi-threaded server programs.
Third, by fast-moving safe regions in the large 47-bit user space (based on
MMU), we design a practical shadow stack, FlashStack, for protecting return
addresses in single- and multi-threaded programs (including browsers) running under
64-bit Linux on x86-64. FlashStack introduces a novel lightweight instrumentation
mechanism, a continuous shuffling scheme for the shadow stack in user
space, and a new dual-prologue approach for a protected function to mitigate the
TOCTTOU attacks (constructed by Microsoft's red team), information disclosure
attacks, and crash-resistant probing attacks.
Crude oil theft, petro-piracy and illegal trade in fuel: an enterprise-value chain perspective of energy-maritime crime in the Gulf of Guinea
The Gulf of Guinea (GoG) has developed into a global energy-maritime crime hotspot, with Nigeria being the epicentre of illegal oil-related maritime activities in the region. For several decades, scholars have sought to explain crude oil theft, petro-piracy and illegal fuel trade, especially in the waters of Nigeria, in the context of greed-grievance. While that approach provides a basis for understanding the realities of illegal energy-maritime activities in the Niger Delta region of Nigeria, it does little to explain how the illicit activities have evolved into the global enterprise they are today, the dynamics of the business, and the infrastructure that sustains the criminality. Against the backdrop of this limitation in the existing theoretical underpinning of illegal energy-maritime activities in the GoG, this study adopts an enterprise-value chain model which, moving beyond the greed-grievance narrative, emphasises the primacy of both the enterprise and the marketplace (not the players in the market) in explaining and understanding the dynamics, complexities and persistence of crude oil theft, petro-piracy and illegal fuel trade in the GoG. The enterprise-value chain approach, as adopted in the study, offers the advantage of an interdisciplinary perspective, combining Smith's enterprise theory of crime and Porter's business-management concept of the value chain to understand energy-maritime criminality in the GoG. The enterprise-value chain model sees the tripod of crude oil theft, petro-piracy and illegal trade in fuel as organised crime: a well-structured economic activity whose business philosophy hinges on the provision of illegal goods and services. Such activities exist because the legitimate marketplace has limited capacity to meet the needs of potential customers.
Within the enterprise-value chain framework, the study identifies and analyses the dynamics of overlap, cooperation and conflict among the different players in the illegal energy-maritime industry, as well as the mutually beneficial relationships between the formal and informal energy-maritime economies. Such an overlap is critical to understanding both the nature of the business and its sustaining value chain. The study concludes that the current energy-maritime security architecture in the Gulf of Guinea does not capture the organised, enterprise nature of illicit offshore and onshore activities and its sustaining value chain, which highlights its inherent limitation vis-à-vis the region's quest for energy-maritime security. There is therefore an urgent need to address this gap, as it significantly determines how the phenomenon is considered both for academic purposes and for public policy. It is this obvious gap in both the academic literature and policy on maritime security in the GoG that this study intends to fill. The study, in the context of its theoretical framework, develops a business approach to enhancing energy-maritime security in the GoG.