Practical Enclave Malware with Intel SGX
Modern CPU architectures offer strong isolation guarantees towards user
applications in the form of enclaves. For instance, Intel's threat model for
SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether
this threat model is realistic. In particular, it is unclear to what extent
enclave malware could harm a system. In this work, we practically demonstrate
the first enclave malware which fully and stealthily impersonates its host
application. Together with poorly-deployed application isolation on personal
computers, such malware can not only steal or encrypt documents for extortion,
but also act on the user's behalf, e.g., sending phishing emails or mounting
denial-of-service attacks. Our SGX-ROP attack uses a new TSX-based
memory-disclosure primitive and a write-anything-anywhere primitive to
construct a code-reuse attack from within an enclave which is then
inadvertently executed by the host application. With SGX-ROP, we bypass ASLR,
stack canaries, and address sanitizer. We demonstrate that instead of
protecting users from harm, SGX currently poses a security threat, facilitating
so-called super-malware with ready-to-hit exploits. With our results, we seek
to demystify the enclave malware threat and lay solid ground for future
research on and defense against enclave malware.
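As a toy illustration of why a single disclosed pointer defeats ASLR (a simplified sketch, not the paper's actual SGX-ROP code; all addresses and gadget offsets below are hypothetical):

```python
# Sketch: once a memory-disclosure primitive leaks one run-time pointer into a
# module, every gadget in that module becomes computable, because ASLR shifts
# the whole module by a single constant slide.

# Hypothetical addresses taken from the static binary (before ASLR).
LEAKED_SYMBOL_STATIC = 0x401000          # static address of the leaked symbol
GADGETS_STATIC = {                       # static addresses of useful gadgets
    "pop_rdi_ret": 0x401163,
    "syscall_ret": 0x402F20,
}

def rebase_gadgets(leaked_symbol_runtime):
    """Compute run-time gadget addresses from one leaked pointer."""
    slide = leaked_symbol_runtime - LEAKED_SYMBOL_STATIC   # the ASLR slide
    return {name: addr + slide for name, addr in GADGETS_STATIC.items()}

# Example: suppose the disclosure primitive returned 0x7f0000001000.
chain = rebase_gadgets(0x7F0000001000)
```

The write-anything-anywhere primitive would then place such rebased addresses where the host application later consumes them as return addresses.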
HardScope: Thwarting DOP with Hardware-assisted Run-time Scope Enforcement
Widespread use of memory unsafe programming languages (e.g., C and C++)
leaves many systems vulnerable to memory corruption attacks. A variety of
defenses have been proposed to mitigate attacks that exploit memory errors to
hijack the control flow of the code at run-time, e.g., (fine-grained)
randomization or Control Flow Integrity. However, recent work on data-oriented
programming (DOP) demonstrated highly expressive (Turing-complete) attacks,
even in the presence of these state-of-the-art defenses. Although multiple
real-world DOP attacks have been demonstrated, no efficient defenses are yet
available. We propose run-time scope enforcement (RSE), a novel approach
designed to efficiently mitigate all currently known DOP attacks by enforcing
compile-time memory safety constraints (e.g., variable visibility rules) at
run-time. We present HardScope, a proof-of-concept implementation of
hardware-assisted RSE for the new RISC-V open instruction set architecture. We
discuss our systematic empirical evaluation of HardScope which demonstrates
that it can mitigate all currently known DOP attacks, and has a real-world
performance overhead of 3.2% in embedded benchmarks.
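The rule RSE enforces can be sketched in a few lines: every load and store is mediated against the set of addresses the currently executing function is entitled to see (a toy model of the idea only; HardScope itself does this with dedicated RISC-V hardware instructions, not software checks):

```python
class ScopeEnforcer:
    """Toy model of run-time scope enforcement: a stack of per-function
    access sets; memory accesses outside the active set are blocked, which
    is what defeats a DOP gadget dereferencing foreign variables."""

    def __init__(self):
        self.frames = []

    def enter_function(self, accessible_addrs):
        # On entry, a function may access only its own variables...
        self.frames.append(set(accessible_addrs))

    def delegate(self, addr):
        # ...plus addresses explicitly passed to it (e.g. pointer arguments).
        self.frames[-1].add(addr)

    def exit_function(self):
        self.frames.pop()

    def access(self, addr):
        if addr not in self.frames[-1]:
            raise PermissionError(f"out-of-scope access to {addr:#x}")
        return True

enf = ScopeEnforcer()
enf.enter_function({0x1000, 0x1008})   # locals of the current function
ok = enf.access(0x1000)                # legitimate access succeeds
try:
    enf.access(0x2000)                 # DOP-style foreign dereference
    blocked = False
except PermissionError:
    blocked = True
```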
CFI CaRE: Hardware-supported Call and Return Enforcement for Commercial Microcontrollers
With the increasing scale of deployment of Internet of Things (IoT), concerns
about IoT security have become more urgent. In particular, memory corruption
attacks play a predominant role as they allow remote compromise of IoT devices.
Control-flow integrity (CFI) is a promising and generic defense technique
against these attacks. However, given the nature of IoT deployments, existing
protection mechanisms for traditional computing environments (including CFI)
need to be adapted to the IoT setting. In this paper, we describe the
challenges of enabling CFI on microcontroller (MCU) based IoT devices. We then
present CaRE, the first interrupt-aware CFI scheme for low-end MCUs. CaRE uses
a novel way of protecting the CFI metadata by leveraging TrustZone-M security
extensions introduced in the ARMv8-M architecture. Its binary instrumentation
approach preserves the memory layout of the target MCU software, allowing
pre-built bare-metal binary code to be protected by CaRE. We describe our
implementation on a Cortex-M Prototyping System and demonstrate that CaRE is
secure while imposing acceptable performance and memory impact.Comment: Author's version of paper to appear in the 20th International
Symposium on Research in Attacks, Intrusions and Defenses (RAID 2017
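The call-and-return enforcement at the heart of CaRE amounts to a shadow stack; a minimal sketch (in CaRE the metadata lives in TrustZone-M secure memory and is updated by instrumented call/return trampolines, here it is simply a Python list):

```python
class ShadowStack:
    """Toy model of call/return enforcement for an MCU."""

    def __init__(self):
        self._stack = []

    def on_call(self, return_addr):
        self._stack.append(return_addr)      # record the legitimate return site

    def on_return(self, target_addr):
        expected = self._stack.pop()
        if target_addr != expected:          # e.g. a corrupted saved LR
            raise RuntimeError(f"control-flow violation: return to {target_addr:#x}")
        return target_addr

ss = ShadowStack()
ss.on_call(0x80001234)                       # benign call from 0x80001234
ret_ok = ss.on_return(0x80001234)            # matching return is allowed

ss.on_call(0x80001234)
try:
    ss.on_return(0x41414141)                 # attacker-overwritten address
    caught = False
except RuntimeError:
    caught = True
```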
Technical Report: A Toolkit for Runtime Detection of Userspace Implants
This paper presents the Userspace Integrity Measurement Toolkit (USIM
Toolkit), a set of integrity measurement collection tools capable of detecting
advanced malware threats, such as memory-only implants, that evade many
traditional detection tools. Userspace integrity measurement validates that a
platform is free from subversion by validating that the current state of the
platform is consistent with a set of invariants. The invariants enforced by the
USIM Toolkit are carefully chosen based on the expected behavior of userspace,
and key behaviors of advanced malware. Userspace integrity measurement may be
combined with existing filesystem and kernel integrity measurement approaches
to provide stronger guarantees that a platform is executing the expected
software and that the software is in an expected state.
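One concrete userspace invariant of the kind described, sketched (not the USIM Toolkit's actual code; the simplified mapping records below are hypothetical):

```python
def executable_mapping_violations(mappings, allowed_paths):
    """Invariant: every executable mapping must be backed by a file on an
    allowlist. A memory-only implant shows up as executable memory with no
    allowed backing file. `mappings` mimics simplified /proc/<pid>/maps rows."""
    return [m for m in mappings
            if "x" in m["perms"] and m.get("path") not in allowed_paths]

mappings = [
    {"perms": "r-xp", "path": "/usr/bin/sshd"},   # expected program text
    {"perms": "rw-p", "path": "[heap]"},          # writable data: fine
    {"perms": "rwxp", "path": None},              # anonymous executable memory
]
bad = executable_mapping_violations(mappings, {"/usr/bin/sshd"})
```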
A Survey of Techniques for Improving Security of GPUs
Graphics processing unit (GPU), although a powerful performance-booster, also
has many security vulnerabilities. Due to these, the GPU can act as a
safe haven for stealthy malware and the weakest "link" in the security "chain".
In this paper, we present a survey of techniques for analyzing and improving
GPU security. We classify the works on key attributes to highlight their
similarities and differences. More than informing users and researchers about
GPU security techniques, this survey aims to increase their awareness about GPU
security vulnerabilities and potential countermeasures.
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning
Operating in a dynamic real-world environment requires a forward-thinking and
adversary-aware design for classifiers, beyond fitting the model to the
training data. In such scenarios, it is necessary to make classifiers a)
harder to evade, b) better able to detect changes in the data distribution over
time, and c) able to retrain and recover from model degradation. While most
work on the security of machine learning has concentrated on the
evasion-resistance problem (a), there is little work on reacting to
attacks (b and c). Additionally, while streaming-data research concentrates on
the ability to react to changes in the data distribution, it often takes an
adversary-agnostic view of the security problem. This makes such systems
vulnerable to adversarial activity aimed at evading the concept drift
detection mechanism itself. In this paper, we analyze the security of machine
learning from a dynamic, adversary-aware perspective. The existing
techniques of restrictive one-class classifier models, complex learning models,
and randomization-based ensembles are shown to be myopic, as they approach
security as a static task. These methodologies are ill suited to a dynamic
environment, as they leak excessive information to an adversary, who can
subsequently launch attacks that are indistinguishable from benign data.
Based on an empirical vulnerability analysis against a sophisticated adversary, a
novel feature-importance-hiding approach to classifier design is proposed.
The proposed design ensures that future attacks on classifiers can be detected
and recovered from. The proposed work presents motivation, by serving as a
blueprint, for future work in the area of dynamic-adversarial mining, which
combines lessons learned from streaming data mining, adversarial learning, and
cybersecurity.
Comment: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 201
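A minimal sketch of the randomization-based ensembles the abstract discusses (this is our illustration of the general idea, not the paper's proposed feature-importance-hiding design): each query is answered by a randomly chosen single-feature member, so repeated probing sees a different feature mattering each time.

```python
import random

def train_stump(data, feat):
    """One-feature threshold classifier: midpoint between class means."""
    pos = [x[feat] for x, y in data if y == 1]
    neg = [x[feat] for x, y in data if y == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return feat, thr

class RandomizedEnsemble:
    """A randomly selected member answers each query, obscuring which
    feature drives any individual decision."""

    def __init__(self, data, seed=0):
        self.rng = random.Random(seed)
        n_features = len(data[0][0])
        self.members = [train_stump(data, f) for f in range(n_features)]

    def predict(self, x):
        feat, thr = self.rng.choice(self.members)   # random member per query
        return 1 if x[feat] > thr else 0

# Tiny well-separated toy dataset: class 0 near the origin, class 1 far away.
data = [((0.0, 0.0), 0), ((1.0, 1.0), 0), ((9.0, 10.0), 1), ((10.0, 9.0), 1)]
clf = RandomizedEnsemble(data)
```

The abstract's point is that such static randomization still leaks information over many probes, motivating the dynamic, detection-oriented design the paper proposes.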
Discovering New Vulnerabilities in Computer Systems
Vulnerability research plays a key role in preventing and defending against malicious exploitation of computer systems. Driven by a multi-billion-dollar underground economy, cyber criminals today tirelessly launch malicious exploits, threatening every aspect of daily computing. To effectively protect computer systems from devastation, it is imperative to discover and mitigate vulnerabilities before they fall into the offensive parties' hands. This dissertation is dedicated to the research and discovery of new design and deployment vulnerabilities in three very different types of computer systems.
The first vulnerability is found in automatic malicious binary (malware) detection systems. Binary analysis, a central piece of technology for malware detection, is divided into two classes: static analysis and dynamic analysis. State-of-the-art detection systems employ both classes of analysis to complement each other's strengths and weaknesses for improved detection results. However, we found that the commonly seen design patterns may suffer from evasion attacks. We demonstrate attacks on these vulnerabilities by designing and implementing a novel binary obfuscation technique.
The second vulnerability is located in the design of server system power management. Technological advancements have improved server system power efficiency and facilitated energy-proportional computing. However, the change of power profile makes power consumption subject to unaudited influence by remote parties, leaving server systems vulnerable to energy-targeted malicious exploits. We demonstrate an energy-abusing attack on a standalone open Web server, measure the extent of the damage, and present a preliminary defense strategy.
The third vulnerability is discovered in the application of server virtualization technologies. Server virtualization greatly benefits today's data centers and brings pervasive cloud computing a step closer to the general public. However, the practice of physically co-hosting virtual machines with different security privileges risks introducing covert channels that seriously threaten information security in the cloud. We study the construction of high-bandwidth covert channels via the memory sub-system, and show a practical exploit of cross-virtual-machine covert channels on virtualized x86 platforms.
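A cache-contention covert channel of the kind described can be sketched as a simulation (the real exploit measures access latency to shared cache sets across VMs; here the latency is faked with hypothetical cycle counts so the encode/decode logic can be shown):

```python
# Toy simulation of a prime+probe style cross-VM covert channel.
SLOW_CYCLES, FAST_CYCLES = 300, 80            # hypothetical probe latencies

class SharedCacheSet:
    """Models one cache set shared by sender and receiver VMs."""

    def __init__(self):
        self.receiver_lines_evicted = False

    def sender_write(self, bit):
        # Sending a 1: touch the set, evicting the receiver's primed lines.
        self.receiver_lines_evicted = (bit == 1)

    def receiver_probe(self):
        # Receiver re-reads its primed lines; eviction shows up as slowness.
        latency = SLOW_CYCLES if self.receiver_lines_evicted else FAST_CYCLES
        return 1 if latency > (SLOW_CYCLES + FAST_CYCLES) // 2 else 0

def transmit(bits):
    ch = SharedCacheSet()
    received = []
    for b in bits:                # one bit per prime-send-probe round
        ch.sender_write(b)
        received.append(ch.receiver_probe())
    return received

msg = [1, 0, 1, 1, 0, 0, 1]
```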
On a Generic Security Game Model
To protect the systems exposed to the Internet against attacks, a security
system with the capability to engage with the attacker is needed. There have
been attempts to model the engagement/interactions between users, both benign
and malicious, and network administrators as games. Building on such works, we
present a game model which is generic enough to capture various modes of such
interactions. The model facilitates stochastic games with imperfect
information. The information is imperfect due to erroneous sensors leading to
incorrect perception of the current state by the players. To model how this
error in perception is distributed over multiple states, we use Euclidean
distances between the outputs of the sensors. We build a 5-state game to
represent the interaction of the administrator with the user. The states
correspond to 1) the user being outside the system, on the Internet; and,
after logging in, 2) having low privileges; 3) having high privileges;
4) having successfully attacked; and 5) being trapped in a honeypot by the
administrator. Each state has its own action set. We present the game with a
distinct perceived action set corresponding to each distinct information set of
these states. A numerical simulation of an example game is presented
to show the evaluation of rewards to the players and the preferred strategies.
We also present the conditions for formulating the strategies when dealing with
more than one attacker and forming collaborations.
Comment: 31 pages.
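The distance-based perception error can be sketched as follows (the inverse-distance weighting is our illustrative choice, and the state signatures are hypothetical; the abstract only states that Euclidean distances between sensor outputs are used):

```python
import math

def perceived_state_distribution(observed, state_signatures):
    """Spread the probability of each perceived state inversely with the
    Euclidean distance between the observed sensor output and each state's
    expected sensor output, then normalize to a distribution."""
    weights = {s: 1.0 / (1.0 + math.dist(observed, sig))
               for s, sig in state_signatures.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

signatures = {                       # hypothetical mean sensor outputs
    "low_privilege":  (0.1, 0.2),
    "high_privilege": (0.8, 0.9),
}
# An observation near the low-privilege signature should be perceived as such.
belief = perceived_state_distribution((0.15, 0.25), signatures)
```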
Detile: Fine-Grained Information Leak Detection in Script Engines
Memory disclosure attacks play an important role in the exploitation of
memory corruption vulnerabilities. By analyzing recent research, we observe
that bypasses of defensive solutions that enforce control-flow integrity or
attempt to detect return-oriented programming require memory disclosure attacks
as a fundamental first step. However, research lags behind in detecting such
information leaks.
In this paper, we tackle this problem and present a system for fine-grained,
automated detection of memory disclosure attacks against scripting engines. The
basic insight is as follows: scripting languages, such as JavaScript in web
browsers, are strictly sandboxed. They must not provide any insights about the
memory layout in their contexts. In fact, any such information potentially
represents an ongoing memory disclosure attack. Hence, to detect information
leaks, our system creates a clone of the scripting engine process with a
re-randomized memory layout. The clone is instrumented to be synchronized with
the original process. Any inconsistency in the script contexts of both
processes appears when a memory disclosure was conducted to leak information
about the memory layout. Based on this detection approach, we have designed and
implemented Detile (\underline{det}ection of \underline{i}nformation
\underline{le}aks), a prototype for the JavaScript engine in Microsoft's
Internet Explorer 10/11 on Windows 8.0/8.1. An empirical evaluation shows that
our tool can successfully detect memory disclosure attacks even against this
proprietary software.
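The clone-and-compare detection logic can be sketched as follows (a simplified model, not Detile's implementation; the script engine, the leak primitive, and the base addresses are stand-ins):

```python
def run_script(script, module_base):
    """Stand-in for a script engine. A hypothetical disclosure primitive
    exposes an address that depends on the engine's (randomized) base."""
    env = {"leak_addr": lambda: module_base + 0x1337, "result": None}
    exec(script, env)
    return env["result"]

def leaks_layout_info(script):
    """Detile's core check, sketched: run the script in the 'original' and
    in a clone with a re-randomized layout; if any script-visible value
    differs, the script observed (i.e. leaked) layout-dependent data."""
    base_a, base_b = 0x00400000, 0x7F3A0000   # two different ASLR outcomes
    return run_script(script, base_a) != run_script(script, base_b)

benign = leaks_layout_info("result = 2 + 3")          # pure computation
attack = leaks_layout_info("result = leak_addr()")    # disclosure attempt
```

A benign script computes identical values in both processes; only a script that observed the memory layout diverges.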