A software approach to defeating side channels in last-level caches
We present a software approach to mitigate access-driven side-channel attacks
that leverage last-level caches (LLCs) shared across cores to leak information
between security domains (e.g., tenants in a cloud). Our approach dynamically
manages physical memory pages shared between security domains to disable
sharing of LLC lines, thus preventing "Flush-Reload" side channels via LLCs. It
also manages cacheability of memory pages to thwart cross-tenant "Prime-Probe"
attacks in LLCs. We have implemented our approach as a memory management
subsystem called CacheBar within the Linux kernel to intervene on such side
channels across container boundaries, as containers are a common method for
enforcing tenant isolation in Platform-as-a-Service (PaaS) clouds. Through
formal verification, principled analysis, and empirical evaluation, we show
that CacheBar achieves strong security with small performance overheads for
PaaS workloads.
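The core ideas in this abstract, Flush+Reload leaking through a physically shared cache line, and the defense of disabling cross-domain sharing so that each domain gets its own copy, can be illustrated with a toy model. The cache model and all names below are illustrative, not CacheBar's actual implementation; a real attack times a memory reload rather than querying set membership.

```python
# Toy model of a Flush+Reload channel over a shared cache line, and how
# giving each security domain its own physical copy of a page (the idea
# behind CacheBar's shared-page management) removes the signal.
# All names and the cache model are illustrative assumptions, not CacheBar code.

class Cache:
    """Models an LLC as the set of currently cached physical line addresses."""
    def __init__(self):
        self.lines = set()

    def access(self, line):
        self.lines.add(line)          # loading a line caches it

    def flush(self, line):
        self.lines.discard(line)      # clflush-like: evict the line

    def is_cached(self, line):
        return line in self.lines     # a fast reload <=> line is present


def flush_reload_bit(cache, probe_line, victim_line, victim_accesses):
    """One Flush+Reload round: flush, let the victim run, then 'time' the reload."""
    cache.flush(probe_line)
    if victim_accesses:
        cache.access(victim_line)
    return cache.is_cached(probe_line)   # True = attacker infers a victim access


cache = Cache()

# Shared page: the attacker's probe line IS the victim's line -> the bit leaks.
SHARED = "phys:0xA000"
leak = flush_reload_bit(cache, SHARED, SHARED, victim_accesses=True)

# Sharing disabled: each domain maps its own physical copy -> no signal,
# even though the victim performs the same access.
ATTACKER_COPY, VICTIM_COPY = "phys:0xB000", "phys:0xC000"
no_leak = flush_reload_bit(cache, ATTACKER_COPY, VICTIM_COPY, victim_accesses=True)

print(leak, no_leak)   # True False
```

With one physical line shared across domains the probe observes the victim's access; with per-domain copies the attacker can only flush and reload its own line, which carries no information about the victim.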
Undermining User Privacy on Mobile Devices Using AI
Over the past years, literature has shown that attacks exploiting the
microarchitecture of modern processors pose a serious threat to the privacy of
mobile phone users. This is because applications leave distinct footprints in
the processor, which can be used by malware to infer user activities. In this
work, we show that these inference attacks are considerably more practical when
combined with advanced AI techniques. In particular, we focus on profiling the
activity in the last-level cache (LLC) of ARM processors. We employ a simple
Prime+Probe based monitoring technique to obtain cache traces, which we
classify with Deep Learning methods including Convolutional Neural Networks. We
demonstrate our approach on an off-the-shelf Android phone by launching a
successful attack from an unprivileged, zero-permission App in well under a
minute. The App thereby detects running applications with an accuracy of 98%
and reveals opened websites and streaming videos by monitoring the LLC for at
most 6 seconds. This is possible because Deep Learning compensates for
measurement disturbances stemming from the inherently noisy LLC monitoring and
from unfavorable cache characteristics such as random line-replacement
policies. In summary, our
results show that thanks to advanced AI techniques, inference attacks are
becoming alarmingly easy to implement and execute in practice. This once more
calls for countermeasures that confine microarchitectural leakage and protect
mobile phone applications, especially those valuing the privacy of their users
- …
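The Prime+Probe monitoring step described above, and the noise introduced by random line replacement on ARM LLCs, can be sketched with a toy set-associative model. This is a simplified simulation under assumed parameters (a single 4-way set, random replacement); a real attack times probe accesses across many sets and feeds the resulting traces to a classifier.

```python
# Toy Prime+Probe round over one w-way cache set. The attacker primes the
# set with its own lines, lets the victim run, then probes and counts misses.
# Random replacement (common on ARM LLCs) is modeled, which is one source of
# the measurement noise the paper's Deep Learning classifier absorbs.
# All names and parameters are illustrative assumptions.

import random


class CacheSet:
    """One w-way cache set with random replacement."""
    def __init__(self, ways, rng):
        self.ways, self.rng, self.lines = ways, rng, []

    def access(self, line):
        if line in self.lines:
            return True                                        # hit
        if len(self.lines) == self.ways:
            self.lines.pop(self.rng.randrange(self.ways))      # random eviction
        self.lines.append(line)
        return False                                           # miss


def prime_probe(cacheset, attacker_lines, victim_lines):
    """Prime the set, let the victim run, probe and count attacker misses."""
    for l in attacker_lines:
        cacheset.access(l)          # prime: fill the set with attacker lines
    for l in victim_lines:
        cacheset.access(l)          # victim activity evicts some attacker lines
    return sum(not cacheset.access(l) for l in attacker_lines)  # probe: misses


rng = random.Random(0)              # fixed seed for a reproducible run
ways = 4
attacker = [f"attacker:{i}" for i in range(ways)]

idle = prime_probe(CacheSet(ways, rng), attacker, [])
busy = prime_probe(CacheSet(ways, rng), attacker, ["victim:0", "victim:1"])
print(idle, busy)   # idle victim -> 0 probe misses; active victim -> at least 1
```

An idle victim leaves the primed set untouched, so the probe sees only hits; an active victim evicts attacker lines, so the probe miss count rises. Because replacement is random, the exact miss count varies from round to round, which is exactly the kind of noise a CNN-based classifier can learn to tolerate.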