1,328 research outputs found
Undermining User Privacy on Mobile Devices Using AI
Over the past years, literature has shown that attacks exploiting the
microarchitecture of modern processors pose a serious threat to the privacy of
mobile phone users. This is because applications leave distinct footprints in
the processor, which can be used by malware to infer user activities. In this
work, we show that these inference attacks are considerably more practical when
combined with advanced AI techniques. In particular, we focus on profiling the
activity in the last-level cache (LLC) of ARM processors. We employ a simple
Prime+Probe based monitoring technique to obtain cache traces, which we
classify with Deep Learning methods including Convolutional Neural Networks. We
demonstrate our approach on an off-the-shelf Android phone by launching a
successful attack from an unprivileged, zero-permission App in well under a
minute. The App thereby detects running applications with an accuracy of 98%
and reveals opened websites and streaming videos by monitoring the LLC for at
most 6 seconds. This is possible because Deep Learning compensates for
measurement disturbances stemming from the inherently noisy LLC monitoring and
unfavorable
cache characteristics such as random line replacement policies. In summary, our
results show that thanks to advanced AI techniques, inference attacks are
becoming alarmingly easy to implement and execute in practice. This once more
calls for countermeasures that confine microarchitectural leakage and protect
mobile phone applications, especially those valuing the privacy of their users.
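The Prime+Probe loop at the core of such monitoring can be sketched as follows. This is a structural sketch only: a real attack requires native code, eviction sets derived from physical addresses, and a cycle-accurate timer; Python's timing is far too coarse, and all names and parameters here are illustrative, not the paper's implementation.

```python
import time

def prime_probe_trace(eviction_set, samples, interval_s=0.0):
    """Structural sketch of Prime+Probe cache-set monitoring.

    eviction_set: buffers standing in for cache lines that map to the
    monitored set. Each sample primes the set, optionally waits for the
    victim, then times a probe pass; slower probes suggest the victim
    touched the set in between.
    """
    trace = []
    for _ in range(samples):
        for line in eviction_set:          # prime: fill the cache set
            line[0] = 1
        if interval_s:
            time.sleep(interval_s)         # let the victim run
        start = time.perf_counter_ns()
        total = 0
        for line in eviction_set:          # probe: time the re-accesses
            total += line[0]
        trace.append(time.perf_counter_ns() - start)
    return trace

# Sketch of an 8-way set: eight dummy 64-byte "lines"
trace = prime_probe_trace([bytearray(64) for _ in range(8)], samples=100)
```

The resulting trace is the raw input that the paper's Deep Learning classifiers would consume; each sample is one probe-time measurement for the monitored set.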
Impact of traffic mix on caching performance in a content-centric network
For a realistic traffic mix, we evaluate the hit rates attained in a
two-layer cache hierarchy designed to reduce Internet bandwidth requirements.
The model identifies four main types of content: web, file sharing,
user-generated content, and video on demand, distinguished in terms of their
traffic shares, their population and object sizes, and their popularity
distributions.
Results demonstrate that caching VoD in access routers offers a highly
favorable bandwidth-memory tradeoff, but that the other types of content would
likely be more efficiently handled in very large capacity storage devices in
the core. Evaluations are based on a simple approximation for LRU cache
performance that proves highly accurate in relevant configurations.
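The effect of a traffic mix on LRU hit rate can be illustrated with a minimal simulation. The two content classes, their population sizes, and the cache capacity below are hypothetical stand-ins (a small, concentrated "VoD-like" catalog versus a long-tail "web-like" one), not the paper's calibrated model.

```python
import random
from collections import OrderedDict

def lru_hit_rate(requests, capacity):
    """Simulate an LRU cache over a request stream; return the hit fraction."""
    cache, hits = OrderedDict(), 0
    for obj in requests:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)         # refresh recency on a hit
        else:
            cache[obj] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used
    return hits / len(requests)

random.seed(0)
# Hypothetical mix: 100-object "VoD" catalog vs 100,000-object "web" tail
vod = [("vod", random.randint(0, 99)) for _ in range(5000)]
web = [("web", random.randint(0, 99999)) for _ in range(5000)]
mix = vod + web
random.shuffle(mix)
rate = lru_hit_rate(mix, capacity=500)
```

Even this toy setup reproduces the qualitative point: nearly all hits come from the small, popular catalog, while the long-tail class mostly churns the cache.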
Cache policies for cloud-based systems: To keep or not to keep
In this paper, we study cache policies for cloud-based caching. Cloud-based
caching uses cloud storage services such as Amazon S3 as a cache for data items
that would have been recomputed otherwise. Cloud-based caching departs from
classical caching: cloud resources are potentially infinite and only paid when
used, while classical caching relies on a fixed storage capacity and its main
monetary cost comes from the initial investment. To deal with this new context,
we design and evaluate a new caching policy that minimizes the overall cost of
a cloud-based system. The policy takes into account the frequency of
consumption of an item and the cloud cost model. We show that this policy is
easier to operate, that it scales with the demand and that it outperforms
classical policies managing a fixed capacity.
Comment: Proceedings of IEEE International Conference on Cloud Computing 2014
(CLOUD '14)
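The keep-or-evict tradeoff such a policy balances can be sketched as an expected-cost comparison. This is a hypothetical cost model assuming Poisson request arrivals; the function and its parameters are illustrative assumptions, not the paper's actual policy.

```python
def keep_in_cloud_cache(request_rate, size_gb, storage_price_gb_month,
                        recompute_cost):
    """Decide whether keeping an item in pay-per-use cloud storage beats
    recomputing it on its next request (hypothetical cost model).

    request_rate: requests per month; under a Poisson assumption the
    expected time to the next request is 1/request_rate months.
    Keeping costs storage for that interval; evicting costs one
    recomputation when the item is next needed.
    """
    expected_keep_cost = size_gb * storage_price_gb_month / request_rate
    return expected_keep_cost < recompute_cost

# e.g. a 1 GB item requested 10x/month at $0.02/GB-month vs $0.50 to recompute
keep = keep_in_cloud_cache(10.0, 1.0, 0.02, 0.5)
```

The comparison captures the abstract's key departure from classical caching: with effectively infinite pay-per-use capacity, the decision is per-item and monetary, driven by request frequency and the cloud cost model rather than by contention for fixed space.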
A versatile and accurate approximation for LRU cache performance
In a 2002 paper, Che and co-authors proposed a simple approach for estimating
the hit rates of a cache operating the least recently used (LRU) replacement
policy. The approximation proves remarkably accurate and is applicable to quite
general distributions of object popularity. This paper provides a mathematical
explanation for the success of the approximation, notably in configurations
where the intuitive arguments of Che et al. clearly do not apply. The
approximation is particularly useful in evaluating the performance of current
proposals for an information centric network where other approaches fail due to
the very large populations of cacheable objects to be taken into account and to
their complex popularity law, resulting from the mix of different content types
and the filtering effect induced by the lower layers in a cache hierarchy.
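The approximation itself is straightforward to implement: solve for a characteristic time t at which the expected cache occupancy equals the capacity, then read off per-object hit probabilities. A minimal sketch under the independent reference model, with a Zipf popularity law chosen here purely as an example:

```python
import math

def che_approximation(rates, capacity):
    """Estimate per-object LRU hit probabilities via the Che approximation.

    rates: per-object request rates lambda_i (independent reference model).
    capacity: cache size C in objects.
    Solves sum_i (1 - exp(-lambda_i * t)) = C for the characteristic time t
    by bisection, then returns h_i = 1 - exp(-lambda_i * t).
    """
    def expected_occupancy(t):
        return sum(1.0 - math.exp(-lam * t) for lam in rates)

    lo, hi = 0.0, 1.0
    while expected_occupancy(hi) < capacity:  # bracket the root
        hi *= 2.0
    for _ in range(100):                      # bisect to high precision
        mid = (lo + hi) / 2.0
        if expected_occupancy(mid) < capacity:
            lo = mid
        else:
            hi = mid
    t_c = (lo + hi) / 2.0
    return [1.0 - math.exp(-lam * t_c) for lam in rates]

# Example: Zipf(0.8) popularity over 10,000 objects, cache holds 100
n = 10_000
rates = [1.0 / (i + 1) ** 0.8 for i in range(n)]
hits = che_approximation(rates, 100)
overall = sum(l * h for l, h in zip(rates, hits)) / sum(rates)
```

Its appeal for the information-centric-network evaluations mentioned above is visible here: the cost is one fixed-point solve over the popularity law, so very large object populations and mixed content types pose no difficulty.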