
    A Community-based Cloud Computing Caching Service

    Caching has become an important technology in the development of cloud computing-based high-performance web services. Caches reduce the request-to-response latency experienced by users and reduce the workload on backend databases. They need a high cache-hit rate to be fit for purpose, and this rate depends on the cache management policy used. Existing cache management policies are not designed to prevent cache pollution or cache monopoly problems, which negatively impacts the cache-hit rate. This paper proposes a community-based caching approach (CC) to address these two problems. CC was evaluated for performance against thirteen commercially available cache management policies, and the results demonstrate that the cache-hit rate achieved by CC was between 0.7% and 55% better than that of the alternative cache management policies.
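    The abstract's central metric is the cache-hit rate produced by a given management policy. As a point of reference only (this is not the paper's CC policy), the sketch below replays a request trace through a plain LRU cache, one of the common baseline policies, and reports its hit rate; the trace and capacity values are illustrative assumptions.

    ```python
    from collections import OrderedDict

    def lru_hit_rate(requests, capacity):
        """Replay a request trace through a fixed-capacity LRU cache
        and return the fraction of requests served from the cache."""
        cache = OrderedDict()           # key -> None, ordered by recency
        hits = 0
        for key in requests:
            if key in cache:
                hits += 1
                cache.move_to_end(key)  # refresh recency on a hit
            else:
                cache[key] = None       # admit the missed item
                if len(cache) > capacity:
                    cache.popitem(last=False)  # evict the least recently used item
        return hits / len(requests) if requests else 0.0

    # Hypothetical skewed trace: a few popular items dominate the requests
    trace = [1, 2, 1, 3, 1, 2, 4, 1, 5, 2, 1, 6, 1, 2, 3]
    print(f"LRU hit rate: {lru_hit_rate(trace, capacity=3):.2%}")
    ```

    Comparing such baseline hit rates against a candidate policy over the same trace is the kind of evaluation the paper reports against its thirteen commercial policies.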

    Undermining User Privacy on Mobile Devices Using AI

    Over the past years, the literature has shown that attacks exploiting the microarchitecture of modern processors pose a serious threat to the privacy of mobile phone users. This is because applications leave distinct footprints in the processor, which malware can use to infer user activities. In this work, we show that these inference attacks become considerably more practical when combined with advanced AI techniques. In particular, we focus on profiling the activity in the last-level cache (LLC) of ARM processors. We employ a simple Prime+Probe-based monitoring technique to obtain cache traces, which we classify with Deep Learning methods including Convolutional Neural Networks. We demonstrate our approach on an off-the-shelf Android phone by launching a successful attack from an unprivileged, zero-permission App in well under a minute. The App thereby detects running applications with an accuracy of 98% and reveals opened websites and streaming videos by monitoring the LLC for at most 6 seconds. This is possible because Deep Learning compensates for measurement disturbances stemming from the inherently noisy LLC monitoring and from unfavorable cache characteristics such as random line replacement policies. In summary, our results show that, thanks to advanced AI techniques, inference attacks are becoming alarmingly easy to implement and execute in practice. This once more calls for countermeasures that confine microarchitectural leakage and protect mobile phone applications, especially those valuing the privacy of their users.
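    The described pipeline classifies Prime+Probe cache traces with a convolutional network. The sketch below is not the authors' implementation; it is a minimal, assumed 1D-CNN in PyTorch that maps a trace of per-set probe latencies over time to an application label. The trace dimensions, class count, and architecture are illustrative placeholders, and the random tensor stands in for measured traces.

    ```python
    import torch
    import torch.nn as nn

    NUM_SETS, NUM_SAMPLES, NUM_APPS = 64, 256, 10  # assumed trace dimensions and label count

    class TraceCNN(nn.Module):
        """Toy 1D CNN over cache traces: LLC sets as channels, time as the 1D axis."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(NUM_SETS, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # collapse the time axis
            )
            self.classifier = nn.Linear(64, NUM_APPS)

        def forward(self, x):              # x: (batch, NUM_SETS, NUM_SAMPLES)
            return self.classifier(self.features(x).squeeze(-1))

    model = TraceCNN()
    fake_traces = torch.randn(8, NUM_SETS, NUM_SAMPLES)  # stand-in for Prime+Probe measurements
    logits = model(fake_traces)
    print(logits.argmax(dim=1))            # predicted application label per trace
    ```

    In a real attack the input would be latency measurements gathered by the monitoring App, and the network would be trained on labeled traces of known applications before being used for inference.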