
    On Resource Pooling and Separation for LRU Caching

    Caching systems using the Least Recently Used (LRU) principle have become ubiquitous. A fundamental question for these systems is whether the cache space should be pooled together or divided when serving multiple flows of data item requests, in order to minimize the miss probabilities. In this paper, we show that there is no simple yes-or-no answer to this question: the answer depends on complex combinations of critical factors, including request rates, overlapped data items across different request flows, data item popularities, and data item sizes. Specifically, we characterize the asymptotic miss probabilities for multiple competing request flows under resource pooling and separation for LRU caching when the cache size is large. Analytically, we show that it is asymptotically optimal to jointly serve multiple flows if their data item sizes and popularity distributions are similar and their arrival rates do not differ significantly; the self-organizing property of LRU caching automatically optimizes the resource allocation among them asymptotically. Otherwise, separating these flows could be better, e.g., when data sizes vary significantly. We also quantify critical points beyond which resource pooling is better than separation for each of the flows when the overlapped data items exceed certain levels. Technically, we generalize existing results on the asymptotic miss probability of LRU caching for a broad class of heavy-tailed distributions and extend them to multiple competing flows with varying data item sizes, which also validates the Che approximation under certain conditions. These results provide new insights into improving the performance of caching systems.
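
    The pooling-versus-separation trade-off described above can be illustrated with a small Monte Carlo sketch: two request flows with Zipf-like popularity are served either by one pooled LRU cache of size 2C or by two separate caches of size C each, and the resulting miss ratios are compared. The flow parameters (popularity skew, arrival shares, catalogue sizes) below are illustrative assumptions, not values from the paper.

    import random
    from bisect import bisect
    from itertools import accumulate
    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU cache that counts hits and misses."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()
            self.hits = 0
            self.misses = 0

        def access(self, key):
            if key in self.store:
                self.store.move_to_end(key)             # refresh recency on a hit
                self.hits += 1
            else:
                self.misses += 1
                self.store[key] = True
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)       # evict least recently used

        def miss_ratio(self):
            return self.misses / (self.hits + self.misses)

    def zipf_cdf(n_items, alpha):
        """Cumulative weights of a truncated Zipf(alpha) popularity law."""
        return list(accumulate(1.0 / (i + 1) ** alpha for i in range(n_items)))

    def simulate(requests=200_000, n_items=5_000, cache_size=500, seed=1):
        rng = random.Random(seed)
        flow_alpha = [0.9, 0.6]        # assumed popularity skew per flow
        flow0_share = 0.8              # assumed fraction of requests from flow 0
        cdfs = [zipf_cdf(n_items, a) for a in flow_alpha]
        pooled = LRUCache(2 * cache_size)
        separate = [LRUCache(cache_size), LRUCache(cache_size)]
        for _ in range(requests):
            flow = 0 if rng.random() < flow0_share else 1
            cdf = cdfs[flow]
            item = bisect(cdf, rng.random() * cdf[-1])   # inverse-CDF Zipf draw
            key = (flow, item)                           # disjoint item spaces for simplicity
            pooled.access(key)                           # one shared cache of size 2C
            separate[flow].access(key)                   # or one cache of size C per flow
        print(f"pooled miss ratio:    {pooled.miss_ratio():.3f}")
        print(f"separate miss ratios: {separate[0].miss_ratio():.3f}, "
              f"{separate[1].miss_ratio():.3f}")

    if __name__ == "__main__":
        simulate()

    Varying the skew and arrival shares reproduces the qualitative effect the abstract describes: pooling tends to win when the flows look alike, while strongly dissimilar flows can favour separation.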

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an “insider's” view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, improve overlay stability and load balancing, and offer improved peer transfer performance.
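
    A hypothetical sketch of the modular structure described above: an Active Virtual Peer composed of pluggable per-protocol modules, with transit-traffic minimisation shown as one example module. The class and method names are illustrative assumptions, not the thesis's API.

    class OverlayModule:
        """A pluggable AVP function, e.g. overlay control or content caching."""
        def handle(self, message):
            raise NotImplementedError

    class TransitTrafficMinimiser(OverlayModule):
        """Keeps overlay traffic inside the provider's own AS where possible."""
        def __init__(self, local_as):
            self.local_as = local_as

        def handle(self, message):
            peers = message.get("candidate_peers", [])
            local = [p for p in peers if p.get("asn") == self.local_as]
            message["candidate_peers"] = local or peers   # fall back if no local peer exists
            return message

    class ActiveVirtualPeer:
        """Runs overlay messages through a chain of management modules."""
        def __init__(self, modules):
            self.modules = list(modules)

        def process(self, message):
            for module in self.modules:
                message = module.handle(message)
                if message is None:            # a module may drop redundant traffic
                    return None
            return message

    avp = ActiveVirtualPeer([TransitTrafficMinimiser(local_as=64512)])
    print(avp.process({"candidate_peers": [{"asn": 64512}, {"asn": 3356}]}))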

    CACHE MANAGEMENT SCHEMES FOR USER EQUIPMENT CONTEXTS IN 5TH GENERATION CLOUD RADIO ACCESS NETWORKS

    Cellular network technology continues to advance to meet increasing demands from the growing number of devices in the Internet of Things (IoT). IoT has brought forth countless new devices competing for service on cellular networks. The latest cellular technology is the 5th Generation Cloud Radio Access Network (5G C-RAN), an architectural design created specifically to meet new requirements for better performance, reduced service latency, and scalability. Because this design includes a virtual cache, effective cache management schemes and protocols are needed, which ultimately give users better performance on the cellular network. This paper explores several cache management schemes and compares their performance. They include a probability-based scoring scheme for cache elements; a hierarchical, or tiered, approach that separates the cache into different levels or sections; and enhancements to existing approaches, including reverse random marking and a scheme based on an exponential decay model. These schemes aim to offer better hit ratios, reduced request-service latency, preferential treatment based on users’ service levels and mobility, and a reduction in network traffic compared to traditional caching mechanisms.
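
    As an illustration of one idea mentioned above, the sketch below scores cached user-equipment contexts with an exponentially decayed hit count and evicts the lowest-scoring entry when the cache is full. The half-life, scoring rule, and names are assumptions for illustration, not the paper's exact scheme.

    import math
    import time

    class DecayScoredCache:
        """Cache whose entries carry an exponentially decayed hit score."""
        def __init__(self, capacity, half_life_s=60.0):
            self.capacity = capacity
            self.decay = math.log(2) / half_life_s     # score halves every half_life_s seconds
            self.entries = {}                           # key -> (value, score, last_update)

        def _decayed(self, score, last_update, now):
            return score * math.exp(-self.decay * (now - last_update))

        def get(self, key):
            now = time.monotonic()
            if key not in self.entries:
                return None                             # miss: caller fetches the UE context
            value, score, last_update = self.entries[key]
            # A hit adds 1 to the decayed score, so recent and frequent contexts rank high.
            self.entries[key] = (value, self._decayed(score, last_update, now) + 1.0, now)
            return value

        def put(self, key, value):
            now = time.monotonic()
            if key not in self.entries and len(self.entries) >= self.capacity:
                victim = min(self.entries, key=lambda k: self._decayed(
                    self.entries[k][1], self.entries[k][2], now))
                del self.entries[victim]                # evict the lowest-scoring context
            self.entries[key] = (value, 1.0, now)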

    A holistic architecture using peer to peer (P2P) protocols for the internet of things and wireless sensor networks

    Wireless Sensor Networks (WSNs) interact with the physical world using sensing and/or actuation. The wireless capability of WSN nodes allows them to be deployed close to the sensed phenomenon. Cheaper processing power and the use of micro IP stacks allow nodes to form an “Internet of Things” (IoT), integrating the physical world with the Internet in a distributed system of devices and applications. Applications using the sensor data may be located across the Internet from the sensor network, allowing Cloud services and Big Data approaches to store and analyse this data in a scalable manner, supported by new approaches in the area of fog and edge computing. Furthermore, the use of protocols such as the Constrained Application Protocol (CoAP) and data models such as IPSO Smart Objects have supported the adoption of IoT in a range of scenarios. IoT has the potential to become a realisation of Mark Weiser’s vision of ubiquitous computing, where tiny networked computers become woven into everyday life. This presents the challenge of being able to scale the technology down to resource-constrained devices and to scale it up to billions of devices. This will require seamless interoperability and abstractions that can support applications on Cloud services and also on node devices with constrained computing and memory capabilities, limited development environments and requirements on energy consumption. This thesis proposes a holistic architecture using concepts from tuple-spaces and overlay Peer-to-Peer (P2P) networks. This architecture is termed holistic because it considers the flow of data from sensors through to services. The key contributions of this work are: development of a set of architectural abstractions to provide application-layer interoperability, a novel cache algorithm supporting leases, a tuple-space based data store for local and remote data, and a P2P protocol with an innovative use of a distributed hash table (DHT) in building an overlay network. All these elements are designed for implementation on a resource-constrained node and to be extensible to server environments, which is shown in a prototype implementation. This provides the basis for a new P2P holistic approach that will allow Wireless Sensor Networks and IoT to operate in a self-organising ad hoc manner in order to deliver the promise of IoT.
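
    The "cache algorithm supporting leases" mentioned above can be pictured with a minimal sketch: each cached sensor value carries a lease (expiry time) and is served only while the lease is valid; on expiry the value is re-fetched and re-leased. Names and the refresh policy are assumptions, not the thesis's implementation.

    import time

    class LeasedCache:
        def __init__(self):
            self.entries = {}                     # key -> (value, lease_expiry)

        def put(self, key, value, lease_s):
            self.entries[key] = (value, time.monotonic() + lease_s)

        def get(self, key, fetch):
            """Return a cached value while its lease holds, else re-fetch and re-lease."""
            entry = self.entries.get(key)
            if entry and entry[1] > time.monotonic():
                return entry[0]                   # lease still valid: serve from cache
            value, lease_s = fetch(key)           # lease expired: ask the (remote) node
            self.put(key, value, lease_s)
            return value

    # Usage: a constrained node could pass a CoAP request function as `fetch`.
    cache = LeasedCache()
    value = cache.get("/sensors/temp1", fetch=lambda key: (21.5, 30.0))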

    Doctor of Philosophy

    Computer programs have complex interactions with their underlying hardware, exhibiting complex behaviors as a result. It is critical to understand these programs, as they serve an important role: researchers use them to express new ideas in computer science, while many others derive production value from them. In both cases, program understanding leads to mastery over these functions, adding value to human endeavors. Memory behavior is one of the hallmarks of general program behavior: it represents the critical function of retrieving data for the program to work on; it often reflects the overall actions taken by the program, providing a signature of program behavior; and it is often an important performance bottleneck, as the memory subsystem is typically much slower than the processor. These reasons justify an investigation into the memory behavior of programs. A memory reference trace is a list of memory transactions performed by a program at runtime: a rich data source capturing the whole of a program's interaction with the memory subsystem, and a clear starting point for investigating program memory behavior. However, such a trace is extremely difficult to interpret by mere inspection, as it consists solely of many, many addresses and operation codes, without any further structure or context. This dissertation proposes to use visualization to construct images and animations of the data within a reference trace, thereby visually conveying structures and events encoded in the trace. These visualization approaches are designed with different focuses, meant to expose various aspects of the trace. For instance, the time dimension of the reference traces can be handled either with animation, showing events as they occur, or by laying time out in a spatial dimension, giving a view of the entire history of the trace at once. The approaches also vary in their level of abstraction from the hardware: some are concretely connected to representations of the memory itself, while others are more free-form, using more abstract metaphors to highlight general behaviors and patterns, which in turn characterize the program behavior. Each approach delivers its own set of insights, as demonstrated in this dissertation.
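
    The idea of laying time out spatially can be illustrated with a small sketch: each memory reference is plotted at its trace index on the x-axis and its address on the y-axis, so loops, strides, and working sets show up as visual structure. The trace format assumed here (one "R/W hex-address" record per line) is an illustrative assumption, not the dissertation's format.

    import matplotlib.pyplot as plt

    def load_trace(path):
        times, addrs, ops = [], [], []
        with open(path) as f:
            for t, line in enumerate(f):
                op, addr = line.split()          # e.g. "R 0x7ffde4a1c010"
                times.append(t)
                addrs.append(int(addr, 16))
                ops.append(op)
        return times, addrs, ops

    def plot_trace(path):
        times, addrs, ops = load_trace(path)
        colors = ["tab:blue" if op == "R" else "tab:red" for op in ops]
        plt.scatter(times, addrs, s=1, c=colors)  # time laid out along x, address along y
        plt.xlabel("reference index (time)")
        plt.ylabel("address")
        plt.title("Memory reference trace")
        plt.show()

    if __name__ == "__main__":
        plot_trace("trace.txt")                   # hypothetical trace file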

    Flexible allocation and space management in storage systems

    In this dissertation, we examine some of the challenges faced by emerging networked storage systems. We focus on two main issues. First, current file systems allocate storage statically at the time of their creation. This results in many suboptimal scenarios, for example: (a) space on the disk is not allocated well across multiple file systems, and (b) data is not organized well for typical access patterns. We propose Virtual Allocation for flexible storage allocation. Virtual allocation separates storage allocation from the file system. It employs an allocate-on-write strategy, which claims physical storage only when data is actually written, so that space consumption follows actual usage rather than the configured file system size. This improves flexibility by allowing storage space to be shared across different file systems. We present the design of virtual allocation and evaluate it through benchmarks on a prototype system on Linux. Second, based on virtual allocation, we consider the problem of balancing locality and load in networked storage systems with multiple storage devices (or bricks). Data distribution affects both locality and load balance across the devices in a networked storage system. We propose a user-optimal data migration scheme that balances locality and load in such systems. The presented approach automatically and transparently migrates data blocks among disks as data access patterns and loads change over time. We built a prototype system on Linux and present the design of user-optimal migration and an evaluation of it through realistic experiments.
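
    A minimal sketch of the allocate-on-write idea described above: a logical volume reports its configured size, but physical blocks are claimed from a shared pool only when a logical block is first written. The class and variable names are illustrative assumptions, not the dissertation's implementation.

    class SharedPool:
        """Physical blocks shared by all virtual volumes."""
        def __init__(self, total_blocks):
            self.free = list(range(total_blocks))

        def allocate(self):
            if not self.free:
                raise RuntimeError("shared storage pool exhausted")
            return self.free.pop()

    class VirtualVolume:
        def __init__(self, configured_blocks, pool):
            self.configured_blocks = configured_blocks   # size the file system sees
            self.pool = pool
            self.mapping = {}                            # logical block -> physical block

        def write(self, logical_block, data, store):
            if logical_block not in self.mapping:        # allocate on first write only
                self.mapping[logical_block] = self.pool.allocate()
            store[self.mapping[logical_block]] = data

        def read(self, logical_block, store):
            phys = self.mapping.get(logical_block)
            return store.get(phys) if phys is not None else b"\x00"   # never-written block

    # Two "file systems" of 1000 logical blocks each share a 1200-block pool;
    # physical space is consumed only by blocks that are actually written.
    pool, store = SharedPool(1200), {}
    fs_a, fs_b = VirtualVolume(1000, pool), VirtualVolume(1000, pool)
    fs_a.write(0, b"hello", store)
    print(len(pool.free))    # 1199: only one physical block consumed so far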

    Undermining User Privacy on Mobile Devices Using AI

    Over the past years, literature has shown that attacks exploiting the microarchitecture of modern processors pose a serious threat to the privacy of mobile phone users. This is because applications leave distinct footprints in the processor, which can be used by malware to infer user activities. In this work, we show that these inference attacks are considerably more practical when combined with advanced AI techniques. In particular, we focus on profiling the activity in the last-level cache (LLC) of ARM processors. We employ a simple Prime+Probe based monitoring technique to obtain cache traces, which we classify with Deep Learning methods including Convolutional Neural Networks. We demonstrate our approach on an off-the-shelf Android phone by launching a successful attack from an unprivileged, zero-permission App in well under a minute. The App thereby detects running applications with an accuracy of 98% and reveals opened websites and streaming videos by monitoring the LLC for at most 6 seconds. This is possible since Deep Learning compensates for measurement disturbances stemming from the inherently noisy LLC monitoring and unfavorable cache characteristics such as random line replacement policies. In summary, our results show that, thanks to advanced AI techniques, inference attacks are becoming alarmingly easy to implement and execute in practice. This once more calls for countermeasures that confine microarchitectural leakage and protect mobile phone applications, especially those valuing the privacy of their users.
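
    A hypothetical sketch of the classification step: a small 1D convolutional network that labels a Prime+Probe trace (probe latencies per cache set over time) with the running application. The architecture, tensor shapes, and class count are assumptions for illustration; the paper's exact model, measurement code, and dataset are not reproduced here.

    import torch
    import torch.nn as nn

    class TraceCNN(nn.Module):
        def __init__(self, n_sets=256, n_classes=20):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_sets, 64, kernel_size=5, padding=2),   # cache sets as input channels
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(64, 128, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),                           # collapse the time axis
            )
            self.classifier = nn.Linear(128, n_classes)

        def forward(self, x):                  # x: (batch, n_sets, time_steps)
            return self.classifier(self.features(x).squeeze(-1))

    model = TraceCNN()
    dummy_trace = torch.randn(8, 256, 400)     # 8 traces, 256 cache sets, 400 probe rounds
    logits = model(dummy_trace)
    print(logits.shape)                        # torch.Size([8, 20]): one score per app label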