
    An Infrastructure for the Dynamic Distribution of Web Applications and Services

    This paper presents the design and implementation of an infrastructure that enables any Web application, regardless of its current state, to be stopped and uninstalled from a particular server, transferred to a new server, then installed, loaded, and resumed, with all of these events occurring "on the fly" and completely transparently to clients. Such functionality allows entire applications to move fluidly from server to server, reducing the overhead required to administer the system and improving its performance in a number of ways: (1) dynamically replicating new instances of an application across several servers to raise throughput for scalability, (2) moving applications between servers to achieve load balancing or other resource-management goals, and (3) caching entire applications on servers located closer to clients. Funded by the National Science Foundation (9986397).
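    The migration lifecycle described above (stop, capture state, transfer, resume) can be pictured with a minimal sketch. The names below (MigratableApp, migrate, install_on_target) and the use of pickle for the state snapshot are hypothetical illustrations, not the paper's actual infrastructure or serialization format.

```python
# Minimal sketch of a stop/transfer/resume migration cycle; all names are
# hypothetical and the state format is a simplifying assumption.
import pickle


class MigratableApp:
    """An application whose full runtime state can be captured and restored."""

    def __init__(self, state=None):
        self.state = state or {}

    def stop(self):
        """Quiesce the application and capture its state as a snapshot."""
        # A real infrastructure would also drain in-flight client requests here.
        return pickle.dumps(self.state)

    @classmethod
    def resume(cls, snapshot):
        """Re-create the application on a new server from its snapshot."""
        return cls(state=pickle.loads(snapshot))


def migrate(app, install_on_target):
    """Stop on the source server, ship the snapshot, resume on the target."""
    snapshot = app.stop()               # capture state "on the fly"
    return install_on_target(snapshot)  # e.g. lambda s: MigratableApp.resume(s)
```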

    Theory and Practice of Transactional Method Caching

    Nowadays, tiered architectures are widely accepted for constructing large-scale information systems. In this context, application servers often form the bottleneck for a system's efficiency. An application server exposes an object-oriented interface consisting of a set of methods which are accessed by potentially remote clients. The idea of method caching is to store the results of read-only method invocations, with respect to the application server's interface, on the client side. If the client invokes the same method with the same arguments again, the corresponding result can be taken from the cache without contacting the server. It has been shown that this approach can considerably improve a real-world system's efficiency. This paper extends the concept of method caching by addressing the case where clients wrap related method invocations in ACID transactions. Demarcating sequences of method calls in this way is supported by many important application server standards. In this context, the paper presents an architecture, a theory, and an efficient protocol for maintaining full transactional consistency, and in particular serializability, when using a method cache on the client side. In order to create a protocol for scheduling cached method results, the paper extends a classical transaction formalism. Based on this extension, a recovery protocol and an optimistic serializability protocol are derived. The latter differs from traditional transactional cache protocols in many essential ways. An efficiency experiment validates the approach: using the cache, the system's performance and scalability are considerably improved.
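    As a rough illustration of the basic (non-transactional) idea, the sketch below caches read-only results keyed by method name and arguments and conservatively invalidates on updates. The names (MethodCache, CachingClient) are hypothetical, and the paper's transactional consistency and serializability protocol is not reproduced here.

```python
# Hypothetical sketch of client-side method caching; not the paper's protocol.
class MethodCache:
    """Caches results of read-only method invocations, keyed by (method, args)."""

    def __init__(self):
        self._entries = {}

    def get(self, method, args):
        return self._entries.get((method, args))  # None means "not cached"

    def put(self, method, args, result):
        self._entries[(method, args)] = result

    def invalidate_all(self):
        # The paper invalidates selectively while preserving serializability;
        # clearing everything is the simplest conservative stand-in here.
        self._entries.clear()


class CachingClient:
    """Wraps a remote application-server stub with a client-side method cache."""

    def __init__(self, server_stub, read_only_methods):
        self._server = server_stub
        self._cache = MethodCache()
        self._read_only = set(read_only_methods)

    def call(self, method, *args):
        if method in self._read_only:
            hit = self._cache.get(method, args)
            if hit is not None:                      # assumes results are never None
                return hit                           # served locally, no round trip
            result = getattr(self._server, method)(*args)
            self._cache.put(method, args, result)
            return result
        # Updating method: forward to the server and conservatively drop the cache.
        result = getattr(self._server, method)(*args)
        self._cache.invalidate_all()
        return result
```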

    Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery

    In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio, and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISPs) and Content Providers (CPs): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious, ISP-operated cache. The ISP allocates the cache storage to the various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, the ISP only needs to measure the aggregated miss rates of the individual CPs and does not need to be aware of the objects that are requested, as in classic caching. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges close to the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularity. Our results (i) demonstrate the feasibility of content-oblivious caches and (ii) show that the proposed algorithm can achieve within 10% of the global optimum in our evaluation.
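    One way to picture the allocation step: treat the storage split across CPs as a point on a simplex and update it with a perturbed two-point gradient estimate computed only from aggregate hit/miss measurements. The sketch below is a generic illustration of that structure, not the paper's algorithm or its convergence guarantees; hit_rate, the step sizes, and the perturbation scheme are all assumptions.

```python
# Generic perturbed stochastic gradient sketch for splitting cache storage
# across content providers; all parameters and the hit_rate callable are
# illustrative assumptions, not the paper's algorithm.
import numpy as np


def project_to_simplex(x, total):
    """Euclidean projection of x onto {y : y >= 0, sum(y) = total}."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(x) + 1)
    rho = np.nonzero(u * idx > css - total)[0][-1]
    theta = (css[rho] - total) / (rho + 1)
    return np.maximum(x - theta, 0.0)


def allocate_cache(hit_rate, n_cps, capacity, steps=200, delta=1.0, lr=0.5, seed=0):
    """Perturbed stochastic gradient ascent on the storage split across CPs.

    hit_rate(alloc) stands in for the aggregate hit rate the ISP would measure
    (from per-CP miss counts) if the storage were split as `alloc`.
    """
    rng = np.random.default_rng(seed)
    alloc = np.full(n_cps, capacity / n_cps)  # start from an even split
    for _ in range(steps):
        d = rng.standard_normal(n_cps)
        d -= d.mean()                         # zero-sum perturbation keeps total storage fixed
        up = project_to_simplex(alloc + delta * d, capacity)
        dn = project_to_simplex(alloc - delta * d, capacity)
        # Two-point estimate of the directional derivative of the hit rate.
        g = (hit_rate(up) - hit_rate(dn)) / (2 * delta)
        alloc = project_to_simplex(alloc + lr * g * d, capacity)
    return alloc


# Toy usage with a concave stand-in for the measured hit rate of three CPs.
weights = np.array([0.5, 0.3, 0.2])
toy_hit_rate = lambda a: float(np.sum(weights * np.log1p(np.maximum(a, 0.0))))
print(allocate_cache(toy_hit_rate, n_cps=3, capacity=100.0))  # learned storage split
```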

    Fog-enabled Edge Learning for Cognitive Content-Centric Networking in 5G

    By caching content at network edges close to the users, content-centric networking (CCN) has been considered a way to enable efficient content retrieval and distribution in fifth generation (5G) networks. Due to the volume, velocity, and variety of data generated by various 5G users, an urgent and strategic issue is how to elevate the cognitive ability of the CCN to realize context-awareness, timely response, and traffic offloading for 5G applications. In this article, we envision that the fundamental work of designing a cognitive CCN (C-CCN) for the upcoming 5G is to exploit fog computing to associatively learn and control the states of edge devices (such as phones, vehicles, and base stations) and in-network resources (computing, networking, and caching). Moreover, we propose a fog-enabled edge learning (FEL) framework for the C-CCN in 5G, which can aggregate the idle computing resources of neighbouring edge devices into virtual fogs to carry out heavy, delay-sensitive learning tasks. By leveraging artificial intelligence (AI) to jointly process sensed environmental data, handle the massive content statistics, and enforce mobility control at the network edges, the FEL makes it possible for mobile users to cognitively share their data over the C-CCN in 5G. To validate the feasibility of the proposed framework, we design two FEL-advanced cognitive services for the C-CCN in 5G: 1) personalized network acceleration and 2) enhanced mobility management. We also present simulations to show the FEL's efficiency in serving mobile users' delay-sensitive content retrieval and distribution in 5G. Comment: Submitted to IEEE Communications Magazine, under review, Feb. 09, 201
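    As a toy illustration of the "virtual fog" idea, pooling the idle capacity of neighbouring edge devices and placing a delay-sensitive task on a suitable member, the sketch below uses a hypothetical device list, an assumed min_idle threshold, and a latency-based placement rule; none of this is the framework proposed in the article.

```python
# Toy sketch of pooling idle edge capacity into a "virtual fog" and placing a
# delay-sensitive task; all names, thresholds, and the scoring rule are assumptions.
from dataclasses import dataclass


@dataclass
class EdgeDevice:
    name: str
    idle_cpu: float    # fraction of CPU currently idle, 0.0 to 1.0
    latency_ms: float  # round-trip latency to the requesting user


def form_virtual_fog(devices, min_idle=0.2):
    """Aggregate neighbouring devices that have enough idle capacity."""
    return [d for d in devices if d.idle_cpu >= min_idle]


def place_task(fog, cpu_needed):
    """Pick the fog member that can absorb the task with the lowest latency."""
    candidates = [d for d in fog if d.idle_cpu >= cpu_needed]
    return min(candidates, key=lambda d: d.latency_ms) if candidates else None


# Example: three devices near a user; the busy phone is excluded from the fog,
# and the task lands on the lowest-latency member with enough spare capacity.
fog = form_virtual_fog([
    EdgeDevice("phone-A", 0.10, 5.0),
    EdgeDevice("bs-1", 0.60, 12.0),
    EdgeDevice("vehicle-7", 0.35, 8.0),
])
print(place_task(fog, cpu_needed=0.3))  # -> vehicle-7 in this toy example
```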