Method-based caching in multi-tiered server applications
Abstract
In recent years, application server technology has become very
popular for building complex, mission-critical systems such
as Web-based E-Commerce applications. However, the resulting
solutions tend to suffer from serious performance and
scalability bottlenecks, because of their distributed nature and
their various software layers. This paper deals with the problem
by presenting an approach for transparently caching the results
of a service interface's read-only methods on the client side.
Cache consistency is provided by a descriptive cache
invalidation model which may be specified by an application
programmer. As the cache layer is transparent to the server as
well as to the client code, it can be integrated with relatively
low effort even in systems that have already been implemented.
Experimental results show that the approach is very effective in
improving a server's response times and its transactional
throughput.
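As a rough, hypothetical sketch of this idea (not the paper's implementation): a JDK dynamic proxy can serve repeated read-only calls from a local map, with a deliberately crude flush-on-write rule standing in for the paper's descriptive invalidation model.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: client-side cache for a service interface's read-only methods. */
public final class MethodCacheProxy implements InvocationHandler {
    private final Object target;               // the remote service stub
    private final Set<String> readOnlyMethods; // declared by the programmer
    private final Map<List<Object>, Object> cache = new ConcurrentHashMap<>();

    private MethodCacheProxy(Object target, Set<String> readOnlyMethods) {
        this.target = target;
        this.readOnlyMethods = readOnlyMethods;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface, Set<String> readOnly) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new MethodCacheProxy(target, readOnly));
    }

    @Override
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        Object[] a = (args == null) ? new Object[0] : args;
        if (!readOnlyMethods.contains(m.getName())) {
            Object result = m.invoke(target, a);
            cache.clear(); // crude stand-in for the descriptive invalidation model
            return result;
        }
        List<Object> key = List.of(m.getName(), Arrays.asList(a));
        Object hit = cache.get(key);
        if (hit == null) {              // miss (null results are not cached here)
            hit = m.invoke(target, a);
            if (hit != null) cache.put(key, hit);
        }
        return hit;
    }
}
```

Because both the caller and the server see only the plain interface, such a proxy can be slipped between existing client code and the service stub without changing either side, which mirrors the transparency argument above.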
Roughly speaking, the overhead for cache maintenance is small
when compared to the cost for method invocations on the server
side. The cache's performance improvements are dominated by the
fraction of read method invocations and the cache hit rate. Our
experiments are based on a realistic E-commerce Web site
scenario and site user behaviour is emulated in an authentic
way. By inserting our cache, the maximum user request throughput
of the web application could be more than doubled while its
response time (as perceived by a web client) was kept at
a very low level.
Moreover, the cache can be smoothly integrated with traditional
caching strategies acting on other system tiers (e.g. caching of
dynamic Web pages on a Web server). The presented approach as
well as the related implementation are not restricted to
application server scenarios but may be applied to any kind of
interface-based software layers.
Theory and Practice of Transactional Method Caching
Nowadays, tiered architectures are widely accepted for constructing
large-scale information systems. In this context, application servers often form the
bottleneck for a system's efficiency. An application server exposes an object
oriented interface consisting of a set of methods which are accessed by
potentially remote clients. The idea of method caching is to store results of
read-only method invocations with respect to the application server's interface
on the client side. If the client invokes the same method with the same
arguments again, the corresponding result can be taken from the cache without
contacting the server. It has been shown that this approach can considerably
improve a real-world system's efficiency.
This paper extends the concept of method caching by addressing the case where
clients wrap related method invocations in ACID transactions. Demarcating
sequences of method calls in this way is supported by many important
application server standards. In this context the paper presents an
architecture, a theory and an efficient protocol for maintaining full
transactional consistency and in particular serializability when using a method
cache on the client side. In order to create a protocol for scheduling cached
method results, the paper extends a classical transaction formalism. Based on
this extension, a recovery protocol and an optimistic serializability protocol
are derived. The latter differs from traditional transactional cache
protocols in many essential ways. An efficiency experiment validates the
approach: using the cache, a system's performance and scalability are
considerably improved.
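The paper derives its protocols formally; the sketch below is only a loose illustration of the optimistic pattern, with a hypothetical ServerStub interface: each cached result carries a server-side version, a transaction records the versions it read, and the server validates that read set at commit.

```java
import java.util.HashMap;
import java.util.Map;

/** Loose sketch of optimistic validation for transactionally used cached results. */
final class TxMethodCache {
    /** A cached method result plus the server-side version it was derived from. */
    record Entry(Object value, long version) {}

    /** Hypothetical server interface; not taken from the paper. */
    interface ServerStub {
        Entry fetch(String methodKey);
        boolean validateAndCommit(Map<String, Long> readSet);
    }

    private final Map<String, Entry> cache = new HashMap<>();
    private final Map<String, Long> readSet = new HashMap<>(); // current transaction

    Object read(String methodKey, ServerStub server) {
        Entry e = cache.computeIfAbsent(methodKey, server::fetch); // miss -> server
        readSet.put(methodKey, e.version()); // remember the version we depended on
        return e.value();
    }

    /** Commit succeeds only if no version in the read set changed meanwhile. */
    boolean commit(ServerStub server) {
        boolean ok = server.validateAndCommit(readSet);
        if (!ok) readSet.keySet().forEach(cache::remove); // drop possibly stale hits
        readSet.clear();
        return ok;
    }
}
```

Serving reads from the cache without contacting the server is precisely what makes commit-time validation necessary: the server-side scheduler never saw those reads, so consistency has to be re-established when the transaction ends.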
Web API Fragility: How Robust is Your Web API Client?
Web APIs provide a systematic and extensible approach for
application-to-application interaction. A large number of mobile applications
make use of web APIs to integrate services into apps. Each web API's evolution
pace is determined by its provider, and mobile application developers are
forced to keep up with the API providers' software evolution. In this paper we
investigate whether mobile application developers understand, and how they deal
with, the added distress of evolving web APIs. In particular, we studied how
robust 48 high-profile mobile applications are when dealing with mutated web
API responses. Additionally, we interviewed three mobile application developers
to better understand their choices and trade-offs regarding web API
integration.
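The mutations such a study applies (fields removed, renamed, or retyped) typically crash clients that parse responses rigidly. As a small illustrative sketch using the common org.json API, with a hypothetical field name, a tolerant client degrades gracefully where a rigid one throws:

```java
import org.json.JSONObject;

/** Illustration only: parsing a hypothetical profile response defensively. */
public class RobustApiClient {
    public String displayName(String responseBody) {
        JSONObject json = new JSONObject(responseBody);
        // Brittle: json.getString("name") throws if the field was renamed or dropped.
        // Tolerant: fall back to a default so an evolved API degrades gracefully.
        return json.optString("name", "unknown user");
    }
}
```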
Poor Man's Content Centric Networking (with TCP)
A number of different architectures have been proposed in support of data-oriented or information-centric networking. Besides a similar vision, they share the need to design a new networking architecture. We present an incrementally deployable approach to content-centric networking based upon TCP. Content-aware senders cooperate with probabilistically operating routers for scalable content delivery (to unmodified clients), effectively supporting opportunistic caching for time-shifted access as well as de facto synchronous multicast delivery. Our approach is application protocol-independent and provides support beyond HTTP caching or managed CDNs. We present our protocol design along with a Linux-based implementation and some initial feasibility checks.
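The paper's actual protocol is TCP-specific and considerably richer; purely as a toy illustration of "probabilistically operating routers", the sketch below (hypothetical names) caches only a random sample of the content-labelled segments it forwards, bounding router state while still catching popular content:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

/** Toy sketch of a router-side probabilistic content cache (hypothetical names). */
final class ProbabilisticContentCache {
    private final double admitProbability;    // routers only sample passing traffic
    private final Map<String, byte[]> store;  // content ID -> cached payload

    ProbabilisticContentCache(double admitProbability, int capacity) {
        this.admitProbability = admitProbability;
        // Access-ordered LinkedHashMap gives simple LRU eviction.
        this.store = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                return size() > capacity;
            }
        };
    }

    /** Called for each content-labelled segment the router forwards. */
    void observe(String contentId, byte[] payload) {
        if (ThreadLocalRandom.current().nextDouble() < admitProbability) {
            store.put(contentId, payload);    // cache only a random sample
        }
    }

    /** A later request for the same content ID may be answered locally. */
    byte[] lookup(String contentId) {
        return store.get(contentId);          // null -> forward to the origin
    }
}
```

Sampling is what keeps the scheme incrementally deployable: a router that caches nothing still forwards correctly, and one that caches a little already enables time-shifted hits.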
Any Data, Any Time, Anywhere: Global Data Access for Science
Data access is key to science driven by distributed high-throughput computing
(DHTC), an essential technology for many major research projects such as High
Energy Physics (HEP) experiments. However, achieving efficient data access
becomes quite difficult when many independent storage sites are involved
because users are burdened with learning the intricacies of accessing each
system and keeping careful track of data location. We present an alternate
approach: the Any Data, Any Time, Anywhere infrastructure. Combining several
existing software products, AAA presents a global, unified view of storage
systems (a "data federation"), a global filesystem for software delivery, and a
workflow management system. We present how one HEP experiment, the Compact Muon
Solenoid (CMS), is utilizing the AAA infrastructure and some simple performance
metrics.
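As a minimal sketch of the "any data, anywhere" fallback (the redirector host is an example, and real CMS access goes through XRootD clients rather than plain string handling): prefer a site-local replica, otherwise hand the logical file name to the global federation redirector.

```java
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustration only: fall back from site-local storage to the federation. */
final class FederatedOpen {
    // Example redirector host; actual reads would use an XRootD client library.
    static final String REDIRECTOR = "root://cms-xrd-global.cern.ch/";

    /** Resolve a logical file name to wherever a replica can be read from. */
    static String resolve(String logicalFileName) {
        Path local = Path.of("/storage", logicalFileName); // hypothetical mount
        if (Files.exists(local)) {
            return local.toString();          // fast path: site-local replica
        }
        // Otherwise let the federation locate a replica at some other site.
        return REDIRECTOR + logicalFileName;
    }
}
```

This is the point of the federation: users name the data once, and the infrastructure, not the user, tracks where replicas live.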
Transparent and scalable client-side server selection using netlets
Replication of web content in the Internet has been found to improve service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes is found to affect service response time perceived by clients in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach for client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server based on its in-built intelligence, supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault-transparent.
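Netlets themselves are mobile components, but the decision logic they carry can be sketched statically (hypothetical names; a real decision point would combine such probes with its built-in intelligence): probe each replica and direct the client to the fastest.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

/** Hypothetical sketch of a decision point's probe-and-select logic. */
final class ServerSelector {
    /** TCP connect time as a cheap proxy for path latency and server load. */
    static long probeMillis(InetSocketAddress server) {
        long start = System.nanoTime();
        try (Socket s = new Socket()) {
            s.connect(server, 1000);          // 1 s timeout
            return (System.nanoTime() - start) / 1_000_000;
        } catch (IOException e) {
            return Long.MAX_VALUE;            // unreachable: never selected
        }
    }

    /** Transparently direct the client to the best-performing replica. */
    static InetSocketAddress select(List<InetSocketAddress> replicas) {
        InetSocketAddress best = null;
        long bestMillis = Long.MAX_VALUE;
        for (InetSocketAddress r : replicas) {
            long ms = probeMillis(r);
            if (ms < bestMillis) { bestMillis = ms; best = r; }
        }
        return best;                          // null only if the list is empty
    }
}
```

Placing this logic at in-network decision points reached via anycast, rather than in each client, is what makes the selection transparent to unmodified clients.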
Sensor function virtualization to support distributed intelligence in the internet of things
It is estimated that by 2020 billions of devices will be connected to the Internet. This number not only includes TVs, PCs, tablets and smartphones, but also billions of embedded sensors that will make up the "Internet of Things" and enable a whole new range of intelligent services in domains such as manufacturing, health, smart homes, logistics, etc. To some extent, intelligence such as data processing or access control can be placed on the devices themselves. Alternatively, functionalities can be outsourced to the cloud. In reality, there is no single solution that fits all needs. Cooperation between devices, intermediate infrastructures (local networks, access networks, global networks) and/or cloud systems is needed in order to optimally support IoT communication and IoT applications. Through distributed intelligence, the right communication and processing functionality will be available at the right place. The first part of this paper motivates the need for such distributed intelligence based on shortcomings in typical IoT systems. The second part focuses on the concept of sensor function virtualization, a potential enabler for distributed intelligence, and presents solutions on how to realize it.
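As a loose, hypothetical sketch of what "the right functionality at the right place" can look like in code: the same sensor function is expressed once and executed on the device when it is capable, otherwise offloaded.

```java
import java.util.function.Function;

/** Loose sketch: one sensor function, deployable on-device or in the cloud. */
final class SensorFunctionDemo {
    /** A virtualized sensor function is modelled here as a pure transformation. */
    interface SensorFunction<I, O> extends Function<I, O> {}

    /** Decide at runtime where the function executes. */
    static <I, O> O run(SensorFunction<I, O> fn, I input, boolean deviceCapable) {
        if (deviceCapable) {
            return fn.apply(input);        // enough local resources: run on the node
        }
        return offload(fn, input);         // otherwise outsource to the cloud
    }

    // Placeholder standing in for shipping the call to a cloud endpoint.
    static <I, O> O offload(SensorFunction<I, O> fn, I input) {
        return fn.apply(input);            // would be a remote invocation in reality
    }
}
```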