Theory and Practice of Transactional Method Caching
Nowadays, tiered architectures are widely accepted for constructing large-scale
information systems. In this context, application servers often form the
bottleneck for a system's efficiency. An application server exposes an
object-oriented interface consisting of a set of methods that are accessed by
potentially remote clients. The idea of method caching is to store results of
read-only method invocations with respect to the application server's interface
on the client side. If the client invokes the same method with the same
arguments again, the corresponding result can be taken from the cache without
contacting the server. It has been shown that this approach can considerably
improve a real world system's efficiency.
This paper extends the concept of method caching by addressing the case where
clients wrap related method invocations in ACID transactions. Demarcating
sequences of method calls in this way is supported by many important
application server standards. In this context the paper presents an
architecture, a theory and an efficient protocol for maintaining full
transactional consistency and in particular serializability when using a method
cache on the client side. In order to create a protocol for scheduling cached
method results, the paper extends a classical transaction formalism. Based on
this extension, a recovery protocol and an optimistic serializability protocol
are derived. The latter differs from traditional transactional cache
protocols in many essential ways. An efficiency experiment validates the
approach: using the cache, a system's performance and scalability are
considerably improved.
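The core idea of client-side method caching can be sketched in a few lines. The following is an illustrative sketch, not the paper's actual protocol: class and method names are hypothetical, and the write path conservatively clears the whole cache, whereas the paper derives precise recovery and serializability protocols to decide what must be invalidated.

```python
class MethodCache:
    """Client-side cache for read-only method results (illustrative sketch).

    Results are keyed by (method name, argument tuple), so repeating the
    same call with the same arguments avoids a server round trip.
    """

    def __init__(self, server):
        self.server = server
        self.cache = {}  # (method_name, args) -> cached result

    def invoke_read(self, method, *args):
        key = (method, args)
        if key in self.cache:
            return self.cache[key]  # cache hit: no contact with the server
        result = getattr(self.server, method)(*args)
        self.cache[key] = result
        return result

    def invoke_write(self, method, *args):
        # A state-changing call may make cached read results stale.  This
        # sketch clears everything; the paper's protocols determine exactly
        # which entries can survive while preserving serializability.
        self.cache.clear()
        return getattr(self.server, method)(*args)
```

In a transactional setting, the interesting part (which this sketch omits) is scheduling cached results so that a sequence of such calls still forms a serializable ACID transaction.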
Systems for Challenged Network Environments.
Developing regions face significant challenges in network access, making even simple network tasks unpleasant and rich media prohibitively difficult to access. Even as cellular network coverage is approaching a near-universal reach, good network connectivity remains scarce and expensive in many emerging markets. The underlying theme in this dissertation is designing network systems that better accommodate users in emerging markets. To do so, this dissertation begins with a nuanced analysis of content access behavior for web users in developing regions. This analysis finds the personalization of content access---and the fragmentation that results from it---to be significant factors in undermining many existing web acceleration mechanisms. The dissertation explores content access behavior from logs collected at shared internet access sites, as well as user activity information obtained from a commercial social networking service with over a hundred million members worldwide.
Based on these observations, the dissertation then discusses two systems designed for improving end-user experience in accessing and using content in constrained networks. First, it deals with the challenge of distributing private content in these networks. By leveraging the wide availability of cellular telephones, the dissertation describes a system for personal content distribution based on user access behavior. The system enables users to request future data accesses, and it schedules content transfers according to current and expected capacity. Second, the dissertation looks at routing bulk data in challenged networks, and describes an experimentation platform for building systems for challenged networks. This platform enables researchers to quickly prototype systems for challenged networks, and iteratively evaluate these systems using mobility and network emulation. The dissertation describes a few data routing systems that were built atop this experimentation platform.
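The scheduling idea (requesting future data accesses and fitting transfers to current and expected capacity) can be illustrated with a small sketch. This is an assumed greedy policy for illustration only, not the dissertation's actual algorithm; all names and the slot model are hypothetical.

```python
def schedule_transfers(requests, capacity_by_slot):
    """Assign requested transfers to time slots with expected spare capacity.

    requests: iterable of (request_id, size_bytes, deadline_slot).
    capacity_by_slot: dict mapping slot -> expected spare bytes in that slot.

    Greedy sketch: serve the earliest deadlines first, placing each transfer
    in the earliest slot that still has room before its deadline.
    """
    remaining = dict(capacity_by_slot)  # slot -> spare bytes left
    plan = {}
    for req_id, size, deadline in sorted(requests, key=lambda r: r[2]):
        for slot in sorted(remaining):
            if slot <= deadline and remaining[slot] >= size:
                remaining[slot] -= size
                plan[req_id] = slot
                break
    return plan  # requests that could not be placed are absent from the plan
```

A real system would also have to revise the plan as observed capacity diverges from the forecast, which is where the access-behavior analysis above becomes relevant.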
Finally, the dissertation discusses the marketplace and service discovery considerations that are important in making these systems viable for developing-region use. In particular, it presents an extensible, auction-based market platform that relies on widely available communication tools for conveniently discovering and trading digital services and goods in developing regions. Collectively, this dissertation brings together several projects that aim to understand and improve end-user experience in challenged networks endemic to developing regions.Ph.D.Computer Science & EngineeringUniversity of Michigan, Horace H. Rackham School of Graduate Studieshttp://deepblue.lib.umich.edu/bitstream/2027.42/91401/1/azarias_1.pd
Method-based caching in multi-tiered server applications
Abstract
In recent years, application server technology has become very
popular for building complex but mission-critical systems such
as Web-based E-Commerce applications. However, the resulting
solutions tend to suffer from serious performance and
scalability bottlenecks, because of their distributed nature and
their various software layers. This paper deals with the problem
by presenting an approach for transparently caching results of
a service interface's read-only methods on the client side.
Cache consistency is provided by a descriptive cache
invalidation model which may be specified by an application
programmer. As the cache layer is transparent to the server as
well as to the client code, it can be integrated with relatively
low effort even in systems that have already been implemented.
Experimental results show that the approach is very effective in
improving a server's response times and its transactional
throughput.
Roughly speaking, the overhead for cache maintenance is small
when compared to the cost for method invocations on the server
side. The cache's performance improvements are dominated by the
fraction of read method invocations and the cache hit rate. Our
experiments are based on a realistic E-commerce Web site
scenario and site user behaviour is emulated in an authentic
way. By inserting our cache, the maximum user request throughput
of the web application could be more than doubled while its
response time (as perceived by a web client) was kept at
a very low level.
Moreover, the cache can be smoothly integrated with traditional
caching strategies acting on other system tiers (e.g. caching of
dynamic Web pages on a Web server). The presented approach as
well as the related implementation are not restricted to
application server scenarios but may be applied to any kind of
interface-based software layers.
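A descriptive invalidation model of the kind the abstract mentions can be sketched as a declarative mapping from state-changing methods to the read-only methods whose cached results they make stale. The rule table and names below are hypothetical E-commerce examples, not the paper's actual specification language.

```python
# Illustrative sketch: the application programmer declares, per write method,
# which read-only methods' cached results become invalid.  The cache layer
# applies these rules transparently to both client and server code.
INVALIDATION_RULES = {
    "addToCart":     ["getCart", "getCartTotal"],
    "placeOrder":    ["getCart", "getCartTotal", "getOrderHistory"],
    "updateProfile": ["getProfile"],
}

def invalidate(cache, write_method):
    """Drop every cached entry whose read method the write is declared
    to affect.  Cache keys are (method_name, args) tuples."""
    stale = set(INVALIDATION_RULES.get(write_method, []))
    for key in [k for k in cache if k[0] in stale]:
        del cache[key]
```

Because the rules are data rather than code, they can be added to an already implemented system without touching client or server logic, which matches the low integration effort the abstract claims.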
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines.
From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
Distributed Hybrid Simulation of the Internet of Things and Smart Territories
This paper deals with the use of hybrid simulation to build and compose
heterogeneous simulation scenarios that can be proficiently exploited to model
and represent the Internet of Things (IoT). Hybrid simulation is a methodology
that combines multiple modalities of modeling/simulation. Complex scenarios are
decomposed into simpler ones, each one being simulated through a specific
simulation strategy. All these simulation building blocks are then synchronized
and coordinated. This methodology is ideal for representing IoT
setups, which are usually very demanding due to the heterogeneity of possible
scenarios arising from the massive deployment of an enormous amount of sensors
and devices. We present a use case concerned with the distributed simulation of
smart territories, a novel view of decentralized geographical spaces that,
thanks to the use of IoT, builds ICT services to manage resources in a way that
is sustainable and not harmful to the environment. Three different simulation
models are combined, namely, an adaptive agent-based parallel and
distributed simulator, an OMNeT++-based discrete event simulator and a
script-language simulator based on MATLAB. Results from a performance analysis
confirm the viability of using hybrid simulation to model complex IoT
scenarios.
Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
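The synchronization and coordination of simulation building blocks described above can be sketched with a conservative, time-stepped coordinator: no block advances past a global step boundary until every block has reached it. The interface and class names below are illustrative assumptions, not the paper's actual architecture, which combines agent-based, OMNeT++ and MATLAB-based simulators.

```python
class SimBlock:
    """Minimal simulation building block: advances its local model to a
    target time.  A real block would wrap a full simulator (agent-based,
    discrete-event, or script-based) behind this hypothetical interface."""

    def __init__(self, name):
        self.name = name
        self.now = 0.0
        self.log = []  # record of (block, time) advances, for inspection

    def advance_to(self, t):
        self.now = t
        self.log.append((self.name, t))

def run_coordinated(blocks, end_time, step):
    """Conservative time-stepped coordination: all blocks reach each global
    step boundary before any block proceeds, so events exchanged between
    heterogeneous simulators stay causally ordered."""
    t = 0.0
    while t < end_time:
        t = min(t + step, end_time)
        for block in blocks:  # barrier: every block catches up to t
            block.advance_to(t)
    return t
```

Distributed implementations replace the in-process loop with a synchronization protocol between simulator processes, but the barrier-per-step invariant is the same.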
A Modern Primer on Processing in Memory
Modern computing systems are overwhelmingly designed to move data to
computation. This design choice goes directly against at least three key trends
in computing that cause performance, scalability and energy bottlenecks: (1)
data access is a key bottleneck as many important applications are increasingly
data-intensive, and memory bandwidth and energy do not scale well, (2) energy
consumption is a key limiter in almost all computing platforms, especially
server and mobile systems, (3) data movement, especially off-chip to on-chip,
is very expensive in terms of bandwidth, energy and latency, much more so than
computation. These trends are felt especially severely in the data-intensive
server and energy-constrained mobile systems of today. At the same time,
conventional memory technology is facing many technology scaling challenges in
terms of reliability, energy, and performance. As a result, memory system
architects are open to organizing memory in different ways and making it more
intelligent, at the expense of higher cost. The emergence of 3D-stacked memory
plus logic, the adoption of error correcting codes inside the latest DRAM
chips, proliferation of different main memory standards and chips, specialized
for different purposes (e.g., graphics, low-power, high bandwidth, low
latency), and the necessity of designing new solutions to serious reliability
and security issues, such as the RowHammer phenomenon, are evidence of this
trend. This chapter discusses recent research that aims to practically enable
computation close to data, an approach we call processing-in-memory (PIM). PIM
places computation mechanisms in or near where the data is stored (i.e., inside
the memory chips, in the logic layer of 3D-stacked memory, or in the memory
controllers), so that data movement between the computation units and memory is
reduced or eliminated.
Comment: arXiv admin note: substantial text overlap with arXiv:1903.0398
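Why moving computation to the data pays off can be seen with a back-of-the-envelope traffic model: in a conventional system every operand of, say, a reduction crosses the off-chip memory bus, whereas a PIM version runs the reduction near the data and ships only the result. The model below is an illustrative sketch with assumed parameter names, not a measurement from the chapter.

```python
def offchip_traffic_bytes(n_elems, elem_size, pim=False, result_size=8):
    """Off-chip traffic for a reduction (e.g., a sum) over n_elems elements.

    Conventional: every element must cross the memory bus to reach the CPU.
    PIM: computation happens in or near the memory, so only the final
    result crosses.  Sizes are in bytes; values are illustrative.
    """
    if pim:
        return result_size          # only the reduced result moves off-chip
    return n_elems * elem_size      # all operands move to the processor
```

For a million 8-byte elements this is 8 MB of bus traffic versus 8 bytes, which is why the chapter emphasizes that data movement, not computation, dominates bandwidth, energy and latency for such workloads.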