Using real options to select stable Middleware-induced software architectures
The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software system architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, the paper reports how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
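To make the option-based reasoning concrete, here is a minimal sketch (in Python) of a one-period binomial real-options valuation applied to architectural flexibility; the function name, payoffs, probabilities, and costs are illustrative assumptions, not the model or data from the paper.

```python
# A minimal sketch of valuing architectural flexibility as a real option.
# The binomial model and all numbers are illustrative assumptions.

def flexibility_option_value(v_up, v_down, p_up, exercise_cost, discount_rate):
    """One-period binomial real option: the 'option' is the right (not the
    obligation) to adapt the middleware-induced architecture if the
    anticipated change in scalability requirements materialises.

    v_up / v_down  -- value of the adapted system if demand rises / stays flat
    p_up           -- probability that the change materialises
    exercise_cost  -- cost of restructuring the architecture to accommodate it
    discount_rate  -- per-period discount rate
    """
    # Exercise only when adapting is worth more than it costs.
    payoff_up = max(v_up - exercise_cost, 0.0)
    payoff_down = max(v_down - exercise_cost, 0.0)
    expected_payoff = p_up * payoff_up + (1 - p_up) * payoff_down
    return expected_payoff / (1 + discount_rate)

# Hypothetical comparison: a middleware whose primitives absorb the change
# cheaply (low exercise cost) yields a more valuable -- i.e. more stable --
# architecture than one requiring invasive restructuring.
arch_a = flexibility_option_value(100.0, 20.0, 0.6, 70.0, 0.05)
arch_b = flexibility_option_value(100.0, 20.0, 0.6, 30.0, 0.05)
print(f"architecture A: {arch_a:.1f}, architecture B: {arch_b:.1f}")
```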
Optimistic Parallel State-Machine Replication
State-machine replication, a fundamental approach to fault tolerance,
requires replicas to execute commands deterministically, which usually results
in sequential execution of commands. Sequential execution limits performance
and underuses servers, which are increasingly parallel (i.e., multicore). To
narrow the gap between state-machine replication requirements and the
characteristics of modern servers, researchers have recently come up with
alternative execution models. This paper surveys existing approaches to
parallel state-machine replication and proposes a novel optimistic protocol
that inherits the scalable features of previous techniques. Using a replicated
B+-tree service, we demonstrate that our protocol outperforms the most
efficient existing techniques by a factor of 2.4.
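To illustrate the parallelism such protocols exploit, the sketch below (a hypothetical Python fragment, not the paper's protocol) executes a delivered batch so that commands touching disjoint state run concurrently while conflicting commands keep their delivery order; the optimistic protocol in the paper additionally speculates on the ordering before it is final, which this sketch omits.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    keys: frozenset                  # state items the command reads/writes
    apply: Callable[[dict], None]    # deterministic state mutation

def execute_batch(state: dict, batch: list) -> None:
    """Execute a delivered batch, running non-conflicting commands in
    parallel. Each command is placed in the first 'wave' after every
    conflicting predecessor, so commands within a wave touch disjoint
    keys while conflicting commands keep their delivery order."""
    last_wave: dict = {}   # key -> wave index of the last command touching it
    waves: list = []
    for cmd in batch:
        w = max((last_wave.get(k, -1) for k in cmd.keys), default=-1) + 1
        if w == len(waves):
            waves.append([])
        waves[w].append(cmd)
        for k in cmd.keys:
            last_wave[k] = w
    with ThreadPoolExecutor() as pool:
        for wave in waves:                                   # waves run sequentially
            list(pool.map(lambda c: c.apply(state), wave))   # a wave runs in parallel

state = {"x": 0, "y": 0}
batch = [
    Command(frozenset({"x"}), lambda s: s.update(x=s["x"] + 1)),
    Command(frozenset({"y"}), lambda s: s.update(y=s["y"] + 5)),            # independent of the first
    Command(frozenset({"x", "y"}), lambda s: s.update(x=s["x"] + s["y"])),  # conflicts with both
]
execute_batch(state, batch)
print(state)   # {'x': 6, 'y': 5}
```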
Has world poverty really fallen during the 1990s?
We evaluate the claim that world consumption poverty has fallen during the 1990s in light of alternative assumptions about the extent of initial poverty and the rate of subsequent poverty reduction in China, India, and the rest of the developing world. We assess the extent of poverty using two indicators, the aggregate poverty headcount and the poverty headcount ratio, and consider two widely used international poverty lines ($1.08/day and $2.15/day at 1993 PPP). We find that under some of the assumptions considered, world poverty has risen. We conclude that, because of uncertainties about the extent and trend of poverty in China, India, and the rest of the developing world, world poverty may or may not have increased, and that the size of the increase or decrease depends critically on the assumptions made. Our conclusions suggest the importance of improving the quality of global poverty statistics.
Keywords: world poverty, sensitivity analysis, China, India
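As a worked example of the two indicators, the following sketch computes the aggregate headcount and the headcount ratio for an invented five-person population against the two lines; all figures are illustrative, not the paper's data.

```python
# Worked example of the two poverty indicators; the consumption data are
# invented for illustration only.

def headcount(incomes, poverty_line):
    """Aggregate poverty headcount: number of people below the line."""
    return sum(1 for y in incomes if y < poverty_line)

def headcount_ratio(incomes, poverty_line):
    """Poverty headcount ratio: share of the population below the line."""
    return headcount(incomes, poverty_line) / len(incomes)

# Hypothetical daily consumption (1993 PPP dollars) for a tiny population.
incomes = [0.80, 1.50, 2.00, 3.10, 5.25]
for line in (1.08, 2.15):   # the two widely used international lines
    print(f"${line}/day: headcount={headcount(incomes, line)}, "
          f"ratio={headcount_ratio(incomes, line):.0%}")
```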
Parallel Deferred Update Replication
Deferred update replication (DUR) is an established approach to implementing
highly efficient and available storage. While the throughput of read-only
transactions scales linearly with the number of deployed replicas in DUR, the
throughput of update transactions experiences limited improvements as replicas
are added. This paper presents Parallel Deferred Update Replication (P-DUR), a
variation of classical DUR that scales both read-only and update transactions
with the number of cores available in a replica. In addition to introducing the
new approach, we describe its full implementation and compare its performance
to classical DUR and to Berkeley DB, a well-known standalone database.
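For readers unfamiliar with the approach, the sketch below shows the certification step at the heart of deferred update replication; the data structures and certify_and_apply are illustrative assumptions, and P-DUR's parallelisation (each core certifying its own partition of the state) is only noted in a comment.

```python
# A minimal sketch of certification in deferred update replication: a
# transaction executes optimistically against local state, then commits
# only if no concurrent committed transaction overwrote anything it read.
# P-DUR additionally partitions the state among cores so that each core
# certifies and applies transactions for its own partition in parallel.

database = {"a": 1, "b": 2}     # replicated state
committed_version = {}          # item -> number of committed writes so far

def certify_and_apply(readset: dict, writeset: dict) -> bool:
    """readset maps item -> version observed during optimistic execution;
    writeset maps item -> new value. Returns True iff the commit succeeds."""
    for item, version in readset.items():
        if committed_version.get(item, 0) != version:
            return False        # stale read: abort the transaction
    for item, value in writeset.items():
        database[item] = value
        committed_version[item] = committed_version.get(item, 0) + 1
    return True

# A transaction that read item "a" at version 0 and writes item "b":
print(certify_and_apply({"a": 0}, {"b": 7}), database)  # True {'a': 1, 'b': 7}
```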
MDCC: Multi-Data Center Consistency
Replicating data across multiple data centers not only allows moving the data
closer to the user and, thus, reduces latency for applications, but also
increases the availability in the event of a data center failure. Therefore, it
is not surprising that companies like Google, Yahoo, and Netflix already
replicate user data across geographically different regions.
However, replication across data centers is expensive. Inter-data center
network delays are in the hundreds of milliseconds and vary significantly.
Synchronous wide-area replication with strong consistency is therefore
considered infeasible, and current solutions either settle for asynchronous
replication, which risks losing data in the event of failures, restrict
consistency to small partitions, or give up consistency entirely. With
MDCC (Multi-Data Center Consistency), we describe the first optimistic commit
protocol that requires neither a master nor partitioning and is strongly
consistent at a cost similar to eventually consistent protocols. MDCC can
commit transactions in a single round-trip across data centers in the normal
operational case. We further propose a new programming model which empowers the
application developer to handle longer and unpredictable latencies caused by
inter-data center communication. Our evaluation using the TPC-W benchmark with
MDCC deployed across 5 geographically diverse data centers shows that MDCC is
able to achieve throughput and latency similar to eventually consistent quorum
protocols and that MDCC is able to sustain a data center outage without a
significant impact on response times while guaranteeing strong consistency.
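To give a feel for the normal-case behaviour claimed above, here is a toy sketch of a single round-trip quorum commit across data centers; the propose stub and majority rule are illustrative assumptions, and the real protocol's ballots, conflict handling, and recovery are omitted.

```python
# Toy sketch of a one-round-trip quorum commit: propose the transaction's
# updates to all data centers at once and commit as soon as a majority
# accepts. All names and structures here are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor

def propose(dc, txn) -> bool:
    """Stub standing in for a wide-area RPC asking one data center to
    accept the transaction's updates."""
    return dc["accepts"](txn)

def commit(data_centers, txn) -> bool:
    """Send the proposal to every data center in parallel and commit iff a
    majority accepts -- one wide-area round trip in the failure-free case."""
    quorum = len(data_centers) // 2 + 1
    with ThreadPoolExecutor() as pool:
        votes = pool.map(lambda dc: propose(dc, txn), data_centers)
        return sum(votes) >= quorum

dcs = [{"name": f"dc-{i}", "accepts": lambda txn: True} for i in range(5)]
print(commit(dcs, {"id": 42, "writes": {"cart": ["book"]}}))  # True
```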