DKVF: A Framework for Rapid Prototyping and Evaluating Distributed Key-value Stores
We present our framework DKVF, which enables one to quickly prototype and evaluate new protocols for key-value stores and compare them with existing protocols on selected benchmarks. Due to the limitations imposed by the CAP theorem, new protocols must be developed that achieve the desired trade-off between consistency and availability for the application at hand. Hence, both the academic and industrial communities focus on developing new protocols that identify a different (and, hopefully, better in one or more aspects) point on this trade-off curve. While these protocols are often based on a simple intuition, evaluating them to ensure that they indeed provide increased availability, consistency, or performance is a tedious task. Our framework, DKVF, enables one to quickly prototype a new protocol as well as identify how it performs compared to existing protocols on pre-specified benchmarks. DKVF relies on YCSB (Yahoo! Cloud Serving Benchmark) for benchmarking. We demonstrate DKVF by implementing four existing protocols with it: eventual consistency, COPS, GentleRain, and CausalSpartan. We compare the performance of these protocols under different loading conditions. We find that the performance is similar to that of our from-scratch implementations of these protocols, and that the comparison among the protocols is consistent with what has been reported in the literature. Moreover, implementing these protocols was much more natural, as we only needed to translate the pseudocode into Java (and add the necessary error handling); hence, it was possible to implement each protocol in just 1-2 days. Finally, our framework is extensible: individual components of the framework (e.g., the storage component) can be replaced.
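The abstract does not show DKVF's actual API, but the simplest of the four protocols it mentions, eventual consistency, can be illustrated with a minimal sketch. The class and method names below are hypothetical; the sketch shows last-writer-wins replication, one common way to realize eventual consistency in a key-value store:

```python
class LWWReplica:
    """One replica of a last-writer-wins, eventually consistent KV store.

    Hypothetical sketch: each value carries a timestamp, and a replica
    keeps whichever write (local or replicated) has the newer timestamp.
    """

    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def put(self, key, value, ts):
        # Local write: accept only if newer than what we have.
        cur = self.store.get(key)
        if cur is None or ts > cur[0]:
            self.store[key] = (ts, value)
        return ts

    def get(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

    def apply_remote(self, key, ts, value):
        # Replication message from a peer: keep the newer write.
        cur = self.store.get(key)
        if cur is None or ts > cur[0]:
            self.store[key] = (ts, value)


a, b = LWWReplica(), LWWReplica()
ts = a.put("x", "1", ts=10.0)
b.apply_remote("x", ts, "1")
b.apply_remote("x", 5.0, "stale")  # older write is ignored
print(b.get("x"))  # -> "1"
```

Causal protocols such as COPS or GentleRain replace the single timestamp with dependency metadata, but the overall put/get/replicate skeleton that a DKVF-style framework would ask the developer to fill in is the same shape.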
Measuring and Understanding Throughput of Network Topologies
High throughput is of particular interest in data center and HPC networks.
Although myriad network topologies have been proposed, a broad head-to-head
comparison across topologies and across traffic patterns is absent, and the
right way to compare worst-case throughput performance is a subtle problem.
In this paper, we develop a framework to benchmark the throughput of network
topologies, using a two-pronged approach. First, we study performance on a
variety of synthetic and experimentally-measured traffic matrices (TMs).
Second, we show how to measure worst-case throughput by generating a
near-worst-case TM for any given topology. We apply the framework to study the
performance of these TMs in a wide range of network topologies, revealing
insights into the performance of topologies with scaling, robustness of
performance across TMs, and the effect of scattered workload placement. Our
evaluation code is freely available.
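The paper's actual near-worst-case TM construction is not described in the abstract, but the notion of a traffic matrix is easy to make concrete. The following sketch (function names are our own) builds two standard synthetic TMs for an n-node network: uniform all-to-all traffic, and a random permutation TM, whose sparse demand pattern typically stresses a topology harder than uniform traffic and is a common starting point when searching for near-worst-case behavior:

```python
import random

def uniform_tm(n, total=1.0):
    """All-to-all TM: every ordered pair of distinct nodes sends equal traffic."""
    per_pair = total / (n * (n - 1))
    return [[0.0 if i == j else per_pair for j in range(n)] for i in range(n)]

def permutation_tm(n, demand=1.0):
    """Random permutation TM: each node sends all its traffic to one other node."""
    dests = list(range(n))
    while True:
        random.shuffle(dests)
        if all(i != d for i, d in enumerate(dests)):  # forbid self-loops
            break
    tm = [[0.0] * n for _ in range(n)]
    for i, d in enumerate(dests):
        tm[i][d] = demand
    return tm
```

Given such a TM, throughput is then the largest uniform scaling factor at which all demands can be routed without exceeding any link capacity, which is what a benchmarking framework like the one described would compute per topology.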
An Experiment on Bare-Metal BigData Provisioning
Many BigData customers use on-demand platforms in the cloud, where they can get a dedicated virtual cluster in a couple of minutes and pay only for the time they use. Increasingly, there is demand for bare-metal BigData solutions for applications that cannot tolerate the unpredictability and performance degradation of virtualized systems. Existing bare-metal solutions can introduce delays of tens of minutes to provision a cluster by installing operating systems and applications on the local disks of servers. This has motivated recent research into sophisticated mechanisms to optimize this installation. These approaches assume that using network-mounted boot disks incurs unacceptable run-time overhead. Our analysis suggests that while this assumption is true for application data, it is incorrect for operating systems and applications: network mounting the boot disk and applications results in negligible run-time impact while leading to faster provisioning times.

This research was supported in part by the MassTech Collaborative Research Matching Grant Program, NSF awards 1347525 and 1414119, and several commercial partners of the Massachusetts Open Cloud, who may be found at http://www.massopencloud.or
Evaluating load balancing policies for performance and energy-efficiency
Nowadays, increasingly hard computations are performed in challenging fields like weather forecasting, oil and gas exploration, and cryptanalysis. Many such computations can be implemented on a computer cluster with a large number of servers. Incoming computation requests are then distributed over the servers via a so-called load balancing policy to ensure optimal performance. Additionally, switching off some servers during periods of low workload offers the potential for reduced energy consumption. Load balancing therefore governs, albeit indirectly, a trade-off between performance and energy consumption. In this paper, we introduce a syntax for load-balancing policies that dynamically select a server for each request based on relevant criteria, including the number of jobs queued at servers, the power states of servers, and the transition delays between power states of servers. To evaluate many policies, we implement two load balancers: (i) in iDSL, a language and tool-chain for evaluating service-oriented systems, and (ii) in a simulation framework in AnyLogic. Both implementations are successfully validated by comparing their results.

Comment: In Proceedings QAPL'16, arXiv:1610.0769
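The abstract's policy criteria (queue lengths, power states, transition delays) can be combined into a selection rule. The sketch below is our own illustration, not the paper's syntax: it picks the server with the lowest "effective load", where a powered-off server pays a penalty proportional to its wake-up delay (the penalty weight is an assumed tuning parameter):

```python
from dataclasses import dataclass

@dataclass
class Server:
    queue_len: int       # jobs currently queued
    power_state: str     # "on" or "off"
    wakeup_delay: float  # seconds to transition off -> on

def select_server(servers, wakeup_penalty_jobs=5.0):
    """Return the index of the server with the lowest effective load.

    A sleeping server is charged an extra cost proportional to its
    wake-up delay, trading response time against energy savings.
    """
    def cost(s):
        penalty = wakeup_penalty_jobs * s.wakeup_delay if s.power_state == "off" else 0.0
        return s.queue_len + penalty
    return min(range(len(servers)), key=lambda i: cost(servers[i]))


servers = [Server(3, "on", 0.0), Server(0, "off", 2.0), Server(1, "on", 0.0)]
print(select_server(servers))  # -> 2: idle-but-sleeping server 1 loses to lightly loaded server 2
```

Varying the penalty weight sweeps the performance/energy trade-off the abstract describes: a weight of zero gives pure join-the-shortest-queue, while a large weight keeps sleeping servers asleep.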
Internet of Things Cloud: Architecture and Implementation
The Internet of Things (IoT), which enables common objects to be intelligent
and interactive, is considered the next evolution of the Internet. Its
pervasiveness and abilities to collect and analyze data which can be converted
into information have motivated a plethora of IoT applications. For the
successful deployment and management of these applications, cloud computing
techniques are indispensable since they provide high computational capabilities
as well as large storage capacity. This paper aims at providing insights about
the architecture, implementation and performance of the IoT cloud. Several
potential application scenarios of IoT cloud are studied, and an architecture
is discussed regarding the functionality of each component. Moreover, the
implementation details of the IoT cloud are presented along with the services
that it offers. The main contributions of this paper lie in the combination of
the Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport
(MQTT) servers to offer IoT services in the architecture of the IoT cloud with
various techniques to guarantee high performance. Finally, experimental results are given in order to demonstrate the service capabilities of the IoT cloud under certain conditions.

Comment: 19 pages, 4 figures, IEEE Communications Magazine
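The paper's server implementation is not detailed in the abstract, but the split it describes, HTTP for request/response interactions and MQTT for publish/subscribe telemetry, can be sketched with a toy in-process router. All class and method names here are hypothetical illustrations, not the paper's API:

```python
class IoTGateway:
    """Toy router combining two interaction styles from one front end:
    HTTP-style request/response handlers keyed by path, and
    MQTT-style topic subscriptions fanned out to callbacks."""

    def __init__(self):
        self.http_routes = {}  # path -> handler(payload) -> response
        self.subscribers = {}  # topic -> list of callbacks

    def route(self, path, handler):
        self.http_routes[path] = handler

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def http_request(self, path, payload):
        # Synchronous: caller blocks for the handler's response.
        return self.http_routes[path](payload)

    def mqtt_publish(self, topic, payload):
        # Asynchronous fan-out: every subscriber gets the message.
        for cb in self.subscribers.get(topic, []):
            cb(payload)


gw = IoTGateway()
gw.route("/device/config", lambda p: {"ack": True, **p})
readings = []
gw.subscribe("sensors/temp", readings.append)
gw.mqtt_publish("sensors/temp", 21.5)
print(gw.http_request("/device/config", {"interval": 60}))  # -> {'ack': True, 'interval': 60}
print(readings)  # -> [21.5]
```

The design point this illustrates is why an IoT cloud benefits from both protocols: configuration queries fit HTTP's synchronous model, while high-rate sensor telemetry fits MQTT's lightweight, one-to-many publish/subscribe model.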