How blockchain impacts cloud-based system performance: a case study for a groupware communication application
This paper examines the performance trade-off when implementing a blockchain architecture for a cloud-based groupware communication application. We measure the additional cloud-based resources and performance costs of the overhead required to implement a groupware collaboration system over a blockchain architecture. To evaluate our groupware application, we develop measuring instruments for testing the scalability and performance of computer systems deployed as cloud computing applications. While some details of our groupware collaboration application have been published in earlier work, in this paper we reflect on a generalized measuring method for blockchain-enabled applications, which may in turn lead to a general methodology for testing cloud-based system performance and scalability using blockchain. Response time and transaction throughput metrics are collected for the blockchain implementation against the non-blockchain implementation, and some conclusions are drawn about the additional resources that a blockchain architecture for a groupware collaboration application imposes.
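The paper's own measuring instruments are not reproduced in the abstract, but the comparison it describes can be sketched in a few lines of Python: time each request to a hypothetical send_message endpoint (a stand-in for either the blockchain-backed or the plain cloud implementation, not an API from the paper) and derive response time and throughput from the same run.

```python
import time

def benchmark(send_message, payloads):
    """Collect response-time and throughput metrics for one implementation.

    `send_message` is a hypothetical callable standing in for the
    groupware endpoint under test; it is not an API from the paper.
    """
    latencies = []
    start = time.perf_counter()
    for payload in payloads:
        t0 = time.perf_counter()
        send_message(payload)                      # one groupware transaction
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "mean_response_s": sum(latencies) / len(latencies),
        "throughput_tps": len(payloads) / elapsed,
    }

# Run once per implementation and compare the two result dicts to
# estimate the overhead the blockchain layer adds.
```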
Feature selection in high-dimensional dataset using MapReduce
This paper describes a distributed MapReduce implementation of the minimum
Redundancy Maximum Relevance algorithm, a popular feature selection method in
bioinformatics and network inference problems. The proposed approach handles
both tall/narrow and wide/short datasets. We further provide an open source
implementation based on Hadoop/Spark, and illustrate its scalability on
datasets involving millions of observations or features.
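The distributed Hadoop/Spark implementation is the paper's contribution and is not reproduced here; for orientation, here is a minimal single-machine sketch of the underlying greedy mRMR criterion, using scikit-learn's mutual-information estimators (an assumption for illustration, not the paper's code):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    """Greedy minimum Redundancy Maximum Relevance selection.

    Picks k features maximizing relevance MI(feature, target) minus the
    mean redundancy MI(feature, already-selected). Single-machine sketch;
    the paper distributes the MI computations with MapReduce.
    """
    relevance = mutual_info_classif(X, y)          # MI(feature, target)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(f):
            # mean MI between the candidate and already-selected features
            redundancy = np.mean(
                [mutual_info_regression(X[:, [f]], X[:, s])[0] for s in selected]
            ) if selected else 0.0
            return relevance[f] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```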
Modeling Scalability of Distributed Machine Learning
Present-day machine learning is computationally intensive and processes large
amounts of data. To cope with this scale, it is implemented in a distributed
fashion, with the work parallelized across a number of computing
nodes. It is usually hard to estimate in advance how many nodes to use for a
particular workload. We propose a simple framework for estimating the
scalability of distributed machine learning algorithms. We measure the
scalability by means of the speedup an algorithm achieves with more nodes. We
propose time complexity models for gradient descent and graphical model
inference. We validate our models with experiments on deep learning training
and belief propagation. This framework was used to study the scalability of
machine learning algorithms in Apache Spark.
Comment: 6 pages, 4 figures, appears at ICDE 201
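To illustrate the speedup-based scalability measure, here is a sketch under a generic cost model (my assumption, not the paper's exact time-complexity formulation) for data-parallel gradient descent, where compute shrinks with the node count while aggregation cost grows with it:

```python
def iteration_time(n_nodes, compute=100.0, comm_per_node=0.5):
    """Per-iteration time under a simple illustrative cost model:
    compute is divided across nodes, while communication grows with
    them (e.g. a parameter-server-style aggregation step)."""
    return compute / n_nodes + comm_per_node * n_nodes

def speedup(n_nodes):
    return iteration_time(1) / iteration_time(n_nodes)

# Speedup rises, peaks, then degrades once communication dominates:
for n in (1, 2, 4, 8, 16, 32, 64):
    print(f"{n:3d} nodes -> speedup {speedup(n):5.2f}")
```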
A Configurable Transport Layer for CAF
The message-driven nature of actors lays a foundation for developing scalable
and distributed software. While the actor itself has been thoroughly modeled,
the message passing layer lacks a common definition. Properties and guarantees
of message exchange often shift with implementations and contexts. This adds
complexity to the development process, limits portability, and removes
transparency from distributed actor systems.
In this work, we examine actor communication, focusing on the implementation
and runtime costs of reliable and ordered delivery. Both guarantees are often
based on TCP for remote messaging, which mixes network transport with the
semantics of messaging. However, the choice of transport may follow different
constraints and is often governed by deployment. As a first step towards
re-architecting actor-to-actor communication, we decouple the messaging
guarantees from the transport protocol. We validate our approach by redesigning
the network stack of the C++ Actor Framework (CAF) so that an arbitrary
transport protocol can be combined with additional functions for remote
messaging. An evaluation quantifies the cost of composability and the impact
of individual layers on the entire stack.
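The layering idea can be illustrated independently of CAF's C++ internals; the following is a conceptual Python sketch (hypothetical types, not CAF's API) in which an ordering guarantee is stacked on top of an arbitrary transport:

```python
class OrderedLayer:
    """Adds ordered delivery on top of any transport by sequencing frames.

    Conceptual sketch of guarantee/transport decoupling: `transport` is
    any object with a send(bytes) method (e.g. wrapping UDP or TCP), and
    `deliver` is the upper layer's callback for in-order payloads.
    """
    def __init__(self, transport, deliver):
        self.transport = transport
        self.deliver = deliver
        self.next_seq = 0          # next sequence number to send
        self.expected = 0          # next sequence number to deliver
        self.pending = {}          # out-of-order frames awaiting delivery

    def send(self, data: bytes) -> None:
        frame = self.next_seq.to_bytes(4, "big") + data
        self.next_seq += 1
        self.transport.send(frame)

    def on_frame(self, frame: bytes) -> None:
        seq = int.from_bytes(frame[:4], "big")
        self.pending[seq] = frame[4:]
        while self.expected in self.pending:   # release frames in order
            self.deliver(self.pending.pop(self.expected))
            self.expected += 1
```

A reliability layer (acknowledgements and retransmission) would compose the same way, which is what makes the cost of each guarantee measurable in isolation.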