Experimental Performance Evaluation of Cloud-Based Analytics-as-a-Service
An increasing number of Analytics-as-a-Service solutions have recently emerged
in the landscape of cloud-based services. These services allow flexible
composition of compute and storage components to create powerful
data ingestion and processing pipelines. This work is a first attempt at an
experimental evaluation of analytic application performance executed using a
wide range of storage service configurations. We present an intuitive notion of
data locality, which we use as a proxy to rank different service compositions in
terms of expected performance. Through an empirical analysis, we dissect the
performance achieved by analytic workloads and unveil problems due to the
impedance mismatch that arises in some configurations. Our work paves the way to
a better understanding of modern cloud-based analytic services and their
performance, both for their end users and their providers.
Comment: Longer version of the paper in submission at IEEE CLOUD'1
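The locality-based ranking described above can be sketched as a simple scoring function; a minimal illustration, in which the storage tiers, weights, and composition names are hypothetical and not taken from the paper:

```python
# Illustrative sketch: rank (compute, storage) service compositions by a
# coarse data-locality score. Higher locality -> higher expected performance.
# Tier names and weights below are invented for illustration only.
LOCALITY = {
    "local-disk": 3,    # data resides on the compute node itself
    "cluster-fs": 2,    # shared file system within the same cluster
    "object-store": 1,  # remote object storage accessed over the network
}

def locality_score(composition):
    """Return the locality rank of a (compute, storage) composition."""
    _, storage = composition
    return LOCALITY[storage]

compositions = [
    ("spark", "object-store"),
    ("spark", "local-disk"),
    ("spark", "cluster-fs"),
]

# Sort from highest to lowest expected performance.
ranked = sorted(compositions, key=locality_score, reverse=True)
print(ranked)
```

Such a score is only a proxy: the paper's empirical analysis is what validates (or refutes) the ranking for real workloads.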
Boosting Performance of Data-intensive Analysis Workflows with Distributed Coordinated Caching
Data-intensive end-user analyses in high energy physics require high data throughput to reach short turnaround cycles. This leads to enormous challenges for storage and network infrastructure, especially when facing the tremendously increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems into the traditional HEP computing facilities makes this situation more complex.
Bringing data close to the computing units is a promising approach to overcoming throughput limitations and improving overall performance. We focus on coordinated distributed caching, steering workflows to the hosts that are most suitable in terms of cached files. This allows optimizing the overall processing efficiency of data-intensive workflows and making efficient use of the limited cache volume by reducing the replication of data across distributed caches.
We developed the NaviX coordination service at KIT, which realizes coordinated distributed caching using an XRootD cache proxy server infrastructure and the HTCondor batch system. In this paper, we present the experience gained in operating coordinated distributed caches on cloud and HPC resources. Furthermore, we show benchmarks of a dedicated high-throughput cluster, the Throughput-Optimized Analysis-System (TOpAS), which is based on the above-mentioned concept.
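The cache-aware coordination idea above can be illustrated with a toy scheduler that sends each workflow to the host already caching the largest share of its input files; a hedged sketch, in which the host names, file names, and the `best_host` helper are invented for illustration and do not reflect NaviX's actual interfaces:

```python
# Sketch of cache-aware workflow coordination: pick the host whose cache
# overlaps most with a workflow's input files, so most reads hit the cache
# and data replication across caches is avoided. All names are illustrative.

def best_host(workflow_files, host_caches):
    """Return the host whose cached file set overlaps most with the inputs."""
    def overlap(host):
        return len(workflow_files & host_caches[host])
    return max(host_caches, key=overlap)

host_caches = {
    "worker-a": {"run1.root", "run2.root"},
    "worker-b": {"run2.root", "run3.root", "run4.root"},
}
workflow = {"run3.root", "run4.root", "run5.root"}

print(best_host(workflow, host_caches))  # worker-b caches 2 of the 3 inputs
```

A production coordinator must additionally weigh host load and cache capacity, but the overlap criterion captures the core placement decision.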
Storage Solutions for Big Data Systems: A Qualitative Study and Comparison
Big data systems development is full of challenges in view of the variety of
application areas and domains that this technology promises to serve.
Typically, fundamental design decisions involved in big data systems design
include choosing appropriate storage and computing infrastructures. In this age
of heterogeneous systems that integrate different technologies into an optimized
solution to a specific real-world problem, big data systems are no exception
to this rule. As far as the storage aspect of any big data system is
concerned, the primary facet is the storage infrastructure, and
NoSQL seems to be the right technology to fulfill its requirements. However,
every big data application has its own data characteristics and thus its
data fits a different data model. This paper presents a
feature and use-case analysis and comparison of the four main data models,
namely document-oriented, key-value, graph, and wide-column. Moreover, a feature
analysis of 80 NoSQL solutions has been provided, elaborating on the criteria
and points that a developer must consider while making a possible choice.
Typically, big data storage needs to communicate with the execution engine and
other processing and visualization technologies to create a comprehensive
solution. This brings the second facet of big data storage, big data file
formats, into the picture. The second half of the paper compares the
advantages, shortcomings and possible use cases of available big data file
formats for Hadoop, which is the foundation for most big data computing
technologies. Decentralized storage and blockchain are seen as the next
generation of big data storage; their challenges and future prospects are
also discussed.
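The four data models compared in the paper can be contrasted on a single toy record; the structures below are schematic Python stand-ins, not the API of any particular NoSQL product:

```python
# The same fact ("user Ada ordered product A1") expressed in each of the
# four NoSQL data models discussed above. Field and key names are invented.

# 1. Document-oriented: one self-contained, nested document per entity.
document = {"_id": "u42", "name": "Ada", "orders": [{"sku": "A1", "qty": 2}]}

# 2. Key-value: an opaque value addressed by a single key; the store
#    cannot query inside the value.
key_value = {"user:u42": '{"name": "Ada", "orders": ["A1"]}'}

# 3. Wide-column: rows keyed by id, with sparse per-row columns grouped
#    into families ("profile", "orders").
wide_column = {"u42": {"profile:name": "Ada", "orders:A1": 2}}

# 4. Graph: entities as nodes, relationships as first-class edges.
nodes = {"u42": {"name": "Ada"}, "A1": {"type": "product"}}
edges = [("u42", "ORDERED", "A1")]

# Each model can answer "what did u42 order?", but with very different
# access patterns -- which is exactly what drives the choice between them.
print(document["orders"][0]["sku"], wide_column["u42"]["orders:A1"])
```

The trade-off is visible even at this scale: documents and wide columns keep the order embedded with the user, while the graph model makes the relationship itself queryable.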
Data locality in Hadoop
Current market tendencies show the need to store and process rapidly
growing amounts of data, which implies a demand for distributed
storage and data-processing systems. Apache Hadoop is an open-source
framework for managing such computing clusters in an effective, fault-tolerant
way.
Dealing with large volumes of data, Hadoop, and its storage system HDFS
(Hadoop Distributed File System), face challenges to keep the high efficiency
with computing in a reasonable time. A typical Hadoop deployment
transfers computation to the data rather than shipping data across the cluster,
since moving large quantities of data through the network could significantly
delay data processing tasks. While a task is running,
Hadoop favours local data access and chooses blocks from the nearest nodes;
the necessary blocks are moved only when they are needed by the given
task.
To support Hadoop's data-locality preferences, this thesis proposes
adding a novel functionality to its distributed file system (HDFS) that
enables moving data blocks on request. In-advance shipping of data makes it
possible to forcibly redistribute data between nodes, adapting its placement to
the given processing tasks. The new functionality enables the instructed movement
of data blocks within the cluster. Data can be shifted either by a user running
the corresponding HDFS shell command or programmatically by another module, such as
an appropriate scheduler.
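The in-advance shipping described above can be sketched as a pure-Python simulation of a cluster block map; note that `move_block`, `prefetch_for_task`, and the block map are hypothetical illustrations of the proposed functionality, since stock HDFS offers no such user-facing block-movement call:

```python
# Simulation of the proposed on-request block movement: before a task
# starts, every block it needs is shipped to its node, so all reads become
# node-local. The block map {block_id: node} stands in for HDFS metadata.

def move_block(block_map, block_id, target_node):
    """Relocate one block in the simulated cluster block map."""
    block_map[block_id] = target_node

def prefetch_for_task(block_map, task_blocks, task_node):
    """In-advance shipping: move each needed block to the task's node."""
    for blk in task_blocks:
        if block_map.get(blk) != task_node:
            move_block(block_map, blk, task_node)

blocks = {"blk_001": "node-1", "blk_002": "node-3"}
prefetch_for_task(blocks, ["blk_001", "blk_002"], "node-2")
print(blocks)  # both blocks now reside on node-2
```

In the thesis's design, the same operation would be triggered either by an HDFS shell command or by a scheduler module calling the new HDFS interface directly.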
In order to develop this functionality, a detailed analysis of the Apache Hadoop
source code and its components (specifically HDFS) was conducted. The research
resulted in a deep understanding of the internal architecture, which made it
possible to compare the candidate approaches to the desired solution and to
develop the chosen one.