Performance Characterization of In-Memory Data Analytics on a Modern Cloud Server
In the last decade, data analytics has rapidly progressed from traditional disk-based processing to modern in-memory processing. However, little effort has been devoted to enhancing performance at the micro-architecture level. This paper characterizes the performance of in-memory data analytics using the Apache Spark framework. We use a single-node NUMA machine and identify the bottlenecks hampering the scalability of workloads. We also quantify the inefficiencies at the micro-architecture level for various data analysis workloads. Through empirical evaluation, we show that Spark workloads do not scale linearly beyond twelve threads, due to work time inflation and thread-level load imbalance. Further, at the micro-architecture level, we observe memory-bound latency to be the major cause of work time inflation.

Comment: Accepted to the 5th IEEE International Conference on Big Data and Cloud Computing (BDCloud 2015).
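A minimal Python sketch of the kind of thread-scaling experiment the abstract describes, assuming PySpark is available: it runs a toy in-memory aggregation with varying numbers of local executor threads and reports speedup and a rough work-time-inflation estimate. The workload, thread counts, and metric definitions are illustrative assumptions, not the paper's methodology.

```python
# Sketch only: measure wall time of a toy Spark aggregation at several thread
# counts and derive speedup plus a crude work-time-inflation estimate.
import time
from pyspark.sql import SparkSession

def run_workload(n_threads: int) -> float:
    """Run a simple aggregation with n_threads local executor threads; return wall time."""
    spark = (SparkSession.builder
             .master(f"local[{n_threads}]")
             .appName("scaling-probe")
             .getOrCreate())
    start = time.perf_counter()
    # Toy stand-in for an in-memory analytics workload.
    spark.range(0, 50_000_000).selectExpr("id % 1024 as k", "id as v") \
         .groupBy("k").sum("v").collect()
    elapsed = time.perf_counter() - start
    spark.stop()
    return elapsed

if __name__ == "__main__":
    baseline = run_workload(1)
    for n in (2, 4, 8, 12, 16, 24):
        t = run_workload(n)
        speedup = baseline / t
        # Crude work-time-inflation proxy: aggregate thread time vs. the serial run.
        inflation = (t * n) / baseline
        print(f"threads={n:2d}  time={t:6.1f}s  speedup={speedup:4.1f}x  inflation={inflation:4.1f}x")
```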
Characterizing Scalability of Sparse Matrix–Vector Multiplications on Phytium FT-2000+
Understanding the scalability of parallel programs is crucial for software optimization and hardware architecture design. As HPC hardware moves towards many-core designs, it becomes increasingly difficult for a parallel program to make effective use of all available processor cores, which makes scalability analysis increasingly important. This paper presents a quantitative study characterizing the scalability of sparse matrix–vector multiplication (SpMV) on Phytium FT-2000+, an ARM-based HPC many-core architecture. We choose SpMV as it is a common operation in scientific and HPC applications. Due to the newness of ARM-based many-core architectures, there is little work on understanding SpMV scalability on such hardware designs. To close the gap, we carry out a large-scale empirical evaluation involving over 1,000 representative SpMV datasets. We show that, while many computation-intensive SpMV applications contain extensive parallelism, achieving a linear speedup is non-trivial on Phytium FT-2000+. To better understand which software and hardware parameters are most important for determining the scalability of a given SpMV kernel, we develop a performance analytical model based on regression trees. We show that our model is highly effective in characterizing SpMV scalability, offering useful insights to help application developers better optimize SpMV on an emerging HPC architecture.
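A hedged Python sketch of the general idea of modeling SpMV scalability with a regression tree: simple structural features of a CSR matrix are used to predict a measured speedup. The feature set, synthetic matrices, placeholder measurements, and tree settings are assumptions for illustration; the paper's actual model and training data may differ.

```python
# Sketch: predict SpMV speedup from matrix structure with a regression tree.
import numpy as np
import scipy.sparse as sp
from sklearn.tree import DecisionTreeRegressor

def spmv_features(A: sp.csr_matrix) -> list:
    """Cheap structural features of a CSR matrix that plausibly affect SpMV scaling."""
    row_nnz = np.diff(A.indptr)
    return [A.shape[0], A.nnz, row_nnz.mean(), row_nnz.std(), row_nnz.max()]

# Hypothetical training set: matrices paired with speedups measured elsewhere
# (e.g. many-thread vs. single-thread SpMV time on the target machine).
matrices = [sp.random(2000, 2000, density=d, format="csr", random_state=1)
            for d in (0.001, 0.01, 0.05)]
measured_speedups = [3.1, 9.4, 14.2]          # placeholder measurements, not real data

X = np.array([spmv_features(A) for A in matrices])
y = np.array(measured_speedups)

model = DecisionTreeRegressor(max_depth=4).fit(X, y)

# Predict scalability of an unseen matrix from its structure alone.
A_new = sp.random(2000, 2000, density=0.02, format="csr", random_state=2)
print("predicted speedup:", model.predict([spmv_features(A_new)])[0])
```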
VIoLET: A Large-scale Virtual Environment for Internet of Things
IoT deployments have been growing manifold, encompassing sensors, networks,
edge, fog and cloud resources. Despite the intense interest from researchers
and practitioners, most do not have access to large-scale IoT testbeds for
validation. Simulation environments that allow analytical modeling are a poor
substitute for evaluating software platforms or application workloads in
realistic computing environments. Here, we propose VIoLET, a virtual
environment for defining and launching large-scale IoT deployments within cloud
VMs. It offers a declarative model to specify container-based compute resources
that match the performance of the native edge, fog and cloud devices using
Docker. These can be interconnected by complex topologies on which private/public networks and bandwidth and latency rules are enforced. Users
can configure synthetic sensors for data generation on these devices as well.
We validate VIoLET for deployments with > 400 devices and > 1500 device-cores,
and show that the virtual IoT environment closely matches the expected compute
and network performance at modest costs. This fills an important gap between
IoT simulators and real deployments.

Comment: To appear in the Proceedings of the 24th International European Conference on Parallel and Distributed Computing (EURO-PAR), August 27-31, 2018, Turin, Italy, europar2018.org. Selected as a Distinguished Paper for presentation at the Plenary Session of the conference.
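A minimal, hypothetical Python sketch of the general approach the VIoLET abstract describes: declare container-backed "devices" and link rules, launch them with Docker, and shape the links with tc/netem. The spec format, image names, resource limits, and commands below are illustrative assumptions, not VIoLET's actual API or file format.

```python
# Sketch: launch container "devices" and enforce per-link bandwidth/latency rules.
import docker

TOPOLOGY = {
    "devices": [
        {"name": "edge-1", "image": "ubuntu:22.04", "cpus": 1.0, "mem": "512m"},
        {"name": "fog-1",  "image": "ubuntu:22.04", "cpus": 4.0, "mem": "4g"},
    ],
    "links": [
        # Shape the edge device's uplink to emulate a constrained network path.
        {"from": "edge-1", "to": "fog-1", "bandwidth": "10mbit", "latency": "20ms"},
    ],
}

client = docker.from_env()
containers = {}
for dev in TOPOLOGY["devices"]:
    containers[dev["name"]] = client.containers.run(
        dev["image"], command="sleep infinity", detach=True,
        name=dev["name"], nano_cpus=int(dev["cpus"] * 1e9), mem_limit=dev["mem"],
        cap_add=["NET_ADMIN"],  # required so tc can be run inside the container
    )

for link in TOPOLOGY["links"]:
    # Apply bandwidth and latency limits on the source container's interface.
    # Assumes the image has iproute2 (tc) installed.
    cmd = (f"tc qdisc add dev eth0 root netem "
           f"rate {link['bandwidth']} delay {link['latency']}")
    containers[link["from"]].exec_run(cmd)
```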
Parallel Toolkit for Measuring the Quality of Network Community Structure
Many networks display community structure which identifies groups of nodes
within which connections are denser than between them. Detecting and
characterizing such community structure, which is known as community detection,
is one of the fundamental issues in the study of network systems. It has
received considerable attention in recent years. Numerous techniques have
been developed for both efficient and effective community detection. Among
them, the most efficient algorithm is the label propagation algorithm whose
computational complexity is O(|E|). Although it is linear in the number of
edges, the running time is still too long for very large networks, creating the
need for parallel community detection. Also, computing community quality
metrics for community structure is computationally expensive both with and
without ground truth. However, to date we are not aware of any effort to
introduce parallelism for this problem. In this paper, we provide a parallel
toolkit to calculate the values of such metrics. We evaluate the parallel
algorithms on both distributed-memory and shared-memory machines. The
experimental results show that they yield a significant performance gain over
sequential execution in terms of total running time, speedup, and efficiency.

Comment: 8 pages; in the 2014 European Network Intelligence Conference (ENIC).
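Since the abstract singles out label propagation as the most efficient detection algorithm, here is a minimal sequential Python sketch of it (each node repeatedly adopts the most frequent label among its neighbours), included only to illustrate why its cost is O(|E|) per sweep; the paper's toolkit parallelizes the quality metrics, not this step.

```python
# Sketch of the label propagation algorithm for community detection.
import random
from collections import Counter

def label_propagation(adj: dict, max_sweeps: int = 20) -> dict:
    """adj maps node -> list of neighbours; returns node -> community label."""
    labels = {v: v for v in adj}            # start with every node in its own community
    nodes = list(adj)
    for _ in range(max_sweeps):
        random.shuffle(nodes)               # random visiting order, as in the original algorithm
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])   # O(deg(v)) work per node
            best = max(counts.values())
            choice = random.choice([l for l, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:                     # converged: every node keeps its dominant label
            break
    return labels

# Tiny example: two triangles joined by a single edge.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(graph))
```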
On-Device Deep Learning Inference for System-on-Chip (SoC) Architectures
As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the “edge”. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove to be critical as these technologies move to the edge. In order to address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These types of mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency, deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
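A hedged Python sketch of the sort of schedulability analysis mentioned in the abstract, using textbook fixed-priority (rate-monotonic) response-time analysis on a randomly generated task set. This is a standard technique shown for illustration, not the paper's actual resource-management framework; all task parameters are assumptions.

```python
# Sketch: fixed-priority response-time analysis of a random periodic task set.
import math
import random

def response_time_schedulable(tasks: list) -> bool:
    """tasks = [(C, T), ...] with C = worst-case execution time, T = period (= deadline).
    Priorities are rate monotonic: shorter period -> higher priority."""
    tasks = sorted(tasks, key=lambda ct: ct[1])       # highest priority first
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            # Interference from all higher-priority tasks within the window R.
            interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next > T_i:
                return False                          # task i misses its deadline
            if R_next == R:
                break                                 # fixed point reached
            R = R_next
    return True

# Randomly generated task set with total utilization around 0.7.
random.seed(42)
periods = [random.choice([10, 20, 40, 50, 100]) for _ in range(5)]
task_set = [(0.14 * T, T) for T in periods]           # each task uses ~14% of its period
print("schedulable:", response_time_schedulable(task_set))
```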