Model-driven Scheduling for Distributed Stream Processing Systems
Distributed stream processing frameworks are increasingly used with the growth of the Internet of Things (IoT). These frameworks are designed to adapt to dynamic input message rates by scaling in and out. Apache Storm, originally developed at Twitter, is a widely used stream processing engine; others include Flink and Spark Streaming. To run streaming applications successfully, the optimal resource requirement must be known, since over-estimating resources adds unnecessary cost. We therefore need a strategy for determining the optimal resource requirement for a given streaming application. In this article, we propose a model-driven approach for scheduling streaming applications that effectively utilizes a priori knowledge of the applications to provide predictable scheduling behavior. Specifically, we use application performance models to offer reliable estimates of the required resource allocation. This intuition also drives resource mapping, and helps narrow the gap between the estimated and actual dataflow performance and resource utilization. Together, this model-driven scheduling approach gives predictable application performance and resource utilization behavior for executing a given DSPS application at a target input stream rate on distributed resources.
Comment: 54 pages
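The core estimation idea in the abstract, using per-operator performance models to predict resource allocation for a target input rate, can be sketched as follows. The stage names, per-slot peak rates, and selectivities are illustrative assumptions, not figures from the paper:

```python
from math import ceil

def required_slots(stages, input_rate):
    """Estimate resource slots per dataflow stage from per-slot
    performance models (a hypothetical sketch of the idea only).
    stages: ordered list of (name, peak_rate_per_slot, selectivity)."""
    plan, rate = {}, input_rate
    for name, peak, selectivity in stages:
        plan[name] = ceil(rate / peak)   # slots needed to sustain incoming rate
        rate *= selectivity              # output rate fed to the next stage
    return plan

# e.g. a parse -> filter -> sink pipeline at 4,000 msgs/sec
plan = required_slots(
    [("parse", 1000, 1.0), ("filter", 2000, 0.5), ("sink", 500, 1.0)], 4000)
```

Such a model gives the predictable allocation the abstract describes: the slot count follows directly from the target rate rather than from reactive trial-and-error scaling.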
Smart handoff technique for Internet of Vehicles communication using dynamic edge-backup node
© 2020 The Authors. Published by MDPI. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher's website: https://doi.org/10.3390/electronics9030524
Vehicular ad hoc networks (VANETs) have recently emerged within the Internet of Vehicles (IoV), which involves computational processing on moving vehicles. IoV has become an active field of research, as vehicles can be equipped with processors, sensors, and communication devices. IoV gives rise to handoff, the changing of connection points during an online communication session. This presents a major challenge for which many standardized solutions have been recommended. Although various techniques and methods have been proposed to support a seamless handover procedure in IoV, open research issues remain, such as unavoidable packet loss and latency. Meanwhile, the emerging concept of mobile edge computing has gained considerable attention from researchers, as it can reduce computational complexity and communication delay. Hence, this paper specifically studies the handoff challenges in cluster-based handoff using the new concept of a dynamic edge-backup node. The outcomes of the proposed technique are evaluated and contrasted with the network mobility method and other cluster-based technologies. The results show that coherence in communication during handoff can be improved using the proposed technique.
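One way to picture the edge-backup idea from the abstract is a node that buffers a vehicle's packets during handoff and flushes them in order once the session reattaches. This is a minimal sketch of that buffering behavior under assumed names, not the paper's actual protocol:

```python
class EdgeBackupNode:
    """Hypothetical sketch: a backup edge node holds a vehicle's packets
    while its connection point changes, so the session resumes without loss."""

    def __init__(self):
        self.buffer = []
        self.in_handoff = False

    def begin_handoff(self):
        self.in_handoff = True

    def forward(self, packet, deliver):
        if self.in_handoff:
            self.buffer.append(packet)   # hold until reattachment
        else:
            deliver(packet)              # normal path: deliver immediately

    def complete_handoff(self, deliver):
        self.in_handoff = False
        for p in self.buffer:            # flush buffered packets in order
            deliver(p)
        self.buffer.clear()
```

In this toy model, packets sent mid-handoff are queued rather than dropped, which is the mechanism the abstract credits for reducing the packet loss rate.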
Dynamic re-optimization techniques for stream processing engines and object stores
Large-scale data storage and processing systems are strongly motivated by the need to store and analyze massive datasets. The complexity of a large class of these systems is rooted in their distributed nature, extreme scale, need for real-time response, and streaming nature. The use of these systems in multi-tenant cloud environments with potential resource interference necessitates fine-grained monitoring and control. In this dissertation, we present efficient, dynamic techniques for re-optimizing stream-processing systems and transactional object-storage systems.
In the context of stream-processing systems, we present VAYU, a per-topology controller. VAYU uses novel methods and protocols for dynamic, network-aware tuple routing in the dataflow. We show that the feedback-driven controller in VAYU helps achieve high pipeline throughput over long execution periods, as it dynamically detects and diagnoses any pipeline bottlenecks. We also present novel heuristics to optimize overlays for group communication operations in the streaming model.
In the context of object-storage systems, we present M-Lock, a novel lock-localization service for distributed transaction protocols on scale-out object stores that increases transaction throughput. Lock localization refers to the dynamic migration and partitioning of locks across nodes in the scale-out store to reduce cross-partition acquisition of locks. The service leverages observed object-access patterns to achieve lock clustering and deliver high performance. We also present TransMR, a framework that uses distributed, transactional object stores to orchestrate and execute asynchronous components in amorphous data-parallel applications on scale-out architectures.
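The lock-localization idea described above, migrating each lock's home to the partition that acquires it most often, can be sketched as follows. The class, counting policy, and rebalance trigger are illustrative assumptions, not M-Lock's actual implementation:

```python
from collections import Counter, defaultdict

class LockLocalizer:
    """Hypothetical sketch of lock localization: track which partition
    acquires each lock most often, and migrate the lock's home there
    to cut cross-partition lock traffic."""

    def __init__(self):
        self.home = {}                        # lock -> current home partition
        self.acquires = defaultdict(Counter)  # lock -> per-partition counts

    def acquire(self, lock, partition):
        self.home.setdefault(lock, partition)
        self.acquires[lock][partition] += 1
        return self.home[lock] == partition   # True means a local (cheap) acquisition

    def rebalance(self):
        # migrate each lock to its most frequent acquirer
        for lock, counts in self.acquires.items():
            self.home[lock] = counts.most_common(1)[0][0]
```

After a rebalance, the dominant acquirer's accesses become local, which is the clustering effect the abstract credits for the throughput increase.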
Effective Cache Apportioning for Performance Isolation Under Compiler Guidance
With a growing number of cores in modern high-performance servers, effective sharing of the last-level cache (LLC) is more critical than ever. The primary goal of such systems is to maximize performance by efficiently supporting multi-tenancy of diverse workloads. However, this can be particularly challenging to achieve in practice, because modern workloads exhibit dynamic phase behavior, which causes their cache requirements and sensitivities to vary at fine granularities during execution. Unfortunately, existing systems are oblivious to application phase behavior, and are unable to detect and react quickly enough to these rapidly changing cache requirements, often incurring significant performance degradation. In this paper, we propose Com-CAS, a new apportioning system that provides dynamic cache allocations for co-executing applications. Com-CAS differs from existing cache partitioning systems by adapting to the dynamic cache requirements of applications just-in-time, as opposed to reacting after the fact, and without any hardware modifications. The front-end of Com-CAS consists of compiler analysis equipped with machine learning mechanisms to predict cache requirements, while the back-end consists of a proactive scheduler that dynamically apportions the LLC amongst co-executing applications leveraging Intel Cache Allocation Technology (CAT). Com-CAS's partitioning scheme utilizes the compiler-generated information at fine granularities to predict rapidly changing dynamic application behaviors, while simultaneously maintaining data locality. Our experiments show that Com-CAS improves average weighted throughput by 15% over an unpartitioned cache system, and outperforms the state-of-the-art partitioning system KPart by 20%, while bounding the worst individual application completion-time degradation to meet various Service-Level Agreement (SLA) requirements.
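Intel CAT partitions the LLC into ways, so the scheduler's apportioning step amounts to splitting a fixed number of ways among co-running applications according to predicted demand. A minimal sketch of one such allocation policy, proportional shares with a one-way floor and largest-remainder rounding, is shown below; the policy and numbers are assumptions for illustration, not Com-CAS's actual scheme:

```python
from math import floor

def apportion_ways(total_ways, demands):
    """Split LLC ways among apps in proportion to predicted cache demand
    (hypothetical policy sketch). Each app gets at least one way, and the
    leftover ways go to the largest fractional remainders."""
    total_demand = sum(demands.values())
    shares = {a: total_ways * d / total_demand for a, d in demands.items()}
    alloc = {a: max(1, floor(s)) for a, s in shares.items()}
    leftover = total_ways - sum(alloc.values())
    # hand out any remaining ways by largest fractional remainder
    for a in sorted(shares, key=lambda a: shares[a] - floor(shares[a]),
                    reverse=True):
        if leftover <= 0:
            break
        alloc[a] += 1
        leftover -= 1
    return alloc

# e.g. 11 CAT ways shared by three apps with predicted demands 8:2:1
alloc = apportion_ways(11, {"A": 8, "B": 2, "C": 1})
```

On Linux, an allocation like this would typically be applied by writing cache-bit-mask schemata under the resctrl filesystem; re-running the split as predicted demands change is what makes the apportioning proactive rather than reactive.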