An efficient resource sharing technique for multi-tenant databases
Multi-tenancy is one of the key components of the cloud computing environment. Multi-tenant database systems in SaaS (Software as a Service) have gained considerable attention in academia, research and business. These database systems provide scalability and economic benefits for both cloud service providers and customers (organizations/companies referred to as tenants) by sharing the same resources and infrastructure, in isolation, across shared databases, network and computing resources, subject to Service Level Agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant blocks resources, the performance of all the other tenants may be degraded and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to achieve a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective, dynamic resource sharing algorithms that can handle the above-mentioned issues. This work presents a model embracing a query classification and worker sorting technique to efficiently share I/O, CPU and memory, thus enhancing dynamic resource sharing and improving the utilization of idle instances. The model is referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM). The MTDRSM supports workload execution of different benchmarks, such as TPC-C (Transaction Processing Performance Council) and YCSB (the Yahoo! Cloud Serving Benchmark), on different databases, such as MySQL, Oracle and the H2 database. Experiments are conducted for different benchmarks, with and without SLA compliance, to evaluate the performance of the MTDRSM in terms of latency and throughput.
The experiments show significant performance improvements over the existing Mute Bench model in terms of latency and throughput.
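The abstract's core mechanism, classifying queries and sorting workers by load before assignment, can be illustrated with a minimal sketch. This is not the paper's MTDRSM implementation; the cost threshold, the heavy-query weighting, and all names (`classify`, `schedule`, `Worker`) are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Worker:
    load: float
    name: str = field(compare=False)

def classify(query_cost: float, threshold: float = 10.0) -> str:
    """Classify a tenant query as 'light' or 'heavy' by estimated cost."""
    return "heavy" if query_cost >= threshold else "light"

def schedule(queries, workers):
    """Assign each query to the least-loaded worker (a min-heap keeps
    workers sorted by load); heavy queries are weighted so a single
    tenant cannot monopolize shared resources."""
    heap = [Worker(w.load, w.name) for w in workers]
    heapq.heapify(heap)
    assignment = {}
    for qid, cost in queries:
        w = heapq.heappop(heap)          # least-loaded (idle) worker first
        assignment[qid] = w.name
        weight = 2.0 if classify(cost) == "heavy" else 1.0
        w.load += cost * weight
        heapq.heappush(heap, w)
    return assignment
```

Under this toy model, a heavy query lands on the idler worker while light queries keep flowing to the other, which is the fair-sharing behavior the abstract describes.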
Optimal Distributed Scheduling in Wireless Networks under the SINR interference model
Radio resource sharing mechanisms are key to ensuring good performance in
wireless networks. In their seminal paper \cite{tassiulas1}, Tassiulas and
Ephremides introduced the Maximum Weighted Scheduling algorithm, and proved its
throughput-optimality. Since then, there have been extensive research efforts
to devise distributed implementations of this algorithm. Recently, distributed
adaptive CSMA scheduling schemes \cite{jiang08} have been proposed and shown to
be optimal, without the need for message passing among transmitters. However,
their analysis relies on the assumption that interference can be accurately
modelled by a simple interference graph. In this paper, we consider the more
realistic and challenging SINR interference model. We present {\it the first
distributed scheduling algorithms that (i) are optimal under the SINR
interference model, and (ii) that do not require any message passing}. They are
based on a combination of a simple and efficient power allocation strategy
referred to as {\it Power Packing} and randomization techniques. We first
devise algorithms that are rate-optimal in the sense that they perform as well
as the best centralized scheduling schemes in scenarios where each transmitter
is aware of the rate at which it should send packets to the corresponding
receiver. We then extend these algorithms so that they reach
throughput-optimality.
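The SINR interference model the abstract adopts can be made concrete with a short feasibility check: a set of simultaneous transmissions is admissible only if every active link's signal-to-interference-plus-noise ratio clears a threshold. This is a generic textbook-style sketch, not the paper's Power Packing algorithm; the function names and the threshold `beta` are illustrative.

```python
def sinr(gains, powers, noise, i):
    """SINR at receiver i: G[i][i]*P[i] / (noise + sum_{j != i} G[i][j]*P[j])."""
    signal = gains[i][i] * powers[i]
    interference = sum(gains[i][j] * powers[j]
                       for j in range(len(powers)) if j != i and powers[j] > 0)
    return signal / (noise + interference)

def feasible(gains, powers, noise, beta):
    """A power allocation is feasible under the SINR model if every
    active link (power > 0) meets the SINR threshold beta."""
    return all(sinr(gains, powers, noise, i) >= beta
               for i in range(len(powers)) if powers[i] > 0)
```

Note the contrast with an interference graph: here admissibility depends on the aggregate interference from *all* concurrent transmitters, not just on pairwise conflicts, which is why graph-based analyses can mispredict feasibility.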
FaST-GShare: Enabling Efficient Spatio-Temporal GPU Sharing in Serverless Computing for Deep Learning Inference
Serverless computing (FaaS) has been extensively utilized for deep learning
(DL) inference due to the ease of deployment and pay-per-use benefits. However,
existing FaaS platforms utilize GPUs in a coarse manner for DL inferences,
without taking into account spatio-temporal resource multiplexing and
isolation, which results in severe GPU under-utilization, high usage expenses,
and SLO (Service Level Objectives) violation. There is an imperative need to
enable an efficient and SLO-aware GPU-sharing mechanism in serverless computing
to facilitate cost-effective DL inferences. In this paper, we propose
\textbf{FaST-GShare}, an efficient \textit{\textbf{Fa}aS-oriented
\textbf{S}patio-\textbf{T}emporal \textbf{G}PU \textbf{Sharing}} architecture
for deep learning inferences. In the architecture, we introduce the
FaST-Manager to limit and isolate spatio-temporal resources for GPU
multiplexing. To capture function performance, the automatic and
flexible FaST-Profiler is proposed to profile function throughput under various
resource allocations. Based on the profiling data and the isolation mechanism,
we introduce the FaST-Scheduler with heuristic auto-scaling and efficient
resource allocation to guarantee function SLOs. Meanwhile, FaST-Scheduler
schedules functions with efficient GPU node selection to maximize GPU usage.
Furthermore, model sharing is exploited to mitigate memory contention. Our
prototype implementation on the OpenFaaS platform and experiments on
MLPerf-based benchmark prove that FaST-GShare can ensure resource isolation and
function SLOs. Compared to the time sharing mechanism, FaST-GShare can improve
throughput by 3.15x, GPU utilization by 1.34x, and SM (Streaming
Multiprocessor) occupancy by 3.13x on average.
Comment: The paper has been accepted by ACM ICPP 202
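The profiling-then-scheduling loop the abstract describes (FaST-Profiler measures throughput per resource allocation; FaST-Scheduler allocates just enough GPU share to meet each function's SLO) can be sketched as follows. This is a simplified illustration, not FaST-GShare's actual scheduler; the profile tables, function names, and greedy packing policy are all assumptions.

```python
def min_allocation(profile, required_rps):
    """profile: {gpu_fraction: profiled_throughput}. Return the smallest
    spatial GPU share whose measured throughput meets the SLO, or None."""
    for frac in sorted(profile):
        if profile[frac] >= required_rps:
            return frac
    return None

def pack_functions(demands, profiles, capacity=1.0):
    """Greedily pack functions onto one GPU using each function's
    minimal SLO-satisfying share, to raise overall GPU utilization."""
    used, placed = 0.0, {}
    for fn, rps in sorted(demands.items()):
        frac = min_allocation(profiles[fn], rps)
        if frac is not None and used + frac <= capacity:
            placed[fn] = frac
            used += frac
    return placed
```

The design point this illustrates: with per-allocation profiles, two inference functions that each need only half a GPU can share one device instead of occupying two, which is where the utilization gains come from.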
A Novel Approach for Creating Consistent Trust and Cooperation (CTC) among Mobile Nodes of Ad Hoc Network
The final publication is available at www.springerlink.com
This paper provides a critical analysis of recent research work and its impact on the overall performance of a mobile ad hoc network. In this paper, we clarify some of the misconceptions in the understanding of selfishness and misbehavior of nodes. Moreover, we propose a mathematical model, based on a time-division technique, that minimizes node misbehavior by avoiding the unnecessary elimination of bad nodes. Our proposed approach not only improves resource sharing but also creates a consistent trust and cooperation (CTC) environment among the mobile nodes. We believe that the proposed mathematical model not only points out the weaknesses of recent research work but also approximates the optimal values of critical parameters such as throughput, transmission overhead and channel capacity. The simulation results demonstrate the success of the proposed approach, which significantly reduces the number of malicious nodes and consequently maximizes the overall throughput of the ad hoc network compared with other well-known schemes.
http://link.springer.com/chapter/10.1007/978-1-4020-8737-0_9
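The idea of avoiding the unnecessary elimination of nodes by judging them over successive time slots can be sketched with a simple trust-update loop. This is not the paper's mathematical model; the EMA update, the threshold, and the grace period are all hypothetical choices made for illustration.

```python
def update_trust(trust, cooperated, alpha=0.3):
    """Exponential moving average of observed cooperation (1) vs. defection (0)."""
    return (1 - alpha) * trust + alpha * (1.0 if cooperated else 0.0)

def evaluate(observations, threshold=0.4, grace_slots=3):
    """Judge a node per time slot; eliminate it only if its trust stays
    below the threshold for grace_slots consecutive slots, so a node
    that misbehaved once (e.g. due to congestion) is not discarded."""
    trust, below = 0.5, 0
    for cooperated in observations:
        trust = update_trust(trust, cooperated)
        below = below + 1 if trust < threshold else 0
        if below >= grace_slots:
            return trust, False     # consistently selfish: eliminate
    return trust, True              # keep cooperating with this node
```

The grace period is what distinguishes a genuinely selfish node (sustained low trust) from a well-behaved node with a transient failure, which is the misconception the abstract says earlier schemes fall into.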
SYNPA: SMT Performance Analysis and Allocation of Threads to Cores in ARM Processors
Simultaneous multithreading processors improve throughput over
single-threaded processors thanks to sharing internal core resources among
instructions from distinct threads. However, resource sharing introduces
inter-thread interference within the core, which has a negative impact on
individual application performance and can significantly increase the
turnaround time of multi-program workloads. The severity of the interference
effects depends on the competing co-runners sharing the core. Thus, it can be
mitigated by applying a thread-to-core allocation policy that smartly selects
applications to be run in the same core to minimize their interference.
This paper presents SYNPA, a simple approach that dynamically allocates
threads to cores in an SMT processor based on their run-time dynamic behavior.
The approach uses a regression model to select synergistic pairs to mitigate
intra-core interference. The main novelty of SYNPA is that it uses just three
variables collected from the performance counters available in current ARM
processors at the dispatch stage. Experimental results show that SYNPA
outperforms the default Linux scheduler by around 36%, on average, in terms of
turnaround time in 8-application workloads combining frontend-bound and
backend-bound benchmarks.
Comment: 11 pages, 9 figures
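The core of the approach, predicting inter-thread interference from a few performance counters and then choosing the thread-to-core pairing that minimizes it, can be sketched as below. The linear-in-products model, the weights, and the counter choices are invented for illustration and are not SYNPA's actual regression model.

```python
def predicted_slowdown(a, b, weights=(0.5, 0.3, 0.2)):
    """Hypothetical regression model over three counter values per thread:
    contention is high when both co-runners stress the same resource,
    so each term multiplies the two threads' counter readings."""
    return sum(w * x * y for w, x, y in zip(weights, a, b))

def matchings(names):
    """All ways to split an even-sized list of thread names into pairs."""
    if not names:
        yield []
        return
    first, rest = names[0], names[1:]
    for i, other in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for m in matchings(remaining):
            yield [(first, other)] + m

def pair_threads(counters):
    """Exhaustively pick the pairing with minimal predicted interference
    (fine for small workloads; a real scheduler would pair greedily)."""
    names = sorted(counters)
    return min(matchings(names),
               key=lambda m: sum(predicted_slowdown(counters[a], counters[b])
                                 for a, b in m))
```

With two frontend-bound and two backend-bound threads, the minimizer pairs one of each per core, the synergistic pairing the abstract refers to, rather than co-locating threads that fight over the same pipeline resources.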
Controlled Matching Game for Resource Allocation and User Association in WLANs
In multi-rate IEEE 802.11 WLANs, the traditional user association based on
the strongest received signal and the well known anomaly of the MAC protocol
can lead to overloaded Access Points (APs), and poor or heterogeneous
performance. Our goal is to propose an alternative game-theoretic approach for
association. We model the joint resource allocation and user association as a
matching game with complementarities and peer effects consisting of selfish
players solely interested in their individual throughputs. Using recent
game-theoretic results we first show that various resource sharing protocols
actually fall in the scope of the set of stability-inducing resource allocation
schemes. The game makes an extensive use of the Nash bargaining and some of its
related properties that allow us to control the incentives of the players. We show
that the proposed mechanism can greatly improve the efficiency of 802.11 with
heterogeneous nodes and reduce the negative impact of peer effects such as its
MAC anomaly. The mechanism can be implemented as a virtual connectivity
management layer to achieve efficient APs-user associations without
modification of the MAC layer.
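The MAC anomaly and the Nash-bargaining remedy mentioned in the abstract admit a compact numerical illustration for a single AP. This is a minimal two-allocation comparison, not the paper's matching game with peer effects; the per-station rates are arbitrary examples.

```python
def anomaly_throughputs(rates):
    """802.11 MAC anomaly: equal frame-transmission opportunities give
    airtime proportional to 1/rate, so every station ends up with the
    same throughput, dragged down by the slowest station."""
    per_station = 1.0 / sum(1.0 / r for r in rates)
    return [per_station] * len(rates)

def nash_bargaining_throughputs(rates):
    """Maximizing the Nash product prod(r_i * t_i) subject to
    sum(t_i) = 1 (disagreement point 0) yields equal airtime shares
    t_i = 1/n, i.e. proportional fairness in time rather than frames."""
    n = len(rates)
    return [r / n for r in rates]
```

For rates of 54 and 6 Mb/s, the anomaly gives both stations 5.4 Mb/s (10.8 total), while equal-airtime bargaining gives 27 and 3 Mb/s (30 total): the fast station is no longer penalized by the slow one, which is the efficiency gain the abstract claims.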