39 research outputs found
Optimizing datacenter power with memory system levers for guaranteed quality-of-service
Pre-print. Co-location of applications is a proven technique to improve hardware utilization. Recent advances in virtualization have made co-location of independent applications on shared hardware a common scenario in datacenters. Co-locating applications while maintaining Quality-of-Service (QoS) for each is a complex problem that is fast gaining relevance for these datacenters. The problem is exacerbated by the need for effective resource utilization at datacenter scales. In this work, we show that the memory system is a primary bottleneck in many workloads and is a more effective focal point for enforcing QoS. We examine four different memory system levers to enforce QoS: two that have been previously proposed, and two novel levers. We compare the effectiveness of each lever in minimizing power and resource needs while enforcing QoS guarantees. We also evaluate the effectiveness of combining various levers and show that this combined approach can yield power reductions of up to 28%.
Power Management Techniques for Data Centers: A Survey
With the growing use of the internet and the exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in the power consumption of data centers. For this reason, managing the power consumption of data centers has become essential. In this paper, we highlight the need for achieving energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into the techniques for improving energy efficiency of data centers and to encourage designers to invent novel solutions for managing the large power dissipation of data centers.
Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation
Avoiding Information Leakage in the Memory Controller with Fixed Service Policies
Trusted applications frequently execute in tandem with untrusted applications on personal devices and in cloud environments. Since these co-scheduled applications share hardware resources, the latencies encountered by the untrusted application betray information about whether the trusted applications are accessing shared resources or not. Prior studies have shown that such information leaks can be used by the untrusted application to decipher keys or launch covert-channel attacks. Prior work has also proposed techniques to eliminate information leakage in various shared resources. However, the best known solution to eliminate information leakage in the memory system incurs high performance penalties. This work develops a comprehensive approach to eliminate timing channels in the memory controller that has two key elements: (i) We shape the memory access behavior of every thread so that every thread appears identical to the memory system and to potential attackers. (ii) We show how efficient memory access pipelines can be constructed to process the resulting memory accesses without introducing any resource conflicts. We mathematically show that the proposed system yields zero information leakage. We then show that various page mapping policies can impact the throughput of our secure memory system. We also introduce techniques to re-order requests from different threads to boost performance without leaking information. Our best solution offers throughput that is 26% lower than that of an optimized non-secure baseline, and that is 70% higher than the best known competing scheme.
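The first key element, shaping every thread's memory traffic so it looks identical to an observer, can be illustrated with a toy fixed-service (time-division) schedule. This is a minimal sketch of the general idea only, not the paper's implementation; all names and the scheduling granularity are assumptions:

```python
# Toy model of a fixed-service memory controller: each thread is served
# only in its own pre-assigned slot, so the service timing one thread
# observes never depends on what other threads are doing.
from collections import deque

def serve(fixed_schedule, queues, cycles):
    """fixed_schedule: list of thread ids, repeated round-robin.
    queues: dict thread_id -> deque of pending request tags.
    Returns a list of (cycle, thread, request) service events."""
    events = []
    for cycle in range(cycles):
        owner = fixed_schedule[cycle % len(fixed_schedule)]
        if queues[owner]:
            events.append((cycle, owner, queues[owner].popleft()))
        # An empty slot is simply wasted -- it is never handed to another
        # thread, which is what closes the timing channel (and what makes
        # naive fixed service expensive without the paper's optimizations).
    return events

queues = {0: deque(["a0", "a1"]), 1: deque(["b0"])}
events = serve([0, 1], queues, cycles=6)
```

The cost of the wasted slots is exactly the performance penalty the paper's pipelined design and secure re-ordering techniques aim to claw back.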
English for Masters of Computing
This textbook is intended for master's students at CEFR level A2/B1 and is a collection of authentic texts covering various areas of applied mathematics. It will help master's students expand their vocabulary, acquire the necessary professional lexicon, and practice the skill of writing abstracts. The textbook includes several useful appendices: rules for reading mathematical formulas, expressions for annotating articles, and texts for annotation practice.
Department of Computer Science and Engineering
Hardware with advanced functionalities and/or improved performance and efficiency has been introduced in modern computer systems. However, several challenges arise with such emerging hardware. First, the characteristics of emerging hardware are poorly understood, and deriving useful properties through characterization studies is hard because emerging hardware affects applications with different characteristics in different ways. Second, using emerging hardware alone is suboptimal, but coordinating emerging hardware with other techniques is hard due to the large and complex system state space. To address these problems, we first conduct in-depth characterization studies of emerging hardware using applications with various characteristics. Guided by the observations from our characterization studies, we propose a set of system software techniques to effectively leverage emerging hardware. These techniques combine emerging hardware with other techniques to improve the performance, efficiency, and fairness of computer systems based on efficient optimization algorithms.
First, we investigate system software techniques to effectively manage hardware-based last-level cache (LLC) and memory bandwidth partitioning functionalities. For effective memory bandwidth partitioning on commodity servers, we propose HyPart, a hybrid technique for practical memory bandwidth partitioning on commodity servers. HyPart combines the three widely used memory bandwidth partitioning techniques (i.e., thread packing, clock modulation, and Intel MBA) in a coordinated manner, considering the characteristics of the target applications. We demonstrate the effectiveness of HyPart through quantitative evaluation.
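The abstract does not spell out HyPart's coordination logic; as a purely illustrative sketch, a coordinator might select among the three named mechanisms based on a workload profile. The profile fields, thresholds, and mechanism choices below are assumptions for illustration, not the thesis's actual policy:

```python
# Hypothetical selector among the three bandwidth-partitioning mechanisms
# named in the abstract; all thresholds and profile fields are invented.
def pick_mechanism(profile):
    """profile: dict with 'threads' (int), 'mem_intensity' in [0, 1],
    and 'latency_sensitive' (bool). Returns the mechanism to apply."""
    if profile["mem_intensity"] > 0.7:
        return "intel_mba"        # hardware throttling for heavy streamers
    if profile["threads"] > 4 and not profile["latency_sensitive"]:
        return "thread_packing"   # shrink the core allocation to cut bandwidth
    return "clock_modulation"     # fine-grained duty-cycle throttling
```

The point of such a hybrid is that each mechanism has a different granularity and overhead, so a coordinated choice can cover regimes where any single mechanism performs poorly.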
We also propose CoPart, coordinated partitioning of LLC and memory bandwidth for fairness-aware workload consolidation on commodity servers. We first characterize the impact of LLC and memory bandwidth partitioning on the performance and fairness of the consolidated workloads. Guided by the characterization, we design and implement CoPart. CoPart dynamically profiles the characteristics of the consolidated workloads and partitions LLC and memory bandwidth in a coordinated manner to maximize the fairness of the consolidated workloads. Through quantitative evaluation with various workloads and system configurations, we demonstrate that CoPart significantly improves the overall fairness of the consolidated workloads.
Second, we investigate a system software technique to effectively leverage hardware-based power capping functionality. We first characterize the performance impact of the two key system knobs (i.e., the concurrency level of the target applications and cross-component power allocation) for power capping. Guided by the characterization results, we design and implement RPPC, a holistic runtime system for maximizing performance under power capping. RPPC dynamically controls the key system knobs in a cooperative manner, considering the characteristics (e.g., scalability and memory intensity) of the target applications. Our evaluation results show that RPPC significantly improves performance under power capping across various application and system configurations.
Third, we investigate system software techniques for effective dynamic concurrency control on many-core systems and heterogeneous multiprocessing systems. We propose RMC, an integrated runtime system for adaptive many-core computing. RMC combines the two widely used dynamic concurrency control techniques (i.e., thread packing and dynamic threading) in a coordinated manner to exploit the advantages of both techniques. RMC quickly controls the concurrency level of the target applications through thread packing to improve performance and efficiency. RMC further improves performance and efficiency by determining the optimal thread count through dynamic threading. Our quantitative experiments show that RMC outperforms existing dynamic concurrency control techniques in terms of performance and energy efficiency.
In addition, we also propose PALM, progress- and locality-aware adaptive task migration for efficient thread packing. We first conduct an in-depth performance analysis of thread packing with various synchronization-intensive benchmarks and system configurations and find the root causes of the performance pathologies of thread packing. Based on the characterization results, we design and implement PALM, which supports both symmetric multiprocessing systems and heterogeneous multiprocessing systems. For efficient thread packing, PALM solves three key problems: progress-aware task migration, locality-aware task migration, and scheduling period control. Our quantitative evaluation shows that PALM achieves substantially higher performance and energy efficiency than conventional thread packing. We also present case studies in which PALM considerably improves the efficiency of dynamic server consolidation and the performance under power capping.
Service level agreement specification for IoT application workflow activity deployment, configuration and monitoring
PhD thesis. Currently, we see the use of the Internet of Things (IoT) within various domains
such as healthcare, smart homes, smart cars, smart-x applications, and smart
cities. The number of applications based on IoT and cloud computing is projected
to increase rapidly over the next few years. IoT-based services must meet
the guaranteed levels of quality of service (QoS) to match users' expectations.
Ensuring QoS through specifying the QoS constraints using service level agreements
(SLAs) is crucial. Also because of the potentially highly complex nature
of multi-layered IoT applications, lifecycle management (deployment, dynamic
reconfiguration, and monitoring) needs to be automated. To achieve this it is
essential to be able to specify SLAs in a machine-readable format.
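As an illustration of what a machine-readable SLA fragment might look like, here is a small sketch; the field names and values are invented for illustration and are not the grammar proposed in the thesis:

```python
# Hypothetical machine-readable SLA fragment for one IoT workflow activity;
# the schema is illustrative only, not the thesis's proposed grammar.
sla = {
    "service": "remote-health-monitoring",
    "activity": "stream-analysis",
    "deployment": {"layer": "edge", "min_memory_mb": 512},
    "qos": {"max_latency_ms": 200, "min_availability": 0.999},
    "monitoring": {"metric": "end_to_end_latency", "period_s": 60},
}

def violates(sla, observed_latency_ms):
    """A simple check an automated monitor might run against the SLA."""
    return observed_latency_ms > sla["qos"]["max_latency_ms"]
```

Because every constraint is a typed field rather than free text, deployment, reconfiguration, and monitoring tools can all consume the same document, which is the automation goal described above.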
However, currently available SLA specification languages are unable to accommodate
the unique characteristics (the interdependency of its multiple layers) of the IoT domain.
Therefore, in this research, we propose a grammar for a syntactical structure
of an SLA specification for IoT. The grammar is based on a proposed conceptual
model that considers the main concepts that can be used to express the requirements
for most common hardware and software components of an IoT application
on an end-to-end basis. We follow the Goal Question Metric (GQM) approach to
evaluate the generality and expressiveness of the proposed grammar by reviewing
its concepts and their predefined lists of vocabularies against two use-cases
with a number of participants whose research interests are mainly related to IoT.
The results of the analysis show that the proposed grammar achieved 91.70% of
its generality goal and 93.43% of its expressiveness goal.
To enhance the process of specifying SLA terms, we then developed a toolkit
for creating SLA specifications for IoT applications. The toolkit is used to simplify
the process of capturing the requirements of IoT applications. We demonstrate
the effectiveness of the toolkit using a remote health monitoring service (RHMS)
use-case as well as applying a user experience measure to evaluate the tool by
applying a questionnaire-oriented approach. We discussed the applicability of our
tool by including it as a core component of two different applications: 1) a context-aware
recommender system for IoT configuration across layers; and 2) a tool for
automatically translating an SLA from JSON to a smart contract, deploying it
on different peer nodes that represent the contractual parties. The smart contract
is able to monitor the created SLA using Blockchain technology. These two
applications are utilized within our proposed SLA management framework for IoT.
Furthermore, we propose a greedy heuristic algorithm to decentralize workflow
activities of an IoT application across Edge and Cloud resources to enhance
response time, cost, energy consumption and network usage. We evaluated the
efficiency of our proposed approach using the iFogSim simulator. The performance
analysis shows that the proposed algorithm minimized cost, execution time, network usage,
and Cloud energy consumption compared to Cloud-only and edge-ward
placement approaches.
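The abstract does not give the algorithm's details; the following is a minimal greedy placement sketch under an assumed cost model (the activities, capacities, and costs are all invented for illustration, not those evaluated in iFogSim):

```python
# Toy greedy placement of workflow activities onto edge or cloud tiers:
# each activity goes to the edge when it both fits the remaining edge
# capacity and is estimated to be cheaper there; otherwise to the cloud.
# The cost model and numbers are illustrative only.
def greedy_place(activities, edge_capacity):
    """activities: list of (name, demand, edge_cost, cloud_cost) tuples.
    Returns a dict mapping each activity name to 'edge' or 'cloud'."""
    placement, used = {}, 0
    for name, demand, edge_cost, cloud_cost in activities:
        if used + demand <= edge_capacity and edge_cost <= cloud_cost:
            placement[name] = "edge"
            used += demand
        else:
            placement[name] = "cloud"
    return placement

acts = [("sense", 1, 2.0, 5.0), ("analyze", 3, 4.0, 3.0), ("store", 2, 6.0, 2.0)]
plan = greedy_place(acts, edge_capacity=4)
```

A real version would fold response time, energy, and network usage into the per-tier cost estimate, which is what the multi-objective evaluation above measures.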
Building a Cloud Computing Program to Improve Operating Efficiency and Enable Innovation
This workplace challenge was conducted at Geisinger Health, an $8 billion integrated health system located in central Pennsylvania. It is focused on the development of a cloud strategy for Geisinger Health. The workplace challenge option is broken up into 5 sections: organization assessment, plan for a new program, program evaluation, economic evaluation, and discussion of implications.
Organizational Assessment: We leveraged Gartner's 'Digital Business Maturity' self-assessment framework. The Gartner framework is scored across nine digital business competencies. To summarize, Geisinger's overall score fell into the 'digital intermediate' category. Geisinger scored on par with the healthcare industry but still has opportunities to improve. This reinforces the need to transform digitally as well as to be more agile from a business operating model perspective. The cloud strategy will help enable this strategic priority.
New Program: Geisinger's current on-premises server computing environment is a quagmire of legacy processes, methodologies, and technologies. To support Geisinger into the future, we have developed a cloud-first strategy, and we believe this approach to be key for both new and existing workloads. We also completed the vendor selection process and developed the implementation plan and overall approach for migrating to the cloud.
Program Evaluation: We developed a plan for two proof-of-concept environments and thorough testing scenarios to simulate real environments in the cloud. We tested application performance and user experience under maximum load. We also measured resource requirements, both human and computing, to validate our initial assumptions. Furthermore, we verified our assumptions about the financial impact of moving workloads to the cloud.
Economic Evaluation: We built a business case starting with basic cash flow analysis based on total cost of ownership and payback period. Next, we calculated the return on investment and net present value to estimate future savings and overall value of the program.
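The cash-flow metrics named here, payback period and net present value, are standard calculations; the sketch below shows how they compose, with all dollar figures and the discount rate invented for illustration (they are not Geisinger's numbers):

```python
# Payback period and NPV for a cloud-migration business case.
# All cash flows and the discount rate are illustrative only.
def payback_period(initial_cost, annual_savings):
    """Years of equal annual savings needed to recover the investment."""
    return initial_cost / annual_savings

def npv(rate, initial_cost, cashflows):
    """Net present value: discounted future savings minus the upfront cost."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cashflows, start=1)) - initial_cost

pb = payback_period(1_000_000, 400_000)        # simple payback in years
value = npv(0.08, 1_000_000, [400_000] * 5)    # 5 years of savings at 8%
```

A positive NPV over the planning horizon, together with an acceptable payback period, is what turns the total-cost-of-ownership comparison into a defensible business case.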
Implications and Lessons Learned: We identified many opportunities through lessons learned and will implement them in the next iteration of our cloud migration. Overall, there were no significant obstacles, and we are proceeding cautiously with our implementation.
ERP implementation methodologies and frameworks: a literature review
Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. Often an ERP implementation project is the single largest IT project that an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, the concept of an ERP implementation supporting business processes across many different departments is not a generic, rigid, and uniform one; it depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been a major concern in industry. ERP implementation therefore receives attention from practitioners and scholars, and both business and academic literature on it is abundant, though not always conclusive or coherent. However, research on ERP systems so far has mainly focused on diffusion, use, and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems; even though these methods are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. An annotated brief literature review is conducted to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers who are interested in ERP implementation methodologies and frameworks. The results will serve as input for a classification of the existing ERP implementation methodologies and frameworks.
This paper also aims at the professional ERP community involved in the process of ERP implementation by promoting a better understanding of ERP implementation methodologies and frameworks, their variety, and their history.