
    Improving the Efficiency of Dynamic Virtual Machine Consolidation in Cloud Data Centers

    Inefficient resource utilization is one of the main causes of the enormous energy consumption in cloud data centers. To address this problem, researchers have introduced dynamic virtual machine (VM) consolidation, a technique that packs VMs onto the smallest possible number of hosts (physical machines). However, aggressive VM consolidation increases the number of migrated VMs and produces overloaded hosts, which in turn degrades the quality of service (QoS) of the applications running inside the VMs. It is therefore important to balance QoS guarantees against energy savings. In this paper we present the Proactive Adaptive Dynamic Consolidation algorithm, which reduces energy consumption while maintaining the required performance levels in a cloud data center. We carried out an experimental evaluation with the CloudSim simulation toolkit to test the effectiveness of the proposed algorithm on real-world workload traces and to compare its performance with prior work in this area, namely the algorithms LR/MMT/SM/MBFD, LR/MMT/PA/RUA, LR/MMT/SM/Shi-AC, LR/MMT/SM/MFPED, and ESS. The results show that the proposed algorithm outperforms these algorithms in terms of energy consumption, QoS guarantees, and the number of migrated VMs.
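    The consolidation heuristics compared above build on placement rules in the spirit of MBFD (power-aware best-fit decreasing). The Python sketch below illustrates that general idea only; the linear power model, host capacities, and the 0.8 utilization threshold are assumptions made for this example and are not taken from the paper.

    # Illustrative sketch of power-aware best-fit-decreasing (MBFD-style) VM placement.
    # All numbers (host capacity, idle/max power, utilization threshold) are assumed
    # for demonstration and are not taken from the paper.

    def host_power(util, p_idle=70.0, p_max=250.0):
        """Simple linear power model: watts as a function of CPU utilization in [0, 1]."""
        return p_idle + (p_max - p_idle) * util

    def place_vms(vms, hosts, upper_threshold=0.8):
        """Assign each VM (CPU demand in MIPS) to the host whose power increase is smallest.

        vms   -- list of CPU demands, e.g. [500, 1200, 300]
        hosts -- list of dicts: {"capacity": MIPS, "used": MIPS}
        Returns a list of (vm_demand, host_index) placements; unplaced VMs map to None.
        """
        placements = []
        for vm in sorted(vms, reverse=True):          # consider VMs in decreasing demand
            best, best_delta = None, float("inf")
            for i, h in enumerate(hosts):
                new_util = (h["used"] + vm) / h["capacity"]
                if new_util > upper_threshold:        # would overload the host -> skip
                    continue
                old_util = h["used"] / h["capacity"]
                delta = host_power(new_util) - host_power(old_util)
                if delta < best_delta:                # best fit = smallest power increase
                    best, best_delta = i, delta
            if best is not None:
                hosts[best]["used"] += vm
            placements.append((vm, best))
        return placements

    if __name__ == "__main__":
        hosts = [{"capacity": 4000, "used": 0}, {"capacity": 4000, "used": 0}]
        print(place_vms([500, 1200, 300, 2500], hosts))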

    A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System


    Energy Saving in QoS Fog-supported Data Centers

    One of the most important challenges that cloud providers face under the explosive growth of data is reducing the energy consumption of their modern data centers. The majority of current research focuses on energy-efficient resource management in the Infrastructure-as-a-Service (IaaS) model through resource virtualization, i.e., consolidation of virtual machines onto physical machines. However, today's virtualized data centers do not support communication- and computing-intensive real-time applications such as big data stream computing (info-mobility applications, real-time video co-decoding). Indeed, imposing hard limits on the overall per-job computing-plus-communication delay forces the networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. Recently, Fog Computing centers have emerged as promising commodities of the Internet virtual computing platform, but they raise energy consumption and make it a critical issue for such platforms. Green solutions (i.e., energy-aware provisioning) that cover fog-supported delay-sensitive web applications are therefore needed, and traffic-engineering-based methods can dynamically keep the number of active servers matched to the current workload. It is thus desirable to develop a flexible, reliable technological paradigm and resource allocation algorithms that account for the consumed energy. Furthermore, such algorithms should automatically adapt themselves to time-varying workloads through joint reconfiguration and orchestration of the virtualized computing-plus-communication resources available at the computing nodes, and should allow things (IoT) devices to operate under real-time constraints on the allowed computing-plus-communication delay and service latency.

    The purpose of this thesis is: i) to propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, detailing the main building blocks and services of the corresponding technological platform and protocol stack; ii) to propose a dynamic and adaptive energy-aware algorithm that models and manages the Fog Nodes (FNs) of virtualized networked data centers so as to minimize the resulting networking-plus-computing average energy consumption; and iii) to propose a novel Software-as-a-Service (SaaS) Fog Computing platform that integrates user applications over the FoE. The emerging use of SaaS Fog Computing centers as an Internet virtual computing commodity targets delay-sensitive applications. The main blocks of the virtualized Fog node, which operates at the Middleware layer of the underlying protocol stack, comprise: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated on the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP connection.
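    Block iii) relies on DVFS-enabled VMs, where lowering a core's operating frequency trades longer processing time for lower power draw. The toy Python model below illustrates that trade-off; the static/dynamic power figures and the cubic frequency dependence are textbook-style assumptions, not parameters from the thesis.

    # Illustrative DVFS power/energy model (assumed cubic dynamic-power term,
    # not taken from the thesis): scaling a core's frequency down trades
    # longer processing time for lower power.

    def dvfs_power(freq, f_max=1.0, p_static=20.0, p_dyn_max=80.0):
        """Power draw (watts) at normalized frequency freq in (0, f_max]."""
        return p_static + p_dyn_max * (freq / f_max) ** 3

    def job_energy(cycles, freq):
        """Energy (joules) to process 'cycles' normalized work units at frequency freq."""
        processing_time = cycles / freq            # lower frequency -> longer time
        return dvfs_power(freq) * processing_time

    if __name__ == "__main__":
        # Running at half frequency takes twice as long but often costs less energy.
        for f in (1.0, 0.75, 0.5):
            print(f"freq={f:.2f}  energy={job_energy(cycles=1.0, freq=f):.1f} J")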
    The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it is capable of providing hard QoS guarantees, in terms of minimum/maximum instantaneous rate of the traffic delivered to the client, instantaneous goodput, and total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency. The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-threshold consolidation costs of the underlying networked computing platform; and iv) abrupt changes in the transport quality of the available TCP/IP mobile connection, is numerically tested and compared against that of several state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.
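    To make the two-threshold consolidation mentioned above concrete, the fragment below sketches a common underload/overload decision rule; the 0.3/0.8 thresholds and the action names are illustrative assumptions rather than the scheduler actually proposed in the thesis.

    # Illustrative two-threshold consolidation rule (assumed thresholds, not the
    # thesis's scheduler): underloaded hosts are drained and switched off,
    # overloaded hosts offload VMs to restore QoS headroom.

    def consolidation_action(cpu_util, lower=0.3, upper=0.8):
        """Return the action for a host given its CPU utilization in [0, 1]."""
        if cpu_util < lower:
            return "migrate-all-and-sleep"   # pack VMs elsewhere, power the host down
        if cpu_util > upper:
            return "migrate-some"            # move VMs away until util <= upper
        return "keep"                        # inside the hysteresis band: do nothing

    if __name__ == "__main__":
        for u in (0.1, 0.55, 0.93):
            print(f"util={u:.2f} -> {consolidation_action(u)}")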