13 research outputs found

    An Analytical Study of Periodic Task Scheduling Algorithms for Real-Time Systems on a Multicore Processor

    Get PDF
    Real-time systems are among the most widely used systems today, owing to their broad adoption in many areas, including technical and applied research. Deploying such systems on multicore platforms has also made them attractive for embedded systems and control units, because multicore platforms offer higher performance and robustness than multiprocessor platforms, which suffer from slower data exchange since their inter-processor communication channels are typically slower than the interconnects within a multicore chip. Scheduling is the core operation of real-time systems: it orders the execution of tasks by priorities that are assigned according to the scheduling policy. This paper presents an analytical study of the most important scheduling algorithms, in order to identify the one with the best performance according to a number of parameters such as system load, context-switching overhead, and scheduling overhead, when applied to a set of randomly generated periodic tasks. We use the SimSo real-time system simulator for this purpose, because of its reliability and robustness, its support for a large number of scheduling algorithms, and its simulation of the cache hierarchy at its different levels, which is a central component of multicore platforms
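
    The study above evaluates schedulers on randomly generated periodic task sets. A minimal Python sketch of how such a task set might be produced, using the UUniFast algorithm (a common generator in this kind of study; the period values are illustrative, and this is not the paper's actual setup):

```python
import random

def generate_periodic_tasks(n, total_utilization=0.8, seed=0):
    """Generate n periodic tasks whose utilizations sum to
    total_utilization, using the UUniFast algorithm."""
    rng = random.Random(seed)
    utils, remaining = [], total_utilization
    for i in range(n - 1):
        # Peel off one utilization while keeping the remainder
        # uniformly distributed over the valid simplex.
        next_rem = remaining * rng.random() ** (1.0 / (n - 1 - i))
        utils.append(remaining - next_rem)
        remaining = next_rem
    utils.append(remaining)
    tasks = []
    for u in utils:
        period = rng.choice([10, 20, 40, 50, 100])   # ms, illustrative
        tasks.append({"period": period, "wcet": u * period})
    return tasks

tasks = generate_periodic_tasks(5)
u = sum(t["wcet"] / t["period"] for t in tasks)
print(round(u, 3))  # ≈ 0.8 by construction
```

    Such generated task sets can then be fed to each scheduler under comparison, with system load controlled directly through `total_utilization`.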

    Performance Comparison of Sporadic Task Scheduling Algorithms in Real-Time Systems

    Get PDF
    Nowadays, real-time systems form the core of most applications used in telecommunication and information technology. The rapid development of such systems has attracted researchers' attention to optimizing performance and eliminating problems and drawbacks as far as possible, so that their performance matches the volume of tasks assigned to them. Real-time systems face several fundamental challenges, chief among them the problem of scheduling tasks on processor cores in multicore architectures. Several schemes have been proposed: the global scheme, in which any task can execute on any core; the partitioned scheme, which allocates a specific core to each set of tasks; and the semi-partitioned scheme, a hybrid of the two, in which some tasks are assigned to execute on a specific core while other tasks are allowed to execute on any core of the processor. In this paper, we compare the performance of sporadic task scheduling algorithms on a multicore platform in order to determine the best algorithm in terms of a set of parameters adopted by researchers in this field, which in turn give accurate details about the quality of such algorithms when applied to a set of sporadic tasks generated according to a log-uniform probability distribution. The simulation is done with the SimSo simulator, whose reliability has been attested by many researchers in this field; it also supports generating tasks according to specific probability distributions and simulates fine-grained details of sporadic task characteristics
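
    The partitioned scheme described above can be sketched as a first-fit assignment of tasks to cores by utilization; this is an illustrative sketch, not the algorithm evaluated in the paper (task names and utilizations are made up):

```python
def first_fit_partition(tasks, num_cores):
    """Partitioned scheme: statically assign each task to one core,
    first-fit in decreasing-utilization order. Returns None if some
    task does not fit on any core."""
    load = [0.0] * num_cores
    assignment = {}
    for name, u in sorted(tasks.items(), key=lambda kv: -kv[1]):
        for core in range(num_cores):
            if load[core] + u <= 1.0:   # EDF feasibility bound per core
                load[core] += u
                assignment[name] = core
                break
        else:
            # A semi-partitioned scheme would split this task across
            # cores instead; a global scheme skips partitioning entirely.
            return None
    return assignment

print(first_fit_partition({"t1": 0.6, "t2": 0.5, "t3": 0.4, "t4": 0.3}, 2))
# → {'t1': 0, 't2': 1, 't3': 0, 't4': 1}
```

    The global and semi-partitioned schemes trade this static placement for migration freedom, which is exactly the design axis the comparison above explores.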

    Challenges in real-time virtualization and predictable cloud computing

    Get PDF
    Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future

    Opportunistic CPU Sharing in Mobile Edge Computing Deploying the Cloud-RAN

    Get PDF
    Leveraging virtualization technology, Cloud-RAN deploys multiple virtual Base Band Units (vBBUs) along with collocated applications on the same Mobile Edge Computing (MEC) server. However, the performance of real-time (RT) applications such as the vBBU can be degraded by sharing computing resources with collocated workloads. To address this challenge, this paper presents a dynamic CPU sharing mechanism, specifically designed for containerized virtualization on MEC servers that host both RT and non-RT general-purpose applications. The CPU sharing problem on MEC servers is first formulated as a Mixed-Integer Program (MIP). We then present an algorithmic solution that decomposes the MIP into simpler subproblems, which are solved with efficient, constant-factor heuristics. We assessed the performance of this mechanism against instances of a commercial solver. Further, via a small-scale testbed, we assessed various CPU sharing mechanisms; the results indicate that our mechanism reduces the worst-case execution time by more than 150% compared to the default host RT-Kernel approach. This evidence is strengthened when evaluating the mechanism within Cloud-RAN, where vBBUs share resources with collocated applications on a MEC server. Using our CPU sharing approach, the vBBU's scheduling latency decreases by up to 21% in comparison with the host RT-Kernel

    Real-Time Virtualization and Cloud Computing

    Get PDF
    In recent years, we have observed three major trends in the development of complex real-time embedded systems. First, to reduce cost and enhance flexibility, multiple systems are sharing common computing platforms via virtualization technology, instead of being deployed separately on physically isolated hosts. Second, multi-core processors are increasingly being used in real-time systems. Third, developers are exploring the possibilities of deploying real-time applications as virtual machines in a public cloud. The integration of real-time systems as virtual machines (VMs) atop common multi-core platforms in a public cloud raises significant new research challenges in meeting the real-time latency requirements of applications. In order to address the challenges of running real-time VMs in the cloud, we first present RT-Xen, a novel real-time scheduling framework within the popular Xen hypervisor. We start with single-core scheduling in RT-Xen, and present the first work that empirically studies and compares different real-time scheduling schemes on the same platform. We then introduce RT-Xen 2.0, which focuses on multi-core scheduling and spans multiple design spaces, including priority schemes, server schemes, and scheduling policies. Experimental results demonstrate that when combined with compositional scheduling theory, RT-Xen can deliver real-time performance to an application running in a VM, while the default credit scheduler cannot. After that, we present RT-OpenStack, a cloud management system designed to support co-hosting real-time and non-real-time VMs in a cloud. RT-OpenStack studies the problem of running real-time VMs together with non-real-time VMs in a public cloud. 
Leveraging the resource interface and real-time scheduling provided by RT-Xen, RT-OpenStack provides real-time performance guarantees to real-time VMs, while achieving high resource utilization by allowing non-real-time VMs to share the remaining CPU resources through a novel VM-to-host mapping scheme. Finally, we present RTCA, a real-time communication architecture for VMs sharing the same host, which maintains low latency for high-priority inter-domain communication (IDC) traffic in the face of low-priority IDC traffic

    Real-Time Communication in Cloud Environments

    Get PDF
    Real-time communication is critical to emerging cloud applications from smart cities to industrial automation. The new class of latency-critical applications requires latency differentiation and performance isolation in a highly scalable fashion in virtualized cloud environments. This dissertation aims to develop novel cloud architecture and services to support real-time communication at both the platform and infrastructure layers. At the platform layer, we build SRTM, a scalable and real-time messaging middleware that features (1) latency differentiation, (2) service isolation through rate limiting, and (3) scalability through load distribution among messaging brokers. A key contribution of SRTM lies in the exploitation of the complex interactions between rate limiting and load distribution. At the infrastructure layer, we develop VATC, a virtualization-aware traffic control framework in virtualized hosts. VATC provides a novel network I/O architecture that achieves differentiated packet processing with rate limiting while being scalable on multi-core CPUs. The research is evaluated in a cloud testbed in the context of Internet of Things applications
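
    The rate limiting used for service isolation in both SRTM and VATC is commonly implemented as a token bucket. The following is a generic sketch of that mechanism, not SRTM's actual API; the class and parameter names are illustrative:

```python
class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at a fixed rate up to a
    burst capacity; a request is admitted only if enough tokens remain."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now, cost=1):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

tb = TokenBucket(rate=2, capacity=2)
print([tb.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])
# → [True, True, False, True]: the burst exhausts the bucket,
#   and the third request is throttled until tokens accrue again.
```

    Per-tenant buckets of this shape give the service isolation described above: one tenant's burst drains only its own bucket, leaving other tenants' latency unaffected.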

    Real-time multi-core virtual machine scheduling in Xen

    No full text
    Recent years have witnessed two major trends in the development of complex real-time embedded systems. First, to reduce cost and enhance flexibility, multiple systems are sharing common computing platforms via virtualization technology, instead of being deployed separately on physically isolated hosts. Second, multicore processors are increasingly being used in real-time systems. The integration of real-time systems as virtual machines (VMs) atop common multicore platforms raises significant new research challenges in meeting the real-time performance requirements of multiple systems. This paper advances the state of the art in real-time virtualization by designing and implementing RT-Xen 2.0, a new real-time multicore VM scheduling framework in the popular Xen virtual machine monitor (VMM). RT-Xen 2.0 realizes a suite of real-time VM scheduling policies spanning the design space. We implement both global and partitioned VM schedulers; each scheduler can be configured to support dynamic or static priorities and to run VMs as periodic or deferrable servers. 
We present a comprehensive experimental evaluation that provides important insights into real-time scheduling on virtualized multicore platforms: (1) both global and partitioned VM scheduling can be implemented in the VMM at moderate overhead; (2) at the VMM level, while compositional scheduling theory shows partitioned EDF (pEDF) is better than global EDF (gEDF) in providing schedulability guarantees, in our experiments their performance is reversed in terms of the fraction of workloads that meet their deadlines on virtualized multicore platforms; (3) at the guest OS level, pEDF requests a smaller total VCPU bandwidth than gEDF based on compositional scheduling analysis, and therefore using pEDF at the guest OS level leads to more schedulable workloads in our experiments; (4) a combination of pEDF in the guest OS and gEDF in the VMM – configured with deferrable server – leads to the highest fraction of schedulable task sets compared to other real-time VM scheduling policies; and (5) on a platform with a shared last-level cache, the benefits of global scheduling outweigh the cache penalty incurred by VM migration
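
    The server schemes mentioned above can be illustrated with a toy discrete-time deferrable server: its budget is refilled at every period boundary and, crucially, is retained while no request is pending, which is what lets it serve a late-arriving request immediately. A minimal sketch under illustrative parameters (this is not RT-Xen's implementation):

```python
def deferrable_server(period, budget, requests, horizon):
    """Discrete-time deferrable server. `requests` maps arrival time to
    number of unit-length requests; returns the time slots served."""
    remaining, served, pending = budget, [], 0
    for t in range(horizon):
        if t % period == 0:
            remaining = budget      # full replenishment each period
        pending += requests.get(t, 0)
        if pending > 0 and remaining > 0:
            pending -= 1            # serve one unit of work
            remaining -= 1          # budget is consumed only when serving
            served.append(t)
    return served

# Two units of work arrive mid-period at t=3 and are served at once,
# because the unused budget was retained rather than idled away.
print(deferrable_server(period=5, budget=2, requests={3: 2}, horizon=10))
# → [3, 4]
```

    A periodic server under the same arrival pattern would have spent its budget idling at the start of the period and made the work wait for the next replenishment, which is the latency trade-off the RT-Xen evaluation explores.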
    Scheduling for Mixed-criticality Hypervisor Systems in the Automotive Domain

    Get PDF
    This thesis focuses on scheduling for hypervisor systems in the automotive domain. Current practices are primarily implementation-agnostic or are limited by a lack of visibility into the execution of partitions. The tasks executed within the partitions are classified as event-triggered or time-triggered. A scheduling model is developed that pairs a deferrable server with a periodic server per partition, providing low latency for event-triggered tasks while maximising utilisation. The developed approach enforces temporal isolation between partitions and ensures that time-triggered tasks do not suffer from starvation. The scheduling model was extended to support three criticality levels with two degraded modes. The first degraded mode provides the partitions with additional capacity by trading off the low latency of event-driven tasks for lower overheads and higher utilisation. Both models were evaluated in a case study using real ECU application code. A second case study, inspired by the Olympus Attitude and Orbital Control System (AOCS), further evaluates the proposed mixed-criticality model. To conclude, the contributions of this thesis are assessed against the research hypothesis and possible avenues for future work are identified
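
    The periodic-server half of the pairing described above behaves differently from a deferrable server in one defining way: its budget is consumed whenever the server is scheduled, even while no request is pending, so work arriving mid-period waits for the next replenishment. A toy discrete-time sketch of that behaviour (parameters illustrative, not the thesis's model):

```python
def periodic_server(period, budget, requests, horizon):
    """Discrete-time periodic server. `requests` maps arrival time to
    number of unit-length requests; returns the time slots served."""
    remaining, served, pending = budget, [], 0
    for t in range(horizon):
        if t % period == 0:
            remaining = budget      # full replenishment each period
        pending += requests.get(t, 0)
        if remaining > 0:
            if pending > 0:
                pending -= 1
                served.append(t)
            remaining -= 1          # budget idles away even when unused
    return served

# Work arriving at t=3 finds the budget already exhausted by idling
# and is deferred to the next period.
print(periodic_server(period=5, budget=2, requests={3: 2}, horizon=10))
# → [5, 6]
```

    This deferral is exactly why the thesis pairs it with a deferrable server: the deferrable server absorbs event-triggered arrivals with low latency, while the periodic server's predictable budget consumption suits time-triggered work.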