
    Precise energy efficient scheduling of mixed-criticality tasks & sustainable mixed-criticality scheduling

    In this thesis, the imprecise mixed-criticality (IMC) model is extended to the precise scheduling of tasks and integrated with the dynamic voltage and frequency scaling (DVFS) technique to enable energy minimization. The challenge in precise scheduling of MC systems is to simultaneously guarantee timing correctness for all tasks, hi and lo, under both pessimistic and optimistic (less pessimistic) assumptions. To the best of our knowledge, this is the first work to address the integration of DVFS energy-conservation techniques with precise scheduling of the lo-tasks of the MC model. The thesis presents utilization-based schedulability tests and sufficient conditions for such systems under the Earliest Deadline First with Virtual Deadlines (EDF-VD) scheduling policy. Quantitative results, in the form of a speedup bound and an approximation ratio, are also proved for the unified model. Extensive experimental studies are conducted to verify the theoretical results as well as the effectiveness of the proposed algorithm. In safety-critical systems, it is essential to perform schedulability analysis prior to run-time. Parameters characterizing the run-time workload are generated by pessimistic techniques; hence, adopting conservative estimates may result in systems performing much better during run-time than anticipated. This thesis also addresses the following questions associated with such better-than-expected performance of the task system: (i) How does a parameter change affect the schedulability of a task set (system)? (ii) If a mixed-criticality system design is deemed schedulable and specific parts of the system are reassigned to low criticality, is the system still safe to run? (iii) If a system is presumed non-schedulable, is it invariably beneficial to reduce the criticality of some task?
To answer these questions, we not only study the property of sustainability with regard to criticality levels, but also revisit the sustainability of several uniprocessor and multiprocessor scheduling policies with respect to other parameters --Abstract, page iii
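The EDF-VD analysis that the abstract refers to builds on a well-known utilization-based sufficient test for the standard (non-precise) dual-criticality model. A minimal sketch, assuming per-mode total utilizations as inputs (the precise-scheduling variant in the thesis refines this test, which is not reproduced here):

```python
def edf_vd_schedulable(u_lo_lo, u_hi_lo, u_hi_hi):
    """Classical EDF-VD sufficient test for a dual-criticality task set.

    u_lo_lo: total utilization of lo-tasks (lo-mode WCETs)
    u_hi_lo: total utilization of hi-tasks under lo-mode (optimistic) WCETs
    u_hi_hi: total utilization of hi-tasks under hi-mode (pessimistic) WCETs
    Returns (schedulable, x) where x is the virtual-deadline scaling factor.
    """
    # Plain EDF suffices if even the pessimistic utilizations fit.
    if u_lo_lo + u_hi_hi <= 1.0:
        return True, 1.0
    if u_lo_lo >= 1.0:
        return False, None
    # lo-mode feasibility fixes the smallest usable scaling factor x.
    x = u_hi_lo / (1.0 - u_lo_lo)
    # hi-mode feasibility: hi-tasks at full demand plus lo-tasks inflated by x.
    if x * u_lo_lo + u_hi_hi <= 1.0:
        return True, x
    return False, None
```

For example, a system with `u_lo_lo = 0.4`, `u_hi_lo = 0.3`, `u_hi_hi = 0.7` is not plain-EDF schedulable (0.4 + 0.7 > 1) but passes the EDF-VD test with scaling factor x = 0.5.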

    Radio Resource Management Scheme for URLLC and eMBB Coexistence in a Cell-Less Radio Access Network

    We address the latency challenges of a high-density, high-load scenario for an ultra-reliable and low-latency communication (URLLC) network that may coexist with enhanced mobile broadband (eMBB) services in evolving wireless communication networks. We propose a new radio resource management (RRM) scheme consisting of a combination of time-domain (TD) and frequency-domain (FD) schedulers specific to URLLC and eMBB users. We also develop a user-ranking algorithm from a radio unit (RU) perspective, which is employed by the TD scheduler to increase scheduling efficiency, in terms of resource consumption, in large-scale networks. The resulting resource scheduling scheme thus reduces latency for URLLC users while utilizing resources efficiently, supporting scenarios with high user density. While minimizing latency, the RRM scheme also addresses another important challenge, namely the throughput of the eMBB users who coexist with URLLC users in such a highly loaded scenario. The effectiveness of the proposed scheme, including its TD and FD schedulers, is analyzed. Simulation results show that the proposed scheme improves the latency of URLLC users and the throughput of eMBB users compared to the baseline: a 29% latency improvement for URLLC users and a 90% signal-to-interference-plus-noise ratio (SINR) improvement for eMBB users as compared with conventional scheduling policies. This work was supported by the European Union H2020 Research and Innovation Programme funded by the Marie SkƂodowska-Curie ITN TeamUp5G Project under Grant 813391.
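The TD/FD split described above can be illustrated with a toy two-stage scheduler: the TD stage ranks candidate users per radio unit, and the FD stage then hands physical resource blocks (PRBs) to the top-ranked users. The ranking rule below (URLLC first by tightest delay budget, eMBB by a proportional-fair metric) and all field names are illustrative assumptions, not the paper's actual algorithm:

```python
def td_rank(users):
    """Rank candidate users: URLLC users first, ordered by tightest delay
    budget; eMBB users follow, ordered by a proportional-fair metric
    (instantaneous rate divided by average throughput)."""
    urllc = sorted((u for u in users if u["type"] == "URLLC"),
                   key=lambda u: u["delay_budget_ms"])
    embb = sorted((u for u in users if u["type"] == "eMBB"),
                  key=lambda u: u["inst_rate"] / max(u["avg_tput"], 1e-9),
                  reverse=True)
    return urllc + embb

def fd_allocate(ranked, n_prbs):
    """Greedily grant each ranked user its PRB demand until PRBs run out."""
    alloc, left = {}, n_prbs
    for u in ranked:
        grant = min(u["prb_demand"], left)
        if grant:
            alloc[u["id"]] = grant
            left -= grant
        if left == 0:
            break
    return alloc
```

With 8 PRBs and two URLLC users demanding 3 and 2 PRBs, the URLLC demands are served in full and an eMBB user absorbs the remaining 3 PRBs, which is the latency-first behavior the abstract targets.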

    COMBINING HARDWARE MANAGEMENT WITH MIXED-CRITICALITY PROVISIONING IN MULTICORE REAL-TIME SYSTEMS

    Safety-critical applications in cyber-physical domains such as avionics and automotive systems require strict timing constraints because loss of life or severe financial repercussions may occur if they fail to produce correct outputs at the right moment. We call such systems “real-time systems.” When designing a real-time system, a multicore platform would be desirable to use because such platforms have advantages in size, weight, and power constraints, especially in embedded systems. However, the multicore revolution is having limited impact in safety-critical application domains. A key reason is the “one-out-of-m” problem: when validating real-time constraints on an m-core platform, excessive analysis pessimism can effectively negate the processing capacity of the additional m-1 cores, so that only “one core’s worth” of capacity is available. The root of this problem is that shared hardware resources are not predictably managed. Two approaches have been investigated previously to address this problem: mixed-criticality analysis, which provisions less-critical software components less pessimistically, and hardware-management techniques, which make the underlying platform itself more predictable. The goal of the research presented in this dissertation is to combine both approaches to reduce the capacity loss caused by contention for shared hardware resources in multicore platforms. Towards that goal, fundamentally new criticality-cognizant hardware-management tradeoffs must be explored. Such tradeoffs are investigated in the context of a new variant of a mixed-criticality framework, called MC2, that supports configurable criticality-based hardware management. This framework allows specific DRAM banks and areas of the last-level cache to be allocated to certain groups of tasks to provide criticality-aware isolation. MC2 is further extended to support the sharing of memory locations, which is required to support real-world workloads.
We evaluate the impact of combining mixed-criticality provisioning and hardware-management techniques with both micro-benchmark experiments and schedulability studies. In our micro-benchmark experiments, we evaluate each hardware-management technique and consider tradeoffs that arise when applying them together. The effectiveness of the overall framework in resolving such tradeoffs is investigated via large-scale overhead-aware schedulability studies. Our results demonstrate that mixed-criticality analysis and hardware-management techniques can be much more effective when applied together than alone. Doctor of Philosophy
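The criticality-aware isolation described above rests on controlling which DRAM banks and last-level-cache (LLC) sets a group's pages can map to, typically by picking page frames whose physical addresses carry the desired bank and cache-color bits. The arithmetic below is a minimal page-coloring sketch; the bit positions are illustrative assumptions, not those of MC2 or any specific platform:

```python
PAGE_SHIFT = 12                   # 4 KiB pages
BANK_SHIFT, BANK_BITS = 13, 3     # assume bank index in phys-addr bits 13-15
COLOR_SHIFT, COLOR_BITS = 12, 4   # assume LLC color in phys-addr bits 12-15

def dram_bank(phys_addr):
    """DRAM bank index encoded in the physical address (assumed layout)."""
    return (phys_addr >> BANK_SHIFT) & ((1 << BANK_BITS) - 1)

def llc_color(phys_addr):
    """LLC set-color encoded in the physical address (assumed layout)."""
    return (phys_addr >> COLOR_SHIFT) & ((1 << COLOR_BITS) - 1)

def pages_for_partition(frames, allowed_banks, allowed_colors):
    """Keep only the page frames a criticality group is allowed to use,
    i.e., those whose bank and color both fall inside its partition."""
    return [f for f in frames
            if dram_bank(f << PAGE_SHIFT) in allowed_banks
            and llc_color(f << PAGE_SHIFT) in allowed_colors]
```

An allocator restricted this way guarantees that tasks in different partitions never contend for the same bank or cache sets, which is the isolation effect the micro-benchmarks measure.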

    A Survey of Research into Mixed Criticality Systems

    This survey covers research into mixed criticality systems that has been published since Vestal’s seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and EDF scheduling, shared resources and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards

    Open Cell-less Network Architecture and Radio Resource Management for Future Wireless Communication Systems

    In recent times, the immense growth of wireless traffic generated by massive numbers of mobile devices, services, and applications has resulted in an ever-increasing demand for huge bandwidth and very low latency, with future networks moving toward extreme system capacity and ultra-reliable low-latency communication (URLLC). Several consortia comprising major international mobile operators, infrastructure manufacturers, and academic institutions are working to develop and evolve the current generation of wireless communication systems, i.e., the fifth generation (5G), towards a sixth generation (6G) supporting improved data rates, reliability, and latency. Existing 5G networks face latency challenges in high-density, high-load scenarios for a URLLC network that may coexist with enhanced mobile broadband (eMBB) services. At the same time, the evolution of mobile communications faces the important challenge of increased network power consumption. Thus, energy-efficient solutions are expected to be deployed in the network to reduce power consumption while fulfilling user demands across various user densities. Moreover, the network architecture should adapt dynamically to new use cases and applications, and there are network migration challenges for multi-architecture coexistence networks. Recently, the open radio access network (O-RAN) alliance was formed to evolve RANs with intelligence and openness as its core principles. It aims to drive the mobile industry towards an ecosystem of innovative, multi-vendor, interoperable, and autonomous RANs, with reduced cost, improved performance, and greater agility. However, O-RAN is not yet standardized and still lacks interoperability. On the other hand, the cell-less radio access network (RAN) was introduced to boost the system performance required by the new services.
However, the cell-less RAN concept is still under consideration from the deployment point of view alongside legacy cellular networks. The virtualization, centralization, and cooperative communication that enable the cell-less RAN can further benefit from an O-RAN-based architecture. This thesis addresses the research challenges facing 5G-and-beyond networks towards 6G with regard to new architectures, spectral efficiency, latency, and energy efficiency. Different system models are stated according to the problem, and several solution schemes are proposed and developed to overcome these challenges. This thesis contributes as follows. Firstly, the cell-less technology is proposed to be implemented through an Open RAN architecture, supervised by the near-real-time RAN intelligent controller (near-RT RIC); cooperation is enabled for intelligent and smart resource allocation across the entire RAN. Secondly, an efficient radio resource optimization mechanism is proposed for the cell-less architecture to improve the system capacity of future 6G networks. Thirdly, an optimized and novel resource scheduling scheme is presented that reduces latency for URLLC users while utilizing resources efficiently, supporting scenarios with high user density. At the same time, this radio resource management (RRM) scheme, while minimizing latency, also overcomes another important challenge, namely the throughput of the eMBB users who coexist with URLLC users in such a highly loaded scenario. Fourthly, a novel energy-efficiency enhancement scheme, (3 × E), is designed to increase the transmission rate per energy unit, with stable performance within the cell-less RAN architecture.
The proposed (3 × E) scheme activates two-step sleep modes (a certain phase and a conditional phase) through intelligent interference management to temporarily switch access points (APs) to sleep, optimizing the network energy efficiency (EE) in highly loaded scenarios as well as in scenarios with lower load. Finally, a multi-architecture coexistence (MACO) network model is proposed to enable the inter-connection of different architectures through coexistence and cooperation logical switches, allowing smooth deployment of a cell-less architecture within legacy networks. The research presented in this thesis therefore contributes new knowledge to the cell-less RAN architecture domain of future-generation wireless networks and makes important contributions to this field by investigating different system models and proposing solutions to significant issues. Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Chair: Matilde Pilar SĂĄnchez FernĂĄndez. Secretary: Alberto Álvarez Polegre. Member: JosĂ© Francisco Monserrat del RĂ­o.

    Scaling Up Concurrent Analytical Workloads on Multi-Core Servers

    Today, an ever-increasing number of researchers, businesses, and data scientists collect and analyze massive amounts of data in database systems. The database system needs to process the resulting highly concurrent analytical workloads by exploiting modern multi-socket multi-core processor systems with non-uniform memory access (NUMA) architectures and increasing memory sizes. Conventional execution engines, however, are not designed for many cores, and neither scale nor perform efficiently on modern multi-core NUMA architectures. Firstly, their query-centric approach, where each query is optimized and evaluated independently, can result in unnecessary contention for hardware resources due to redundant work found across queries in highly concurrent workloads. Secondly, they are unaware of the non-uniform memory access costs and the underlying hardware topology, incurring unnecessarily expensive memory accesses and bandwidth saturation. In this thesis, we show how these scalability and performance impediments can be solved by exploiting sharing among concurrent queries and incorporating NUMA-aware adaptive task scheduling and data placement strategies in the execution engine. Regarding sharing, we identify and categorize state-of-the-art techniques for sharing data and work across concurrent queries at run-time into two categories: reactive sharing, which shares intermediate results across common query sub-plans, and proactive sharing, which builds a global query plan with shared operators to evaluate queries. We integrate the original research prototypes that introduce reactive and proactive sharing, perform a sensitivity analysis, and show how and when each technique benefits performance. Our most significant finding is that reactive and proactive sharing can be combined to exploit the advantages of both sharing techniques for highly concurrent analytical workloads. 
Regarding NUMA-awareness, we identify, implement, and compare various combinations of task scheduling and data placement strategies under a diverse set of highly concurrent analytical workloads. We develop a prototype based on a commercial main-memory column-store database system. Our most significant finding is that no single task scheduling and data placement strategy is best for all workloads. Specifically, inter-socket stealing of memory-intensive tasks can hurt overall performance, and unnecessary partitioning of data across sockets incurs overhead. For this reason, we implement algorithms that adapt task scheduling and data placement to the workload at run-time. Our experiments show that both sharing and NUMA-awareness can significantly improve the performance and scalability of highly concurrent analytical workloads on modern multi-core servers. Thus, we argue that sharing and NUMA-awareness are key factors for supporting faster processing of big-data analytical applications, fully exploiting the hardware resources of modern multi-core servers, and providing a more responsive user experience.
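The adaptive stealing rule motivated above can be sketched in a few lines: an idle socket may steal across sockets only when the candidate task is not memory-intensive, since remote-socket memory traffic can cost more than the idling it avoids. This is a minimal illustration under assumed names and an assumed binary task classification, not the prototype's actual scheduler:

```python
from collections import deque

class NumaScheduler:
    """Per-socket run queues with NUMA-aware work stealing."""

    def __init__(self, n_sockets):
        self.queues = [deque() for _ in range(n_sockets)]

    def submit(self, socket, task, memory_intensive):
        self.queues[socket].append((task, memory_intensive))

    def next_task(self, socket):
        """Prefer local tasks; steal remotely only compute-bound ones."""
        if self.queues[socket]:
            return self.queues[socket].popleft()[0]
        for remote, q in enumerate(self.queues):
            if remote == socket:
                continue
            for i, (task, mem) in enumerate(q):
                if not mem:          # never steal memory-intensive tasks
                    del q[i]
                    return task
        return None
```

An idle socket thus stays idle rather than steal a memory-intensive task, reflecting the finding that inter-socket stealing of such tasks can hurt overall performance.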

    A study in grid simulation and scheduling

    Grid computing is emerging as an essential tool for large-scale analysis and problem solving in scientific and business domains. Whilst the idea of stealing unused processor cycles is as old as the Internet, we are still far from a position where many distributed resources can be seamlessly utilised on demand. One major issue preventing this vision is deciding how to effectively manage the remote resources and how to schedule tasks amongst them. This thesis describes an investigation into Grid computing, specifically the problem of Grid scheduling. This complex problem has many unique features making it particularly difficult to solve, and as a result many current Grid systems employ simplistic, inefficient solutions. This work describes the development of a simulation tool, G-Sim, which can be used to test the effectiveness of potential Grid scheduling algorithms under realistic operating conditions. This tool is used to analyse the effectiveness of a simple, novel scheduling technique in numerous scenarios. The results are positive and show that it could be applied to current procedures to enhance performance and decrease the negative effect of resource failure. Finally, a conversion between the Grid scheduling problem and the classic computational problem SAT (Boolean satisfiability) is provided. Such a conversion opens the possibility of applying sophisticated SAT-solving procedures to Grid scheduling, providing potentially effective solutions.
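The flavor of such a scheduling-to-SAT conversion can be shown with a toy encoding: a Boolean variable x[t][r] means "task t runs on resource r". The constraints below (every task placed on exactly one resource; unit-capacity resources) are a simplified assumption for illustration, not the thesis' full conversion, and the clauses use DIMACS-style integer literals so they could be fed to any off-the-shelf SAT solver:

```python
from itertools import combinations

def encode(n_tasks, n_resources):
    """Emit CNF clauses (lists of signed ints, DIMACS numbering) encoding
    a toy Grid-scheduling instance as SAT."""
    var = lambda t, r: t * n_resources + r + 1
    clauses = []
    for t in range(n_tasks):
        # each task runs on at least one resource
        clauses.append([var(t, r) for r in range(n_resources)])
        # ... and on at most one (pairwise exclusion)
        for r1, r2 in combinations(range(n_resources), 2):
            clauses.append([-var(t, r1), -var(t, r2)])
    for r in range(n_resources):
        # unit capacity: no two tasks share a resource
        for t1, t2 in combinations(range(n_tasks), 2):
            clauses.append([-var(t1, r), -var(t2, r)])
    return clauses
```

For two tasks and two resources this yields six clauses, and any satisfying assignment corresponds to a valid one-task-per-resource schedule; richer constraints (deadlines, heterogeneous capacities, failure models) would be encoded as additional clauses.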

    Multiple Track Performance of a Digital Magnetic Tape System : Experimental Study and Simulation using Parallel Processing Techniques

    The primary aim of the magnetic recording industry is to increase storage capacities and transfer rates whilst maintaining or reducing costs. In multiple-track tape systems, as recorded track dimensions decrease, higher-precision tape transport mechanisms and dedicated coding circuitry are required. This leads to increased manufacturing costs and a loss of flexibility. This thesis reports on the performance of a low-precision, low-cost multiple-track tape transport system. Software-based techniques to study system performance and to compensate for the mechanical deficiencies of this system were developed using occam and the transputer. The inherent parallelism of the multiple-track format was exploited by integrating a transputer into the recording channel to perform the signal processing tasks. An innovative model of the recording channel, written exclusively in occam, was developed. The effect of parameters such as data rate, track dimensions, and head misregistration on system performance was determined from the detailed error profile produced. This model may be run on a network of transputers, allowing its speed of execution to be scaled to suit the investigation. These features, combined with its modular flexibility, make it a powerful tool that may be applied to other multiple-track systems, such as digital HDTV. A greater understanding of the effects of mechanical deficiencies on the performance of multiple-track systems was gained from this study. This led to the development of a software-based compensation scheme to reduce the effects of lateral head displacement and allow low-cost tape transport mechanisms to be used with narrow, closely spaced tracks, facilitating higher packing densities.
The experimental and simulated investigation of system performance and the development of the model and compensation scheme using parallel processing techniques have led to the publication of a paper, and two further publications are expected. Thorn EMI, Central Research Laboratories, Hayes, Middlesex.
    • 

    corecore