
    The Road towards Predictable Automotive High-Performance Platforms

    Due to the trends of centralizing the E/E architecture and new computing-intensive applications, high-performance hardware platforms are currently finding their way into automotive systems. However, the SoCs currently available on the market have significant weaknesses when it comes to providing predictable performance for time-critical applications, mainly because these platforms are optimized for average-case performance. This shortcoming represents a major risk in the development of current and future automotive systems. In this paper, we describe how high performance and predictability could (and should) be reconciled in future HW/SW platforms. We believe that this goal can only be reached through close collaboration between system suppliers, IP providers, semiconductor companies, and OS/hypervisor vendors. Furthermore, academic input will be needed to solve remaining challenges and to further improve initial solutions.

    Adaptive Dispatching of Tasks in the Cloud

    The increasingly wide application of Cloud Computing enables the consolidation of tens of thousands of applications in shared infrastructures. Meeting the quality-of-service requirements of so many diverse applications in such shared resource environments has thus become a real challenge, especially since the characteristics and workload of applications differ widely and may change over time. This paper presents an experimental system that can exploit a variety of online QoS-aware adaptive task allocation schemes, and three such schemes are designed and compared: first, a measurement-driven algorithm that uses reinforcement learning; second, a "sensible" allocation algorithm that assigns jobs to the sub-systems observed to provide a lower response time; and third, an algorithm that splits the job arrival stream into sub-streams at rates computed from the hosts' processing capabilities. All of these schemes are compared via measurements among themselves and with a simple round-robin scheduler, on two experimental test-beds with homogeneous and heterogeneous hosts having different processing capacities.
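    The second and third schemes lend themselves to a compact illustration. The Python sketch below is only a minimal reading of the abstract, not the authors' implementation: the class, the smoothing factor alpha, and the update rule are assumptions.

        import random

        class Dispatcher:
            """Toy dispatcher illustrating two of the allocation schemes above."""

            def __init__(self, hosts, capacities):
                self.hosts = hosts
                # Rate-based scheme: sub-stream probabilities proportional
                # to each host's processing capacity.
                total = sum(capacities)
                self.split_probs = [c / total for c in capacities]
                # "Sensible" scheme: smoothed response-time estimate per host.
                self.est_rt = {h: 0.0 for h in hosts}

            def dispatch_sensible(self):
                # Send the job to the host observed to respond fastest so far.
                return min(self.hosts, key=lambda h: self.est_rt[h])

            def dispatch_rate_split(self):
                # Split the arrival stream in proportion to capacity.
                return random.choices(self.hosts, weights=self.split_probs)[0]

            def record_response(self, host, rt, alpha=0.2):
                # Exponential smoothing of measured response times
                # (the smoothing factor is an illustrative choice).
                self.est_rt[host] = (1 - alpha) * self.est_rt[host] + alpha * rt

    The round-robin baseline used in the paper's comparison would simply cycle through the hosts, ignoring both the measurements and the capacities.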

    Cross-layer signalling and middleware: a survey for inelastic soft real-time applications in MANETs

    This paper provides a review of the different cross-layer design and protocol tuning approaches that may be used to meet a growing need to support inelastic soft real-time streams in MANETs. These streams are characterised by critical timing and throughput requirements and low packet-loss tolerance. Many cross-layer approaches exist, either for the provision of QoS to soft real-time streams in static wireless networks or for improving the performance of real-time and non-real-time transmissions in MANETs. The common ground and lessons learned from these approaches are therefore discussed, with a view to the potential provision of much-needed support for real-time applications in MANETs.

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, which were evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.

    Autonomic management of virtualized resources in cloud computing

    The last five years have witnessed a rapid growth of cloud computing in business, governmental and educational IT deployment. The success of cloud services depends critically on the effective management of virtualized resources, and a key requirement of cloud management is the ability to dynamically match resource allocations to actual demands. To this end, we aim to design and implement a cloud resource management mechanism that manages underlying complexity, automates resource provisioning and controls client-perceived quality of service (QoS) while still achieving resource efficiency. The design of automatic resource management centers on two questions: when to adjust resource allocations, and by how much. In a cloud, applications have different definitions of capacity, and cloud dynamics make it difficult to determine a static resource-to-performance relationship. In this dissertation, we propose a generic metric that measures application capacity, design model-independent and adaptive approaches to manage resources, and build a cloud management system scalable to a cluster of machines.

    To understand web system capacity, we propose the productivity index (PI), defined as the ratio of yield to cost, to measure system processing capability online. PI is a generic concept that can be applied at different levels to monitor system progress and identify whether more capacity is needed. We applied PI to the problem of overload prevention in multi-tier websites; the overload predictor built on the PI metric shows more accurate and responsive overload prevention than conventional approaches.

    To address the lack of an accurate server model, we propose a model-independent, fuzzy-control-based approach for CPU allocation. For adaptive and stable control performance, we embed the controller with self-tuning output amplification and flexible rule selection. We further build a QoS provisioning framework that supports multi-objective QoS control and service differentiation; experiments on a virtual cluster with two service classes show the effectiveness of our approach in both performance and power control.

    To address the complex interplay between resources and process delays in fine-grained multi-resource allocation, we treat capacity management as a decision-making problem and employ reinforcement learning (RL) to optimize the process. The optimization depends on trial-and-error interactions with the cloud system. To improve initial management performance, we propose a model-based RL algorithm: a neural-network-based environment model, learned from previous management history, generates simulated resource allocations for the RL agent. Experimental results on heterogeneous applications show that our approach makes efficient use of limited interactions and finds near-optimal resource configurations within 7 steps.

    Finally, we present a distributed reinforcement learning approach to cluster-wide cloud resource management. We decompose the cluster-wide resource allocation problem into sub-problems concerning individual VM resource configurations; the cluster-wide allocation is optimized if individual VMs meet their SLAs with high resource utilization. For scalability, we develop an efficient reinforcement learning approach with continuous state space; for adaptability, we use VM low-level runtime statistics to accommodate workload dynamics. Prototyped in the iBalloon system, the distributed learning approach successfully manages 128 VMs on a 16-node closely correlated cluster.
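    As a concrete reading of the PI metric, consider the minimal Python sketch below. The abstract only defines PI as the ratio of yield to cost; the specific choices of yield (requests completed within their deadline) and cost (requests admitted), and the threshold value, are illustrative assumptions rather than the dissertation's settings.

        def productivity_index(yield_value, cost):
            # PI, as defined above: the ratio of yield to cost.
            return yield_value / cost if cost > 0 else 0.0

        # Hypothetical overload check: stop admitting new requests when
        # the online PI estimate falls below a threshold (value assumed).
        PI_THRESHOLD = 0.85

        def should_admit(completed_in_time, admitted):
            return productivity_index(completed_in_time, admitted) >= PI_THRESHOLD

    Because both yield and cost are observable online, such a check can run continuously at any tier of a multi-tier site, which is what makes the metric generic.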

    A cross-layer middleware architecture for time and safety critical applications in MANETs

    Mobile Ad hoc Networks (MANETs) can be deployed instantaneously and adaptively, making them highly suitable to military, medical and disaster-response scenarios. Using real-time applications for the provision of instantaneous and dependable communications, media streaming, and device control in these scenarios is a growing research field. Realising timing requirements in packet delivery is essential to safety-critical real-time applications that are both delay- and loss-sensitive. The safety of these applications is compromised by packet loss, both on the network and by the applications themselves, which drop packets that exceed delay bounds. However, the provision of this required Quality of Service (QoS) must overcome issues relating to the lack of reliable existing infrastructure and the conservation of safety-certified functionality, as well as layer-2 dynamics whose causal factors include hidden transmitters and fading channels.

    This thesis proposes that bounded maximum delay and safety-critical application support can be achieved with cross-layer middleware. Such an approach benefits from the use of established protocols without requiring modifications to safety-certified ones. This research proposes ROAM: a novel, adaptive and scalable cross-layer Real-time Optimising Ad hoc Middleware framework for the provision and maintenance of performance guarantees in self-configuring MANETs. The ROAM framework is designed to be extensible to new optimisers and MANET protocols and requires no modification of protocol functionality.

    Four original contributions are proposed: (1) ROAM, a middleware entity that abstracts information from the protocol stack using application programming interfaces (APIs) and implements optimisers to monitor and autonomously tune conditions at protocol layers in response to dynamic network conditions; the cross-layer approach is MANET-protocol generic, imposes minimally on the protocol stack, and requires no protocol modification. (2) A horizontal handoff optimiser that responds to time-varying link quality to ensure optimal and robust channel usage. (3) A distributed contention-reduction optimiser that reduces channel contention and the related delay in response to the detected presence of a hidden transmitter. (4) A feasibility evaluation of the ROAM architecture that bounds maximum delay and jitter in a comprehensive range of ns2-MIRACLE simulation scenarios, demonstrating independence from the key causes of network dynamics, application setting and MANET configuration, including mobility and topology. Experimental results show that ROAM can constrain end-to-end delay, jitter and packet loss to support real-time applications with critical timing requirements.
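    The middleware/optimiser separation described above can be pictured with a short Python sketch. ROAM's actual interfaces are not given in the abstract, so every name below (StackAPI, Optimiser, Middleware, snapshot, the MAC parameter) is a hypothetical stand-in for the pattern: optimisers read abstracted stack state through an API and tune layer parameters without modifying the protocols themselves.

        class StackAPI:
            """Hypothetical read/tune interface exposed by the protocol stack."""

            def __init__(self):
                self.params = {"mac.cw_min": 15}      # illustrative tunable
                self.metrics = {"link_quality": 1.0}  # illustrative observable

            def snapshot(self):
                # Abstracted, read-only view of current layer conditions.
                return dict(self.metrics)

            def set_param(self, name, value):
                # Tuning happens through the API, not inside a protocol.
                self.params[name] = value

        class Optimiser:
            """One cross-layer optimiser: observes stack state and reacts."""

            def step(self, state, api):
                raise NotImplementedError

        class ContentionOptimiser(Optimiser):
            # Illustrative stand-in for the contention-reduction optimiser:
            # widen the MAC contention window when link quality degrades.
            def step(self, state, api):
                if state["link_quality"] < 0.5:
                    api.set_param("mac.cw_min", 31)

        class Middleware:
            """Registers optimisers and runs each one per monitoring cycle."""

            def __init__(self, api):
                self.api = api
                self.optimisers = []

            def register(self, opt):
                # New optimisers plug in without touching existing ones,
                # mirroring the extensibility goal described above.
                self.optimisers.append(opt)

            def cycle(self):
                state = self.api.snapshot()
                for opt in self.optimisers:
                    opt.step(state, self.api)

    The design choice the sketch highlights is that all protocol interaction is funnelled through the API object, which is what lets safety-certified protocol code remain unmodified.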