3 research outputs found

    An Efficient Monte Carlo-based Probabilistic Time-Dependent Routing Calculation Targeting a Server-Side Car Navigation System

    Incorporating the speed probability distribution into route-planning computations in car navigation systems guarantees more accurate and precise responses. In this paper, we propose a novel approach for dynamically selecting the number of samples used in the Monte Carlo simulation that solves the Probabilistic Time-Dependent Routing (PTDR) problem, thus improving computation efficiency. The proposed method proactively determines the number of simulations to be run to extract the travel-time estimate for each specific request while respecting an error threshold as the output quality level. The methodology requires reduced effort on the application-development side. We adopted an aspect-oriented programming language (LARA) together with a flexible dynamic autotuning library (mARGOt) to instrument the code and to take tuning decisions on the number of samples, respectively, improving execution efficiency. Experimental results demonstrate that the proposed adaptive approach saves a large fraction of simulations (between 36% and 81%) with respect to a static approach across different traffic situations, paths, and error requirements. Given the negligible runtime overhead of the proposed approach, this translates into an execution-time speedup between 1.5x and 5.1x. At the infrastructure level, the speedup corresponds to a reduction of around 36% in the computing resources needed to support the whole navigation pipeline.
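    The core idea of adaptive sample selection can be illustrated with a minimal sketch: draw travel-time samples in batches and stop as soon as the relative confidence-interval half-width of the estimate falls below the requested error threshold. This is an illustrative reconstruction, not the paper's actual implementation; the function name `adaptive_travel_time`, the batch size, and the normal-approximation stopping rule are assumptions.

```python
import random
import statistics

def adaptive_travel_time(sample_speed_fn, distance_km,
                         error_threshold=0.05, batch=100, max_samples=10_000):
    """Monte Carlo travel-time estimate that stops sampling once the
    95% confidence-interval half-width, relative to the mean, drops
    below error_threshold (a sketch of PTDR-style adaptive sampling)."""
    times = []
    while len(times) < max_samples:
        # Draw one more batch of speed samples from the segment's distribution.
        for _ in range(batch):
            times.append(distance_km / sample_speed_fn())
        mean = statistics.fmean(times)
        stdev = statistics.stdev(times)
        # 1.96 is the normal quantile for a 95% confidence interval.
        rel_err = 1.96 * stdev / (len(times) ** 0.5) / mean
        if rel_err < error_threshold:
            break  # quality target met: no need for more simulations
    return statistics.fmean(times), len(times)

# Example: speeds uniform between 40 and 80 km/h over a 100 km path.
rng = random.Random(0)
estimate, n_samples = adaptive_travel_time(lambda: rng.uniform(40, 80), 100)
```

    The static approach the paper compares against would always run `max_samples` simulations; here easy traffic distributions terminate after far fewer batches, which is where the reported savings come from.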

    Consolidation and replication of VMs matching performance objectives

    Users of current computing infrastructures that offer resource provisioning (such as clouds) are often asked to decide on the proper amount of equipment (virtual machines, VMs) required to execute their requests while satisfying a set of performance objectives. These decisions are particularly difficult, since the correlation between the resources allocated and the performance obtained is influenced by a number of factors, such as the characteristics of the different classes of requests, the capacity of the resources, the workloads sharing the same physical hardware, and the dynamic variation of the mix of requests of the different classes in concurrent execution. In this paper we derive the impact on several performance indexes of two popular techniques, namely consolidation and replication, adopted in virtual computing infrastructures. In particular, we present an analytical model to determine the best consolidation or replication option that matches given performance objectives specified through a set of constraints.
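    The flavor of such an analytical sizing model can be sketched with a classic M/M/c queueing argument: given an arrival rate and per-VM service rate, find the smallest number of replicas for which the probability of a request having to queue stays below a constraint. This is a generic textbook sketch, not the model from the paper; `erlang_c`, `min_replicas`, and the waiting-probability constraint are illustrative assumptions.

```python
from math import factorial

def erlang_c(c, offered_load):
    """Probability that an arriving request must wait in an M/M/c queue
    with c servers and offered_load = arrival_rate / service_rate."""
    if offered_load >= c:
        return 1.0  # unstable system: requests always queue
    s = sum(offered_load**k / factorial(k) for k in range(c))
    last = offered_load**c / (factorial(c) * (1 - offered_load / c))
    return last / (s + last)

def min_replicas(arrival_rate, service_rate, max_wait_prob):
    """Smallest VM replica count whose queueing probability meets the
    performance constraint max_wait_prob."""
    a = arrival_rate / service_rate  # offered load in Erlangs
    c = max(1, int(a) + 1)           # stability requires c > a
    while erlang_c(c, a) > max_wait_prob:
        c += 1
    return c

# Example: 1 request/s, each VM serves 1 request/s, at most 50% may queue.
vms = min_replicas(1.0, 1.0, 0.5)  # → 2
```

    Consolidation would be the inverse question: how many workloads can share one physical host before some class's constraint is violated; the same kind of model applies per class.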