3 research outputs found

    Towards Distributed Mobile Computing

    In recent years, the market for mobile devices has grown exponentially. In this scenario, the rate at which mobile devices are replaced assumes particular relevance. According to the International Telecommunication Union, smart-phone owners replace their device every 20 months on average. The side effect of this trend is the disposal of an increasing amount of electronic devices which, in many cases, are still working. We believe that it is feasible to recover such unexploited computational power. Through a change of paradigm, it is possible to achieve a two-fold objective: 1) extend the lifetime of mobile devices, and 2) enable a new opportunity to speed up mobile applications. In this paper we provide a survey of state-of-the-art solutions that move in the direction of a Distributed Mobile Computing paradigm. We highlight the challenges to be addressed in order to implement this paradigm and propose some possible future improvements.

    Opportunistic CPU Sharing in Mobile Edge Computing Deploying the Cloud-RAN

    Leveraging virtualization technology, Cloud-RAN deploys multiple virtual Base Band Units (vBBUs) along with collocated applications on the same Mobile Edge Computing (MEC) server. However, the performance of real-time (RT) applications such as the vBBU can be impacted by sharing computing resources with collocated workloads. To address this challenge, this paper presents a dynamic CPU sharing mechanism specifically designed for containerized virtualization in MEC servers that host both RT and non-RT general-purpose applications. First, the CPU sharing problem in MEC servers is formulated as a Mixed-Integer Program (MIP). Then, we present an algorithmic solution that breaks the MIP down into simpler subproblems, which are solved using efficient, constant-factor heuristics. We assessed the performance of this mechanism against instances of a commercial solver. Further, via a small-scale testbed, we evaluated various CPU sharing mechanisms; the results indicate that our mechanism reduces the worst-case execution time by more than 150% compared to the default host RT-Kernel approach. This evidence is strengthened when evaluating the mechanism within Cloud-RAN, where vBBUs share resources with collocated applications on a MEC server. Using our CPU sharing approach, the vBBU's scheduling latency decreases by up to 21% in comparison with the host RT-Kernel.
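
    The abstract above mentions decomposing the CPU-sharing MIP into subproblems solved with constant-factor heuristics. The paper's actual formulation is not reproduced here; purely as a sketch of that family of heuristics, the snippet below packs hypothetical non-RT container loads onto cores that each reserve a fixed share for an RT workload, using first-fit decreasing (a classic constant-factor bin-packing heuristic). The function name, the reserved share and the example loads are all illustrative assumptions.

```python
# Illustrative sketch only: pack non-RT container CPU demands onto cores,
# reserving a fixed share per core for the RT workload (e.g. a vBBU thread).
# First-fit decreasing is a classic constant-factor packing heuristic;
# it is NOT the paper's algorithm, just an example of the heuristic family.

def pack_non_rt_loads(core_capacity, rt_reserved, non_rt_loads):
    """Assign non-RT loads (fractions of a core) to cores.

    core_capacity : usable CPU share per core (e.g. 1.0)
    rt_reserved   : share reserved on every core for the RT workload
    non_rt_loads  : list of CPU demands of non-RT containers
    Returns one list of placed loads per core opened.
    """
    budget = core_capacity - rt_reserved       # room left for non-RT work
    free = []                                  # remaining budget per core
    placement = []                             # loads placed on each core

    for load in sorted(non_rt_loads, reverse=True):    # decreasing order
        for i, room in enumerate(free):
            if load <= room:                   # first core that still fits
                free[i] -= load
                placement[i].append(load)
                break
        else:                                  # no core fits: open a new one
            free.append(budget - load)
            placement.append([load])
    return placement

# Example: 30% of each core is assumed reserved for the RT vBBU share.
print(pack_non_rt_loads(1.0, 0.3, [0.5, 0.4, 0.3, 0.2, 0.2, 0.1]))
```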

    Runtime resource management for embedded and HPC systems

    Resource management is a well-known problem in almost every computing system, ranging from embedded to High Performance Computing (HPC), and is useful to optimize multiple orthogonal system metrics such as power consumption, performance and reliability. To achieve such an optimization, a resource manager must suitably allocate the available system resources - e.g. processing elements, memories and interconnect - to the running applications. This process faces two main problems: a) system resources are usually shared between multiple applications, which induces resource contention; and b) each application requires a different Quality of Service, making it harder for the resource manager to work in an application-agnostic mode. In this scenario, resource management represents a critical and essential component in a computing system and should act at different levels to optimize the whole system while keeping it flexible and versatile. In this paper we describe a multi-layer resource management strategy that operates at the application, operating system and hardware levels and tries to optimize resource allocation on embedded, desktop multi-core and HPC systems.
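
    The abstract describes allocating shared resources to applications with different Quality of Service requirements. Purely as an illustrative sketch, and not the multi-layer strategy described in the paper, the snippet below splits a fixed pool of processing elements among applications in proportion to a declared QoS weight while guaranteeing each application at least one element; the function name, weights and pool size are assumptions.

```python
# Illustrative sketch only: share a fixed pool of processing elements (PEs)
# among applications according to a declared QoS weight, guaranteeing each
# application at least one PE. This is not the paper's multi-layer strategy.

def allocate_pes(total_pes, qos_weights):
    """Return {app: number_of_PEs} for a dict of {app: qos_weight}."""
    if total_pes < len(qos_weights):
        raise ValueError("not enough PEs to give every application one")

    # Reserve one PE per application, then split the rest by QoS weight.
    allocation = {app: 1 for app in qos_weights}
    spare = total_pes - len(qos_weights)
    total_weight = sum(qos_weights.values())

    shares = {app: spare * w / total_weight for app, w in qos_weights.items()}
    for app in qos_weights:
        allocation[app] += int(shares[app])            # integer part first

    # Hand out PEs lost to rounding, largest fractional remainder first.
    leftover = total_pes - sum(allocation.values())
    for app in sorted(shares, key=lambda a: shares[a] - int(shares[a]),
                      reverse=True)[:leftover]:
        allocation[app] += 1
    return allocation

# Example: 16 PEs split between a latency-critical and two batch applications.
print(allocate_pes(16, {"video_decode": 4.0, "batch_a": 1.0, "batch_b": 1.0}))
```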