
    On distributed mobile edge computing

    Mobile Cloud Computing (MCC) has been proposed to offload the workloads of mobile applications from mobile devices to the cloud in order to not only reduce the energy consumption of mobile devices but also accelerate the execution of mobile applications. Owing to the long End-to-End (E2E) delay between mobile devices and the cloud, offloading the workloads of many interactive mobile applications to the cloud may not be suitable. That is, these mobile applications require a huge amount of computing resources to process their workloads as well as a low E2E delay between mobile devices and computing resources, which cannot be satisfied by the current MCC technology. In order to reduce the E2E delay, a novel cloudlet network architecture is proposed to bring the computing and storage resources from the remote cloud to the mobile edge. In the cloudlet network, each mobile user is associated with a specific Avatar (i.e., a dedicated Virtual Machine (VM) providing computing and storage resources to its mobile user) in the nearby cloudlet via its associated Base Station (BS). Thus, mobile users can offload their workloads to their Avatars with low E2E delay (i.e., one wireless hop). However, mobile users may roam among BSs in the mobile network, and so the E2E delay between mobile users and their Avatars may become worse if the Avatars remain in their original cloudlets. Thus, Avatar handoff is proposed to migrate an Avatar from one cloudlet to another in order to reduce the E2E delay between the Avatar and its mobile user. The LatEncy aware Avatar handDoff (LEAD) algorithm is designed to determine the location of each mobile user's Avatar in each time slot in order to minimize the average E2E delay among all the mobile users and their Avatars. The performance of LEAD is demonstrated via extensive simulations. The cloudlet network architecture not only facilitates mobile users in offloading their computational tasks but also empowers the Internet of Things (IoT). Popular IoT resources are proposed to be cached in nearby brokers, which are considered as application layer middleware nodes hosted by cloudlets in the cloudlet network, to reduce the energy consumption of servers. In addition, an Energy Aware and latency guaranteed dynamic reSourcE caching (EASE) strategy is proposed to enable each broker to cache suitable popular resources such that the energy consumption from the servers is minimized and the average delay of delivering the contents of the resources to the corresponding clients is guaranteed. The performance of EASE is demonstrated via extensive simulations. The future work comprises two parts. First, caching popular IoT resources in nearby brokers may incur unbalanced traffic loads among brokers, thus increasing the average delay of delivering the contents of the resources. Thus, how to balance the traffic loads among brokers to speed up the IoT content delivery process requires further investigation. Second, a drone-assisted mobile access network architecture will be briefly investigated to accelerate communications between mobile users and their Avatars.
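    The abstract does not spell out how LEAD makes its per-time-slot decision, so the sketch below only illustrates the general idea of latency-aware Avatar placement: in each slot, keep an Avatar where it is unless moving it to a lower-delay cloudlet pays for an assumed migration penalty. The delay matrix, the migration_penalty parameter, and the greedy rule are hypothetical and are not taken from the thesis.

```python
# Hypothetical sketch of a per-time-slot Avatar placement decision in the
# spirit of the handoff idea above. The cost model (delay matrix, migration
# penalty) is assumed for illustration and is not the LEAD algorithm itself.

def place_avatars(delay, current, migration_penalty=2.0):
    """delay[u][c]: E2E delay from user u's current BS to cloudlet c.
    current[u]: cloudlet currently hosting user u's Avatar.
    Returns a new placement that greedily trades delay against migration cost."""
    placement = {}
    for u, row in delay.items():
        best_c = min(row, key=row.get)                # lowest-delay cloudlet for this slot
        stay_cost = row[current[u]]                   # delay if the Avatar stays put
        move_cost = row[best_c] + migration_penalty   # delay plus one-off handoff cost
        placement[u] = best_c if move_cost < stay_cost else current[u]
    return placement

# Example: user "u1" roams so that cloudlet "c2" is now closer than its home "c1".
delay = {"u1": {"c1": 30.0, "c2": 12.0}, "u2": {"c1": 8.0, "c2": 25.0}}
print(place_avatars(delay, {"u1": "c1", "u2": "c1"}))   # {'u1': 'c2', 'u2': 'c1'}
```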

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation Mobile Communication (5G) networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability. Based on these key features, solutions have been proposed for effective resource management. In this research, two aspects of resource management in MEC have been addressed: the computational resource and the caching resource, which corresponds to the services provided by the MEC. MEC is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health. These applications will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud. Therefore, it is important to ensure reliable MEC network communication during resource provisioning by eradicating the chances of deadlock. Deadlock may occur due to a huge number of devices contending for a limited amount of resources if adequate measures are not put in place. It is crucial to eradicate deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and readily available system to support latency-critical applications. In this research, a deadlock avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates Banker's resource-request algorithm using Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results have shown that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to a more reliable network interaction between mobile stations and MEC platforms. Additionally, this research explores the use of MEC as a caching platform, as it is proclaimed as a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data content access by allowing direct content delivery through the edge without fetching data from the remote server. Caching on MEC is also deemed an effective approach that guarantees more reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm is a unification of a modified Belady's algorithm and a distributed cooperative caching algorithm to improve data access while reducing latency. A polynomial fit algorithm with Lagrange interpolation is employed to predict future request references for Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits, due to its selective caching approach, when compared with the case study algorithms. Results also show that the use of a cooperative algorithm can improve the total cache hits by up to 80%. Furthermore, this thesis has also explored another predictive caching scheme to further improve caching efficiency. The motivation was to investigate another predictive caching approach as an improvement to the former. A Predictive Collaborative Replacement (PCR) caching framework has been proposed as a result, which consists of three schemes. Each of the schemes addresses a particular problem. The proactive predictive scheme has been proposed to address the problem of continuous change in cache popularity trends. The collaborative scheme addresses the problem of cache redundancy in the collaborative space. Finally, the replacement scheme is a solution to evict cold cache blocks and increase the hit ratio. Simulation experiments have shown that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi Queue, and Frequency-based replacement. The PCR algorithm has been tested using a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm. Results show that PCR performs better, with a 25% increase in hit ratio and a 10% CPU utilization overhead.
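    The deadlock avoidance scheme above builds on Banker's resource-request algorithm, whose core safety check is sketched below for concreteness. The device names, resource vectors, and single-controller view are illustrative assumptions; the SDN-based signalling described in the thesis is not shown.

```python
# A minimal sketch of the classical Banker's safety check underlying
# deadlock avoidance. Requests are granted only if the resulting state
# is still "safe", i.e. every device can still run to completion in
# some order.

def is_safe(available, allocation, maximum):
    """available: list of free units per resource type.
    allocation[d], maximum[d]: units held / maximum claim per device d.
    Returns True if a completion order exists for all devices (safe state)."""
    work = list(available)
    need = {d: [m - a for m, a in zip(maximum[d], allocation[d])] for d in allocation}
    finished = {d: False for d in allocation}
    progressed = True
    while progressed:
        progressed = False
        for d in allocation:
            if not finished[d] and all(n <= w for n, w in zip(need[d], work)):
                # Device d can finish and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[d])]
                finished[d] = True
                progressed = True
    return all(finished.values())

# Grant a request only if the state it leads to is still safe.
alloc = {"iot1": [0, 1], "iot2": [2, 0]}   # hypothetical current allocations
maxc  = {"iot1": [1, 2], "iot2": [3, 1]}   # hypothetical maximum claims
print(is_safe([1, 1], alloc, maxc))        # True: a completion order exists
```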

    Cloudlet computing: recent advances, taxonomy, and challenges

    A cloudlet is an emerging computing paradigm that is designed to meet the requirements and expectations of the Internet of Things (IoT) and tackle the conventional limitations of a cloud (e.g., high latency). The idea is to bring computing resources (i.e., storage and processing) to the edge of a network. This article presents a taxonomy of cloudlet applications, outlines cloudlet utilities, and describes recent advances, challenges, and future research directions. Based on the literature, a unique taxonomy of cloudlet applications is designed. Moreover, a cloudlet computation offloading application for augmenting resource-constrained IoT devices, handling compute-intensive tasks, and minimizing the energy consumption of related devices is explored. This study also highlights the viability of cloudlets to support smart systems and applications, such as augmented reality, virtual reality, and applications that require high-quality service. Finally, the role of cloudlets in emergency situations, hostile conditions, and in the technological integration of future applications and services is elaborated in detail.
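    As a toy illustration of the energy argument behind cloudlet offloading (not code from the article), the sketch below offloads a task when the device-side energy of transmitting the input and idling during remote execution is lower than the energy of computing locally. All power, speed, and size figures are made-up assumptions.

```python
# Illustrative only: a simple device-side energy comparison for deciding
# whether to offload a compute-intensive task to a nearby cloudlet.
# Every numeric parameter below is an assumed placeholder value.

def should_offload(cycles, input_bits,
                   local_power_w=0.9, local_speed_hz=1e9,
                   tx_power_w=0.3, uplink_bps=20e6,
                   idle_power_w=0.05, cloudlet_speed_hz=8e9):
    """Return True if executing on the cloudlet costs the device less energy."""
    e_local = local_power_w * (cycles / local_speed_hz)    # energy to compute on-device
    t_tx = input_bits / uplink_bps                         # time to upload the input
    t_remote = cycles / cloudlet_speed_hz                  # time the cloudlet computes
    e_offload = tx_power_w * t_tx + idle_power_w * t_remote
    return e_offload < e_local

# A compute-heavy task with a small input is a good offloading candidate.
print(should_offload(cycles=2e9, input_bits=1e6))   # True
```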

    Using a Real-Time Object Detection Application to Illustrate Effectiveness of Offloading and Prefetching in Cloudlet Architecture

    In this thesis, we designed and implemented two versions of a real-time object detection application: a stand-alone version and a cloud version. By applying the application to a cloudlet environment, we are able to perform experiments and use the results to illustrate the potential improvement that a cloudlet architecture can bring to mobile applications that require access to large amounts of cloud data or intensive computation. Potential improvements include faster data access, reduced CPU and memory usage, and reduced battery consumption on mobile devices.
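    The thesis code is not reproduced here, but the sketch below illustrates the overlap that offloading plus prefetching aims to create: while the cloudlet processes the current frame, the client is already fetching the next one. detect_on_cloudlet() and fetch_frame() are hypothetical stand-ins for the real network and camera calls.

```python
# A rough sketch (not the thesis implementation) of overlapping offloaded
# detection with prefetching of the next frame on the client side.

from concurrent.futures import ThreadPoolExecutor

def detect_on_cloudlet(frame):
    # Placeholder for an RPC/HTTP call to the cloudlet-hosted detector.
    return f"objects in {frame}"

def fetch_frame(index):
    # Placeholder for reading the next frame from the camera or a nearby cache.
    return f"frame-{index}"

def run(num_frames=3):
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        frame = fetch_frame(0)
        for i in range(num_frames):
            detect_job = pool.submit(detect_on_cloudlet, frame)   # offload current frame
            prefetch_job = pool.submit(fetch_frame, i + 1)        # prefetch next frame in parallel
            results.append(detect_job.result())
            frame = prefetch_job.result()
    return results

print(run())   # ['objects in frame-0', 'objects in frame-1', 'objects in frame-2']
```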

    Towards Mobile Edge Computing: Taxonomy, Challenges, Applications and Future Realms

    Cloud computing has revolutionized the access to, and utilization of, resources and applications over the Internet. However, deploying cloud computing for delay-critical applications and reducing the delay in accessing resources remain challenging. The Mobile Edge Computing (MEC) paradigm is one of the effective solutions, as it brings cloud computing services to the proximity of the edge network and leverages the available resources there. This paper presents a survey of the latest and state-of-the-art algorithms, techniques, and concepts of MEC. The proposed work is unique in that it considers the most recent algorithms, which are not covered by existing surveys. Moreover, the selected literature is classified in terms of performance metrics, describing where performance is already promising and where a margin of improvement remains for future investigation. This also eases the choice of a particular algorithm for a particular application. Unlike existing surveys, a bibliometric overview is provided, which further helps researchers, engineers, and scientists gain a thorough insight, select applications, and identify candidates for future improvement. In addition, applications related to the MEC platform are presented. Open research challenges, future directions, and lessons learned in the area of MEC are provided for further investigation.

    Fog computing security: a review of current applications and security solutions

    Fog computing is a new paradigm that extends the Cloud platform model by providing computing resources at the edges of a network. It can be described as a cloud-like platform having similar data, computation, storage, and application services, but it is fundamentally different in that it is decentralized. In addition, Fog systems can process large amounts of data locally, operate on-premises, are fully portable, and can be installed on heterogeneous hardware. These features make the Fog platform highly suitable for time- and location-sensitive applications. For example, Internet of Things (IoT) devices are required to process large amounts of data quickly. This wide range of functionality-driven applications intensifies many security issues regarding data, virtualization, segregation, networking, malware, and monitoring. This paper surveys existing literature on Fog computing applications to identify common security gaps. Similar technologies, such as Edge computing, Cloudlets, and Micro-data centres, have also been included to provide a holistic review. The majority of Fog applications are motivated by the desire for functionality and end-user requirements, while the security aspects are often ignored or considered as an afterthought. This paper also determines the impact of those security issues and possible solutions, providing future security-relevant directions to those responsible for designing, developing, and maintaining Fog systems.