34 research outputs found

    Analysis of cross platform power data governance

    Get PDF
    With the rapid development of the smart grid, electric power enterprises face problems such as poor coordination across professional business units, limited information sharing, long data-entry times, inaccurate and poorly real-time data, redundant data extraction and storage, low data quality, and weak privacy protection. Managing these data comprehensively and mining the value of data resources has therefore become one of the important tasks in the development of electric power enterprises. Traditional methods use edge computing for data transmission and task allocation. Building on this, we study a cross-platform power data governance scheme based on edge computation offloading and deep reinforcement learning. The final experimental results show that the scheme achieves smaller delay and lower energy consumption.
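    The abstract does not spell out the learning setup. As a rough, hypothetical illustration of how a reinforcement-learning agent can learn offload-or-not decisions that trade off delay against energy, the sketch below uses a tabular Q-learning stand-in (the paper itself uses deep reinforcement learning) with invented state features, cost constants, and parameter names.

```python
# Hypothetical sketch: Q-learning over an edge-offloading decision.
# The paper uses deep reinforcement learning; a tabular agent is used here
# only to keep the illustration self-contained. All constants are invented.
import random

ACTIONS = ("local", "offload")          # process on device vs. on the edge server
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def cost(action, task_cycles, data_bits, cpu_hz, link_bps):
    """Weighted delay + energy cost (illustrative models and constants)."""
    if action == "local":
        delay = task_cycles / cpu_hz
        energy = 1e-27 * cpu_hz ** 2 * task_cycles          # simple CMOS energy model
    else:
        delay = data_bits / link_bps + task_cycles / (10 * cpu_hz)  # faster edge CPU
        energy = 0.1 * (data_bits / link_bps)                        # transmit energy
    return 0.5 * delay + 0.5 * energy

Q = {}  # state -> {action: estimated value}

def choose(state):
    if random.random() < EPSILON or state not in Q:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state, action, reward, next_state):
    Q.setdefault(state, {a: 0.0 for a in ACTIONS})
    Q.setdefault(next_state, {a: 0.0 for a in ACTIONS})
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

for step in range(10_000):
    # State: coarse link-quality and task-size buckets drawn at random.
    link_bps = random.choice((1e6, 5e6, 20e6))
    task_cycles = random.choice((1e8, 5e8, 2e9))
    state = (link_bps, task_cycles)
    action = choose(state)
    reward = -cost(action, task_cycles, data_bits=1e6, cpu_hz=1e9, link_bps=link_bps)
    update(state, action, reward, state)  # tasks treated as independent episodes
```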

    Scientific Image Restoration Anywhere

    Full text link
    The use of deep learning models within scientific experimental facilities frequently requires low-latency inference, so that, for example, quality control operations can be performed while data are being collected. Edge computing devices can be useful in this context, as their low cost and compact form factor permit them to be co-located with the experimental apparatus. Can such devices, with their limited resources, perform neural network feed-forward computations efficiently and effectively? We explore this question by evaluating the performance and accuracy of a scientific image restoration model, for which both model input and output are images, on edge computing devices. Specifically, we evaluate deployments of TomoGAN, an image-denoising model based on generative adversarial networks developed for low-dose x-ray imaging, on the Google Edge TPU and NVIDIA Jetson. We adapt TomoGAN for edge execution, evaluate model inference performance, and propose methods to address the accuracy drop caused by model quantization. We show that these edge computing devices can deliver accuracy comparable to that of a full-fledged CPU or GPU model, at speeds that are more than adequate for use in the intended deployments, denoising a 1024 x 1024 image in less than a second. Our experiments also show that the Edge TPU models can provide 3x faster inference response than a CPU-based model and 1.5x faster than an edge GPU-based model. This combination of high speed and low cost permits image restoration anywhere. Comment: 6 pages, 8 figures, 1 table.
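    As a minimal sketch of the post-training int8 quantization typically required before a model can run on the Google Edge TPU, the code below uses the standard TensorFlow Lite converter. The tiny stand-in network and random calibration images are assumptions for illustration, not the authors' TomoGAN pipeline, and the paper's accuracy-recovery methods are not reproduced here.

```python
# Hypothetical sketch: post-training int8 quantization with TensorFlow Lite,
# the usual preparation step before compiling a model for the Google Edge TPU.
# The small Conv2D stack and random calibration images are stand-ins for
# TomoGAN and real projection data; they are not from the paper's code.
import numpy as np
import tensorflow as tf

# Stand-in denoising network: image in, image out, same I/O idea as TomoGAN.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024, 1024, 1)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same"),
])

def representative_dataset():
    # A few representative inputs let the converter calibrate activation ranges.
    for _ in range(8):
        yield [np.random.rand(1, 1024, 1024, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # the Edge TPU expects integer I/O
converter.inference_output_type = tf.uint8

with open("denoiser_int8.tflite", "wb") as f:
    f.write(converter.convert())
# The resulting .tflite file is then passed through the edgetpu_compiler tool
# before deployment on an Edge TPU device.
```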

    Mobile Cloud Computing as Mobile offloading Solution: Frameworks, Focus and Implementation Challenges

    Get PDF
    Mobile devices operate under limited resource capacity, including memory, processing speed, storage, and battery life. With the advancement of technology, complex applications have been designed and implemented, and these applications demand computing devices with high capacity. Cloud computing emerged as a solution for computing devices with limited capacity. However, integrating the mobile operating environment with cloud computing has been a challenge due to the dynamic nature of the mobile device environment, including the unreliability of wireless communication. This paper reviews recent studies in Mobile Cloud Computing to assess its implementation as a task-offloading solution for mobile devices. The study reviews the common frameworks used to implement Mobile Cloud Computing, examines the focus of current studies, and finally highlights open issues that need to be addressed when implementing optimal Mobile Cloud Computing. Keywords: Mobile Cloud Computing, task offloading, mobile device, cloudlet. DOI: 10.7176/CEIS/11-5-04. Publication date: September 30th 2020.
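    To make the offloading trade-off discussed in the review concrete, here is a minimal, hypothetical decision sketch that compares an estimated local execution time against the transfer-plus-remote-compute time of a cloudlet or cloud. All parameter names and constants are illustrative and are not taken from any of the surveyed frameworks.

```python
# Hypothetical sketch: a basic offload-or-not decision of the kind MCC
# frameworks make. Parameter names and constants are illustrative only.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float       # CPU cycles required
    input_bytes: float  # data to upload if offloaded

def local_time(task, device_hz=1.5e9):
    return task.cycles / device_hz

def offload_time(task, uplink_bps=2e6, cloud_hz=8e9, rtt_s=0.05):
    upload = 8 * task.input_bytes / uplink_bps
    compute = task.cycles / cloud_hz
    return rtt_s + upload + compute

def should_offload(task):
    """Offload when the remote path is estimated to be faster."""
    return offload_time(task) < local_time(task)

heavy = Task(cycles=6e9, input_bytes=200_000)    # compute-heavy, small payload
light = Task(cycles=5e7, input_bytes=1_000_000)  # small compute, large payload
print(should_offload(heavy))  # True here: compute-bound work favors the cloud
print(should_offload(light))  # False here: transfer cost dominates
```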

    Joint Service Placement and Request Routing in Multi-cell Mobile Edge Computing Networks

    Full text link
    The proliferation of innovative mobile services such as augmented reality, networked gaming, and autonomous driving has spurred a growing need for low-latency access to computing resources that cannot be met solely by existing centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an effective solution to meet the demand for low-latency services by enabling the execution of computing tasks at the network periphery, in proximity to end users. While a number of recent studies have addressed the problem of determining the execution of service tasks and the routing of user requests to corresponding edge servers, the focus has primarily been on the efficient utilization of computing resources, neglecting the fact that non-trivial amounts of data need to be stored to enable service execution, and that many emerging services exhibit asymmetric bandwidth requirements. To fill this gap, we study the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints. We show that this problem generalizes several problems in the literature and propose an algorithm that achieves close-to-optimal performance using randomized rounding. Evaluation results demonstrate that our approach can effectively utilize the available resources to maximize the number of requests served by low-latency edge cloud servers. Comment: IEEE INFOCOM 2019.
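    The abstract names randomized rounding as the algorithmic tool. The fragment below is a generic, hypothetical sketch of that rounding step, assuming a fractional service-placement solution from an LP relaxation is already available and checking only a simplified storage constraint; it is not the paper's algorithm, and the request-routing stage is omitted.

```python
# Hypothetical sketch of a randomized-rounding step: given a fractional
# service-placement solution x_frac[edge][service] in [0, 1] from an LP
# relaxation, sample integral placements and keep a good feasible one.
# The LP itself and the request-routing stage are omitted.
import random

def round_placement(x_frac, storage_need, storage_cap, trials=1000, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        # Place each service at each edge node with probability x_frac[edge][svc].
        placement = {
            edge: {svc: rng.random() < p for svc, p in svcs.items()}
            for edge, svcs in x_frac.items()
        }
        # Keep only placements that respect each edge node's storage capacity.
        feasible = all(
            sum(storage_need[svc] for svc, placed in svcs.items() if placed)
            <= storage_cap[edge]
            for edge, svcs in placement.items()
        )
        if feasible:
            # Crude proxy objective: number of service copies placed at the edge.
            score = sum(placed for svcs in placement.values() for placed in svcs.values())
            if best is None or score > best[0]:
                best = (score, placement)
    return None if best is None else best[1]

x_frac       = {"cell-1": {"AR": 0.7, "gaming": 0.4}, "cell-2": {"AR": 0.2, "gaming": 0.9}}
storage_need = {"AR": 3, "gaming": 5}
storage_cap  = {"cell-1": 6, "cell-2": 8}
print(round_placement(x_frac, storage_need, storage_cap))
```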