
    Addressing Application Latency Requirements through Edge Scheduling

    Latency-sensitive and data-intensive applications, such as IoT or mobile services, benefit from Edge computing, which extends the cloud ecosystem with distributed computational resources in proximity to data providers and consumers. This brings significant benefits in terms of lower latency and higher bandwidth. However, by definition, edge computing offers limited resources compared with its cloud counterparts; thus, there is a trade-off between proximity to users and resource utilization. Moreover, service availability is a significant concern at the edge of the network, where the extensive support systems found in cloud data centers are usually absent. To overcome these limitations, we propose a score-based edge service scheduling algorithm that evaluates the network, compute, and reliability capabilities of edge nodes. The algorithm outputs the maximum-scoring mapping between resources and services with regard to four critical aspects of service quality. Our simulation-based experiments on live video streaming services demonstrate significant improvements in both network delay and service time. Moreover, we compare edge computing with cloud computing and content delivery networks in the context of latency-sensitive and data-intensive applications. The results suggest that our edge-based scheduling algorithm is a viable solution for achieving high service quality and responsiveness when deploying such applications.
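    To make the scoring idea concrete, below is a minimal Python sketch of how a scheduler of this kind could rank edge nodes on network, compute, and reliability capabilities and greedily map each service to the highest-scoring node. The node attributes, weights, normalization, and one-core-per-service assumption are all illustrative, not the paper's actual algorithm.

        # Illustrative sketch of a score-based edge scheduler (not the paper's
        # exact algorithm): each candidate node is scored on network, compute,
        # and reliability, and each service goes to the highest-scoring node.

        from dataclasses import dataclass

        @dataclass
        class EdgeNode:
            name: str
            latency_ms: float      # network proximity to the service's users
            free_cpu: float        # available compute capacity (cores)
            reliability: float     # observed availability in [0, 1]

        def score(node: EdgeNode, w_net=0.4, w_cpu=0.3, w_rel=0.3) -> float:
            """Weighted score: lower latency, higher capacity/reliability win."""
            net = 1.0 / (1.0 + node.latency_ms)   # map latency into (0, 1]
            cpu = min(node.free_cpu / 8.0, 1.0)   # cap at an assumed 8-core budget
            return w_net * net + w_cpu * cpu + w_rel * node.reliability

        def schedule(services: list[str], nodes: list[EdgeNode]) -> dict[str, str]:
            """Greedily map each service to the currently highest-scoring node."""
            mapping = {}
            for svc in services:
                best = max(nodes, key=score)
                mapping[svc] = best.name
                best.free_cpu -= 1.0   # assume each service consumes one core
            return mapping

        nodes = [EdgeNode("edge-a", 5.0, 4.0, 0.99), EdgeNode("edge-b", 20.0, 8.0, 0.95)]
        print(schedule(["stream-1", "stream-2"], nodes))

    Decrementing the winner's free capacity after each placement is what keeps a greedy scheme like this from piling every service onto the same node.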

    Zenith: Utility-Aware Resource Allocation for Edge Computing

    In the Internet of Things (IoT) era, the demand for low-latency computing for time-sensitive applications (e.g., location-based augmented reality games, real-time smart grid management, real-time navigation using wearables) has been growing rapidly. Edge computing provides an additional layer of infrastructure to fill latency gaps between the IoT devices and the back-end computing infrastructure. In the edge computing model, small-scale micro-datacenters that represent an ad-hoc and distributed collection of computing infrastructure pose new challenges in terms of management and effective resource sharing to achieve a globally efficient resource allocation. In this paper, we propose Zenith, a novel model for allocating computing resources in an edge computing platform that allows service providers to establish resource-sharing contracts with edge infrastructure providers a priori. Based on the established contracts, service providers employ a latency-aware scheduling and resource provisioning algorithm that enables tasks to complete within their latency requirements. The proposed techniques are evaluated through extensive experiments that demonstrate the effectiveness, scalability, and performance efficiency of the proposed model.
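    A hedged sketch of the contract idea follows: resource-sharing contracts are established a priori, and a latency-aware placement step picks the lowest-latency contracted provider that can still meet a task's deadline. The Contract and Task fields and the lowest-latency-first rule are assumptions for illustration, not Zenith's actual mechanism.

        # Minimal sketch of contract-based, latency-aware task placement in the
        # spirit of Zenith; all fields and the selection rule are assumptions.

        from dataclasses import dataclass

        @dataclass
        class Contract:
            provider: str        # edge infrastructure provider contracted with
            capacity: int        # compute units reserved a priori
            latency_ms: float    # expected round-trip latency to this provider

        @dataclass
        class Task:
            name: str
            demand: int          # compute units required
            deadline_ms: float   # latency requirement

        def place(task: Task, contracts: list[Contract]) -> str | None:
            """Pick the lowest-latency contracted provider meeting the deadline."""
            feasible = [c for c in contracts
                        if c.capacity >= task.demand and c.latency_ms <= task.deadline_ms]
            if not feasible:
                return None               # no contract can satisfy the task
            best = min(feasible, key=lambda c: c.latency_ms)
            best.capacity -= task.demand  # consume reserved capacity
            return best.provider

        contracts = [Contract("micro-dc-1", 10, 8.0), Contract("micro-dc-2", 4, 3.0)]
        print(place(Task("ar-render", 2, 5.0), contracts))   # -> micro-dc-2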

    Resource provisioning and scheduling algorithms for hybrid workflows in edge cloud computing

    In recent years, Internet of Things (IoT) technology has been applied in a wide range of application domains to provide real-time monitoring, tracking, and analysis services. The worldwide number of IoT-connected devices is projected to increase to 43 billion by 2023, and IoT technologies are expected to be involved in 25% of the business sector. Latency-sensitive applications such as intelligent video surveillance, smart homes, autonomous vehicles, and augmented reality are all emergent research directions in industry and academia. These applications require connecting large numbers of sensing devices to attain the desired level of service quality for decision accuracy in a timely manner. Moreover, continuous data streams impose the processing of large amounts of data, which adds a huge overhead on computing and network resources. Thus, latency-sensitive and resource-intensive applications introduce new challenges for the current computing models, i.e., batch and stream. In this thesis, we refer to the integrated application model of stream and batch applications as a hybrid workflow model. The main challenge of the hybrid model is achieving the quality of service (QoS) requirements of the two computation systems. This thesis provides systematic and detailed modeling of hybrid workflows that describes the internal structure of each application type for the purposes of resource estimation, system tuning, and cost modeling. To optimize the execution of hybrid workflows, this thesis proposes algorithms, techniques, and frameworks for resource provisioning and task scheduling on various computing systems, including cloud, edge cloud, and cooperative edge cloud. Overall, the experimental results in this thesis provide strong evidence for the proposed view of integrating stream and batch applications, and show how edge computing and other emergent technologies such as 5G networks and IoT can contribute to more sophisticated and intelligent solutions across many disciplines for a safer, more secure, healthy, smart, and sustainable society.
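    One way to picture the hybrid workflow model's internal structure for resource estimation is sketched below: stream stages carry a sustained rate, batch stages a one-off data volume, and the two yield different resource figures. The field names and the naive cost formulas are illustrative assumptions, not the thesis's actual model.

        # Hedged sketch of a hybrid (stream + batch) workflow representation
        # for resource estimation; fields and formulas are assumptions.

        from dataclasses import dataclass

        @dataclass
        class StreamStage:
            name: str
            rate_mbps: float          # sustained input rate
            cpu_per_mbps: float       # compute cost per unit of throughput

        @dataclass
        class BatchStage:
            name: str
            input_gb: float           # total data volume to process
            cpu_hours_per_gb: float   # compute cost per GB

        @dataclass
        class HybridWorkflow:
            stream: list[StreamStage]     # continuous, latency-sensitive part
            batch: list[BatchStage]       # periodic, throughput-oriented part

            def steady_state_cores(self) -> float:
                """Cores needed to keep up with all stream stages."""
                return sum(s.rate_mbps * s.cpu_per_mbps for s in self.stream)

            def batch_cpu_hours(self) -> float:
                """Total compute for the batch part, schedulable off-peak."""
                return sum(b.input_gb * b.cpu_hours_per_gb for b in self.batch)

        wf = HybridWorkflow(
            stream=[StreamStage("video-ingest", 50.0, 0.02)],
            batch=[BatchStage("nightly-analytics", 200.0, 0.05)],
        )
        print(wf.steady_state_cores(), wf.batch_cpu_hours())   # 1.0 cores, 10.0 CPU-hours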

    Network-Aware Task Scheduling for Edge Computing

    Edge computing promises low-latency computation by moving data processing closer to the source. Tasks executed at the edge of the network have seen a significant increase in their complexity, and the demand for low-latency computation for delay-sensitive applications at the edge is also increasing. To meet the computational demand, task offloading has become a go-to solution, where edge devices offload tasks in part or in whole to edge servers via the network. However, performance fluctuations of the network largely influence the data transfer performance between edge devices and edge servers, which negatively impacts the overall task execution performance. Hence, monitoring the state of the network is desirable for improving the performance of task offloading at the edge. Networks, however, are usually dynamic and unpredictable in nature, particularly when they are shared by multiple devices and applications simultaneously, resulting in data flows competing with each other for resources. In this study, we leverage In-band Network Telemetry (INT) to collect fine-grained network information and introduce network awareness into task scheduling for edge computing. Legacy methods of network monitoring that rely on flow-level and port-level statistics are often limited by their collection frequency, which is typically on the order of tens of seconds. In contrast, INT improves the collection frequency by working at line rate, and improves the granularity of information by capturing network telemetry at the packet level directly from the data plane. Such capabilities enable the detection of subtle changes and congestion events in the network, increasing network visibility while making it more accurate. We implemented a network-aware task scheduler for edge computing that uses this high-precision network telemetry for task scheduling. We experimented with different workloads under various congestion scenarios to assess the impact of our network-aware scheduler on task offloading performance. We observed up to a 40% reduction in data transfer time and up to a 30% reduction in overall task execution time by favoring edge servers in uncongested or relatively less congested areas of the network when scheduling tasks. Our study shows that network visibility is an important factor that can improve task offloading performance, and the results support our motivation for using INT to obtain fine-grained, high-precision network telemetry for a network-aware task scheduler for edge computing.
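    An illustrative sketch of the scheduling decision this enables: per-path congestion is summarized from packet-level telemetry (e.g., the per-hop queue depths that INT can report) and the scheduler favors the edge server behind the least-congested path. The telemetry feed, field names, and scoring rule here are assumptions, not the study's implementation.

        # Hypothetical network-aware server selection driven by INT-style
        # telemetry; the samples and the congestion metric are assumptions.

        from statistics import mean

        # Recent per-hop queue depths on the path from the edge device to each
        # candidate server (higher = more congested), as INT could expose them.
        telemetry = {
            "edge-a": [12, 40, 35],   # path currently buffering heavily
            "edge-b": [2, 3, 1],      # mostly idle path
        }

        def path_congestion(samples: list[int]) -> float:
            """Summarize a path by its worst hop, smoothed by the mean."""
            return 0.7 * max(samples) + 0.3 * mean(samples)

        def pick_server(telemetry: dict[str, list[int]]) -> str:
            """Offload to the server behind the least-congested path."""
            return min(telemetry, key=lambda s: path_congestion(telemetry[s]))

        print(pick_server(telemetry))   # -> edge-b

    Weighting the worst hop more heavily than the path average reflects that a single congested queue is usually what dominates transfer time.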