
    PASHE: Privacy Aware Scheduling in a Heterogeneous Fog Environment

    Fog computing extends the functionality of the traditional cloud data center (cdc) using micro data centers (mdcs) located at the edge of the network. These mdcs provide both computation and storage to applications. Their proximity to users makes them a viable option for executing jobs with tight deadlines and latency constraints. Moreover, these mdcs may have diverse execution capacities, i.e. heterogeneous architectures, so tasks may have different execution costs on different mdcs. We propose PASHE (Privacy Aware Scheduling in a Heterogeneous Fog Environment), an algorithm that schedules privacy-constrained real-time jobs on heterogeneous mdcs and the cdc. Three categories of tasks are considered: private, semi-private and public. Private tasks with tight deadlines are executed on the user's local mdc. Semi-private tasks with tight deadlines are executed on “preferred” remote mdcs. Public tasks with loose deadlines are sent to the cdc for execution. We also take account of user mobility across different mdcs: if the mobility pattern of a user is predictable, PASHE reserves computation resources on remote mdcs for job execution. Simulation results show that PASHE outperforms other scheduling algorithms in a fog computing environment when mdc heterogeneity, user mobility and application security are taken into account.
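
    To make the placement policy concrete, here is a minimal Python sketch of the kind of rule the abstract describes, routing a job by its privacy tag and deadline slack. The names (Task, tight_deadline, place_task) and the slack test against network delay are illustrative assumptions, not the published PASHE algorithm.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class and function names below are
# hypothetical and not taken from the PASHE paper.

@dataclass
class Task:
    privacy: str        # "private", "semi-private" or "public"
    deadline: float     # absolute deadline (s)
    release: float      # release time (s)
    exec_time: float    # estimated execution time on the chosen node (s)

def tight_deadline(task: Task, network_delay: float) -> bool:
    """Treat a deadline as 'tight' if remote transfer would leave no slack."""
    slack = task.deadline - (task.release + task.exec_time)
    return slack < network_delay

def place_task(task: Task, cdc_delay: float) -> str:
    """Route a task following the PASHE-style policy sketched in the abstract."""
    if task.privacy == "private":
        return "local-mdc"                      # never leaves the user's mdc
    if task.privacy == "semi-private" and tight_deadline(task, cdc_delay):
        return "preferred-remote-mdc"           # nearby remote mdc the user trusts
    return "cdc"                                # public / loose-deadline work

print(place_task(Task("semi-private", deadline=2.0, release=0.0, exec_time=1.5), cdc_delay=1.0))
```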

    Real-time scheduling on Hierarchical Heterogeneous Fog Networks

    Cloud computing is widely used to support offloaded data processing for applications. However, latency-constrained data processing has requirements that may not always be suitable for cloud-based processing. Fog computing brings processing closer to data generation sources, reducing propagation and data transfer delays, and is therefore a viable alternative for processing tasks with real-time requirements. We propose RTH2S (Real Time Heterogeneous Hierarchical Scheduling), a scheduling algorithm for a set of real-time tasks on a heterogeneous integrated fog-cloud architecture. We consider a hierarchical model for fog nodes, with nodes at higher tiers having greater computational capacity than nodes at lower tiers, though with greater latency from data generation sources. Tasks with various profiles are considered. For regular-profile jobs, we use least laxity first (LLF) to find the preferred fog node for scheduling. For tagged-profile jobs, depending on their tag values, jobs are either split so that they finish execution before their deadline, or scheduled using the LLF heuristic. Using HPC2N workload traces spanning 3.5 years of activity, the real-time performance of RTH2S is compared against comparable algorithms. Our proposed approach is validated using both simulation (to demonstrate scale-up) and a lab-based testbed.
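
    The following sketch illustrates a least-laxity-first step on a tiered fog hierarchy of the kind described above: the most urgent ready job is chosen by laxity, then placed on the lowest (closest) tier that can still meet its deadline. The data layout, tier model and helper names are assumptions for illustration, not the RTH2S implementation.

```python
# Hypothetical sketch of a least-laxity-first (LLF) scheduling step on a
# tiered fog hierarchy; lower tiers are nearer the data source but slower.

def laxity(job, node, now):
    """Slack left if `job` runs on `node`: deadline minus latency plus execution."""
    return job["deadline"] - (now + node["latency"] + job["wcet"] / node["speed"])

def schedule_step(ready_jobs, tiers, now=0.0):
    """Pick the most urgent job (LLF) and the lowest tier that still meets its deadline."""
    job = min(ready_jobs, key=lambda j: min(laxity(j, n, now) for n in tiers))
    for node in tiers:                       # tiers ordered low (near) -> high (far)
        if laxity(job, node, now) >= 0.0:
            return job["id"], node["name"]
    return job["id"], None                   # deadline cannot be met anywhere

tiers = [{"name": "tier-1", "speed": 1.0, "latency": 0.01},
         {"name": "tier-2", "speed": 4.0, "latency": 0.05}]
jobs = [{"id": "j1", "wcet": 0.4, "deadline": 0.5},
        {"id": "j2", "wcet": 1.0, "deadline": 3.0}]
print(schedule_step(jobs, tiers))            # ('j1', 'tier-1')
```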

    An edge-cloud infrastructure for weed detection in precision agriculture

    Accurate identification of weeds plays a crucial role in helping farmers achieve efficient agricultural practices. An edge-cloud infrastructure can provide efficient resources for weed detection in resource-constrained rural areas. However, applications deployed in these areas often face challenges such as connectivity failures and network issues that affect their quality of service (QoS). We introduce a signal quality-aware framework for precision agriculture that allocates weed inference tasks to resource nodes based on the current network connectivity and quality. Two Machine Learning (ML) models based on ResNet-50 and MobileNetV2 are trained using the publicly available DeepWeeds image classification dataset. A rule-based approximation algorithm is formulated to execute tasks on resource-constrained computational nodes. We also design a testbed consisting of a Raspberry Pi (RPi), a personal laptop, a cloud server and a Parsl environment for evaluating the framework. Reliability of the framework is tested in a controlled setting, under various dynamically injected faults. Experimental results demonstrate that the proposed setup can accurately identify weeds while ensuring high fault tolerance and low completion time, making it a promising solution for weed management in rural agriculture.
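
    A minimal sketch of a signal quality-aware allocation rule in the spirit of the framework is shown below; the RSSI threshold, queue-length cut-off, node names and the model-to-node pairing are illustrative assumptions rather than values from the paper.

```python
# Illustrative rule-based allocation for weed-inference tasks, routed by link
# quality and local load; all thresholds and node/model names are assumptions.

def allocate(rssi_dbm: float, cloud_reachable: bool, edge_queue_len: int):
    """Return (node, model) for the next weed-classification task."""
    if cloud_reachable and rssi_dbm > -70:           # good signal: offload to the heavier model
        return ("cloud", "ResNet-50")
    if edge_queue_len < 10:                          # weak/absent link: stay on the edge device
        return ("raspberry-pi", "MobileNetV2")
    return ("laptop", "MobileNetV2")                 # local overflow node when the RPi is saturated

print(allocate(rssi_dbm=-82, cloud_reachable=False, edge_queue_len=3))
```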

    Scheduling real time security aware tasks in fog networks

    Fog computing brings the cloud closer to a user with the help of a micro data center (mdc), leading to lower response times for delay-sensitive applications. RT-SANE (Real-Time Security Aware scheduling on the Network Edge) supports batch and interactive applications, taking account of their deadline and security constraints. RT-SANE chooses between an mdc (in proximity to a user) and a cloud data center (cdc) by taking account of network delay and security tags. Jobs submitted by a user are tagged as private, semi-private or public, and mdcs and cdcs are classified as trusted, semi-trusted or untrusted. RT-SANE executes private jobs on a user's local mdcs or pre-trusted cdcs, and semi-private and public jobs on remote mdcs and cdcs. RT-SANE makes use of a security- and performance-aware distributed orchestration architecture and protocol. For evaluation, workload traces from the CERIT-SC Cloud system are used. The effect of slow-executing straggler jobs on the fog framework is also considered, involving migration of such jobs. Experiments reveal that RT-SANE offers a higher success ratio (proportion of successfully completed jobs) than comparable algorithms, while taking account of security tags.
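
    The sketch below shows one way the tag-to-trust matching described in the abstract could be expressed; the trust ordering, the allowed() rule and the site records are assumptions based only on the abstract, not the published RT-SANE protocol.

```python
# Minimal sketch of an RT-SANE-style security-tag check; illustrative only.

TRUST_RANK = {"untrusted": 0, "semi-trusted": 1, "trusted": 2}

def allowed(job_tag: str, site: dict, is_local: bool) -> bool:
    """Can a job with `job_tag` run on `site` (an mdc or cdc record)?"""
    if job_tag == "private":
        # private work stays on the user's own mdc or a pre-trusted cdc
        return is_local or (site["kind"] == "cdc" and site["trust"] == "trusted")
    if job_tag == "semi-private":
        return TRUST_RANK[site["trust"]] >= TRUST_RANK["semi-trusted"]
    return True                                   # public jobs may run anywhere

def candidate_sites(job_tag: str, sites: list, local_mdc: str) -> list:
    """List the names of sites eligible to run the job."""
    return [s["name"] for s in sites
            if allowed(job_tag, s, is_local=(s["name"] == local_mdc))]

sites = [{"name": "mdc-A", "kind": "mdc", "trust": "trusted"},
         {"name": "mdc-B", "kind": "mdc", "trust": "semi-trusted"},
         {"name": "cdc-1", "kind": "cdc", "trust": "trusted"}]
print(candidate_sites("private", sites, local_mdc="mdc-A"))   # ['mdc-A', 'cdc-1']
```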

    Performance analysis of Apache openwhisk across the edge-cloud continuum

    Serverless computing offers opportunities for auto-scaling, a pay-for-use cost model, quicker deployment and faster updates to support computing services. Apache OpenWhisk is one such open-source, distributed serverless platform that can be used to execute user functions in a stateless manner. We conduct a performance analysis of OpenWhisk on an edge-cloud continuum, using a function chain of video analysis applications. We consider a combination of Raspberry Pi and cloud nodes to deploy OpenWhisk, modifying a number of parameters, such as the maximum memory limit and runtime, to investigate application behaviours. The five main factors considered are: cold and warm activation, memory and input size, CPU architecture, runtime packages used, and concurrent invocations. The results are evaluated using initialization and execution time, minimum memory requirement, inference time and accuracy.
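
    As an illustration of how cold and warm activation latencies can be compared, the sketch below times two back-to-back blocking invocations of an OpenWhisk action through its REST API; the API host, credentials and action name are placeholders, and the measurement approach is an assumption rather than the instrumentation used in the paper.

```python
# Small measurement sketch (not from the paper): time two consecutive blocking
# invocations so the first reflects a cold activation and the second a warm one.
import time
import requests

APIHOST = "https://openwhisk.example.com"          # placeholder API host
AUTH = ("user-uuid", "user-key")                   # placeholder OpenWhisk credentials
ACTION = "video-frame-classify"                    # hypothetical action name

def invoke_blocking(params: dict) -> float:
    """Invoke the action synchronously and return client-side latency in seconds."""
    url = f"{APIHOST}/api/v1/namespaces/_/actions/{ACTION}?blocking=true"
    start = time.perf_counter()
    resp = requests.post(url, json=params, auth=AUTH, timeout=120)
    resp.raise_for_status()
    return time.perf_counter() - start

if __name__ == "__main__":
    cold = invoke_blocking({"frame": "frame-001.jpg"})
    warm = invoke_blocking({"frame": "frame-002.jpg"})
    print(f"cold ~ {cold:.2f}s, warm ~ {warm:.2f}s")
```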

    Improving the Schedulability of Real-Time Tasks using Fog Computing


    9th International Conference on Advanced Computing & Communication Technologies

    This book highlights a collection of high-quality peer-reviewed research papers presented at the Ninth International Conference on Advanced Computing & Communication Technologies (ICACCT-2015), held at the Asia Pacific Institute of Information Technology, Panipat, India, during 27–29 November 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Advanced Computing and Communication Technology.

    Incentivising resource sharing in edge computing applications

    There is increasing realisation that edge devices, which are closer to a user, can play an important part in supporting latency- and privacy-sensitive applications. Such devices have also continued to increase in capability over recent years, ranging in complexity from embedded resources (e.g. Raspberry Pi, Arduino boards) placed alongside data capture devices to more complex “micro data centres”. Using such resources, a user is able to carry out task execution and data storage in proximity to their location, often making use of computing resources that can have varying ownership and access rights. Increasing performance requirements for stream processing applications (for instance), which incur delays between the client and the cloud, have led to newer models of computation that require an application workflow to be split across data centre and edge resource capabilities. With the recent emergence of edge/fog computing, it has become possible to migrate services to micro data centres and to address the performance limitations of traditional (centralised data centre) cloud-based applications. Such migration can be represented as a cost function that involves incentives for micro data centres to host services with associated quality of service and experience. Business models need to be developed for creating an open edge cloud environment in which micro data centres have the right incentives to support service hosting, and in which large-scale data centre operators can outsource service execution to such micro data centres. We describe potential revenue models for micro data centres to support service migration and serve incoming requests for edge-based applications. We present several cost models that involve the combined use of edge devices and centralised data centres.
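
    As a rough illustration of the kind of cost comparison such a model involves, the sketch below contrasts serving requests from a centralised data centre with migrating a service to a micro data centre that charges a hosting incentive; the linear form and all parameter values are illustrative assumptions, not one of the paper's cost models.

```python
# Hypothetical cost-model sketch: compare per-hour cost of serving from a
# central data centre (cdc) against hosting at a micro data centre (mdc).

def cdc_cost(requests_per_hour: float, cdc_price: float, latency_penalty: float) -> float:
    """Cost/hour at the central data centre, including a QoS penalty for latency."""
    return requests_per_hour * (cdc_price + latency_penalty)

def edge_cost(requests_per_hour: float, edge_price: float, hosting_incentive: float,
              migration_cost: float, hours_hosted: float) -> float:
    """Cost/hour at the micro data centre: per-request price, incentive payment, amortised migration."""
    return requests_per_hour * edge_price + hosting_incentive + migration_cost / hours_hosted

reqs = 5000.0
central = cdc_cost(reqs, cdc_price=0.0004, latency_penalty=0.0002)
edge = edge_cost(reqs, edge_price=0.0003, hosting_incentive=0.5, migration_cost=2.0, hours_hosted=24.0)
print(f"cdc: {central:.2f}/h  edge: {edge:.2f}/h  migrate: {edge < central}")
```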