
    Multiple Workflows Scheduling in Multi-tenant Distributed Systems: A Taxonomy and Future Directions

    A workflow is a general notion representing automated processes along with the flow of data between them. The automation ensures that the processes are executed in the correct order, a feature that attracts users from various backgrounds to build workflows. However, the computational requirements are enormous, and investing in a dedicated infrastructure for these workflows is not always feasible. To cater to these broader needs, multi-tenant platforms for executing workflows began to be built. In this paper, we identify the problems and challenges in multiple workflows scheduling that are inherent to such platforms. We present a detailed taxonomy of the existing solutions from scheduling and resource provisioning aspects, followed by a survey of relevant works in this area. We lay open the problems and challenges to spur further research on multiple workflows scheduling in multi-tenant distributed systems.Comment: Several changes have been made based on reviewers' comments after the first round of review. This is a pre-print of a paper (currently under second-round review) submitted to ACM Computing Surveys
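As an illustration of the scheduling problem this survey covers, the sketch below dispatches tasks from several tenants' workflow DAGs in topological order with a round-robin fair share across tenants. The input format and the fair-share policy are hypothetical simplifications for illustration, not a method from the paper:

```python
from collections import deque

def fair_schedule(workflows):
    """Dispatch order for multiple tenants' DAG workflows.

    workflows: {tenant: {task: set(prerequisite_tasks)}}  (hypothetical format)
    Tasks become ready once all prerequisites have been dispatched; each
    round dispatches at most one task per tenant (round-robin fair share).
    Tasks caught in a dependency cycle are left unscheduled.
    """
    deps = {t: {k: set(v) for k, v in wf.items()} for t, wf in workflows.items()}
    # Initially ready: tasks with no prerequisites (sorted for determinism).
    ready = {t: deque(sorted(k for k, v in d.items() if not v))
             for t, d in deps.items()}
    queued = {t: set(ready[t]) for t in deps}   # avoid enqueuing twice
    order = []
    while any(ready.values()):
        for tenant in list(deps):               # one task per tenant per round
            if not ready[tenant]:
                continue
            task = ready[tenant].popleft()
            order.append((tenant, task))
            # Unlock successors whose last prerequisite just completed.
            for succ, pre in deps[tenant].items():
                pre.discard(task)
                if not pre and succ not in queued[tenant]:
                    ready[tenant].append(succ)
                    queued[tenant].add(succ)
    return order
```

With two tenants, the round-robin interleaving prevents one tenant's long workflow from starving the other.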

    Characterizing Application Scheduling on Edge, Fog and Cloud Computing Resources

    Cloud computing has grown to become a popular distributed computing service offered by commercial providers. More recently, Edge and Fog computing resources have emerged on the wide-area network as part of Internet of Things (IoT) deployments. These three resource abstraction layers are complementary and provide distinctive benefits. Scheduling applications on clouds has been an active area of research, with workflow and dataflow models serving as flexible abstractions to specify applications for execution. However, the application programming and scheduling models for edge and fog are still maturing and can benefit from lessons learned on cloud resources. At the same time, there is also value in using these resources cohesively for application execution. In this article, we present a taxonomy of concepts essential for specifying and solving the problem of scheduling applications on edge, fog and cloud computing resources. We first characterize the resource capabilities and limitations of these infrastructures, and design a taxonomy of application models, Quality of Service (QoS) constraints and goals, and scheduling techniques, based on a literature review. We also tabulate key research prototypes and papers using this taxonomy. This survey benefits developers and researchers working on these distributed resources in designing and categorizing their applications, selecting the relevant computing abstraction(s), and developing or selecting the appropriate scheduling algorithm. It also highlights gaps in the literature where open problems remain.Comment: Pre-print of journal article: Varshney P, Simmhan Y. Characterizing application scheduling on edge, fog, and cloud computing resources. Softw: Pract Exper. 2019; 1--37. https://doi.org/10.1002/spe.269

    Container-based Cluster Orchestration Systems: A Taxonomy and Future Directions

    Containers, enabling lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity for better application management and deployment in addition to hardware virtualization. They are being widely used by organizations to deploy their increasingly diverse workloads derived from modern-day applications such as web services, big data, and IoT in either proprietary clusters or private and public cloud data centers. This has led to the emergence of container orchestration platforms, which are designed to manage the deployment of containerized applications in large-scale clusters. These systems are capable of running hundreds of thousands of jobs across thousands of machines. To do so efficiently, they must address several important challenges, including scalability, fault-tolerance and availability, efficient resource utilization, and request throughput maximization, among others. This paper studies these management systems and proposes a taxonomy that identifies different mechanisms that can be used to meet the aforementioned challenges. The proposed classification is then applied to various state-of-the-art systems, leading to the identification of open research challenges and gaps in the literature intended as future directions for researchers.
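One concrete mechanism such orchestrators use for efficient resource utilization is bin-packing of container resource demands onto cluster nodes. The sketch below shows a first-fit-decreasing heuristic over a single CPU dimension; production schedulers such as Kubernetes filter and score nodes across many dimensions, so this is only an illustrative toy:

```python
def first_fit_decreasing(containers, node_capacity):
    """Pack container CPU demands onto as few nodes as the heuristic finds.

    containers: {name: cpu_demand}; node_capacity: per-node CPU budget.
    Units are hypothetical (e.g. cores). Sorting demands in decreasing
    order before first-fit placement is the classic FFD bin-packing trick.
    """
    nodes = []  # each node: {"free": remaining_cpu, "apps": [container names]}
    for name, cpu in sorted(containers.items(), key=lambda kv: -kv[1]):
        for node in nodes:
            if node["free"] >= cpu:          # first node with enough room
                node["free"] -= cpu
                node["apps"].append(name)
                break
        else:                                # no node fits: open a new one
            nodes.append({"free": node_capacity - cpu, "apps": [name]})
    return nodes
```

Packing the largest demands first tends to leave less stranded capacity than placing containers in arrival order.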

    Reconfigurable Wireless Networks

    Driven by the advent of sophisticated and ubiquitous applications, and the ever-growing need for information, wireless networks are without a doubt steadily evolving into profoundly more complex and dynamic systems. User demands are progressively rampant, while application requirements continue to expand in both range and diversity. Future wireless networks, therefore, must be equipped with the ability to handle numerous, albeit challenging, requirements. Network reconfiguration, considered a prominent network paradigm, is envisioned to play a key role in enhancing future network performance and considerably advancing current user experiences. This paper presents a comprehensive overview of reconfigurable wireless networks and an in-depth analysis of reconfiguration at all layers of the protocol stack. Such networks characteristically possess the ability to reconfigure and adapt their hardware and software components and architectures, thus enabling flexible delivery of broad services, as well as sustaining robust operation under highly dynamic conditions. The paper offers a unifying framework for research in reconfigurable wireless networks. This should provide the reader with a holistic view of concepts, methods, and strategies in reconfigurable wireless networks. Focus is given to reconfigurable systems in relatively new and emerging research areas such as cognitive radio networks, cross-layer reconfiguration and software-defined networks. In addition, modern networks have to be intelligent and capable of self-organization. Thus, this paper discusses the concept of network intelligence as a means to enable reconfiguration in highly complex and dynamic networks. Finally, the paper is supported with several examples and case studies showing the tremendous impact of reconfiguration on wireless networks.Comment: 28 pages, 26 figures; Submitted to the Proceedings of the IEEE (a special issue on Reconfigurable Systems)

    A Task Allocation Schema Based on Response Time Optimization in Cloud Computing

    Cloud computing is a newly emerging distributed computing paradigm that evolved from Grid computing. Task scheduling, the core research problem of cloud computing, studies how to allocate tasks among physical nodes so that the tasks receive a balanced allocation, each task's execution cost is minimized, or the overall system performance is optimal. Unlike previous models, in which the task slices of an independent task execute sequentially and the target is processing time, we build a model that targets response time and in which the task slices execute in parallel. We then solve it with a method based on an improved adjusting entropy function, and design a new task scheduling algorithm. Experimental results show that the response time of our proposed algorithm is much lower than that of the game-theoretic and balanced scheduling algorithms; moreover, compared with the balanced scheduling algorithm, the game-theoretic algorithm is not necessarily superior in the parallel setting even though its objective function value is better.Comment: arXiv admin note: substantial text overlap with arXiv:1403.501
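The response-time objective for parallel task slices can be illustrated with a divisible-load simplification: when slices run concurrently on nodes of different speeds, the response time is the maximum slice completion time, which is minimized when all slices finish together. The sketch below computes that equal-finish allocation; it ignores transfer costs and the paper's entropy-function machinery, so it is only a conceptual toy:

```python
def allocate_slices(workload, speeds):
    """Split a divisible task so all parallel slices finish simultaneously.

    workload: total work units; speeds: processing speed of each node.
    Giving node i a slice proportional to its speed makes every slice take
    workload / sum(speeds) time, which minimizes the parallel response time
    (the max over slice completion times) under this simplified model.
    """
    total_speed = sum(speeds)
    slices = [workload * s / total_speed for s in speeds]
    response_time = workload / total_speed   # identical for every slice
    return slices, response_time
```

Any deviation from this proportional split makes some slice finish later, raising the maximum and hence the response time.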

    Fog Computing: A Taxonomy, Survey and Future Directions

    In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time, latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on these observations, we propose future directions for research.
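A core decision Fog computing enables is whether a latency-sensitive task should run on the IoT device itself, a nearby Fog node, or the distant Cloud. The toy model below, with hypothetical units (megabytes, Mbps, and millions of instructions per second) and no queuing or propagation delays, picks the tier with the lowest estimated transfer-plus-compute latency:

```python
def best_tier(data_mb, mega_instructions, local_mips, tiers):
    """Choose the execution tier minimizing estimated latency (seconds).

    data_mb: input size to upload; mega_instructions: task compute demand;
    local_mips: device speed; tiers: {name: (uplink_mbps, remote_mips)}.
    Toy model: latency = upload time + compute time, ignoring queuing,
    propagation delay, and result download.
    """
    # Running locally needs no upload.
    best = ("local", mega_instructions / local_mips)
    for name, (uplink_mbps, remote_mips) in tiers.items():
        latency = data_mb * 8 / uplink_mbps + mega_instructions / remote_mips
        if latency < best[1]:
            best = (name, latency)
    return best
```

With a fast uplink to a moderately powerful Fog node, offloading there typically beats both the slow local device and the far, bandwidth-constrained Cloud.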

    Energy and Information Management of Electric Vehicular Network: A Survey

    The connected vehicle paradigm empowers vehicles with the capability to communicate with neighboring vehicles and infrastructure, shifting the role of vehicles from a transportation tool to an intelligent service platform. Meanwhile, transportation electrification pushes forward electric vehicle (EV) commercialization to reduce greenhouse gas emissions from petroleum combustion. The unstoppable trends of connected vehicles and EVs transform the traditional vehicular system into an electric vehicular network (EVN): a clean, mobile, and safe system. However, due to the mobility and heterogeneity of the EVN, improper management of the network could result in charging overload and data congestion. Thus, energy and information management of the EVN should be carefully studied. In this paper, we provide a comprehensive survey on the deployment and management of the EVN considering all three aspects of energy flow, data communication, and computation. We first introduce the management framework of the EVN. Then, research works on EV aggregator (AG) deployment are reviewed to provide energy and information infrastructure for the EVN. Based on the deployed AGs, we review research work on EV scheduling, including both charging and vehicle-to-grid (V2G) scheduling. Moreover, related works on information communication and computing are surveyed under each scenario. Finally, we discuss open research issues in the EVN.
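A common charging-scheduling idea in this area is valley-filling: steering each vehicle's charging into its least-loaded feasible time slots so the aggregate load stays flat and charging overload is avoided. The sketch below is a greedy, unit-energy-per-slot simplification for illustration, not a method from the paper:

```python
def valley_fill(evs, num_slots):
    """Greedy valley-filling charging schedule.

    evs: list of (arrive_slot, depart_slot, slots_of_energy_needed);
    num_slots: length of the planning horizon. Each EV draws one unit of
    energy per charging slot (a toy simplification). Each EV is assigned
    its least-loaded slots within its availability window.
    Returns (per-slot aggregate load, per-EV charging slots).
    """
    load = [0] * num_slots
    schedule = []
    for arrive, depart, need in evs:
        # Feasible slots, least-loaded first (stable sort keeps order for ties).
        window = sorted(range(arrive, depart), key=lambda t: load[t])[:need]
        for t in window:
            load[t] += 1
        schedule.append(sorted(window))
    return load, schedule
```

Because each vehicle fills the current valleys, two EVs sharing the same window end up charging in disjoint slots, flattening the aggregate profile.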

    Mobile Edge Cloud: Opportunities and Challenges

    Mobile edge cloud is emerging as a promising technology for internet of things and cyber-physical system applications such as smart homes and intelligent video surveillance. In a smart home, various sensors are deployed to monitor the home environment and the physiological health of individuals. The data collected by sensors are sent to an application, where numerous algorithms for emotion and sentiment detection, activity recognition and situation management are applied to provide healthcare- and emergency-related services and to manage resources at the home. The execution of these algorithms requires a vast amount of computing and storage resources. To address this issue, the conventional approach is to send the collected data to an application on an internet cloud. This approach has several problems, such as high communication latency, communication energy consumption and unnecessary data traffic to the core network. To overcome the drawbacks of the conventional cloud-based approach, a new system called mobile edge cloud is proposed. In a mobile edge cloud, multiple mobile and stationary devices interconnected through wireless local area networks are combined to create a small cloud infrastructure at a local physical area such as a home. Compared to traditional mobile distributed computing systems, mobile edge cloud introduces several complex challenges due to the heterogeneous computing environment, heterogeneous and dynamic network environment, node mobility, and limited battery power. The real-time requirements associated with internet of things and cyber-physical system applications make the problem even more challenging.
In this paper, we describe the applications and challenges associated with the design and development of mobile edge cloud systems and propose an architecture based on a cross-layer design approach for effective decision making.Comment: 4th Annual Conference on Computational Science and Computational Intelligence, December 14-16, 2017, Las Vegas, Nevada, USA. arXiv admin note: text overlap with arXiv:1810.0704

    Aneka: A Software Platform for .NET-based Cloud Computing

    Aneka is a platform for deploying Clouds and developing applications on top of them. It provides a runtime environment and a set of APIs that allow developers to build .NET applications that leverage their computation on either public or private clouds. One of the key features of Aneka is its support for multiple programming models, which are ways of expressing the execution logic of applications by using specific abstractions. This is accomplished by creating a customizable and extensible service-oriented runtime environment represented by a collection of software containers connected together. By leveraging this architecture, advanced services including resource reservation, persistence, storage management, security, and performance monitoring have been implemented. On top of this infrastructure, different programming models can be plugged in to provide support for different scenarios, as demonstrated by engineering, life science, and industry applications.Comment: 30 pages, 10 figures

    Cloud Service ranking using Checkpoint based Load balancing in real time scheduling of Cloud Computing

    Cloud computing has been gaining popularity in recent years. Several studies are underway to build cloud applications of exquisite quality based on users' demands. One of the criteria applied to achieve this is checkpoint-based load balancing in real-time scheduling, through which a suitable cloud service is chosen from a group of candidate cloud services. Within this checkpoint-based load balancing, valuable information can be collected to rank the services. Obtaining such a ranking ordinarily requires invoking the different services in the cloud, which is time consuming and wastes service invocations. To avoid this, this chapter proposes an algorithm for predicting the ranks of different cloud services using the values recorded from previously offered services.
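The rank-prediction idea can be sketched simply: rather than invoking every candidate service afresh, rank them by QoS values recorded from previously offered services. The history format and the mean-response-time criterion below are hypothetical simplifications, not the chapter's actual algorithm:

```python
def predict_ranking(history):
    """Rank cloud services by the mean of previously observed response
    times (lower is better), avoiding fresh service invocations.

    history: {service_name: [past_response_times]}  (hypothetical format)
    Services with no recorded observations are omitted from the ranking.
    """
    means = {svc: sum(times) / len(times)
             for svc, times in history.items() if times}
    # Sort service names by their mean response time, ascending.
    return sorted(means, key=means.get)
```

In practice a predictor like this would be refreshed as new checkpoint measurements arrive, so the ranking tracks current service performance without extra probe invocations.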