1,448 research outputs found

    Optimal Control of Parallel Queues for Managing Volunteer Convergence

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/163497/2/poms13224.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/163497/1/poms13224_am.pd

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    Get PDF
    The unpredictable and huge volume of data generated today by smart devices in IoT and mobile crowdsensing applications (sensors, smartphones, Wi-Fi routers) requires processing power and storage. The cloud provides these capabilities to organizations and customers, but its use exposes some limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that assigns virtual machines when multiple applications require various resources such as CPU, I/O, and memory, whereas scheduling is the process of determining the sequence in which tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing currently faces and present a comprehensive review of resource allocation and scheduling techniques proposed to overcome these limitations. Finally, the surveyed allocation and scheduling techniques and strategies are compared in a table together with their drawbacks.
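
    To make the scheduling notion in this abstract concrete, the following is a minimal sketch of a greedy min-completion-time heuristic of the kind such surveys commonly use as a baseline. It is not taken from the paper; the task lengths, VM speeds, and the function name greedy_schedule are illustrative assumptions.

    # Greedy min-completion-time scheduling sketch (illustrative assumptions):
    # each task is assigned to the virtual machine that would finish it earliest.
    def greedy_schedule(task_lengths, vm_speeds):
        """Assign each task to the VM with the earliest projected finish time."""
        finish_time = [0.0] * len(vm_speeds)       # projected finish time per VM
        assignment = []
        for length in task_lengths:
            # completion time of this task on each candidate VM
            candidates = [finish_time[i] + length / vm_speeds[i]
                          for i in range(len(vm_speeds))]
            best_vm = min(range(len(vm_speeds)), key=lambda i: candidates[i])
            finish_time[best_vm] = candidates[best_vm]
            assignment.append(best_vm)
        return assignment, max(finish_time)        # schedule and its makespan

    if __name__ == "__main__":
        tasks = [4.0, 2.0, 7.0, 1.0, 3.0]          # task lengths (arbitrary units)
        vms = [1.0, 2.0]                           # relative VM processing speeds
        plan, makespan = greedy_schedule(tasks, vms)
        print("assignment:", plan, "makespan:", round(makespan, 2))

    With the assumed inputs, the sketch prints the VM chosen for each task and the resulting makespan, one of the metrics the compared techniques typically aim to minimize.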

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Get PDF
    Cloud computing has become very common in recent years and is growing rapidly thanks to attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost savings, security, flexibility, on-demand and pay-per-use services, access from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on resources provided by service providers. Obtaining these services with the best performance and with minimum cost, response time, and makespan, while using resources effectively, requires an intelligent and efficient task scheduling technique, which is considered one of the main and essential issues in the cloud computing environment: it is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put considerable effort into developing several classes of scheduling algorithms suited to various computing environments and to the needs of different individuals and organizations. This article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of these algorithms with existing ones is also given, and the future research work identified in the reviewed articles (where available) is pointed out. The review covers 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms. Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30th, 2022

    Threshold Routing to Trade-off Waiting and Call Resolution in Call Centers

    Get PDF
    In a call center, agents may handle calls at different speeds, and also may be more or less successful at resolving customers’ inquiries, even when only considering customers calling with similar requests. One common measure of successful call resolution is whether or not the call results in the customer calling back. This presents a natural trade-off between speed and quality, where speed is defined as the average time before an incoming call is answered (the average waiting time) and quality is defined as the percentage of all arriving calls that do not result in callbacks (the call resolution). The relevant control is the routing; that is, the decision concerning which agent should handle an arriving call when more than one agent is available. In an inverted-V model setting, we formulate an optimization problem with the dual performance objective of minimizing average customer waiting time and maximizing the call resolution. We solve this optimization problem asymptotically in the Halfin-Whitt many-server limit regime, interpret its solution as a routing control for the discrete-event system, and show via simulation that the interpreted routing control is on the efficient frontier. In particular, any routing control that has a lower average waiting time (higher call resolution) must also have a lower call resolution (higher average waiting time).
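
    As a rough illustration of the speed/quality trade-off that such a routing control navigates, here is a minimal discrete-event simulation sketch of a threshold-style preference rule in an inverted-V setting with two agent pools. The arrival rate, pool sizes, service rates, resolution probabilities, and the particular threshold rule are assumptions for illustration, not the paper's asymptotically optimal policy.

    # Threshold-style routing sketch (illustrative assumptions, not the paper's
    # policy): one call stream, two agent pools differing in speed and in the
    # probability of resolving a call.
    import heapq
    import random

    def simulate(threshold, n_calls=50000, lam=9.0, seed=1,
                 pools=({"n": 5, "mu": 1.2, "resolve": 0.80},    # faster, lower quality
                        {"n": 5, "mu": 0.9, "resolve": 0.95})):  # slower, higher quality
        random.seed(seed)
        idle = [p["n"] for p in pools]
        queue, events = [], []            # waiting-call arrival times; (time, kind, pool)
        total_wait, started, served, resolved = 0.0, 0, 0, 0

        def route():
            # Prefer the high-resolution pool only while more than `threshold`
            # of its agents are idle; otherwise use the fast pool if possible.
            if idle[1] > threshold:
                return 1
            if idle[0] > 0:
                return 0
            return 1 if idle[1] > 0 else None

        t = 0.0
        for _ in range(n_calls):          # pre-generate Poisson arrivals
            heapq.heappush(events, (t, "arrival", -1))
            t += random.expovariate(lam)

        while events:
            now, kind, pool = heapq.heappop(events)
            if kind == "arrival":
                queue.append(now)
            else:                         # a service completed
                idle[pool] += 1
                served += 1
                resolved += random.random() < pools[pool]["resolve"]
            while queue:                  # start service for waiting calls if possible
                p = route()
                if p is None:
                    break
                total_wait += now - queue.pop(0)
                started += 1
                idle[p] -= 1
                heapq.heappush(events, (now + random.expovariate(pools[p]["mu"]),
                                        "done", p))
        return total_wait / started, resolved / served

    if __name__ == "__main__":
        for k in (0, 1, 2, 3):            # larger threshold shifts work to the fast pool
            wait, res = simulate(threshold=k)
            print(f"threshold={k}  avg wait={wait:.3f}  resolution={res:.3f}")

    Sweeping the threshold shifts work between the two pools, which is enough to see the waiting-time versus call-resolution tension the abstract describes; the paper's own control and its asymptotic analysis are more refined than this sketch.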