
    Efficient Task Scheduling and Fair Load Distribution Among Federated Clouds

    The federated cloud is the next generation of cloud computing, allowing cloud providers to share computing and storage resources and to service user tasks through a centralized control mechanism. However, a great challenge lies in the efficient management of such federated clouds and the fair distribution of load among heterogeneous cloud providers. In our proposed approach, called QPFS_MASG, at the federated cloud level the incoming task queue is partitioned in order to achieve a fair distribution of load among all cloud providers of the federated cloud. Then, at the cloud level, task scheduling using the Modified Activity Selection by Greedy (MASG) technique assigns tasks to different virtual machines (VMs), treating the task deadline as the key factor in achieving good quality of service (QoS). The proposed approach services tasks within their deadlines, reduces service level agreement (SLA) violations, improves the response time of user tasks, and achieves fair distribution of load among all participating cloud providers. QPFS_MASG was implemented using CloudSim, and the evaluation results revealed a guaranteed degree of fairness in service distribution among the cloud providers with reduced response time and fewer SLA violations compared to existing approaches. The evaluation results also showed that the proposed approach serviced the user tasks with a minimum number of VMs.
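    A minimal sketch of the greedy, deadline-aware assignment idea in the spirit of activity selection follows; the task and VM fields and the earliest-deadline ordering are illustrative assumptions, not the paper's exact QPFS_MASG definitions.

    ```python
    # Sketch: greedy, deadline-aware task-to-VM assignment (hypothetical fields).
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        length: float    # execution time on a unit-speed VM
        deadline: float  # absolute deadline

    @dataclass
    class VM:
        name: str
        free_at: float = 0.0                      # time the VM next becomes idle
        assigned: list = field(default_factory=list)

    def greedy_schedule(tasks, vms):
        """Assign tasks in earliest-deadline order to the VM that can finish
        them soonest; tasks that would miss their deadline are rejected."""
        rejected = []
        for task in sorted(tasks, key=lambda t: t.deadline):
            vm = min(vms, key=lambda v: v.free_at)  # earliest-idle VM
            finish = vm.free_at + task.length
            if finish <= task.deadline:
                vm.free_at = finish
                vm.assigned.append(task.name)
            else:
                rejected.append(task.name)          # would violate the SLA
        return rejected

    tasks = [Task("t1", 4, 10), Task("t2", 3, 5), Task("t3", 6, 12)]
    vms = [VM("vm1"), VM("vm2")]
    print("rejected:", greedy_schedule(tasks, vms))
    for vm in vms:
        print(vm.name, vm.assigned)
    ```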

    Resource management in the cloud: An end-to-end Approach

    Philosophiae Doctor - PhD. Cloud Computing enables users to achieve ubiquitous, on-demand, and convenient access to a variety of shared computing resources, such as servers, networks, storage, applications, and more. As a business model, Cloud Computing has been openly welcomed by users and has become one of the research hotspots in the field of information and communication technology. This is because it provides users with on-demand customization and pay-per-use resource acquisition methods.

    AI Driven Heterogeneous MEC System with UAV Assistance for Dynamic Environment: Challenges and Solutions

    By taking full advantage of Computing, Communication and Caching (3C) resources at the network edge, Mobile Edge Computing (MEC) is envisioned as one of the key enablers for next-generation networks. However, the current fixed-location MEC architecture may not be able to make real-time decisions in dynamic environments, especially in large-scale scenarios. To address this issue, this article proposes a Heterogeneous MEC (H-MEC) architecture composed of fixed units, i.e., Ground Stations (GSs), as well as moving nodes, i.e., Ground Vehicles (GVs) and Unmanned Aerial Vehicles (UAVs), all with 3C resources enabled. The key challenges in H-MEC, i.e., mobile edge node management, real-time decision making, user association, and resource allocation, are discussed along with possible Artificial Intelligence (AI)-based solutions. In addition, an AI-based joint Resource schEduling (ARE) framework is proposed with two different AI-based mechanisms: a Deep Neural Network (DNN)-based architecture and a Deep Reinforcement Learning (DRL)-based architecture. The DNN-based solution with online incremental learning applies a global optimizer and therefore performs better than the DRL-based architecture with online policy updating, but requires longer training time. Simulation results are given to verify the efficiency of the proposed ARE framework.
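    As a rough illustration of the DNN-based mechanism, the sketch below trains a small PyTorch network to map an observed network state to per-user offloading scores, with an incremental update against labels that a slower global optimizer would supply; the state encoding, layer sizes, and user count are all assumptions, not the paper's ARE architecture.

    ```python
    # Sketch: a DNN scheduler stub with one online incremental update step.
    import torch
    import torch.nn as nn

    N_USERS, STATE_DIM = 4, 16   # hypothetical problem size

    class SchedulerDNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, N_USERS),          # one offloading score per user
            )

        def forward(self, state):
            return torch.sigmoid(self.net(state))  # (0,1): offload probability

    model = SchedulerDNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Online incremental update: fit the DNN to decisions produced by a slower
    # global optimizer (faked here with random 0/1 targets for illustration).
    state = torch.randn(8, STATE_DIM)               # batch of observed states
    target = torch.randint(0, 2, (8, N_USERS)).float()
    loss = nn.functional.binary_cross_entropy(model(state), target)
    opt.zero_grad(); loss.backward(); opt.step()
    print("training loss:", loss.item())
    ```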

    IEEE Access special section editorial: Mission critical public-safety communications: architectures, enabling technologies, and future applications

    Disaster management organizations such as fire brigades, rescue teams, and emergency medical service providers have a high-priority demand to communicate with each other and with victims using mission-critical voice and data communications [item 1) in the Appendix]. In recent years, public safety agencies and organizations have started planning to evolve their existing land mobile radio systems (LMRS) toward long-term evolution (LTE)-based public safety solutions, which provide broadband, ubiquitous, and mission-critical voice and data services. LTE provides high-bandwidth and low-latency services to customers over an internet protocol-based LTE network. Since mission-critical communication services have different demands and priorities in dynamically varying situations in disaster-hit areas, the architecture and communication technologies of existing LTE networks need to be upgraded with a system capable of responding efficiently and in a timely manner during critical situations.

    NASLMRP: Design of a Negotiation Aware Service Level Agreement Model for Resource Provisioning in Cloud Environments

    Cloud resource provisioning requires examining tasks, dependencies, deadlines, and capacity distribution. Scalability is hindered by incomplete or overly complex models, and comprehensive models with low-to-moderate QoS are unsuitable for real-time scenarios. This research proposes a Negotiation Aware SLA Model for Resource Provisioning in cloud deployments to address these challenges. In the proposed model, a task-level SLA maximizes resource allocation fairness and incorporates task dependency for correlated task types. Incoming tasks are processed by an efficient hierarchical task clustering process, and a priority is assigned to each task. For efficient provisioning, an Elephant Herding Optimization (EHO) model allocates resources to these clusters based on task deadline and make-span levels. The EHO model uses a fitness function that shortens the make-span and raises deadline awareness. Q-Learning is used in the VM-aware negotiation framework for capacity tuning and task shifting, post-processing allocated tasks for faster execution with minimal overhead. Owing to these operations, the proposed model outperforms state-of-the-art models in heterogeneous cloud configurations and across multiple task types, achieving better make-span and deadline hit ratio with 9.2% fewer computational cycles, 4.9% lower energy consumption, and 5.4% lower computational complexity, making it suitable for large-scale, real-time task scheduling.
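    A minimal sketch of a fitness function in the spirit the EHO step describes, trading make-span against deadline misses, is shown below; the weights, the task representation, and the per-VM accumulation are hypothetical choices, not the paper's formulation.

    ```python
    # Sketch: fitness combining make-span with a deadline-miss penalty.
    def fitness(assignment, tasks, vms, w_makespan=1.0, w_deadline=2.0):
        """assignment maps task id -> vm id; lower fitness is better."""
        finish_at = {vm: 0.0 for vm in vms}
        missed = 0
        for task_id, vm_id in assignment.items():
            length, deadline = tasks[task_id]
            finish_at[vm_id] += length
            if finish_at[vm_id] > deadline:
                missed += 1                       # deadline-awareness term
        makespan = max(finish_at.values())        # overall completion time
        return w_makespan * makespan + w_deadline * missed

    tasks = {"t1": (4, 10), "t2": (3, 5), "t3": (6, 12)}  # (length, deadline)
    print(fitness({"t1": "vm1", "t2": "vm2", "t3": "vm1"},
                  tasks, ["vm1", "vm2"]))
    ```

    An EHO-style search would then evaluate candidate assignments with this function and keep the herd members with the lowest scores.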

    Economic impact of energy saving techniques in cloud server

    In recent years, much research has been carried out in the field of cloud computing and distributed systems to investigate and understand their performance. The economic impact of energy consumption is a major concern for large companies. Cloud computing companies (Google, Yahoo, Gaikai, ONLIVE, Amazon, and eBay) use large data centers comprised of virtual machines placed globally that require significant power costs to maintain. Demand for energy is increasing day by day in IT firms, so cloud computing companies face challenges regarding the economic impact of power costs. Energy consumption depends on several factors, e.g., service level agreements, virtual machine selection techniques, optimization policies, and workload types. We address the energy saving problem by enabling a dynamic voltage and frequency scaling (DVFS) technique for gaming data centers. The DVFS technique is compared against non-power-aware and static threshold detection techniques. This helps service providers meet quality of service and quality of experience constraints while meeting service level agreements. The CloudSim platform is used to implement the scenario, in which game traces are used as the workload for testing the technique. Selecting better techniques can help gaming servers save energy costs and maintain a better quality of service for users placed globally. The novelty of the work is the opportunity to investigate which technique behaves better: dynamic, static, or non-power-aware. The results demonstrate that less energy is consumed by the dynamic voltage and frequency scaling approach than by static threshold consolidation or the non-power-aware technique. Therefore, more economical quality of service can be provided to end users.
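    To see why DVFS saves energy, the sketch below uses the standard approximation that dynamic CPU power scales with C·V²·f and, with voltage roughly proportional to frequency, grows about cubically in frequency; the constants are illustrative, not measured values from the paper.

    ```python
    # Sketch: cubic frequency-power relation motivating DVFS savings.
    def dynamic_power(freq_ghz, capacitance=1.0):
        """Approximate dynamic CPU power, scaling ~ f^3 under DVFS."""
        return capacitance * freq_ghz ** 3

    full = dynamic_power(3.0)      # full clock speed
    scaled = dynamic_power(2.0)    # frequency scaled down under light load
    print(f"power saving: {100 * (1 - scaled / full):.1f}%")  # ~70% here
    ```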

    Resource scheduling in edge computing IoT networks using hybrid deep learning algorithm

    The proliferation of the Internet of Things (IoT) and wireless sensor networks enhances data communication. The demand for data communication is rapidly increasing, which calls for the emerging edge computing paradigm. Edge computing plays a major role in IoT networks and provides computing resources close to users. Moving services from the cloud to users improves the communication, storage, and network capabilities available to them. However, massive IoT networks require a large spectrum of resources for their computations. To attain this, resource scheduling algorithms are employed in edge computing. Statistical and machine learning-based resource scheduling algorithms have evolved over the past decade, but their performance can be improved if resource requirements are analyzed further. This research work presents deep learning-based resource scheduling for edge computing IoT networks using deep bidirectional recurrent neural network (BRNN) and convolutional neural network algorithms. Before scheduling, the IoT users are categorized into clusters using a spectral clustering algorithm. Simulation analysis of the proposed model verifies its performance in terms of delay, response time, execution time, and resource utilization. Existing resource scheduling algorithms such as the genetic algorithm (GA), Improved Particle Swarm Optimization (IPSO), and LSTM-based models are compared with the proposed model to validate its superior performance.
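    A minimal sketch of the pre-scheduling step, grouping IoT users with spectral clustering before resources are assigned, is given below; the user features (position plus resource demand) and the cluster count are assumptions, not the paper's setup.

    ```python
    # Sketch: spectral clustering of IoT users prior to resource scheduling.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    # Hypothetical users: (x, y) position and a resource-demand value.
    users = rng.random((30, 3))

    labels = SpectralClustering(
        n_clusters=3, affinity="nearest_neighbors", n_neighbors=8,
        random_state=0,
    ).fit_predict(users)
    print("cluster sizes:", np.bincount(labels))
    ```

    Each resulting cluster would then be scheduled as a unit, so the BRNN/CNN scheduler reasons over a handful of groups instead of every user individually.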

    Recent Developments in Smart Healthcare

    Medicine is undergoing a sector-wide transformation thanks to advances in computing and networking technologies. Healthcare is changing from reactive and hospital-centered to preventive and personalized, from disease-focused to well-being-centered. In essence, healthcare systems, as well as fundamental medical research, are becoming smarter. We anticipate significant improvements in areas ranging from molecular genomics and proteomics, to decision support for healthcare professionals through big data analytics, to support for behavior change through technology-enabled self-management and social and motivational support. Furthermore, with smart technologies, healthcare delivery could also be made more efficient, of higher quality, and lower cost. For this special issue, we received a total of 45 submissions and accepted 19 outstanding papers that roughly span several interesting topics in smart healthcare, including public health, health information technology (Health IT), and smart medicine.