
    Multi-Criteria Decision-Making Approach for Container-based Cloud Applications: The SWITCH and ENTICE Workbenches

    Many emerging smart applications rely on the Internet of Things (IoT) to provide solutions to time-critical problems. When building such applications, a software engineer must address multiple Non-Functional Requirements (NFRs), including requirements for fast response time, low communication latency, high throughput, high energy efficiency, low operational cost, and the like. Existing container-based software engineering approaches promise to improve the software lifecycle; however, they fall short of tools and mechanisms for NFR management and optimisation. Our work addresses this problem with a new decision-making approach based on Pareto multi-criteria optimisation. Using different instance configurations at various geo-locations, we demonstrate the suitability of our method, which narrows the search space to only the optimal instances for deploying the containerised microservice. This solution is included in two advanced software engineering environments: the SWITCH workbench, which includes an Interactive Development Environment (IDE), and the ENTICE Virtual Machine and container images portal. The developed approach is particularly useful when building, deploying and orchestrating IoT applications across multiple computing tiers, from Edge-Cloudlet to Fog-Cloud data centres.
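
    The Pareto narrowing step can be illustrated with a minimal sketch, assuming three hypothetical criteria (latency, cost and energy, all to be minimised) attached to each candidate instance configuration; this is not the SWITCH/ENTICE implementation, only an illustration of non-dominated filtering.

```python
# Minimal sketch: keep only the Pareto-optimal instance configurations.
# All attribute values are hypothetical; lower is better for every criterion.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    latency_ms: float      # communication latency
    cost_per_hour: float   # operational cost
    energy_w: float        # energy draw

def dominates(a: Instance, b: Instance) -> bool:
    """a dominates b if it is no worse on every criterion and better on at least one."""
    a_vals = (a.latency_ms, a.cost_per_hour, a.energy_w)
    b_vals = (b.latency_ms, b.cost_per_hour, b.energy_w)
    return all(x <= y for x, y in zip(a_vals, b_vals)) and any(x < y for x, y in zip(a_vals, b_vals))

def pareto_front(candidates: list[Instance]) -> list[Instance]:
    """Narrow the search space to the non-dominated instances only."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

pool = [
    Instance("edge-small", 12.0, 0.08, 35.0),
    Instance("cloudlet-medium", 20.0, 0.05, 50.0),
    Instance("cloud-large", 45.0, 0.04, 90.0),
    Instance("cloud-slow", 60.0, 0.06, 95.0),   # dominated by cloud-large
]
print([inst.name for inst in pareto_front(pool)])
```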

    Performance-oriented Cloud Provisioning: Taxonomy and Survey

    Cloud computing is viewed as the technology of today and the future. Through this paradigm, customers gain access to shared computing resources located in remote data centers hosted by cloud providers (CPs). The technology allows for provisioning of various resources such as virtual machines (VMs), physical machines, processors, memory, network, storage and software according to customers' needs. Application providers (APs), who are customers of the CPs, deploy applications on the cloud infrastructure, and these applications are then used by the end-users. To meet fluctuating application workload demands, dynamic provisioning is essential, and this article provides a detailed literature survey of dynamic provisioning within cloud systems with a focus on application performance. The well-known types of provisioning and the associated problems are clearly and pictorially explained, and the provisioning terminology is clarified. A very detailed and general cloud provisioning classification is presented, which views provisioning from different perspectives, aiding in understanding the process inside out. Cloud dynamic provisioning is explained by considering resources, stakeholders, techniques, technologies, algorithms, problems, goals and more. Comment: 14 pages, 3 figures, 3 tables.
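
    The survey catalogues provisioning approaches rather than prescribing one; purely as an illustration of dynamic provisioning, the following is a minimal threshold-based scaling loop, with the utilization thresholds and VM limits chosen as assumptions.

```python
# Illustrative threshold-based dynamic provisioning loop (not taken from the
# survey): grow or shrink the VM pool based on average utilization.
def provision(current_vms: int, avg_utilization: float,
              upper: float = 0.8, lower: float = 0.3,
              min_vms: int = 1, max_vms: int = 20) -> int:
    """Return the VM count for the next control interval."""
    if avg_utilization > upper and current_vms < max_vms:
        return current_vms + 1   # scale out to protect application performance
    if avg_utilization < lower and current_vms > min_vms:
        return current_vms - 1   # scale in to save cost
    return current_vms           # utilization is within the target band

# Example: a short trace of utilization samples drives the VM count.
vms = 2
for u in [0.85, 0.90, 0.75, 0.25, 0.20]:
    vms = provision(vms, u)
print(vms)   # 2: two scale-outs, a hold, then two scale-ins
```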

    Efficient Energy Management in Cloud Data center using VM Consolidation

    Cloud computing is a model in which computing resources can be rapidly provisioned and released with minimal management effort and without any interaction with the cloud service provider. The cloud offers on-demand network access to pooled computing resources that can be provisioned dynamically according to user needs. Large applications require many computing nodes, and the data centers that host them consume a large amount of electrical energy, leading to high operating costs and carbon dioxide emissions. This thesis proposes a virtual machine (VM) consolidation technique to reduce energy consumption and maximize the utilization of computing resources in the data center. In consolidation, several virtual machines are packed onto a single physical machine, which helps decrease energy consumption by putting idle servers into an inactive mode. The number of active hosts is minimized by continuously reallocating VMs using live migration. Because each migration may cause Service Level Agreement (SLA) violations, the number of migrations must also be kept low. To satisfy quality of service in the cloud computing environment, the proposed techniques mainly perform the following functions: (i) reducing energy consumption, (ii) minimizing the number of migrations, and (iii) minimizing the percentage of SLA violations. First, we detect whether any host is overloaded, using CPU utilization as a threshold value. If an overloaded host is detected, some virtual machines are migrated from it using a VM selection policy. After the VMs are selected, the next step is to place them. For VM placement, the greedy algorithms Best Fit Decreasing (BFD) and Modified First Fit Decreasing (MFFD) are used in this thesis. The proposed techniques are compared with the existing EEDVM and PALVM techniques. The proposed AUTREC technique improves on EEDVM by 8% in energy consumption, 3% in number of migrations, 10% in SLA violations and 12% in host shutdowns, while the proposed DUTREC technique improves on PALVM by 9% in energy consumption, 6% in number of migrations, 20% in SLA violations and 13% in host shutdowns.
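
    As a rough illustration of the placement step, the following is a minimal Best Fit Decreasing (BFD) sketch over normalized CPU capacities; the host and VM figures are hypothetical, and the thesis's MFFD variant, overload thresholds and VM selection policies are not reproduced here.

```python
# Minimal Best Fit Decreasing (BFD) placement sketch: sort VMs by CPU demand
# in descending order and place each one on the active host that leaves the
# least spare capacity, powering on a new host only when nothing fits.
# All capacities and demands are hypothetical, normalized to 1.0 per host.
def best_fit_decreasing(vm_demands: list[float], host_capacity: float) -> list[list[float]]:
    free: list[float] = []               # remaining capacity of each active host
    placement: list[list[float]] = []    # VM demands assigned to each host
    for demand in sorted(vm_demands, reverse=True):
        best = None
        for i, spare in enumerate(free):
            if demand <= spare and (best is None or spare < free[best]):
                best = i                 # tightest fit found so far
        if best is None:
            free.append(host_capacity - demand)   # power on a new host
            placement.append([demand])
        else:
            free[best] -= demand
            placement[best].append(demand)
    return placement

print(best_fit_decreasing([0.5, 0.2, 0.7, 0.3, 0.1], host_capacity=1.0))
# [[0.7, 0.3], [0.5, 0.2, 0.1]] -> two active hosts instead of five
```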

    A Systematic Literature Review on Task Allocation and Performance Management Techniques in Cloud Data Center

    As cloud computing usage grows, cloud data centers play an increasingly important role. To maximize resource utilization, ensure service quality, and enhance system performance, it is crucial to allocate tasks and manage performance effectively. The purpose of this study is to provide an extensive analysis of task allocation and performance management techniques employed in cloud data centers. The aim is to systematically categorize and organize previous research by identifying the cloud computing methodologies, categories, and gaps. A literature review was conducted covering 463 task allocation papers and 480 performance management papers. The review revealed three task allocation research topics and seven performance management methods. The task allocation research areas are resource allocation, load balancing, and scheduling. Performance management includes monitoring and control, power and energy management, resource utilization optimization, quality of service management, fault management, virtual machine management, and network management. The study proposes new techniques to enhance cloud computing task allocation and performance management. The shortcomings of each approach can guide future research. The findings on cloud data center task allocation and performance management can assist academics, practitioners, and cloud service providers in optimizing their systems for dependability, cost-effectiveness, and scalability. Innovative methodologies can steer future research to fill gaps in the literature.

    Climbing Up Cloud Nine: Performance Enhancement Techniques for Cloud Computing Environments

    With the transformation of cloud computing technologies from an attractive trend to a business reality, the need is more pressing than ever for efficient cloud service management tools and techniques. As cloud technologies continue to mature, the service models, resource allocation methodologies, energy efficiency models and general service management schemes are not yet saturated. The burden of making all of this work falls on cloud providers. Economies of scale and the ability to leverage existing infrastructure and a large workforce are positives, but far from sufficient on their own. Performance and service delivery still depend on the providers’ algorithms and policies, which affect all operational areas. With that in mind, this thesis tackles a set of the more critical challenges faced by cloud providers, with the purpose of enhancing cloud service performance and reducing providers’ costs. This is done by exploring innovative resource allocation techniques and developing novel tools and methodologies in the context of cloud resource management, power efficiency, high availability and solution evaluation. Optimal and suboptimal solutions to the resource allocation problem in cloud data centers are proposed from both the computational and the network sides. Next, a deep dive into the energy efficiency challenge in cloud data centers is presented. Consolidation-based and non-consolidation-based solutions containing a novel dynamic virtual machine idleness prediction technique are proposed and evaluated. An investigation of the problem of simulating cloud environments follows. Available simulation solutions are comprehensively evaluated, and a novel design framework for cloud simulators covering multiple variations of the problem is presented. Moreover, the challenge of evaluating the high-availability performance of cloud resource management solutions is addressed. An extensive framework is introduced for designing high availability-aware cloud simulators, and a prominent cloud simulator (GreenCloud) is extended to implement it. Finally, the evaluation of real cloud application scenarios is demonstrated using the new tool. The primary argument made in this thesis is that the proposed resource allocation and simulation techniques can serve as a basis for effective solutions that mitigate the performance and cost challenges faced by cloud providers pertaining to resource utilization, energy efficiency, and client satisfaction.
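
    The abstract does not describe the idleness prediction technique itself; purely as an illustrative sketch of the general idea (not the thesis's method), a VM could be flagged as a consolidation candidate when a smoothed estimate of its recent CPU utilization stays below an idle threshold. The smoothing factor and threshold below are assumptions.

```python
# Illustrative only: flag a VM as likely to stay idle next interval when an
# exponentially weighted moving average of its recent CPU utilization samples
# falls below an idle threshold. Not the thesis's prediction technique.
def predict_idle(utilization_history: list[float],
                 alpha: float = 0.5, idle_threshold: float = 0.1) -> bool:
    """Return True if the smoothed utilization suggests the VM will remain idle."""
    if not utilization_history:
        return False
    ewma = utilization_history[0]
    for u in utilization_history[1:]:
        ewma = alpha * u + (1 - alpha) * ewma   # weight recent samples more heavily
    return ewma < idle_threshold

print(predict_idle([0.02, 0.05, 0.01, 0.03]))   # True: candidate for consolidation
```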

    Efficiency of the rail sections in Brazilian railway system, using TOPSIS and a genetic algorithm to analyse optimized scenarios

    A railway system plays a significant role in countries with large territorial dimensions. The Brazilian rail cargo system (BRCS), however, is focused on solid bulk for export. This paper investigates the extreme performances of the BRCS through a new hybrid model that combines TOPSIS with a genetic algorithm for estimating the weights in optimized scenarios. In a second stage, the significance of the selected variables was assessed. The transport of any type of cargo, centralized control of the operation, sharing of the railway track to foster competition, and the diversification of services are significant for high performance. Public strategies are discussed.
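
    To make the TOPSIS stage concrete, here is a minimal sketch that ranks alternatives by their closeness coefficient under fixed criterion weights; the genetic-algorithm search over weights described in the paper is not reproduced, and the rail-section data below are hypothetical.

```python
# Minimal TOPSIS sketch: rank alternatives (e.g., rail sections) by closeness
# to the ideal solution. Criterion weights are fixed here; in the paper they
# are estimated by a genetic algorithm. All figures are hypothetical.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: list[bool]) -> np.ndarray:
    """Return the closeness coefficient of each alternative (higher is better)."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalisation per criterion
    weighted = norm * weights                        # weighted normalised matrix
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)  # distance to the ideal solution
    d_neg = np.linalg.norm(weighted - anti, axis=1)   # distance to the anti-ideal solution
    return d_neg / (d_pos + d_neg)

# Three rail sections scored on tonnage (benefit), cost (cost) and accidents (cost).
scores = np.array([[120.0, 4.2, 3.0],
                   [95.0, 3.1, 1.0],
                   [140.0, 5.0, 6.0]])
weights = np.array([0.5, 0.3, 0.2])
print(topsis(scores, weights, benefit=[True, False, False]))
```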

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand services, pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on resources provided by service providers. To obtain these services with the best performance, minimum cost, response time and makespan, and effective use of resources, an intelligent and efficient task scheduling technique is required; task scheduling is considered one of the main and essential issues in the cloud computing environment, as it is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms suited to various computing environments and to the needs of various types of individuals and organizations. This article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given. Additionally, the future research work in the reviewed articles (if available) is also pointed out. This work reviews 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms. Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30th, 2022.
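
    None of the 88 surveyed algorithms is reproduced here; as a minimal example of the kind of heuristic being classified, the following Min-Min scheduling sketch repeatedly assigns the task with the smallest achievable completion time to the VM that provides it. The task lengths and VM speeds are hypothetical.

```python
# Minimal Min-Min task scheduling sketch (illustrative only). Each iteration
# picks the unscheduled task whose best completion time across all VMs is the
# smallest and assigns it to that VM. Task lengths and VM speeds are made up.
def min_min(task_lengths: list[float], vm_speeds: list[float]) -> tuple[dict[int, int], float]:
    ready = dict(enumerate(task_lengths))   # task index -> task length
    finish = [0.0] * len(vm_speeds)         # current finish time of each VM
    schedule: dict[int, int] = {}           # task index -> VM index
    while ready:
        best_task, best_vm, best_ct = None, None, float("inf")
        for t, length in ready.items():
            for v, speed in enumerate(vm_speeds):
                ct = finish[v] + length / speed     # completion time of t on v
                if ct < best_ct:
                    best_task, best_vm, best_ct = t, v, ct
        schedule[best_task] = best_vm
        finish[best_vm] = best_ct
        del ready[best_task]
    return schedule, max(finish)            # assignment and resulting makespan

print(min_min([40.0, 10.0, 30.0, 20.0], vm_speeds=[2.0, 1.0]))
```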