
    An Industry-Based Study on the Efficiency Benefits of Utilising Public Cloud Infrastructure and Infrastructure as Code Tools in the IT Environment Creation Process

    The traditional approach to IT infrastructure management typically involves procuring, housing, and running company-owned and maintained physical servers. In recent years, alternative solutions to IT infrastructure management based on public cloud technologies have emerged. Infrastructure as a Service (IaaS), also known as public cloud infrastructure, allows IT infrastructure resources to be provisioned on demand via the Internet. Cloud Service Providers (CSPs) such as Amazon Web Services (AWS) offer integration of their cloud-based infrastructure with Infrastructure as Code (IaC) tools, which allow the entire configuration of public-cloud-based infrastructure to be scripted and defined as code. This thesis hypothesises that the correct utilisation of IaaS and IaC can offer an organisation a more efficient IT infrastructure creation system than the organisation's traditional method. To investigate this claim, an industry-based case study and a survey questionnaire were carried out as part of this body of work. The case study involved replacing a manually managed IT infrastructure with public cloud infrastructure whose creation was automated via a framework of IaC and related automation tools. The survey questionnaire was created to corroborate or refute the case study results in the context of a wider audience of organisations. The results show that the correct utilisation of IaaS and IaC technologies can provide greater efficiency in the management of IT networks than the traditional approach.
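    As a loose illustration of the IaC idea described above (not the thesis's actual framework), the following Python sketch uses the AWS boto3 SDK to provision a virtual server entirely from code; the region, AMI ID, instance type, and tag values are placeholder assumptions.

        # Minimal IaC-style sketch using boto3 (the AWS SDK for Python).
        # The region, AMI ID, instance type, and tags are illustrative
        # placeholders, not values taken from the thesis.
        import boto3

        ec2 = boto3.resource("ec2", region_name="eu-west-1")

        def provision_server():
            """Provision a single EC2 instance declaratively from code."""
            instances = ec2.create_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder AMI
                InstanceType="t3.micro",
                MinCount=1,
                MaxCount=1,
                TagSpecifications=[{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "Name", "Value": "iac-demo"}],
                }],
            )
            return instances[0]

        if __name__ == "__main__":
            instance = provision_server()
            instance.wait_until_running()
            print("Provisioned:", instance.id)

    Because the environment is defined in code, the same script can recreate it on demand, which is the kind of repeatable environment creation the thesis evaluates.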

    Hybrid Approach for Resource Provisioning in Cloud Computing

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Elasticity of resources is considered a key characteristic of cloud computing; using this characteristic, Internet services are allocated only the resources they need. This allocation of resources, however, should not come at the expense of the services' performance. Allocating resources without degrading performance is called resource provisioning. Resource provisioning not only supports the elasticity of resources but also enhances cost efficiency and sustainability. The goal of this work is to investigate resource provisioning that increases the percentage of resource utilization without degrading performance, so that the power consumption of cloud data centers is reduced. To achieve this goal, a hybrid approach for resource provisioning is developed. In this approach, a list of virtual machines is requested and passed to a selection algorithm, which sorts the machines according to their load, computes the threshold of the machines' load, and combines the high load and low load from two different virtual machines on one super virtual machine. The approach was implemented in the CloudSim simulator and used to run two sets of experiments: the first measures the power consumption of the data center as a whole and of its hosts; the second is concerned with processing times and memory usage. The results show that this approach outperforms traditional counterparts in resource provisioning, achieving a reduction of 5.85 MW/s in power consumption for the whole data center and of 2.48 MW/s for the hosts compared with the traditional counterparts.
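    A minimal Python sketch of the pairing idea described in the abstract, under the assumption that the threshold is the mean load and that each high-load VM is combined with one low-load VM per super VM (the abstract does not specify the exact threshold rule or pairing order):

        from dataclasses import dataclass

        @dataclass
        class VM:
            name: str
            load: float  # e.g., CPU utilization in [0, 1]

        def pair_into_super_vms(vms):
            """Sort VMs by load, split at an assumed mean-load threshold,
            and pair each high-load VM with a low-load VM into a 'super VM'."""
            ordered = sorted(vms, key=lambda vm: vm.load)
            threshold = sum(vm.load for vm in ordered) / len(ordered)  # assumed rule
            low = [vm for vm in ordered if vm.load <= threshold]
            high = [vm for vm in ordered if vm.load > threshold]
            # Pair the highest-load VM with the lowest-load VM, and so on.
            return list(zip(reversed(high), low))

        vms = [VM("a", 0.9), VM("b", 0.2), VM("c", 0.7), VM("d", 0.1)]
        for hi, lo in pair_into_super_vms(vms):
            print(f"super VM: {hi.name} ({hi.load}) + {lo.name} ({lo.load})")

    Balancing a hot machine against a cold one in this way is what lets the consolidated placement raise utilization without pushing any single host past its performance limit.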

    DoKnowMe: Towards a Domain Knowledge-driven Methodology for Performance Evaluation

    Software engineering considers performance evaluation to be one of the key components of software quality assurance. Unfortunately, there seems to be a lack of standard methodologies for performance evaluation even within experimental computer science. Inspired by the concept of "instantiation" in object-oriented programming, we distinguish the generic performance evaluation logic from the distributed and ad-hoc relevant studies, and develop an abstract evaluation methodology (by analogy with a "class") that we name Domain Knowledge-driven Methodology (DoKnowMe). By replacing five predefined domain-specific knowledge artefacts, DoKnowMe can be instantiated into specific methodologies (by analogy with "objects") to guide evaluators in the performance evaluation of different software and even computing systems. We also propose a generic validation framework with four indicators (i.e. usefulness, feasibility, effectiveness, and repeatability), and use it to validate DoKnowMe in the Cloud services evaluation domain. Given the positive and promising validation results, we plan to integrate more common evaluation strategies to improve DoKnowMe and to focus further on the performance evaluation of Cloud autoscaler systems.
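    A rough Python sketch of the class/object analogy described above; the artefact hooks below are invented placeholders, since the abstract does not name the five domain-specific knowledge artefacts.

        from abc import ABC, abstractmethod

        class AbstractEvaluationMethodology(ABC):
            """Generic evaluation logic (the 'class' in the analogy).
            Subclasses supply domain-specific knowledge artefacts."""

            @abstractmethod
            def metrics(self):
                """Domain-specific metrics to measure (placeholder artefact)."""

            @abstractmethod
            def experimental_design(self):
                """Domain-specific experiment plan (placeholder artefact)."""

            def evaluate(self):
                # The generic logic is shared; only the artefacts vary.
                return {m: f"measured via {self.experimental_design()}"
                        for m in self.metrics()}

        class CloudServiceEvaluation(AbstractEvaluationMethodology):
            """An instantiated methodology (the 'object' in the analogy)."""
            def metrics(self):
                return ["latency", "throughput"]
            def experimental_design(self):
                return "repeated benchmark trials"

        print(CloudServiceEvaluation().evaluate())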

    Evaluating and Characterizing the Performance of 802.11 Networks

    The 802.11 standard has become the dominant protocol for Wireless Local Area Networks (WLANs). As an indication of its current and growing popularity, it is estimated that over 20 billion WiFi chipsets will be shipped between 2016 and 2021. In a span of less than 20 years, the speed of these networks has increased from 11 Mbps to several Gbps. The ever-increasing demand for more bandwidth required by applications such as large downloads, 4K video streaming, and virtual reality, along with the problems caused by interfering WiFi and non-WiFi devices operating on a shared spectrum, has made the evaluation, understanding, and optimization of the performance of 802.11 networks an important research topic. In 802.11 networks, highly variable channel conditions make conducting valid, repeatable, and realistic experiments extremely challenging. Such conditions, although representative of what devices actually experience, are often avoided in order to conduct repeatable experiments. In this thesis, we study existing methodologies for the empirical evaluation of 802.11 networks. We show that commonly used methodologies, such as running experiments multiple times and reporting the average along with the confidence interval, can produce misleading results in some environments. We propose and evaluate a new empirical evaluation methodology that expands the environments in which repeatable evaluations can be conducted for the purpose of comparing competing alternatives. Even with our new methodology, distinguishing statistically significant differences in environments with highly variable channel conditions can be very difficult, because variations in channel conditions lead to large confidence intervals. Moreover, running many experiments is usually very time-consuming. Therefore, we propose and evaluate a trace-based approach that combines the realism of experiments with the repeatability of simulators. Key to our approach is that we capture data related to the properties of the channel that impact throughput. These traces can be collected under conditions representative of those in which devices are likely to be used, and then used to evaluate different algorithms or systems, resulting in fair comparisons because the alternatives are exposed to identical channel conditions. Finally, we characterize the relationships between the numerous transmission rates in 802.11n networks with the purpose of reducing the complexity caused by the large number of transmission rates when finding the optimal combination of physical-layer features. We find that there are strong relationships between most of the transmission rates over extended periods of time, even in environments that involve mobility and experience interference. This work demonstrates that there are significant opportunities for utilizing relationships between rate configurations in designing algorithms that must choose the best combination of physical-layer features from a very large space of possibilities.
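    As a small illustration of the statistical practice the thesis critiques, here is a hedged Python sketch of reporting a mean throughput with a 95% confidence interval across repeated runs; the sample values are invented, not measurements from the thesis.

        import math
        import statistics

        # Invented throughput samples (Mbps) from repeated runs of one experiment.
        runs = [52.1, 47.8, 61.3, 40.2, 55.9, 44.7, 58.4, 49.0]

        mean = statistics.mean(runs)
        sem = statistics.stdev(runs) / math.sqrt(len(runs))  # standard error
        t_crit = 2.365  # two-sided 95% Student's t value for 7 degrees of freedom

        low, high = mean - t_crit * sem, mean + t_crit * sem
        print(f"mean = {mean:.1f} Mbps, 95% CI = ({low:.1f}, {high:.1f})")
        # Under highly variable channel conditions this interval can become
        # so wide that competing alternatives are statistically
        # indistinguishable, which motivates the trace-based approach.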

    An Adaptable Framework to Deploy Complex Applications onto Multi-cloud Platforms

    Cloud computing is nowadays a popular technology for hosting IT services. However, deploying and reconfiguring complex applications involving multiple software components, distributed across many virtual machines running on single- or multi-cloud platforms, is error-prone and time-consuming for human administrators. Existing deployment frameworks are most of the time either dedicated to a unique type of application (e.g. JEE applications) or address a single cloud platform (e.g. Amazon EC2). This paper presents a novel distributed application management framework for multi-cloud platforms. It provides a Domain Specific Language (DSL) which allows applications and their execution environments (cloud platforms) to be described in a hierarchical way in order to provide fine-grained management. The framework implements an asynchronous and parallel deployment protocol which accelerates the deployment process and makes it more resilient. A prototype has been developed to conduct intensive experiments with different types of applications (e.g. an OSGi application and ubiquitous big data analytics for IoT) over disparate cloud models (e.g. private, hybrid, and multi-cloud), which validate the genericity of the framework. These experiments also demonstrate its efficiency compared to existing frameworks such as Cloudify.
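    A toy Python sketch of the asynchronous, parallel deployment idea; the hierarchical description format, cloud names, and component names are invented, and the paper's DSL is not shown in the abstract.

        import asyncio

        # Invented hierarchical description: cloud platform -> VMs -> components.
        application = {
            "private-cloud": {"vm1": ["database"], "vm2": ["backend"]},
            "public-cloud": {"vm3": ["frontend", "cache"]},
        }

        async def deploy_component(vm, component):
            print(f"deploying {component} on {vm}...")
            await asyncio.sleep(0.1)  # stands in for provisioning/configuration
            print(f"{component} on {vm} ready")

        async def deploy(app):
            # Launch every component on every VM of every cloud in parallel;
            # a real protocol would also order deployments by dependency.
            tasks = [
                deploy_component(vm, comp)
                for clouds in app.values()
                for vm, comps in clouds.items()
                for comp in comps
            ]
            await asyncio.gather(*tasks)

        asyncio.run(deploy(application))

    Running independent deployment steps concurrently rather than sequentially is what makes such a protocol faster, and per-task error handling is what would make it resilient.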

    Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments

    This thesis investigates some of the sensing and perception challenges faced by multi-robot teams equipped with LIDAR and camera sensors. Multi-robot teams are ideal for deployment in large, real-world environments due to their ability to parallelize exploration, reconnaissance, or mapping tasks. However, such domains also impose additional requirements, including the need for a) online algorithms (to eliminate stopping and waiting for processing to finish before proceeding) and b) scalability (to handle data from many robots distributed over a large area). These general requirements give rise to specific algorithmic challenges, including 1) online maintenance of large, coherent maps covering the explored area, 2) online estimation of communication properties in the presence of buildings and other interfering structures, and 3) online fusion and segmentation of multiple sensors to aid in object detection. The contribution of this thesis is the introduction of novel approaches that leverage grid maps and sparse multivariate Gaussian inference to augment the capability of multi-robot teams operating in urban, indoor-outdoor environments by improving the state of the art in map rasterization, signal strength prediction, colored point cloud segmentation, and reliable camera calibration. In particular, we introduce a map rasterization technique for large LIDAR-based occupancy grids that makes online updates possible when data is arriving from many robots at once. We also introduce new online techniques for robots to predict the signal strength to their teammates by combining LIDAR measurements with signal strength measurements from their radios. Processing fused LIDAR+camera point clouds is also important for many object-detection pipelines, and we apply a near linear-time online segmentation algorithm to this domain. However, maintaining the calibration of a fleet of 14 robots made this approach difficult to employ in practice. Therefore, we introduce a robust and repeatable camera calibration process that grounds the camera model uncertainty in pixel error, allowing the system to guide novices and experts alike to reliably produce accurate calibrations.

    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113516/1/jhstrom_1.pd
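    As a loose illustration of the occupancy-grid mapping mentioned above, here is a Python sketch of standard log-odds grid updating (not the thesis's rasterization technique); the grid size, sensor-model increments, and cell coordinates are invented.

        import numpy as np

        # Standard log-odds occupancy grid; all parameters are illustrative.
        grid = np.zeros((100, 100))    # log-odds per cell, 0 = unknown
        L_OCC, L_FREE = 0.85, -0.4     # assumed sensor-model increments

        def integrate_scan(grid, occupied_cells, free_cells):
            """Fold one LIDAR scan into the grid. Updates are additive in
            log-odds space, which keeps online fusion of many robots' scans
            cheap: each scan is a batch of independent cell increments."""
            for r, c in occupied_cells:
                grid[r, c] += L_OCC
            for r, c in free_cells:
                grid[r, c] += L_FREE

        def occupancy_prob(grid):
            """Convert log-odds back to occupancy probabilities."""
            return 1.0 / (1.0 + np.exp(-grid))

        integrate_scan(grid, occupied_cells=[(50, 50), (50, 51)],
                       free_cells=[(50, 48), (50, 49)])
        print(occupancy_prob(grid)[50, 47:52])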

    Towards Measuring and Understanding Performance in Infrastructure- and Function-as-a-Service Clouds

    Context. Cloud computing has become the de facto standard for deploying modern software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics.
    Objective. The goal of this licentiate thesis is to measure and understand performance in IaaS and FaaS clouds. My PhD thesis will extend and leverage this understanding to propose solutions for building performance-optimized FaaS cloud applications.
    Method. To achieve this goal, quantitative and qualitative research methods are used, including experimental research, artifact analysis, and literature review.
    Findings. The thesis proposes a cloud benchmarking methodology to estimate application performance in IaaS clouds, characterizes typical FaaS applications, identifies gaps in the literature on FaaS performance evaluations, and examines the reproducibility of reported FaaS performance experiments. The evaluation of the benchmarking methodology yielded promising results for benchmark-based application performance estimation under selected conditions. Characterizing 89 FaaS applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks, and discovered that the majority of studies do not follow reproducibility principles for cloud experimentation.
    Future Work. Future work will propose a suite of application performance benchmarks for FaaS, which is instrumental for evaluating candidate solutions towards building performance-optimized FaaS applications.
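    A hedged Python sketch of benchmark-based performance estimation in the spirit described under Findings: fit application performance against micro-benchmark scores, then predict performance on an unseen instance type. All numbers are invented, and the thesis's actual estimation model is not specified in the abstract. Requires Python 3.10+ for statistics.linear_regression.

        import statistics

        # Invented data: micro-benchmark score and measured application
        # latency (ms) on several IaaS instance types.
        bench_scores = [1.0, 1.4, 2.1, 2.8, 3.5]
        app_latency = [220.0, 180.0, 140.0, 115.0, 95.0]

        # Ordinary least-squares fit of latency against benchmark score.
        slope, intercept = statistics.linear_regression(bench_scores, app_latency)

        # Estimate application latency on an unseen instance type from its
        # benchmark score alone, without deploying the full application.
        new_score = 2.5
        print(f"estimated latency: {slope * new_score + intercept:.0f} ms")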