1,306 research outputs found

    A Big Data Analyzer for Large Trace Logs

    The current generation of Internet-based services is typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors, including computing hardware, multiple layers of intricate software, networking and storage devices, and electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents BiDAl, a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly available traces from Google data clusters, with the goal of building a realistic model of a complex data center. Comment: 26 pages, 10 figures
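
    As an illustration of the mix-and-match design described above, the following is a minimal Java sketch of a pluggable storage-backend/analysis-language architecture. All interface and class names here are illustrative assumptions and do not reflect BiDAl's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a pluggable log-analysis architecture; the names
// are assumptions for illustration and are not BiDAl's actual API.
interface StorageBackend {
    Iterable<String> read(String table);                 // e.g. backed by SQLite or HDFS
    void write(String table, Iterable<String> rows);
}

interface AnalysisEngine {
    // Runs a script (SQL, R, or a MapReduce job) against a backend and
    // stores the result under outTable.
    void run(String script, StorageBackend backend, String outTable);
}

// Registry that lets backends and analysis languages be freely combined.
class AnalysisToolkit {
    private final Map<String, StorageBackend> backends = new HashMap<>();
    private final Map<String, AnalysisEngine> engines = new HashMap<>();

    void registerBackend(String name, StorageBackend b) { backends.put(name, b); }
    void registerEngine(String name, AnalysisEngine e)  { engines.put(name, e); }

    void analyze(String engineName, String backendName, String script, String outTable) {
        engines.get(engineName).run(script, backends.get(backendName), outTable);
    }
}
```

    A concrete tool built this way could, for example, register an SQLite-backed StorageBackend together with an SQL AnalysisEngine, or an HDFS backend together with a MapReduce engine, matching the pairings mentioned in the abstract.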

    A Minimum-Cost Flow Model for Workload Optimization on Cloud Infrastructure

    Recent technology advancements in the areas of compute, storage and networking, along with increasing pressure on organizations to cut costs while remaining responsive to growing service demands, have led to the growth in the adoption of cloud computing services. Cloud services promise improved agility, resiliency, scalability and a lowered Total Cost of Ownership (TCO). This research introduces a framework for minimizing cost and maximizing resource utilization by using an Integer Linear Programming (ILP) approach to optimize the assignment of workloads to servers on Amazon Web Services (AWS) cloud infrastructure. The model is based on the classical minimum-cost flow model, known as the assignment model. Comment: 2017 IEEE 10th International Conference on Cloud Computing
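
    For reference, the classical assignment model the abstract refers to can be stated in its textbook form as follows; the notation (costs c_ij, binary placement variables x_ij) is standard and is not taken from the paper, which may add further side constraints such as server capacities.

```latex
% Textbook statement of the classical assignment model (requires amsmath):
% c_{ij} is the cost of assigning workload i to server j,
% x_{ij} = 1 if that assignment is chosen, 0 otherwise.
\begin{align}
\min_{x}\ & \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{j=1}^{n} x_{ij} = 1, \quad i = 1,\dots,n
  \quad \text{(each workload assigned to exactly one server)} \\
& \sum_{i=1}^{n} x_{ij} = 1, \quad j = 1,\dots,n
  \quad \text{(each server receives exactly one workload)} \\
& x_{ij} \in \{0,1\}.
\end{align}
```

    Because the constraint matrix of this formulation is totally unimodular, it can be solved as a minimum-cost flow problem with the integrality requirement relaxed to x_ij >= 0.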

    BonFIRE: A multi-cloud test facility for internet of services experimentation

    BonFIRE offers a Future Internet, multi-site cloud testbed, targeted at the Internet of Services community, that supports large-scale testing of applications, services and systems over multiple, geographically distributed, heterogeneous cloud testbeds. The aim of BonFIRE is to provide an infrastructure that gives experimenters the ability to control and monitor the execution of their experiments to a degree that is not found in traditional cloud facilities. The BonFIRE architecture has been designed to support key functionalities such as: resource management; monitoring of virtual and physical infrastructure metrics; elasticity; single-document experiment descriptions; and scheduling. As of January 2012, BonFIRE release 2 is operational, supporting seven pilot experiments. Future releases will enhance the offering, including interconnection with networking facilities to provide access to routers, switches and bandwidth-on-demand systems. BonFIRE will be open for general use in late 2012.

    DRIVER Technology Watch Report

    This report is part of the Discovery Workpackage (WP4) and is the third of four deliverables. The objective of this report is to give an overview of the latest technical developments in the world of digital repositories, digital libraries and beyond, in order to serve as theoretical and practical input for the technical DRIVER developments, especially those focused on enhanced publications. This report consists of two main parts: one part focuses on interoperability standards for enhanced publications, while the other consists of three subchapters that give a landscape picture of current and emerging technologies and communities crucial to DRIVER. These three subchapters cover the GRID, CRIS and LTP communities and technologies. Every chapter contains a theoretical explanation, followed by case studies and the outcomes and opportunities for DRIVER in this field.

    Hybrid Approach for Resource Provisioning in Cloud Computing

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. Elasticity of resources is considered a key characteristic of cloud computing; using this characteristic, Internet services are allocated only the resources they need. This allocation of resources, however, should not come at the expense of the services’ performance. Allocation of resources without degrading performance is called resource provisioning. Resource provisioning not only supports the elasticity of resources, but also enhances cost efficiency and sustainability. The goal of this work is to investigate resource provisioning to increase the percentage of resource utilization without degrading performance, so that the power consumption of cloud data centers is reduced. To achieve this goal, a hybrid approach for resource provisioning is developed. In this approach, a list of virtual machines is requested and passed to a selection algorithm, which sorts the machines according to their load, computes a threshold on the machines’ load, and combines the high load and the low load from two different virtual machines onto one super virtual machine. The approach was implemented in the CloudSim simulator and used to run two sets of experiments: the first measures the power consumption of the data center as a whole and of its hosts, and the second is concerned with processing times and memory usage. The results show that this approach outperforms traditional counterparts in resource provisioning, achieving a reduction of 5.85 MW/s in power consumption for the whole data center and a reduction of 2.48 MW/s in power consumption for the hosts, compared with the traditional counterparts.
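
    A minimal Java sketch of the pairing step described above is shown below: virtual machines are sorted by load, a load threshold is computed, and a machine above the threshold is paired with one below it. The Vm and SuperVm classes and the use of the mean load as the threshold are illustrative assumptions, not the paper's actual CloudSim implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only: class names and the mean-load threshold are
// assumptions, not the paper's actual algorithm or CloudSim code.
class Vm {
    final int id;
    final double load;                       // current utilization in [0, 1]
    Vm(int id, double load) { this.id = id; this.load = load; }
}

class SuperVm {
    final Vm highLoad, lowLoad;
    SuperVm(Vm high, Vm low) { this.highLoad = high; this.lowLoad = low; }
}

class HybridProvisioner {
    // Pairs VMs above the load threshold with VMs below (or at) it.
    static List<SuperVm> combine(List<Vm> vms) {
        List<Vm> sorted = new ArrayList<>(vms);
        sorted.sort(Comparator.comparingDouble((Vm v) -> v.load));   // ascending load

        double threshold = sorted.stream().mapToDouble(v -> v.load).average().orElse(0.0);

        List<SuperVm> result = new ArrayList<>();
        int lo = 0, hi = sorted.size() - 1;
        while (lo < hi && sorted.get(hi).load > threshold && sorted.get(lo).load <= threshold) {
            result.add(new SuperVm(sorted.get(hi--), sorted.get(lo++)));
        }
        return result;
    }
}
```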

    CloudSim Express: A Novel Framework for Rapid Low Code Simulation of Cloud Computing Environments

    Cloud computing environment simulators enable cost-effective experimentation with novel infrastructure designs and management approaches by avoiding the significant costs incurred from repetitive deployments in real Cloud platforms. However, widely used Cloud environment simulators compromise on usability due to complexities in design and configuration, along with the added overhead of programming language expertise. Existing approaches that attempt to reduce this overhead, such as script-based simulators and Graphical User Interface (GUI) based simulators, often compromise the extensibility of the simulator. Simulator extensibility allows for customization at a fine-grained level; reducing it therefore significantly limits the flexibility of creating simulations. To address these challenges, we propose an architectural framework to enable human-readable script-based simulations in existing Cloud environment simulators while minimizing the impact on simulator extensibility. We implement the proposed framework for the widely used Cloud environment simulator, the CloudSim toolkit, and compare it against state-of-the-art baselines using a practical use case. The resulting framework, called CloudSim Express, achieves extensible simulations while surpassing baselines with over a 71.43% reduction in code complexity and an 89.42% reduction in lines of code.
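
    The following rough Java sketch illustrates the general idea of such a framework: element types named in a human-readable scenario description are resolved through a handler registry, so new simulator extensions can be plugged in without modifying the interpreter itself. All names here are hypothetical and are not CloudSim Express's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of a low-code simulation layer; none of these
// names come from CloudSim Express itself.
interface ElementHandler {
    // Builds the corresponding simulator entity from key/value properties
    // parsed out of a human-readable scenario description.
    void build(Map<String, String> properties);
}

class ScenarioInterpreter {
    private final Map<String, ElementHandler> handlers = new HashMap<>();

    // Extensibility point: custom handlers can be registered without
    // touching the interpreter code.
    void register(String elementType, ElementHandler handler) {
        handlers.put(elementType, handler);
    }

    void interpret(String elementType, Map<String, String> properties) {
        ElementHandler h = handlers.get(elementType);
        if (h == null) throw new IllegalArgumentException("Unknown element type: " + elementType);
        h.build(properties);
    }
}
```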

    Internet of Things Based Technology for Smart Home System: A Generic Framework

    The Internet of Things (IoT) is a technology that enables computing devices and physical and virtual objects/devices to be connected to the Internet so that users can control and monitor them. The IoT offers huge potential for the development of various applications, namely: e-governance, environmental monitoring, military applications, infrastructure management, industrial applications, energy management, healthcare monitoring, home automation and transport systems. In this paper, a brief overview of existing frameworks for the development of IoT applications, techniques to develop smart home applications using existing IoT frameworks, and a new generic framework for the development of an IoT-based smart home system are presented. The proposed generic framework comprises various modules such as Auto-Configuration and Management, Communication Protocol, Auto-Monitoring and Control, and Objects Access Control. The architecture of the new generic framework and the functionality of the various modules in the framework are also presented. The proposed generic framework is helpful for turning every house into a smart house and increasing the comfort of its inhabitants. Each component of the generic framework is robust in providing services at any time. The components of the smart home system are designed to address issues such as scalability, interoperability, device adaptability, security and privacy. The proposed generic framework is designed to work on all vendor boards and on variants of the Linux and Windows operating systems.
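
    A compact Java sketch of how the four modules named above might be expressed as interfaces behind a single framework facade is given below; all type and method names are illustrative assumptions rather than the paper's actual design.

```java
// Illustrative interfaces for the four modules named in the abstract;
// names and signatures are assumptions, not the paper's actual design.
interface AutoConfigurationAndManagement {
    void discoverAndConfigure(String deviceId);          // plug-and-play device setup
}

interface CommunicationProtocol {
    void send(String deviceId, byte[] payload);          // transport is left abstract here
}

interface AutoMonitoringAndControl {
    double readSensor(String sensorId);
    void actuate(String actuatorId, String command);
}

interface ObjectsAccessControl {
    boolean isAuthorized(String userId, String deviceId, String action);
}

// Facade wiring the modules together for a smart home deployment.
class SmartHomeFramework {
    private final AutoConfigurationAndManagement config;
    private final CommunicationProtocol comms;
    private final AutoMonitoringAndControl monitor;
    private final ObjectsAccessControl access;

    SmartHomeFramework(AutoConfigurationAndManagement c, CommunicationProtocol p,
                       AutoMonitoringAndControl m, ObjectsAccessControl a) {
        this.config = c; this.comms = p; this.monitor = m; this.access = a;
    }

    // Executes a user command only if the access-control module permits it.
    void control(String userId, String deviceId, String command) {
        if (access.isAuthorized(userId, deviceId, "control")) {
            monitor.actuate(deviceId, command);
        }
    }
}
```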