83 research outputs found

    Case Studies for achieving a Return on Investment with a Hardware Refresh in Organizations with Small Data Centers

    Data centers have been highlighted as major energy consumers, and there has been an increasing trend towards consolidating smaller data centers into larger facilities. Yet small data centers exist for a variety of reasons and account for a significant portion of the total number of servers in the US. Frequent refreshes of IT hardware have emerged as a trend in hyper-scale data centers, but little attention has been paid to how the resulting savings can be achieved in small data centers. This work provides a comprehensive framework for identifying energy-saving opportunities and determining when a return on investment can be achieved, enabling small data center operators to create credible business cases for hardware refreshes. Various data center deployment scenarios, based on real-life datasets, are used as case studies to validate the proposed concepts.
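
    The core of such a business case is a payback-period calculation. The sketch below is a minimal illustration of that arithmetic; all figures (server counts, wattage, energy price, refresh cost, PUE) are hypothetical assumptions, not values from the study.

        # Minimal payback-period sketch; all figures are illustrative assumptions.
        def payback_years(old_watts, new_watts, servers,
                          price_per_kwh, refresh_cost, pue=1.8):
            """Years until energy savings repay the hardware refresh cost."""
            hours_per_year = 24 * 365
            # Facility-level savings: IT power delta scaled by the site's PUE.
            kwh_saved = (old_watts - new_watts) * servers * hours_per_year / 1000 * pue
            return refresh_cost / (kwh_saved * price_per_kwh)

        # Example: refreshing 20 servers (400 W -> 250 W) at $0.12/kWh.
        print(f"{payback_years(400, 250, 20, 0.12, 30000):.1f} years")  # ~5.3 years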

    A NUI Based Multiple Perspective Variability Modelling CASE Tool

    With current trends towards moving variability from hardware to software, and given the increasing desire to postpone design decisions as long as is economically feasible, managing variability from requirements elicitation through to implementation is becoming a primary business requirement in the product line engineering process. One of the main challenges in variability management is the visualization and management of industry-size variability models. In this demonstration, we introduce our CASE tool, MUSA. MUSA is designed around our work on multiple-perspective variability modeling and is implemented using state-of-the-art NUI multi-touch interfaces, giving it the power and flexibility to create and manage large-scale variability models with relative ease.

    SWOT Analysis of Information Security Management System ISO 27001

    Information security is a major concern for many organisations, with no signs of decreasing urgency in the coming years. Addressing it requires a structured approach, and the ISO 27000 series is one of the most popular practices for managing information security. In this work, we used a combination of qualitative research methods to conduct a SWOT analysis of the Information Security Management System (ISMS). The findings from the SWOT were then validated using a survey instrument, and the results were analysed using statistical methods. Our findings show a generally more positive view of the 'Strengths' and 'Opportunities' than of the 'Weaknesses' and 'Threats'. We identified statistically significant differences in the perception of 'Strengths' and 'Opportunities' across groups, but found no significant variance in the perception of 'Threats'. The SWOT produced will help practitioners and researchers tailor ways to enhance the ISMS using existing techniques such as the TOWS matrix.
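
    As a hedged sketch of the kind of group comparison described (not the authors' actual analysis), a non-parametric test over Likert-scale responses might look like the following; the group names and scores are fabricated for illustration only.

        from scipy import stats

        # Hypothetical 1-5 Likert scores for the 'Strengths' quadrant, by group.
        practitioners = [5, 4, 4, 5, 3, 4]
        auditors      = [3, 3, 4, 2, 3, 3]
        researchers   = [4, 5, 4, 4, 5, 4]

        # Kruskal-Wallis is a common choice for ordinal survey data.
        h_stat, p_value = stats.kruskal(practitioners, auditors, researchers)
        print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
        if p_value < 0.05:
            print("Perception of 'Strengths' differs significantly across groups.")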

    A Scalable Multiple Perspective Variability Management CASE Tool

    One of the main challenges in variability management is the visualization and management of industry-size variability models. In this work, we introduce our CASE tool MUSA, which uses a multiple-perspective approach to variability modeling and is implemented using state-of-the-art multi-touch interfaces, giving it the power and flexibility to create and manage large-scale variability models.
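
    As a rough sketch of the data structure underlying such models (illustrative only; this is not MUSA's API), a feature tree tagged with a modeling perspective might look like:

        from dataclasses import dataclass, field

        @dataclass
        class Feature:
            name: str
            mandatory: bool = True
            perspective: str = "default"  # e.g. requirements, design, implementation
            children: list = field(default_factory=list)

            def size(self) -> int:
                """Total features in the subtree -- one measure of model scale."""
                return 1 + sum(child.size() for child in self.children)

        phone = Feature("Phone", children=[
            Feature("Calls"),
            Feature("Camera", mandatory=False, perspective="hardware"),
        ])
        print(phone.size())  # 3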

    Using a Software Product Line Approach in Designing Grid Services

    Software Product Line (SPL) engineering has emerged in recent years as a planned approach to software reuse within families of related software products. In SPL, the variability and commonality among different members of a family are studied, and core assets (system architecture, software components, documentation, etc.) are designed accordingly to maximize reuse across the family members. In this work, we look at how this emerging technology can be applied to the domain of grid computing and the design of grid services. The GeneGrid project is used to demonstrate the SPL approach.
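
    The essence of the approach is a shared core asset with explicit variation points that each family member fills in. The sketch below illustrates this idea in Python with hypothetical grid-service names; it is not taken from the GeneGrid project.

        from abc import ABC, abstractmethod

        class Scheduler(ABC):
            """Variation point: each product in the family plugs in its own policy."""
            @abstractmethod
            def pick(self, jobs: list) -> str: ...

        class FifoScheduler(Scheduler):
            def pick(self, jobs): return jobs[0]

        class LifoScheduler(Scheduler):
            def pick(self, jobs): return jobs[-1]

        class GridService:
            """Commonality: shared logic reused across all family members."""
            def __init__(self, scheduler: Scheduler):
                self.scheduler = scheduler
            def dispatch(self, jobs: list) -> str:
                return f"running {self.scheduler.pick(jobs)}"

        print(GridService(FifoScheduler()).dispatch(["a", "b"]))  # running a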

    GSi Compliant RAS for Public Private Sector Partnership

    With the current trend of moving intelligent services and administration towards public private partnerships, and given the security controls currently in place, the shareable data modeling initiative has become a controversial issue. Existing applications often rely on isolation or trusted networks for their access control and security, whereas untrusted wide area networks pay little attention to the authenticity, integrity or confidentiality of the data they transport. In this paper, we examine the issues that must be considered when providing network access to an existing probation service environment, and we describe how we intend to implement the proposed solution in one probation service application. The architecture allows remote access to the legacy application, providing it with encrypted communications and strongly authenticated access control without requiring any modifications to the underlying application.
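
    A minimal sketch of the gateway pattern this describes is shown below: a TLS-terminating front end that requires client certificates and forwards plaintext only on the trusted side, leaving the legacy service untouched. Hosts, ports and certificate paths are placeholders, not details of the deployed system.

        import socket, ssl

        LEGACY = ("127.0.0.1", 9000)          # unmodified legacy application
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("gateway.pem")    # gateway certificate + private key
        ctx.verify_mode = ssl.CERT_REQUIRED   # strong authentication: client certs
        ctx.load_verify_locations("clients.pem")

        with socket.create_server(("0.0.0.0", 8443)) as srv:
            conn, _ = srv.accept()
            with ctx.wrap_socket(conn, server_side=True) as client:
                with socket.create_connection(LEGACY) as backend:
                    backend.sendall(client.recv(4096))  # one round trip, for brevity
                    client.sendall(backend.recv(4096))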

    Using an Architecture Description Language to Model a Large-Scale Information System – An Industrial Experience Report

    An organisation that had developed a large information system wanted to embark on a programme of significant evolution for the system. As a precursor to this, it was decided to create a comprehensive architectural description. This undertaking faced a number of challenges, including low general awareness of software modelling and software architecture practices. The approach taken for the project included the definition of a simple, project-specific architecture description language (ADL). This paper describes the experiences of the project and the ADL created as part of it.
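
    As a loose illustration of the component-and-connector vocabulary a simple ADL typically captures (the element names here are invented, not the project's actual notation):

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            provides: list       # services this component offers
            requires: list       # services it depends on

        @dataclass
        class Connector:
            source: str          # providing component.service
            target: str          # requiring component.service

        system = [
            Component("OrderUI", provides=[], requires=["orders"]),
            Component("OrderService", provides=["orders"], requires=["customers"]),
            Component("CustomerDB", provides=["customers"], requires=[]),
        ]
        links = [Connector("OrderService.orders", "OrderUI.orders"),
                 Connector("CustomerDB.customers", "OrderService.customers")]
        print(f"{len(system)} components, {len(links)} connectors")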

    A Cost Effective Cloud Datacenter Capacity Planning Method Based on Modality Cost Analysis

    In resource provisioning for datacenters, an important issue is how resources may be allocated to an application such that its service level agreements (SLAs) are met. Resource provisioning is usually guided by intuitive or heuristic expectations of performance and existing user models. Provisioning based on such a methodology, however, usually allocates more resources than are actually necessary; while such overprovisioning may guarantee performance, the guarantee can come at a very high cost. A quantitative performance estimate can guide the provider in making informed decisions about the right level of resources, so that acceptable service performance is provided in a cost-effective manner. A quantitative estimate of application performance must consider the application's workload characteristics, and due to the complex workloads of commercial software, estimating performance and provisioning to optimize for cost is not straightforward. In this work, we looked at breaking the application into isolated modalities (a modality is a scenario in which an application is used; for example, instant messaging and voice calls are two different modalities of a media application) and measuring the resource cost per modality as an effective methodology for provisioning datacenters to optimize performance and minimize cost. The resource cost of each modality is assessed in isolation, and the results are then aggregated to estimate the overall provisioning requirements. A validation tool is used to simulate the load and validate the assumptions. The methodology was applied to a commercially available solution and validated in a datacenter setting.
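
    The aggregation step reduces to simple arithmetic once per-modality costs have been measured in isolation. The sketch below illustrates it with invented modality names and per-user costs; real inputs would come from the isolation measurements.

        # (CPU cores, memory GB) consumed per concurrent user, per modality.
        COST = {"instant_messaging": (0.002, 0.01),
                "voice_call":        (0.010, 0.05)}

        def provision(expected_users, headroom=1.2):
            """Aggregate per-modality costs into an overall estimate with headroom."""
            cpu = sum(COST[m][0] * n for m, n in expected_users.items())
            mem = sum(COST[m][1] * n for m, n in expected_users.items())
            return cpu * headroom, mem * headroom

        cpu, mem = provision({"instant_messaging": 5000, "voice_call": 800})
        print(f"provision ~{cpu:.0f} cores, ~{mem:.0f} GB RAM")  # ~22 cores, ~108 GB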
