12,365 research outputs found

    An infrastructure service recommendation system for cloud applications with real-time QoS requirement constraints

    The proliferation of cloud computing has revolutionized the hosting and delivery of Internet-based application services. However, with new cloud services and capabilities launched almost every month by both large providers (e.g., Amazon Web Services and Microsoft Azure) and smaller ones (e.g., Rackspace and Ninefold), decision makers (e.g., application developers and chief information officers) are likely to be overwhelmed by the choices available. The decision-making problem is further complicated by heterogeneous service configurations and application provisioning QoS constraints. To address this challenge, our previous work developed a semiautomated, extensible, ontology-based approach to infrastructure service discovery and selection based only on design-time constraints (e.g., renting cost, data center location, and service features). In this paper, we extend that approach to include real-time (run-time) QoS (end-to-end message latency and end-to-end message throughput) in the decision-making process. Hosting next-generation applications in domains such as online interactive gaming, large-scale sensor analytics, and real-time mobile applications on cloud services necessitates optimizing such real-time QoS constraints to meet service-level agreements. To this end, we present a real-time QoS-aware multicriteria decision-making technique that builds on the well-known analytic hierarchy process (AHP) method. The proposed technique is applicable to selecting Infrastructure as a Service (IaaS) cloud offers, and it allows users to define multiple design-time and real-time QoS constraints or requirements. These requirements are then matched against our knowledge base to compute the best-fit combinations of cloud services at the IaaS layer. We conducted extensive experiments to demonstrate the feasibility of our approach.
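
    As a rough illustration of the flavor of AHP-based ranking the abstract describes (a minimal sketch, not the paper's actual technique; the criteria, pairwise-comparison values, and IaaS offers below are invented for the example), one could derive criteria weights from a pairwise comparison matrix and aggregate per-offer scores as follows:

        # Minimal AHP-style ranking sketch (illustrative only; criteria weights,
        # candidate offers, and QoS numbers are invented, not from the paper).
        import numpy as np

        # Pairwise comparison of criteria: cost, latency, throughput.
        # A[i, j] = how much more important criterion i is than criterion j (Saaty scale).
        A = np.array([
            [1.0, 1/3, 1/2],   # cost
            [3.0, 1.0, 2.0],   # latency (most important in this example)
            [2.0, 1/2, 1.0],   # throughput
        ])

        # Approximate AHP priority weights via the row geometric mean method.
        gm = A.prod(axis=1) ** (1.0 / A.shape[1])
        weights = gm / gm.sum()

        # Hypothetical IaaS offers: (monthly cost in $, latency in ms, throughput in MB/s).
        offers = {
            "offer_a": (120.0, 40.0, 200.0),
            "offer_b": (200.0, 15.0, 350.0),
            "offer_c": (90.0, 80.0, 150.0),
        }

        names = list(offers)
        raw = np.array([offers[n] for n in names])    # rows: offers, cols: criteria
        benefit = np.array([False, False, True])      # cost and latency: lower is better
        adjusted = np.where(benefit, raw, 1.0 / raw)  # invert cost-type criteria
        local = adjusted / adjusted.sum(axis=0)       # per-criterion normalization across offers
        scores = local @ weights                      # weighted aggregation
        for name, s in sorted(zip(names, scores), key=lambda x: -x[1]):
            print(f"{name}: {s:.3f}")

    The row geometric mean is a common approximation of AHP priority weights; the paper's knowledge-base matching and constraint filtering are not modeled in this sketch.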

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources required for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, datacenter networks must be used effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
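
    As a toy illustration of class-based prioritization, one of the traffic control mechanisms the tutorial surveys (the class names, packets, and strict-priority policy below are assumptions made for this sketch, not the paper's proposal), a minimal scheduler might look like:

        # Toy strict-priority scheduler over datacenter traffic classes.
        # Class names and packets are invented for illustration; real datacenter
        # schedulers run in switches/hosts and handle many more concerns
        # (fairness, starvation, deadlines, congestion signaling, etc.).
        from collections import deque

        # Lower number = higher priority: interactive < deadline < long-running.
        PRIORITY = {"interactive": 0, "deadline": 1, "long_running": 2}

        class StrictPriorityScheduler:
            def __init__(self):
                self.queues = {cls: deque() for cls in PRIORITY}

            def enqueue(self, packet, cls):
                self.queues[cls].append(packet)

            def dequeue(self):
                # Always serve the highest-priority non-empty queue first.
                for cls in sorted(PRIORITY, key=PRIORITY.get):
                    if self.queues[cls]:
                        return cls, self.queues[cls].popleft()
                return None

        sched = StrictPriorityScheduler()
        sched.enqueue("bulk-transfer-chunk", "long_running")
        sched.enqueue("rpc-response", "interactive")
        print(sched.dequeue())  # serves ('interactive', 'rpc-response') before the bulk chunk

    Strict priority is only one point in the design space: it can starve lower classes, which is exactly the kind of trade-off the survey's framing is concerned with.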

    VA Laundry and Linen Distribution Optimization

    Linen is a backstage service that is critical to a hospital's functioning. Our team created a refill and distribution system to optimize linen use at the VA Boston Healthcare System-West Roxbury Campus, applying lean concepts to improve process efficiency and support the best patient care. The VA-Brockton Laundry facility, which cleans linen for the New England VISN, currently uses a steam system for washing, drying, and ironing. An investigation of the economic and environmental aspects of replacement equipment options was also conducted.

    Analysis on limitation of Using Solar Fraction Ratio as Solar Hot Water System Design and Evaluation Index

    The solar fraction ratio is a key index and reference for solar hot water system design, and it is also a key factor in evaluating solar hot water systems according to the Evaluation Standard for Application of Renewable Energy in Buildings in China. Analysis of inspection data from actual projects shows that using the solar fraction ratio to evaluate systems in operation has certain limitations: it cannot reasonably reflect the actual amount of supplementary conventional energy consumed, especially in residential buildings with central solar hot water systems. Based on the total energy consumption control concept raised by the government during the Twelfth Five-Year Plan period, the actual level of supplementary conventional energy consumption should be used as a factor in evaluating solar hot water systems. This study analyzes the limitations of the solar fraction ratio in design and evaluation and proposes corresponding solutions as references for relevant design and evaluation professionals.
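
    For reference (a hedged sketch; the definition below is the conventional one for solar fraction and the figures are invented, not taken from the paper's inspection data), the solar fraction is the share of the hot water load met by solar energy, f = Q_solar / (Q_solar + Q_aux). The following example shows how two systems with the same f can differ widely in absolute conventional energy use, which is the limitation the abstract highlights:

        # Illustrative only: shows why the same solar fraction can hide very
        # different absolute conventional (auxiliary) energy use. Numbers are invented.

        def solar_fraction(q_solar_kwh, q_aux_kwh):
            """Fraction of the hot water load supplied by solar energy."""
            return q_solar_kwh / (q_solar_kwh + q_aux_kwh)

        # Two hypothetical central solar hot water systems with the same solar fraction.
        small_building = {"q_solar_kwh": 6_000.0, "q_aux_kwh": 4_000.0}    # 10 MWh annual load
        large_building = {"q_solar_kwh": 60_000.0, "q_aux_kwh": 40_000.0}  # 100 MWh annual load

        for name, sys in [("small", small_building), ("large", large_building)]:
            f = solar_fraction(**sys)
            print(f"{name}: solar fraction = {f:.0%}, auxiliary energy = {sys['q_aux_kwh']:.0f} kWh")
        # Both report a 60% solar fraction, but the large system still consumes ten
        # times the conventional energy -- the absolute supplementation the paper
        # argues should also be evaluated.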