
    Rule-Based System Architecting of Earth Observing Systems: Earth Science Decadal Survey

    This paper presents a methodology to explore the architectural trade space of Earth observing satellite systems, and applies it to the Earth Science Decadal Survey. The architecting problem is formulated as a combinatorial optimization problem with three sets of architectural decisions: instrument selection, assignment of instruments to satellites, and mission scheduling. A computational tool was created to automatically synthesize architectures based on valid combinations of options for these three decisions and evaluate them according to several figures of merit, including satisfaction of program requirements, data continuity, affordability, and proxies for fairness, technical, and programmatic risk. A population-based heuristic search algorithm is used to search the trade space. The novelty of the tool is that it uses a rule-based expert system to model the knowledge-intensive components of the problem, such as scientific requirements, and to capture the nonlinear positive and negative interactions between instruments (synergies and interferences), which drive both requirement satisfaction and cost. The tool is first demonstrated on the past NASA Earth Observing System program and then applied to the Decadal Survey. Results suggest that the Decadal Survey architecture is dominated by other, more distributed architectures in which DESDYNI and CLARREO are consistently broken down into individual instruments.
    Funding: "La Caixa" Foundation; Charles Stark Draper Laboratory; Goddard Space Flight Center
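    The formulation is easy to sketch: an architecture is a partition of selected instruments onto satellites, with pairwise rules adjusting science value and cost for synergies and interferences. The Python sketch below is purely illustrative; the instrument names, costs, and rule weights are invented, and the real tool also evaluates scheduling and many more figures of merit.

```python
# Hypothetical encoding of two of the three decisions: which instruments to
# fly and how to group them onto satellites. Pairwise rules stand in for the
# paper's rule-based expert system capturing synergies and interferences.
from itertools import combinations

INSTRUMENT_COST = {"radar": 120, "lidar": 150, "radiometer": 60}    # $M, invented
SYNERGY = {frozenset({"radar", "lidar"}): +0.15}                    # same-platform bonus
INTERFERENCE = {frozenset({"lidar", "radiometer"}): -0.10}          # e.g., EMI penalty

def evaluate(architecture):
    """architecture: list of satellites, each a set of instrument names.
    Returns (science score, cost) under the toy rules above."""
    science, cost = 0.0, 0.0
    for sat in architecture:
        cost += 50 + sum(INSTRUMENT_COST[i] for i in sat)   # bus cost + payloads
        science += 0.2 * len(sat)                            # base requirement value
        for pair in map(frozenset, combinations(sat, 2)):
            science += SYNERGY.get(pair, 0.0) + INTERFERENCE.get(pair, 0.0)
    return science, cost

# Monolithic vs. distributed: the same instruments, partitioned differently.
print(evaluate([{"radar", "lidar", "radiometer"}]))   # one large satellite
print(evaluate([{"radar", "lidar"}, {"radiometer"}])) # two smaller satellites
```

    A heuristic search over such partitions, with a real rule base in place of the toy dictionaries, is essentially what the paper's tool automates.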

    A Minimum-Cost Flow Model for Workload Optimization on Cloud Infrastructure

    Recent technology advancements in the areas of compute, storage, and networking, along with the increased demand for organizations to cut costs while remaining responsive to increasing service demands, have led to growth in the adoption of cloud computing services. Cloud services provide the promise of improved agility, resiliency, scalability, and a lowered Total Cost of Ownership (TCO). This research introduces a framework for minimizing cost and maximizing resource utilization by using an Integer Linear Programming (ILP) approach to optimize the assignment of workloads to servers on Amazon Web Services (AWS) cloud infrastructure. The model is based on the classical minimum-cost flow model, known as the assignment model.
    Comment: 2017 IEEE 10th International Conference on Cloud Computing
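    The classical assignment model can be solved exactly in polynomial time. Below is a minimal sketch using SciPy's linear_sum_assignment (a Hungarian-style solver), with an invented cost matrix standing in for the paper's AWS-derived costs.

```python
# Minimal sketch of the assignment model: assign each workload to exactly
# one server at minimum total cost. The cost matrix is invented; the paper
# derives costs from AWS instance pricing and resource utilization.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = cost of placing workload i on server j ($/hour, illustrative)
cost = np.array([
    [0.096, 0.192, 0.384],
    [0.120, 0.100, 0.250],
    [0.300, 0.210, 0.150],
])

workloads, servers = linear_sum_assignment(cost)   # optimal one-to-one assignment
for w, s in zip(workloads, servers):
    print(f"workload {w} -> server {s} at ${cost[w, s]:.3f}/h")
print("total cost:", cost[workloads, servers].sum())
```

    A full ILP formulation generalizes this when several workloads can share a server or capacity constraints bind, which is where the minimum-cost flow view pays off.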

    Autonomic Cloud Computing: Open Challenges and Architectural Elements

    As Clouds are complex, large-scale, and heterogeneous distributed systems, management of their resources is a challenging task. They need automated and integrated intelligent strategies for provisioning of resources to offer services that are secure, reliable, and cost-efficient. Hence, effective management of services becomes fundamental in software platforms that constitute the fabric of computing Clouds. In this direction, this paper identifies open issues in autonomic resource provisioning and presents innovative management techniques for supporting SaaS applications hosted on Clouds. We present a conceptual architecture and early results evidencing the benefits of autonomic management of Clouds.
    Comment: 8 pages, 6 figures, conference keynote paper
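    Although the abstract does not spell out the control structure, autonomic systems are conventionally organized as a MAPE-K loop (Monitor, Analyze, Plan, Execute over shared Knowledge). The following is a minimal, illustrative Python sketch of such a loop for elastic provisioning; the thresholds and scaling actions are assumptions, not the paper's architecture.

```python
# Toy MAPE-K style provisioning loop: observe utilization, diagnose a
# symptom, plan a scaling action, and execute it. All values are invented.
def monitor(cluster):
    return sum(vm["cpu"] for vm in cluster) / len(cluster)  # mean CPU utilization

def analyze(utilization, high=0.80, low=0.30):
    if utilization > high:
        return "scale_out"
    if utilization < low:
        return "scale_in"
    return "steady"

def plan(symptom, cluster):
    if symptom == "scale_out":
        return lambda: cluster.append({"cpu": 0.0})   # provision a new VM
    if symptom == "scale_in" and len(cluster) > 1:
        return lambda: cluster.pop()                  # release an idle VM
    return lambda: None                               # no action needed

def autonomic_step(cluster):
    execute = plan(analyze(monitor(cluster)), cluster)
    execute()

cluster = [{"cpu": 0.95}, {"cpu": 0.90}]
autonomic_step(cluster)   # high load -> a third VM is provisioned
print(len(cluster))       # 3
```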

    Optimal deployment of components of cloud-hosted application for guaranteeing multitenancy isolation

    One of the challenges of deploying multitenant cloud-hosted services that are designed to use (or be integrated with) several components is how to implement the required degree of isolation between the components when there is a change in the workload. Achieving the highest degree of isolation implies deploying a component exclusively for one tenant, which leads to high resource consumption and running cost per component. A low degree of isolation allows sharing of resources, which can reduce cost but comes with known limitations of performance and security interference. This paper presents a model-based algorithm, together with four variants of a metaheuristic that can be used with it, to provide near-optimal solutions for deploying components of a cloud-hosted application in a way that guarantees multitenancy isolation. When the workload changes, the model-based algorithm solves an open multiclass QN model to determine the average number of requests that can access the components, and then uses a metaheuristic to provide near-optimal solutions for deploying the components. Performance evaluation showed that the obtained solutions had low variability and percent deviation when compared to the reference/optimal solution. We also provide recommendations and best-practice guidelines for deploying components in a way that guarantees the required degree of isolation.
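    The queueing step has a closed-form operational solution: in an open multiclass QN, the utilization of a component is the sum over classes of arrival rate times service demand, and the mean number of resident requests follows directly. Below is a minimal Python sketch with invented parameters; the paper's actual model and workloads differ.

```python
# Sketch of the open multiclass QN step: given per-class arrival rates and
# per-component service demands, compute each component's utilization and
# the average number of requests resident there. Numbers are illustrative.
def open_multiclass_qn(arrival, demand):
    """arrival[c]: req/s for class c; demand[c][k]: service demand (s) of
    class c at component k. Returns (utilization, mean requests) per component."""
    K = len(demand[0])
    util = [sum(arrival[c] * demand[c][k] for c in range(len(arrival)))
            for k in range(K)]
    assert all(u < 1.0 for u in util), "unstable: a component is saturated"
    # Mean requests at component k (M/M/1-like queueing center): U / (1 - U)
    n = [u / (1.0 - u) for u in util]
    return util, n

util, n_req = open_multiclass_qn(arrival=[2.0, 0.5],
                                 demand=[[0.10, 0.20], [0.30, 0.05]])
print(util)   # [0.35, 0.425]
print(n_req)  # average requests resident at each component
```

    These per-component request counts are what a metaheuristic could then use to score candidate deployments against the required degree of isolation.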

    Augmenting the Space Domain Awareness Ground Architecture via Decision Analysis and Multi-Objective Optimization

    Purpose — The US Government is challenged to maintain pace as the world's de facto provider of space object cataloging data. Augmenting capabilities with nontraditional sensors presents an expeditious and low-cost improvement. However, the large tradespace and unexplored system-of-systems performance requirements pose a challenge to successful capitalization. This paper aims to better define and assess the utility of augmentation via a multidisciplinary study. Design/methodology/approach — Hypothetical telescope architectures are modeled and simulated on two separate days, then evaluated against performance measures and constraints using multi-objective optimization in a heuristic algorithm. Decision analysis and Pareto optimality identify a set of high-performing architectures while preserving decision-maker design flexibility. Findings — Capacity, coverage and maximum time unobserved are recommended as key performance measures. A total of 187 out of 1017 architectures were identified as top performers. A total of 29% of the sensors considered are found in over 80% of the top architectures. Additional considerations further reduce the tradespace to 19 best choices, which collect an average of 49–51 observations per space object with a 595–630 min average maximum time unobserved, providing redundant coverage of the Geosynchronous Orbit belt. This represents a three-fold increase in capacity and coverage and a 2 h (16%) decrease in the maximum time unobserved compared to the baseline government-only architecture as modeled. Originality/value — This study validates the utility of an augmented network concept using a physics-based model and modern analytical techniques. It objectively responds to policy mandating cataloging improvements without relying solely on expert-derived point solutions.
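    The Pareto step is straightforward to illustrate. The sketch below filters non-dominated architectures under the abstract's three measures, maximizing capacity and coverage while minimizing maximum time unobserved; the candidate values are invented for illustration.

```python
# Pareto-dominance filter over (capacity, coverage, max_time_unobserved).
# Capacity and coverage are maximized; max time unobserved is minimized.
def dominates(a, b):
    """True if a is no worse than b on every objective and better on one."""
    no_worse = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and better

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Invented candidates: (obs per object, GEO coverage fraction, minutes unobserved)
candidates = [(50, 0.95, 600), (49, 0.99, 610), (40, 0.90, 900), (51, 0.95, 650)]
print(pareto_front(candidates))   # the third candidate is dominated and drops out
```

    Applying such a filter to the full tradespace, then layering on decision-maker constraints, is how the set of top performers can be narrowed to a handful of best choices.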

    Theoretical Analysis for Scale-down-Aware Service Allocation in Cloud Storage Systems

    Service allocation algorithms have been gaining popularity in the cloud computing research community. There has been much research on improving service allocation schemes for high utilization, latency reduction, and efficient VM migration, but little work focuses on the energy consumption affected by instance placement in data centers. In this paper we propose an algorithm to maximize the number of freed-up machines in data centers, i.e., machines that host only scale-down instances, which are required to be shut down for energy saving at certain points in time. We employ an intuitive probability partitioning mechanism to schedule services such that this maximization goal can be achieved. Furthermore, we perform a set of experiments to test the partitioning rules, which show that the proposed algorithm can dynamically and substantially increase the number of freed-up machines.
    DOI: http://dx.doi.org/10.11591/ijece.v3i1.179
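    One simple way to realize the freed-up-machines objective, sketched below, is to segregate scale-down instances onto their own machines so that whole hosts empty out at scale-down time. This greedy first-fit version is an illustrative assumption, not the paper's probability partitioning mechanism.

```python
# Scale-down-aware placement: never mix scale-down and always-on instances
# on the same machine, so every machine in the scale-down pool can be
# powered off when scale-down occurs. Capacity is an invented slot count.
def place(instances, capacity=4):
    """instances: list of (id, is_scale_down). Returns two machine pools,
    each pool a list of machines, each machine a list of instance ids."""
    pools = {"scale_down": [], "always_on": []}
    for inst_id, is_sd in instances:
        pool = pools["scale_down" if is_sd else "always_on"]
        for machine in pool:                  # first-fit into the matching pool
            if len(machine) < capacity:
                machine.append(inst_id)
                break
        else:                                 # all machines full: open a new one
            pool.append([inst_id])
    return pools

pools = place([("a", True), ("b", False), ("c", True), ("d", True)])
print(pools)
# every machine in pools["scale_down"] is freed up at scale-down time
```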

    Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and device power-usage characteristics; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, offering significant gains in response time and cost saving under dynamic workload scenarios.
    Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010
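    A representative policy in this space, sketched below, places each VM on the host whose power draw increases least under a linear idle-to-peak power model. This is an illustrative sketch in the spirit of power-aware best-fit allocation, not the paper's exact algorithm; the host sizes and wattages are invented.

```python
# Power-aware best-fit VM placement: choose the host with the smallest
# marginal power increase. Power is modeled as linear in CPU utilization.
def power(util, p_idle=175.0, p_max=250.0):
    """Host power (W) as a linear function of CPU utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * util

def best_fit_power(vm_cpu, hosts):
    """hosts: list of dicts {'cap': MIPS, 'used': MIPS}. Mutates the winner."""
    best, best_delta = None, float("inf")
    for h in hosts:
        if h["used"] + vm_cpu > h["cap"]:
            continue   # VM does not fit on this host
        delta = power((h["used"] + vm_cpu) / h["cap"]) - power(h["used"] / h["cap"])
        if delta < best_delta:
            best, best_delta = h, delta
    if best is None:
        raise RuntimeError("no host can fit the VM")
    best["used"] += vm_cpu
    return best

hosts = [{"cap": 1000, "used": 500}, {"cap": 2000, "used": 200}]
best_fit_power(250, hosts)   # larger host wins: its utilization rises less
print(hosts)
```

    Heuristics of this kind are exactly what toolkits such as CloudSim let one evaluate against quality-of-service expectations under dynamic workloads.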