
    A Cost-Effective Critical Path Approach for Service Priority Optimization in the Grid Computing Economy

    The advancement of Internet technologies and their utilization has led to the rapid growth of grid computing, and the ever-growing demand for grid computing resources calls for an incentive-compatible solution to the imminent QoS problem. This paper examines the optimal service priority selection problem that a grid computing network user confronts. We model grid services for a multi-subtask request as a prioritized PERT graph and prove that the localized conditional critical path, which is based on the cost-minimizing priority selection for each node, sets a lower bound on the length of the cost-effective critical path that commits the optimal solution. We also propose a heuristic algorithm for relaxing the nodes on the noncritical paths with respect to a given critical path.
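
    As a rough illustration of the idea (a minimal sketch with made-up tasks and priority options, not the paper's algorithm): if every node of the PERT graph picks its cost-minimizing priority, the longest path under those choices is the localized conditional critical path, whose length lower-bounds that of the cost-effective critical path.

```python
# Minimal sketch with made-up tasks: each task offers (cost, duration) priority
# options; picking the cheapest option per node and taking the longest path in
# the PERT DAG gives the localized conditional critical path length.

from collections import defaultdict

options = {  # task -> [(cost, duration), ...]; cheaper priorities run slower
    "A": [(1, 9), (3, 5)],
    "B": [(2, 7), (4, 4)],
    "C": [(1, 6), (5, 2)],
    "D": [(2, 8), (6, 3)],
}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]  # precedence constraints

def localized_critical_path(options, edges):
    """Longest-path length when every node uses its cost-minimizing option."""
    duration = {t: min(opts)[1] for t, opts in options.items()}  # min by cost
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    finish = {}
    def earliest_finish(t):
        if t not in finish:
            finish[t] = duration[t] + max((earliest_finish(p) for p in preds[t]), default=0)
        return finish[t]
    return max(earliest_finish(t) for t in options)

print(localized_critical_path(options, edges))  # 24 for the toy data above
```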

    Cost-Efficient Scheduling for Deadline Constrained Grid Workflows

    Cost optimization for workflow scheduling while meeting a deadline is one of the fundamental problems in utility computing. In this paper, a two-phase cost-efficient scheduling algorithm called critical chain is presented. The proposed algorithm uses the concept of slack time in both phases. The first phase distributes the deadline over all tasks in the workflow, taking into account the critical path properties of workflow graphs. Critical chain uses slack time to iteratively select the most critical sequence of tasks and then assigns sub-deadlines to those tasks. In the second phase, named the mapping step, it tries to allocate a server to each task subject to the task's sub-deadline. In the mapping step, slack-time priority is used when selecting ready tasks in order to reduce deadline violations. Furthermore, the algorithm exploits dynamic programming to locally optimize the computation and communication costs of sequential tasks. After presenting the scheduling algorithm, three measures of the superiority of a scheduling algorithm are introduced, and the proposed algorithm is compared with existing algorithms on these measures. Results obtained from simulating various systems show that the proposed algorithm outperforms four well-known existing workflow scheduling algorithms.
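
    The slack-time notion driving both phases can be illustrated with a small sketch (the task graph, durations, and deadline below are invented; this is not the authors' implementation): a task's slack is how far it can slip without violating the workflow deadline, and the chain of tasks with the least slack is the most critical one, which receives sub-deadlines first.

```python
# Illustrative slack-time computation on a toy workflow DAG (not the paper's code).
# slack(t) = latest_start(t) - earliest_start(t); the chain with the smallest
# slack is the most critical chain targeted by deadline distribution.

from collections import defaultdict

duration = {"A": 4, "B": 3, "C": 6, "D": 2}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
deadline = 14

succs, preds = defaultdict(list), defaultdict(list)
for u, v in edges:
    succs[u].append(v)
    preds[v].append(u)

est, lft = {}, {}  # earliest start, latest finish

def earliest_start(t):
    if t not in est:
        est[t] = max((earliest_start(p) + duration[p] for p in preds[t]), default=0)
    return est[t]

def latest_finish(t):
    if t not in lft:
        lft[t] = min((latest_finish(s) - duration[s] for s in succs[t]), default=deadline)
    return lft[t]

slack = {t: latest_finish(t) - duration[t] - earliest_start(t) for t in duration}
print(slack)  # {'A': 2, 'B': 5, 'C': 2, 'D': 2}: A-C-D is the most critical chain
```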

    Cyber Defense Remediation in Energy Delivery Systems

    The integration of Information Technology (IT) and Operational Technology (OT) in Cyber-Physical Systems (CPS) has resulted in increased efficiency and facilitated real-time information acquisition, processing, and decision making. However, the growth of automation technology and the use of the internet for connecting, remotely controlling, and supervising systems and facilities has also increased the likelihood of cybersecurity threats that can impact the safety of humans and property. There is a need to assess cybersecurity risks in the power grid, nuclear plants, chemical factories, etc., to gain insight into the likelihood of safety hazards. Quantitative cybersecurity risk assessment will lead to informed cyber defense remediation and will ensure the presence of a mitigation plan to prevent safety hazards. In this dissertation, using Energy Delivery Systems (EDS) as a use case to contextualize a CPS, we address key research challenges in managing cyber risk for cyber defense remediation.

    First, we developed a platform for modeling and analyzing the effect of cyber threats and random system faults on the safety of EDS, which could lead to catastrophic damage. We developed a data-driven attack-graph and fault-graph based model to characterize the exploitability and impact of threats in EDS. We created an operational impact assessment to quantify the damage. We then developed a strategic response decision capability that presents optimal mitigation actions and policies balancing the tradeoff between operational resilience (tactical risk) and strategic risk.

    Next, we addressed the challenge of managing tactical risk through a prioritized cyber defense remediation plan, which is critical for effective risk management in EDS. Because of the complexity of EDS, in terms of the heterogeneous blend of IT, OT, and Industrial Control Systems (ICS), their scale, and their critical process tasks, prioritized remediation should be applied gradually to protect critical assets. We proposed a methodology for prioritizing cyber risk remediation plans by detecting and evaluating paths through critical EDS nodes. We evaluated the characteristics of critical nodes based on their architectural positions, measures of centrality derived from connectivity and frequency of network traffic, and the amount of electrical power they control. The model also examines the relationship between cost models for allocating budget to remove vulnerabilities on critical nodes and their impact on gradual readiness. The proposed cost models were empirically validated by computing node criticality in an existing ICS network test-bed. Two cost models were examined and, although they differed, we found no correlation between the type of cost model and either the most damaging attack path or the readiness of critical nodes.

    Finally, we proposed a time-varying dynamical model for cyber defense remediation in EDS. We utilize a stochastic evolutionary game model to simulate the dynamic adversarial interaction between cyber attack and defense. We leveraged the Logit Quantal Response Dynamics (LQRD) model to quantify the cognitive differences of real-world players. We proposed an optimal decision-making approach that calculates the stable evolutionary equilibrium and balances defense costs and benefits. Case studies on EDS indicate that the proposed method can help the defender predict possible attack actions, select the related optimal defense strategy over time, and gain the maximum defense payoff.

    We also leveraged software-defined networking (SDN) in EDS for dynamic cyber defense remediation. We presented an approach to aid the dynamic selection of security controls in an SDN-enabled EDS and to achieve tradeoffs between providing security and Quality of Service (QoS). We modeled the security costs based on end-to-end packet delay and throughput. We proposed a non-dominated-sorting-based multi-objective optimization framework that can be implemented within an SDN controller to address the joint problem of optimizing between security and QoS parameters while keeping the time complexity at O(MN^2), where M is the number of objective functions and N is the population size of each generation. We presented simulation results that illustrate how data availability and data integrity can be achieved while maintaining QoS constraints.
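
    For the SDN optimization step, the stated O(MN^2) complexity matches generic fast non-dominated sorting as used in NSGA-II-style optimizers; the sketch below is that generic procedure, not the dissertation's controller code, with toy objective vectors standing in for delay, throughput, and security cost.

```python
# Generic fast non-dominated sorting sketch (NSGA-II style), shown only to make
# the O(M * N^2) claim concrete: N candidate configurations, M objectives, all
# to be minimized (e.g. end-to-end delay, inverse throughput, security cost).

def non_dominated_sort(population):
    """population: list of objective tuples; returns fronts as lists of indices."""
    n = len(population)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    dominated_by = [0] * n               # how many solutions dominate i
    dominating = [[] for _ in range(n)]  # solutions that i dominates
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(population[i], population[j]):
                dominating[i].append(j)
            elif dominates(population[j], population[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

print(non_dominated_sort([(1, 5, 2), (2, 3, 3), (3, 6, 4), (2, 3, 3)]))  # [[0, 1, 3], [2]]
```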

    Resource Renting for Periodical Cloud Workflow Applications

    Cloud computing is a new resource provisioning mechanism that offers users a convenient way to access different computing resources. Periodical workflow applications are common in scientific and business analysis, among many other fields. One of the most challenging problems is to determine the right amount of resources for multiple periodical workflow applications. In this paper, the periodical workflow application scheduling problem with total renting cost minimization is considered. The novelty of this work lies precisely in this objective function, which is more realistic in practice than the more commonly considered makespan minimization. An integer programming model is constructed for the problem under study. A Precedence Tree based Heuristic (PTH) is developed that considers three types of initial schedule construction methods. Based on the initial schedule, two improvement procedures are presented. The proposed methods are compared with existing algorithms for the related makespan-based multiple workflow scheduling problem. Experimental and statistical results demonstrate the effectiveness and efficiency of the proposed algorithm.

    This work is supported by the National Natural Science Foundation of China (No. 61572127, 61272377), the Key Research & Development Program in Jiangsu Province (No. BE2015728), and the Collaborative Innovation Center of Wireless Communications Technology. Ruben Ruiz is partially supported by the Spanish Ministry of Economy and Competitiveness under the project "SCHEYARD-Optimization of Scheduling Problems in Container Yards" (No. DPI2015-65895-R), financed by FEDER funds.

    Chen, L.; Li, X.; Ruiz García, R. (2020). Resource Renting for Periodical Cloud Workflow Applications. IEEE Transactions on Services Computing, 13(1):130-143. https://doi.org/10.1109/TSC.2017.2677450
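
    To make the objective concrete, here is a hedged toy calculation of a total renting cost (not the paper's integer programming model or the PTH heuristic): given task start and finish times and a per-period instance price, the cost depends on how many instances must be held in each billing period.

```python
# Toy illustration of a total renting cost objective (not the paper's model):
# rent enough identical instances to cover the tasks running in each billing
# period and pay price_per_period for every rented instance-period.

import math

def total_renting_cost(intervals, period_len, price_per_period):
    """intervals: (start, finish) times of scheduled tasks, one instance each."""
    horizon = max(finish for _, finish in intervals)
    cost = 0.0
    for p in range(math.ceil(horizon / period_len)):
        p_start, p_end = p * period_len, (p + 1) * period_len
        # upper bound on instances needed: tasks whose interval overlaps the period
        needed = sum(1 for s, f in intervals if s < p_end and f > p_start)
        cost += needed * price_per_period
    return cost

# three tasks under hourly billing at 0.10 per instance-hour -> about 0.50
print(total_renting_cost([(0.0, 1.5), (0.5, 2.0), (2.0, 3.0)], period_len=1.0, price_per_period=0.10))
```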

    More Tolerant Reconstructed Networks by Self-Healing against Attacks in Saving Resource

    Complex network infrastructure systems for power supply, communication, and transportation support our economic and social activities; however, they are extremely vulnerable to increasingly frequent large-scale disasters and attacks. Reconstructing a damaged network is therefore more advisable than empirically restoring it to its original, vulnerable form. In order to reconstruct a sustainable network, we focus on enhancing loops so that node removals do not reduce the network to trees. Although this optimization corresponds to an intractable combinatorial problem, we propose self-healing methods based on loop enhancement that apply an approximate calculation inspired by a statistical-physics approach. We show that our proposed methods achieve both higher robustness and higher efficiency than conventional healing methods while saving link and port resources. Moreover, the reconstructed network can become more tolerant than the original one before the attack when some fraction of the damaged links is reusable or compensated as an investment of resources. These results open up the potential of network reconstruction by self-healing with adaptive capacity, in the sense of resilience.
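
    As a loose illustration of the loop-enhancing intuition (a naive greedy stand-in, not the statistical-physics-inspired approximation the paper proposes): when healing, prefer candidate links that close cycles inside an existing component over links that merely extend tree-like structure, which can be tracked with the cyclomatic number E - N + C.

```python
# Naive stand-in for loop-enhancing self-healing (not the paper's approximate
# statistical-physics method): track the cyclomatic number E - N + C and, when
# healing, prefer candidate links that close a loop inside an existing component.

def components(nodes, edges):
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    comps, seen = [], set()
    for n in nodes:
        if n in seen:
            continue
        stack, cur = [n], set()
        while stack:
            x = stack.pop()
            if x in cur:
                continue
            cur.add(x)
            seen.add(x)
            stack.extend(adj[x] - cur)
        comps.append(cur)
    return comps

def cyclomatic(nodes, edges):
    # number of independent loops: E - N + C
    return len(edges) - len(nodes) + len(components(nodes, edges))

def heal(nodes, edges, candidates, budget):
    """Greedily add up to `budget` candidate links, loop-closing links first."""
    edges = list(edges)
    for _ in range(budget):
        comps = components(nodes, edges)
        same = lambda u, v: any(u in c and v in c for c in comps)
        pick = next((e for e in candidates if e not in edges and same(*e)), None) \
            or next((e for e in candidates if e not in edges), None)
        if pick is None:
            break
        edges.append(pick)
    return edges

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c")]                   # tree-like fragment after an attack
candidates = [("a", "c"), ("c", "d"), ("b", "d")]  # reusable or newly invested links
healed = heal(nodes, edges, candidates, budget=2)
print(healed, "loops:", cyclomatic(nodes, healed))
```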

    Seafloor characterization using airborne hyperspectral co-registration procedures independent from attitude and positioning sensors

    Remote-sensing technology and data-storage capabilities have progressed over the last decade to the point of commercial multi-sensor data collection. There is a constant need to characterize, quantify, and monitor coastal areas for habitat research and coastal management. In this paper, we present work on seafloor characterization that uses hyperspectral imagery (HSI). The HSI data allows the operator to extend seafloor characterization from multibeam backscatter towards land and thus create a seamless ocean-to-land characterization of the littoral zone.

    Scientific Workflow Scheduling for Cloud Computing Environments

    The scheduling of workflow applications consists of assigning their tasks to computing resources to fulfill a final goal such as minimizing total workflow execution time. For this reason, workflow scheduling plays a crucial role in efficiently running experiments. Workflows often have many discrete tasks, and the number of possible task distributions, and consequently the time required to evaluate each configuration, quickly becomes prohibitively large. A proper solution to the scheduling problem requires the analysis of tasks and resources, the production of an accurate environment model and, most importantly, the adaptation of optimization techniques. This study is a major step toward solving the scheduling problem by not only addressing these issues but also optimizing the runtime and reducing monetary cost, two of the most important variables. This study proposes three scheduling algorithms that address key issues in the scheduling problem. Firstly, it unveils BaRRS, a scheduling solution that exploits parallelism and optimizes runtime and monetary cost. Secondly, it proposes GA-ETI, a scheduler capable of returning the number of resources that a given workflow requires for execution. Finally, it describes PSO-DS, a scheduler based on particle swarm optimization for efficiently scheduling large workflows. To test the algorithms, five well-known benchmarks representing different scientific applications are selected. The experiments found that the proposed algorithms substantially improve efficiency, reducing makespan by 11% to 78%. The proposed frameworks open a path toward building a complete system that encompasses the capabilities of a workflow manager, scheduler, and cloud resource broker in order to offer scientists a single tool for running computationally intensive applications.
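
    As a loose, hypothetical sketch of the particle-swarm idea behind a scheduler such as PSO-DS (the encoding and fitness below are illustrative only; the real algorithm uses proper velocity and personal-best updates and accounts for precedence and data transfer): each particle encodes a task-to-VM mapping, and the swarm searches for a mapping with low makespan.

```python
# Loose, hypothetical PSO-flavored search over task-to-VM mappings (illustrative
# only; real PSO-DS uses velocity/personal-best updates and models precedence).
# A particle is a list of VM indices, one per task; fitness is a toy makespan.

import random

task_runtime = [4.0, 2.0, 6.0, 3.0, 5.0]   # runtime on a reference VM
vm_speed = [1.0, 2.0, 0.5]                 # relative speeds of available VMs

def makespan(mapping):
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(mapping):
        load[vm] += task_runtime[task] / vm_speed[vm]
    return max(load)

def pso_schedule(iters=200, swarm=20):
    n = len(task_runtime)
    particles = [[random.randrange(len(vm_speed)) for _ in range(n)] for _ in range(swarm)]
    best = list(min(particles, key=makespan))
    for _ in range(iters):
        for p in particles:
            i = random.randrange(n)
            # pull one randomly chosen task toward the global best, else explore
            p[i] = best[i] if random.random() < 0.5 else random.randrange(len(vm_speed))
        best = list(min(particles + [best], key=makespan))
    return best, makespan(best)

print(pso_schedule())  # prints a (mapping, makespan) pair
```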