
    Cost-Efficient Scheduling for Deadline Constrained Grid Workflows

    Cost optimization for workflow scheduling while meeting a deadline is one of the fundamental problems in utility computing. In this paper, a two-phase cost-efficient scheduling algorithm called critical chain is presented. The proposed algorithm uses the concept of slack time in both phases. The first phase distributes the deadline over all tasks in the workflow, based on the critical-path properties of workflow graphs: critical chain uses slack time to iteratively select the most critical sequence of tasks and then assigns sub-deadlines to those tasks. In the second phase, the mapping step, the algorithm allocates a server to each task subject to the task's sub-deadline; ready tasks are prioritized by slack time to reduce deadline violations. Furthermore, the algorithm locally optimizes the computation and communication costs of sequential tasks using dynamic programming. Three measures of scheduling-algorithm quality are then introduced, and the proposed algorithm is compared with existing algorithms on those measures. Results from simulating various systems show that the proposed algorithm outperforms four well-known existing workflow scheduling algorithms.
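    As a rough illustration of the deadline-distribution idea above (a minimal sketch, not the paper's exact critical chain algorithm; the proportional sub-deadline rule and all task data are assumptions for demonstration), the following Python sketch computes earliest and latest finish times over a workflow DAG, derives each task's slack time, and scales earliest finish times into sub-deadlines:

```python
# Minimal sketch: slack-time-based deadline distribution over a workflow DAG.
# Assumes fixed runtime estimates and a topologically sorted task list; the
# proportional sub-deadline rule below is an illustrative simplification.

def earliest_finish(tasks, preds, runtime):
    """Forward pass: earliest possible finish time of each task."""
    ef = {}
    for t in tasks:  # tasks must be in topological order
        ef[t] = max((ef[p] for p in preds[t]), default=0) + runtime[t]
    return ef

def latest_finish(tasks, succs, runtime, deadline):
    """Backward pass: latest finish time that still meets the deadline."""
    lf = {}
    for t in reversed(tasks):
        lf[t] = min((lf[s] - runtime[s] for s in succs[t]), default=deadline)
    return lf

def distribute_deadline(tasks, preds, succs, runtime, deadline):
    """Assign sub-deadlines by scaling earliest finish times to the deadline;
    tasks with zero slack (lf - ef) form the most critical chain."""
    ef = earliest_finish(tasks, preds, runtime)
    lf = latest_finish(tasks, succs, runtime, deadline)
    slack = {t: lf[t] - ef[t] for t in tasks}
    makespan = max(ef.values())
    return {t: ef[t] * deadline / makespan for t in tasks}, slack

# Toy pipeline a -> b -> c with a deadline of 9 time units.
tasks = ["a", "b", "c"]
preds = {"a": [], "b": ["a"], "c": ["b"]}
succs = {"a": ["b"], "b": ["c"], "c": []}
runtime = {"a": 2, "b": 3, "c": 1}
sub_deadlines, slack = distribute_deadline(tasks, preds, succs, runtime, 9)
print(sub_deadlines)  # {'a': 3.0, 'b': 7.5, 'c': 9.0}
```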

    The Contemporary Review of Notable Cloud Resource Scheduling Strategies

    Cloud computing has become a revolutionary development that has changed the dynamics of business for organizations and the management of IT infrastructure. On one hand, it has improved access, reliability, performance, and operational efficiency; on the other, it has created a paradigm shift in the way IT systems are managed in an organizational environment. However, with the increasing demand for cloud-based solutions, there is a significant need to improve the operational efficiency of the systems and cloud-based services offered to customers. As cloud-based solutions offer a finite pool of virtualized on-demand resources, service providers have an imperative need to focus on effective and optimal resource scheduling systems that can support them in offering reliable and timely service, workload balancing, optimal power efficiency, and performance excellence. Numerous resource scheduling algorithms have been proposed in earlier studies, and this study reviews a varied range of resource scheduling algorithms that could support improved process efficiency. This manuscript evaluates various methods that could be adopted to improve resource scheduling solutions.

    Scheduling strategies for data and processes in Grid computing: state of the art

    The distributed, shared, and heterogeneous nature of grid computing makes its goal of delivering applications with collective computational power a major challenge. The objective is to present the results of a systematic investigation into the theoretical aspects of the topics that converge in the design and development of data and process schedulers in grid computing. The reference methodology was the one proposed for developing state-of-the-art reviews, which comprises two main phases: heuristic and hermeneutic. The process analysed the theoretical foundations, such as the architecture, the stages that make up the process carried out by a grid scheduler, and the existing types of scheduling, all in order to address the different computational, heuristic, and metaheuristic models and approaches that allow grid schedulers to operate efficiently and effectively, mitigating to some degree the problems found in grid scheduling. The optimisation of grid resource scheduling is an area still under development, since the main problems, such as scheduling delay and the optimisation process, are themselves still at the development stage.

    Software development in the post-PC era: towards software development as a service

    PhD Thesis. Engineering software systems is a complex task which involves various stakeholders and requires planning and management to succeed. As the role of software in our daily life increases, so does the complexity of software systems. Throughout the short history of software engineering as a discipline, development practices and methods have rapidly evolved to seize opportunities enabled by new technologies (e.g., the Internet) and to overcome economic challenges (e.g., the need for cheaper and faster development). Today, we are witnessing the post-PC era: an era characterised by mobility and services, which removes organisational and geographical boundaries, changes the functionality of software systems, and requires alternative methods for conceiving them. In this thesis, we envision executing software development processes in the cloud. Software processes have a software production aspect and a management aspect. To the best of our knowledge, there are no academic or industrial solutions supporting the entire software development process life-cycle (from both production and management aspects) and its tool-chain execution in the cloud. Our vision is to use the cloud's economies of scale and leverage Model-Driven Engineering (MDE) to integrate production and management aspects into the development process. Since software processes are seen as workflows, we investigate using existing Workflow Management Systems to execute software processes and find that these systems are not suitable. Therefore, we propose a reference architecture for Software Development as a Service (SDaaS). The SDaaS reference architecture is the first proposal which fully supports development of complex software systems in the cloud. In addition to the reference architecture, we investigate three specific related challenges and propose novel solutions addressing them.

    Modelling and enacting cloud-based executable software processes. Executing software processes in the cloud can bring several benefits to software development. In this thesis, we discuss the benefits and considerations of cloud-based software processes and introduce a modelling language for modelling such processes, which we refer to as EXE-SPEM. It extends the Software and Systems Process Engineering (SPEM2.0) OMG standard to support creating cloud-based executable software process models. Since EXE-SPEM is a visual modelling language, we introduce an XML notation to represent EXE-SPEM models in a machine-readable format and provide mapping rules from EXE-SPEM to this notation. We demonstrate this approach by modelling an example software process using EXE-SPEM and mapping it to the XML notation. Software process models expressed in this XML format can then be enacted in the proposed SDaaS architecture.

    Cost-efficient scheduling of software process execution in the cloud. Software process models are enacted in the SDaaS architecture as workflows; we sometimes refer to them as software workflows. Once we have executable software process models, we need to schedule them for execution. In a setting where multiple software workflows (and their activities) compete for shared computational resources (workflow engines), scheduling workflow execution becomes important. Workflow scheduling is an NP-hard problem which refers to the allocation of sufficient resources (human or computational) to workflow activities. The schedule impacts the workflow makespan (execution time) and cost as well as computational resource utilisation. The target of the scheduling is to reduce the process execution cost in the cloud without significantly affecting the process makespan, while satisfying the special requirements of each process activity (e.g., executing on a private cloud). We adapt three workflow scheduling algorithms to fit SDaaS and propose a fourth one, the Proportional Adaptive Task Schedule. The algorithms are then evaluated through simulation. The simulation results show that our proposed algorithm saves between 19.74% and 45.78% of the execution cost, provides the best resource (VM) utilisation, and provides the second-best makespan compared to the other presented algorithms.

    Evaluating the SDaaS architecture using a case study from the safety-critical systems domain. To evaluate the proposed SDaaS reference architecture, we instantiate a proof-of-concept implementation of the architecture. This implementation is then used to enact safety-critical processes as a case study. Engineering safety-critical systems is a complex task which involves multiple stakeholders, and it requires shared and scalable computation to systematically involve geographically distributed teams. In this case study, we use EXE-SPEM to model a portion of a process (namely, the Preliminary System Safety Assessment - PSSA) adapted from the ARP4761 [2] aerospace standard. Then, we enact this process model in the proof-of-concept SDaaS implementation. By using the SDaaS architecture, we demonstrate the feasibility of our approach and its applicability to different domains and to customised processes. We also demonstrate the capability of EXE-SPEM to model cloud-based executable processes. Furthermore, we demonstrate the added value of the process models and the process execution provenance data recorded by the SDaaS architecture: this data is used to automate the generation of safety-case argument fragments, thus reducing development cost and time. Finally, the case study shows that we can integrate some existing tools and create new ones as activities used in process models.

    The proposed SDaaS reference architecture (combined with its modelling, scheduling, and enactment capabilities) brings the benefits of the cloud to software development. It can potentially save software production cost and provide an accessible platform that supports collaborating teams (potentially across different locations). The executable process models support unified interpretation and execution of processes across team members. In addition, the use of models provides managers with global awareness and can be utilised for quality assurance and for process metrics analysis and improvement. We see the contributions provided in this thesis as a first step towards an alternative development method that uses the benefits of cloud and Model-Driven Engineering to overcome existing challenges and open new opportunities. However, there are several challenges outside the scope of this study which need to be addressed to allow full support of the SDaaS vision (e.g., supporting interactive workflows); the solutions provided in this thesis address only part of a bigger vision. There is also a need for empirical and usability studies to assess the impact of the SDaaS architecture on both the produced products (in terms of quality, cost, time, etc.) and the participating stakeholders.
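    The abstract does not spell out the Proportional Adaptive Task Schedule itself, so the following Python sketch is a hypothetical stand-in for the cost-vs-deadline trade-off such a scheduler navigates: for a single activity, it picks the cheapest VM type that still finishes within the activity's sub-deadline. This is not the thesis's algorithm; all VM types, prices, and speed factors are invented for illustration.

```python
# Hypothetical sketch of cost-aware task-to-VM selection, in the spirit of the
# SDaaS scheduling goal (reduce cost without significantly affecting makespan).
# NOT the thesis's Proportional Adaptive Task Schedule; all data is invented.

from dataclasses import dataclass

@dataclass
class VmType:
    name: str
    price_per_hour: float   # monetary cost per hour of use
    speedup: float          # processing speed relative to a baseline VM

def pick_vm(task_len_hours, sub_deadline_hours, vm_types):
    """Pick the cheapest VM type that still finishes within the sub-deadline;
    fall back to the fastest type if none can meet it."""
    feasible = [v for v in vm_types
                if task_len_hours / v.speedup <= sub_deadline_hours]
    if feasible:
        return min(feasible,
                   key=lambda v: v.price_per_hour * (task_len_hours / v.speedup))
    return max(vm_types, key=lambda v: v.speedup)

vm_types = [VmType("small", 0.05, 1.0),
            VmType("medium", 0.10, 1.8),
            VmType("large", 0.20, 3.2)]
# A 2-hour (baseline) activity with a 1.5-hour sub-deadline: "small" misses
# the deadline, and "medium" is cheaper than "large", so "medium" is chosen.
print(pick_vm(task_len_hours=2.0, sub_deadline_hours=1.5, vm_types=vm_types))
```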

    Scientific Workflow Scheduling for Cloud Computing Environments

    The scheduling of workflow applications consists of assigning their tasks to computer resources to fulfill a final goal such as minimizing total workflow execution time. For this reason, workflow scheduling plays a crucial role in efficiently running experiments. Workflows often have many discrete tasks, and the number of possible task distributions, and the consequent time required to evaluate each configuration, quickly becomes prohibitively large. A proper solution to the scheduling problem requires the analysis of tasks and resources, the production of an accurate environment model and, most importantly, the adaptation of optimization techniques. This study is a major step toward solving the scheduling problem by not only addressing these issues but also optimizing runtime and reducing monetary cost, two of the most important variables. This study proposes three scheduling algorithms that address key issues in the scheduling problem. Firstly, it unveils BaRRS, a scheduling solution that exploits parallelism and optimizes runtime and monetary cost. Secondly, it proposes GA-ETI, a scheduler capable of returning the number of resources that a given workflow requires for execution. Finally, it describes PSO-DS, a scheduler based on particle swarm optimization to efficiently schedule large workflows. To test the algorithms, five well-known benchmarks are selected that represent different scientific applications. The experiments show that the novel algorithms substantially improve efficiency, reducing makespan by 11% to 78%. The proposed frameworks open a path for building a complete system that encompasses the capabilities of a workflow manager, scheduler, and cloud resource broker in order to offer scientists a single tool to run computationally intensive applications.
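    As a generic illustration of how a particle-swarm scheduler in the spirit of PSO-DS can encode task-to-VM mappings (a textbook PSO sketch under simplifying assumptions, not the study's actual algorithm: tasks are treated as independent and fitness is estimated makespan only):

```python
# Textbook PSO sketch for mapping tasks to VMs. Each particle is a vector of
# continuous positions, one per task; rounding a position (mod the number of
# VMs) yields that task's VM. Fitness is the estimated makespan.

import random

def makespan(position, task_len, vm_speed):
    """Estimated makespan when task i runs on VM round(position[i]) % n_vms."""
    load = [0.0] * len(vm_speed)
    for i, x in enumerate(position):
        vm = int(round(x)) % len(vm_speed)
        load[vm] += task_len[i] / vm_speed[vm]
    return max(load)

def pso_schedule(task_len, vm_speed, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    n, m = len(task_len), len(vm_speed)
    pos = [[random.uniform(0, m - 1) for _ in range(n)]
           for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: makespan(p, task_len, vm_speed))[:]
    for _ in range(iters):
        for k in range(n_particles):
            for i in range(n):
                r1, r2 = random.random(), random.random()
                # Standard PSO velocity update: inertia + cognitive + social.
                vel[k][i] = (w * vel[k][i]
                             + c1 * r1 * (pbest[k][i] - pos[k][i])
                             + c2 * r2 * (gbest[i] - pos[k][i]))
                pos[k][i] += vel[k][i]
            if makespan(pos[k], task_len, vm_speed) < makespan(pbest[k], task_len, vm_speed):
                pbest[k] = pos[k][:]
                if makespan(pbest[k], task_len, vm_speed) < makespan(gbest, task_len, vm_speed):
                    gbest = pbest[k][:]
    return [int(round(x)) % m for x in gbest]  # task index -> VM index

print(pso_schedule(task_len=[4, 2, 6, 3, 5], vm_speed=[1.0, 2.0]))
```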

    Information fusion architectures for security and resource management in cyber physical systems

    Data acquisition through sensors is crucial in determining the operability of the observed physical entity. Cyber Physical Systems (CPSs) are an example of distributed systems where sensors embedded into the physical system are used for sensing and data acquisition. CPSs are a collaboration between physical and computational cyber components. The control decisions sent back to the actuators on the physical components from the computational cyber components close the feedback loop of the CPS. Since this feedback is based solely on the data collected through the embedded sensors, information acquisition from the data plays a vital role in determining the operational stability of the CPS. The data collection process may be hindered by disturbances such as system faults, noise, and security attacks. Hence, simple data acquisition techniques will not suffice, as an accurate system representation cannot be obtained. Therefore, more powerful methods of inferring information from collected data, such as information fusion, have to be used. Information fusion is analogous to the cognitive process used by humans to continuously integrate data from their senses to make inferences about their environment. Data from the sensors is combined using techniques drawn from several disciplines, such as adaptive filtering, machine learning, and pattern recognition. Decisions made from such combinations of data form the crux of information fusion and differentiate it from flat, structured data aggregation. In this dissertation, multi-layered information fusion models are used to develop automated decision-making architectures that serve security and resource management requirements in Cyber Physical Systems --Abstract, page iv
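    As a small, generic example of the fusion step such architectures build on (textbook inverse-variance weighting of redundant sensor readings, not the dissertation's multi-layered models), the following Python sketch combines noisy readings of one quantity so that noisier sensors contribute less:

```python
# Textbook information-fusion building block: inverse-variance weighting of
# redundant sensor readings of the same quantity. Noisier sensors (higher
# variance) receive lower weight, and the fused variance is lower than any
# single sensor's variance.

def fuse(readings, variances):
    """Return the inverse-variance weighted estimate and its fused variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * r for w, r in zip(weights, readings)) / total
    return estimate, 1.0 / total

# Three sensors observing the same temperature with differing noise levels.
estimate, var = fuse(readings=[20.1, 19.7, 21.0], variances=[0.04, 0.09, 0.25])
print(f"fused estimate = {estimate:.2f}, fused variance = {var:.4f}")
```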