577 research outputs found

    A Reference Architecture for Service Lifecycle Management – Construction and Application to Designing and Analyzing IT Support

    Get PDF
    Service-orientation and the underlying concept of service-oriented architectures are a means to successfully address the need for flexibility and interoperability of software applications, which in turn leads to improved IT support of business processes. With a growing level of diffusion, sophistication and maturity, the number of services and interdependencies is gradually rising. This increasingly requires companies to implement a systematic management of services along their entire lifecycle. Service lifecycle management (SLM), i.e., the management of services from the initiating idea to their disposal, is becoming a crucial success factor. Not surprisingly, the academic and practice communities increasingly postulate comprehensive IT support for SLM to counteract the inherent complexity. The topic is still in its infancy, however, with no comprehensive models available that help evaluate and design IT support for SLM. This thesis presents a reference architecture for SLM and applies it to the evaluation and design of SLM IT support in companies. The artifact, which largely resulted from consortium research efforts, draws on an extensive analysis of existing SLM applications, case studies, focus group discussions, bilateral interviews and existing literature. Formal procedure models and a configuration terminology allow the reference architecture to be adapted and applied to a company's individual setting. Corresponding usage examples prove its applicability and demonstrate the resulting benefits in various SLM IT support design and evaluation tasks. A statistical analysis of the knowledge embodied in the reference data leads to novel, highly significant findings. For example, contemporary standard applications do not yet emphasize the lifecycle concept but rather tend to focus on small parts of the lifecycle, especially on service operation. This forces user companies into either a best-of-breed or a custom-development strategy if they are to implement integrated IT support for their SLM activities. SLM software vendors and internal software development units need to undergo a paradigm shift in order to better reflect the numerous interdependencies and the increasing intertwining within services' lifecycles. The SLM architecture is a first step towards achieving this goal.
    Contents: List of Figures; List of Tables; List of Abbreviations; 1 Introduction; 2 Foundations; 3 Architecture Structure and Strategy Layer; 4 Process Layer; 5 Information Systems Layer; 6 Architecture Application and Extension; 7 Results, Evaluation and Outlook; Appendix; References; Curriculum Vitae; Bibliographic Data

    DEVELOPING A PROJECT MANAGER COMPETENCY MODEL TO BETTER SERVE THE WARFIGHTER AND THE DOD

    Get PDF
    As of today, the Department of Defense (DOD) structures its project management competencies differently from industry. Industry has made advancements in project management that the DOD does not currently take advantage of. By better aligning the DOD and PMI competency standards, the DOD can reduce cost, schedule, and performance issues. Based on previous research on the topic, the current DOD competency model is not sufficient for assessing today's program managers. The purpose of this research is to use the three PMI industry standards to develop a survey tool that better serves the DOD acquisition workforce. We were able to create this survey tool and hope that, by using it, future research teams will be able to effectively gauge how the three PMI standards correlate with the current DOD workload. The information gathered from this research can be useful not only to DOD acquisition communities, but can also set future guidelines for program managers, helping the DOD avoid schedule, cost, and performance issues. Civilian, Department of the Navy. Approved for public release. Distribution is unlimited.

    Organic Service-Level Management in Service-Oriented Environments

    Get PDF
    Dynamic service-oriented environments (SOEs) are characterised by a large number of heterogeneous service components that are expected to support the business as a whole. The present work provides a negotiation-based approach to facilitate automated, multi-level service-level management in an SOE, where each component autonomously arranges its contribution to the overall operational goals. Evaluation experiments have shown increased responsiveness and stability of an SOE in the face of changes.
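
    The abstract leaves the negotiation protocol unspecified; as a rough illustration of components autonomously arranging their contributions to a shared goal, the following minimal sketch distributes a service-level target across bidding components. The ServiceComponent class, the bid/negotiate functions and the proportional-allocation rule are assumptions for illustration, not the author's design.

        # Hypothetical sketch of one negotiation round in which autonomous
        # service components arrange their contributions to a shared goal.
        from dataclasses import dataclass

        @dataclass
        class ServiceComponent:
            name: str
            capacity: float      # throughput the component could offer (req/s)
            load: float          # throughput already committed (req/s)

            def bid(self) -> float:
                """Offer spare capacity; each component decides locally."""
                return max(0.0, self.capacity - self.load)

        def negotiate(components, target: float) -> dict:
            """Distribute a service-level target across components,
            proportional to their bids; returns name -> committed share."""
            bids = {c.name: c.bid() for c in components}
            offered = sum(bids.values())
            if offered == 0:
                return {name: 0.0 for name in bids}
            scale = min(1.0, target / offered)
            return {name: bid * scale for name, bid in bids.items()}

        if __name__ == "__main__":
            soe = [ServiceComponent("auth", 100, 60),
                   ServiceComponent("billing", 80, 20),
                   ServiceComponent("search", 120, 110)]
            print(negotiate(soe, target=90.0))  # auth ~32.7, billing ~49.1, search ~8.2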

    SHARING WITH LIVE MIGRATION ENERGY OPTIMIZATION TASK SCHEDULER FOR CLOUD COMPUTING DATACENTRES

    Get PDF
    The use of cloud computing is expanding, and it is becoming the driver for innovation in companies seeking to serve their customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components is often neglected. Energy consumption should therefore be reduced in a way that minimizes performance losses, achieves the target battery lifetime, satisfies performance requirements, minimizes power consumption and CO2 emissions, maximizes profit, and maximizes resource utilization. Power consumption in cloud computing datacentres can be reduced in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or applying dynamic voltage and frequency scaling. One of the most effective ways to reduce power is a scheduling technique that finds the best task execution order based on user demands, with minimum execution time and cloud resources. Designing an effective and efficient task scheduling technique driven by user requirements is quite a challenge in a cloud environment. Scheduling is not an easy task because a datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient computing resource allocation and power optimization. The scheduler must maintain the balance between Quality of Service and fairness among the jobs so that efficiency may be increased. The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm helps to control and improve the mapping between the datacentre servers and the incoming tasks, and to achieve an optimal deployment of datacentre resources that yields good computing efficiency, minimal network load, and reduced energy consumption in the datacentre. This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre due to bandwidth usage, while minimizing processing time and the system's total makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization, and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two different queues based on the request type and the source file needed to process them; the processing time of each job fluctuates with the job type and the number of instructions per job. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and controls the allocation process to the most appropriate available VM based on job similarity, according to a predefined synchronized job characteristic (SJC) table.
    The SLM scheduler uses a replicated host infrastructure to save the energy wasted by idle hosts: it maximizes the utilization of the basic hosts for as long as the system can handle the workflow, while setting the replicated hosts to off mode. The third SLM algorithm, the dual-fold VM algorithm, divides the active VMs into top- and low-level slots to allocate similar jobs concurrently, which maximizes host utilization at high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among running VMs, and deploys a continuous provisioning scheme that detects overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved by performing a series of dynamic live migrations, which provide auto-scaling for the servers within the datacentres. This thesis begins with an overview of cloud computing, then reviews conceptual cloud resource management strategies with a classification of scheduling heuristics. Following this, a competitive analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; compared with the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, the results show a significant improvement in energy usage and in the total makespan, i.e., the total time needed to finish processing all the tasks.
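
    A minimal sketch may clarify the classification and dispatch steps described above. The queue split, the SJC table structure, and the classify/dispatch functions below are hypothetical illustrations under assumed names, not the thesis's actual CloudSim implementation.

        # Hypothetical sketch of the SLM job-classification and dispatch idea:
        # incoming jobs are split into two queues by type/source file, then
        # dispatched in arrival order to the most similar available VM.
        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Job:
            arrival: float
            name: str = field(compare=False)
            kind: str = field(compare=False)         # e.g. "data" or "compute"
            source_file: str = field(compare=False)

        def classify(jobs):
            """Split jobs into two queues, each ordered by arrival time."""
            data_q, compute_q = [], []
            for job in jobs:
                heapq.heappush(data_q if job.kind == "data" else compute_q, job)
            return data_q, compute_q

        # Assumed stand-in for the synchronized job characteristic (SJC) table:
        # maps (job kind, source file) to the VM class best suited to it.
        SJC = {("data", "logs.csv"): "io-optimized",
               ("compute", "model.bin"): "cpu-optimized"}

        def dispatch(queue, free_vms):
            """Assign each job to an available VM of the class the SJC suggests."""
            schedule = []
            while queue and free_vms:
                job = heapq.heappop(queue)
                wanted = SJC.get((job.kind, job.source_file), "general")
                vm = next((v for v in free_vms if v[1] == wanted), free_vms[0])
                free_vms.remove(vm)
                schedule.append((job.name, vm[0]))
            return schedule

        jobs = [Job(0.0, "j1", "data", "logs.csv"),
                Job(1.0, "j2", "compute", "model.bin")]
        dq, cq = classify(jobs)
        print(dispatch(dq, [("vm1", "io-optimized"), ("vm2", "general")]))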

    Survey on Additive Manufacturing, Cloud 3D Printing and Services

    Full text link
    Cloud Manufacturing (CM) is the concept of using manufacturing resources in a service-oriented way over the Internet. Recent developments in Additive Manufacturing (AM) are making it possible to utilise resources ad hoc as replacements for traditional manufacturing resources in case of spontaneous problems in established manufacturing processes. To be of use in these scenarios, AM resources must follow a strict principle of transparency and service composition, in keeping with the Cloud Computing (CC) paradigm. With this review we provide an overview of CM, AM and related domains, and present the historical development of scientific research in these fields, starting from 2002. Part of this work is also a meta-review of the domain to further detail its development and structure.

    Improvement of Data-Intensive Applications Running on Cloud Computing Clusters

    Get PDF
    MapReduce, designed by Google, is the most widely used distributed programming model in cloud environments. Hadoop, an open-source implementation of MapReduce, is a data management framework that runs on large clusters of commodity machines to handle data-intensive applications. Many prominent enterprises, including Facebook, Twitter, and Adobe, have been using Hadoop for their data-intensive processing needs. Task stragglers in MapReduce jobs dramatically impede job execution on massive datasets in cloud computing systems. This impedance is due to the uneven distribution of input data and computation load among cluster nodes, heterogeneous data nodes, data skew in the reduce phase, resource contention, and network configurations. All of these can cause delays, failures, and violations of job completion time targets. One of the key issues that can significantly affect the performance of cloud computing is computation load balancing among cluster nodes. Replica placement in the Hadoop distributed file system plays a significant role in data availability and the balanced utilization of clusters. Under the current replica placement policy (RPP) of the Hadoop distributed file system (HDFS), replicas of data blocks cannot be evenly distributed across a cluster's nodes, so HDFS must rely on a load balancing utility to balance the distribution of replicas, which incurs extra overhead in time and resources. This dissertation addresses the data load balancing problem and presents an innovative replica placement policy for HDFS that evenly balances the data load among a cluster's nodes. The heterogeneity of cluster nodes exacerbates the issue of computational load balancing; therefore, another replica placement algorithm is proposed in this dissertation for heterogeneous cluster environments. The timing of identifying a straggler map task is very important for straggler mitigation in data-intensive cloud computing. To mitigate straggler map tasks, a Present-progress and Feedback-based Speculative Execution (PFSE) algorithm is proposed in this dissertation. PFSE is a new straggler identification scheme that identifies straggler map tasks based on feedback information received from completed tasks alongside the progress of the currently running task. Straggler reduce tasks aggravate violations of MapReduce job completion time and are typically the result of bad data partitioning during the reduce phase: the hash partitioner employed by Hadoop may cause intermediate data skew, which results in straggler reduce tasks. In this dissertation a new partitioning scheme, named Balanced Data Clusters Partitioner (BDCP), is proposed to mitigate straggler reduce tasks. BDCP is based on sampling the input data and on feedback information about the currently processing task; it assists in straggler mitigation during the reduce phase and minimizes job completion time in MapReduce jobs. The results of extensive experiments corroborate that the algorithms and policies proposed in this dissertation improve the performance of data-intensive applications running on cloud platforms.
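
    The abstract does not give PFSE's exact scoring rule; the sketch below illustrates the general progress-rate idea behind speculative straggler detection, estimating a running task's finish time from its observed progress and comparing it against a statistic fed back from completed tasks. The function names and the 1.5x threshold are assumptions.

        # Hypothetical sketch of progress-rate-based straggler detection, in
        # the spirit of speculative execution: a running task is flagged when
        # its projected finish time is far beyond what completed tasks suggest.
        from statistics import median

        def estimated_remaining(progress: float, elapsed: float) -> float:
            """Extrapolate time left from the task's own progress rate."""
            if progress <= 0.0:
                return float("inf")
            rate = progress / elapsed          # fraction completed per second
            return (1.0 - progress) / rate

        def is_straggler(progress, elapsed, completed_durations, factor=1.5):
            """Flag a task whose projected total time exceeds `factor` times
            the median duration fed back from already-completed tasks."""
            projected_total = elapsed + estimated_remaining(progress, elapsed)
            return projected_total > factor * median(completed_durations)

        # Usage: a task 30% done after 60s, where peers finished in ~70s,
        # projects to 200s total and would be flagged for speculation.
        print(is_straggler(0.30, 60.0, [65.0, 70.0, 75.0]))  # True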

    Cyber Supply Chain Risks in Cloud Computing - Bridging the Risk Assessment Gap

    Get PDF
    Cloud computing represents a significant paradigm shift in the delivery of information technology (IT) services. The rapid growth of the cloud and the increasing security concerns associated with the delivery of cloud services have led many researchers to study cloud risks and risk assessments. Some of these studies highlight the inability of current risk assessments to cope with the dynamic nature of the cloud, a gap we believe results from a lack of consideration of the inherent risk of the supply chain. This paper therefore describes the cloud supply chain and investigates the effect of supply chain transparency on conducting a comprehensive risk assessment. We conducted an industry survey to gauge stakeholder awareness of supply chain risks, seeking to find out which risk assessment methods are commonly used, which factors hinder a comprehensive evaluation, and how the current state of the art can be improved. Analysis of the survey dataset showed that the popular qualitative assessment methods lack the flexibility to cope with the risks associated with the dynamic supply chain of cloud services, which is typically made up of an average of eight suppliers. To address these gaps, we propose the Cloud Supply Chain Cyber Risk Assessment (CSCCRA) model, a quantitative risk assessment model supported by decision support analysis and supply chain mapping in the identification, analysis and evaluation of cloud risks.
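
    The abstract does not disclose CSCCRA's actual calculations; the sketch below shows one generic way a quantitative, supplier-aware risk score could be aggregated over a mapped supply chain. The Supplier fields, the independence assumption, and the expected-loss formula are illustrative assumptions, not the paper's model.

        # Hypothetical sketch of quantitative risk aggregation over a cloud
        # supply chain: each supplier contributes a breach likelihood, and
        # the chain-level exposure combines likelihoods with breach impact.
        from dataclasses import dataclass

        @dataclass
        class Supplier:
            name: str
            breach_likelihood: float  # annual probability estimate, 0..1
            impact: float             # expected loss if breached, in USD

        def chain_breach_probability(suppliers) -> float:
            """Probability that at least one supplier in the chain is
            breached, assuming independent suppliers (a simplification)."""
            p_safe = 1.0
            for s in suppliers:
                p_safe *= (1.0 - s.breach_likelihood)
            return 1.0 - p_safe

        def annualized_exposure(suppliers) -> float:
            """Sum of per-supplier expected losses (likelihood x impact)."""
            return sum(s.breach_likelihood * s.impact for s in suppliers)

        chain = [Supplier("IaaS provider", 0.02, 5_000_000),
                 Supplier("payment gateway", 0.05, 2_000_000),
                 Supplier("logging SaaS", 0.10, 250_000)]
        print(chain_breach_probability(chain))  # ~0.162
        print(annualized_exposure(chain))       # 225000.0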

    An investigation into specifying service level agreements for provisioning cloud computing services

    Get PDF
    Within the U.S. Department of Defense (DoD), service level agreements are a widely used tool for acquiring enterprise-level information technology (IT) resources. In order to contain, if not reduce, the total cost of ownership of IT resources to the enterprise, the DoD has undertaken outsourcing its IT needs to Cloud service providers. In this thesis, we explore how service level agreements are specified for non-Cloud-based services, and then determine how to tailor those practices to specifying service level agreements for Cloud-based service provision, with a focus on end-to-end management of service provisioning.
    http://archive.org/details/aninvestigationi1094527852
    Civilian, United States Navy SPAWAR SSC Pacific. Approved for public release; distribution is unlimited.
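
    The thesis abstract does not include a concrete SLA format; as a minimal sketch, the structure below captures the kinds of measurable terms an SLA for cloud provisioning typically pins down, together with a simple compliance check. All field names and target values are illustrative assumptions.

        # Hypothetical sketch of a machine-readable SLA specification for a
        # cloud service: measurable objectives plus a compliance check.
        from dataclasses import dataclass

        @dataclass
        class ServiceLevelObjective:
            metric: str        # e.g. "availability_pct", "p95_latency_ms"
            target: float      # threshold the provider commits to
            higher_is_better: bool

            def met(self, observed: float) -> bool:
                if self.higher_is_better:
                    return observed >= self.target
                return observed <= self.target

        sla = [ServiceLevelObjective("availability_pct", 99.9, True),
               ServiceLevelObjective("p95_latency_ms", 200.0, False),
               ServiceLevelObjective("support_response_hours", 4.0, False)]

        observed = {"availability_pct": 99.95,
                    "p95_latency_ms": 250.0,
                    "support_response_hours": 3.0}

        for slo in sla:
            status = "OK" if slo.met(observed[slo.metric]) else "BREACH"
            print(slo.metric, status)   # p95_latency_ms would read BREACH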