
    Variability in Behavior of Application Service Workload in a Utility Cloud

    Using the elasticity feature of a utility cloud, users can acquire and release resources as required and pay for what they use. Applications with time-varying workloads can request variable resources over time, which makes the cloud a convenient option for them. The elasticity in current IaaS clouds mainly offers users two options: horizontal and vertical scaling. In both forms of scaling the basic resource allocation unit is a fixed-sized VM, which forces cloud users to characterize their workload in terms of VM sizes and can lead to under-utilization or over-allocation of resources. This turns out to be an inefficient model for both cloud users and providers. In this paper we discuss and calculate the variability in different kinds of application service workload. We also discuss dynamic provisioning approaches proposed by other researchers. We conclude with a brief overview of the issues and limitations of existing solutions and our approach to resolving them in a way that is suitable and economical for both cloud users and providers.
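    To make the discussion of workload variability and fixed-sized VM allocation concrete, the following minimal Python sketch (our illustration with assumed numbers, not the paper's method or data) computes the coefficient of variation of a request-rate trace and shows how poorly a static, peak-sized VM allocation is utilized on average.

```python
# Illustrative sketch (assumed numbers): quantify workload variability and the
# over-allocation caused by provisioning fixed-sized VMs for the peak load.
import statistics

# Hypothetical hourly request rates (requests/second) for one application service.
workload = [120, 95, 80, 60, 45, 70, 150, 300, 420, 510, 480, 390,
            350, 330, 360, 410, 470, 520, 440, 310, 240, 190, 160, 130]

mean_load = statistics.mean(workload)
cov = statistics.pstdev(workload) / mean_load   # coefficient of variation as a simple variability measure

VM_CAPACITY = 200                               # assumed requests/second one VM can serve
peak_vms = -(-max(workload) // VM_CAPACITY)     # ceiling division: VMs sized for the peak

# Average utilization if the peak-sized allocation is kept all day.
utilization = mean_load / (peak_vms * VM_CAPACITY)

print(f"coefficient of variation: {cov:.2f}")
print(f"VMs needed for peak load: {peak_vms}")
print(f"average utilization with a static peak allocation: {utilization:.0%}")
```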

    Performance Analysis of Legacy Perl Software via Batch and Interactive Trace Visualization

    Performing an analysis of established software is usually challenging. Based on reverse engineering through dynamic analysis, it is possible to perform a software performance analysis in order to detect performance bottlenecks or issues. This process is often divided into two consecutive tasks: the first concerns monitoring the legacy software, and the second covers analysing and visualizing the results. Dynamic analysis is usually addressed via trace visualization, but finding an appropriate representation for a specific issue remains a great challenge. In this paper we report on our performance analysis of the Perl-based open repository software EPrints, which has now been continuously developed for more than fifteen years. We analyse and evaluate the software using the Kieker monitoring framework, and apply and combine two visualization tools, namely Graphviz and Gephi. More precisely, we employ Kieker to reconstruct architectural models from recorded monitoring data, based on dynamic analysis, and Graphviz and Gephi for further analysis and visualization of our monitoring results. Through our instrumentation and analysis with Kieker and the combined visualization of the two aforementioned tools, we acquired knowledge of the software. This allowed us, in collaboration with the EPrints development team, to reverse engineer their software EPrints, to gain new and unexpected insights, and to detect potential bottlenecks.
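    As a rough illustration of the visualization step described above (not Kieker's actual output format or API), the following Python sketch turns hypothetical caller/callee pairs recorded during monitoring into a Graphviz DOT file that can then be rendered or imported into Gephi.

```python
# Hypothetical sketch: aggregate recorded caller/callee pairs into a weighted
# call graph and emit it as Graphviz DOT. Trace contents are made up.
from collections import Counter

calls = [
    ("EPrints::Repository::new", "EPrints::Config::load"),
    ("EPrints::Repository::new", "EPrints::Database::connect"),
    ("EPrints::Search::execute", "EPrints::Database::query"),
    ("EPrints::Search::execute", "EPrints::Database::query"),
]

edge_counts = Counter(calls)

with open("callgraph.dot", "w") as dot:
    dot.write("digraph calls {\n")
    for (caller, callee), count in edge_counts.items():
        dot.write(f'  "{caller}" -> "{callee}" [label="{count}"];\n')
    dot.write("}\n")

# Render afterwards, e.g.: dot -Tpdf callgraph.dot -o callgraph.pdf
```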

    A Threat Model for Vehicular Fog Computing

    Vehicular Fog Computing (VFC) facilitates the deployment of distributed, latency-aware services residing between smart vehicles and cloud services. However, VFC systems are exposed to manifold security threats, putting human life at risk. Knowledge of such threats is scattered and lacks empirical validation. We performed an extensive threat assessment by reviewing the literature and conducting expert interviews, leading to a comprehensive threat model with 33 attacks and example security mitigation strategies, among other results. We thereby synthesize and extend prior research, provide rich descriptions of threats, and raise awareness of physical attacks that underline the importance of the cyber-physical manifestation of VFC.

    iObserve: Integrated Observation and Modeling Techniques to Support Adaptation and Evolution of Software Systems

    The goal of iObserve is to develop methods and tools to support the evolution and adaptation of long-lived software systems. Future long-living software systems will be engineered using third-party software services and infrastructures. Key challenges for such systems will be caused by dynamic changes of deployment options on cloud platforms. Third-party services and infrastructures are neither owned nor controlled by the users and developers of service-based systems. System users and developers are thus only able to observe third-party services and infrastructures via their interfaces, but cannot look into the software and infrastructure that provides those services. In this technical report, we summarize the results of four activities to realize complete tooling around Kieker, Palladio, and MAMBA, supporting performance and cost prediction as well as the evaluation of data privacy in the context of geo-locations. Furthermore, the report illustrates our efforts to extend Palladio.

    Stigmergic interoperability for autonomic systems: Managing complex interactions in multi-manager scenarios

    The success of autonomic computing has led to its popular use in many application domains, resulting in scenarios where multiple autonomic managers (AMs) coexist, but without adequate support for interoperability. This is evident, for example, in the increasing number of large datacentres with multiple, independently designed managers. The increase in scale and size, coupled with the heterogeneity of services and platforms, means that more AMs could be integrated to manage the arising complexity. This has led to the need for interoperability between AMs. Interoperability deals with how to manage multi-manager scenarios, how to govern the complex coexistence of managers, and how to arbitrate when conflicts arise. This paper presents an architecture-based stigmergic interoperability solution. The solution is based on the Trustworthy Autonomic Architecture (TAArch) and uses stigmergy (indirect communication via the operating environment) to achieve indirect coordination among coexisting agents. Usually, in stigmergy-based coordination, agents may be aware of the existence of other agents. In the approach presented herein, agents (autonomic managers) do not need to be aware of the existence of others. Their design assumes that they are operating in 'isolation' and they simply respond to changes in the environment. Experimental results with a datacentre multi-manager scenario are used to analyse the proposed approach.
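    To illustrate the stigmergic coordination idea (a minimal sketch under assumed values, not the TAArch implementation), the following Python snippet shows several autonomic managers that never communicate directly: each one only observes a shared environment variable and reacts to it in isolation.

```python
# Minimal illustrative sketch of stigmergy-based coordination: managers share
# no messages and are unaware of each other; they only read and modify the
# environment they operate in. All values are assumptions.
import random

environment = {"load": 0.9}          # hypothetical normalized datacentre load

class AutonomicManager:
    def __init__(self, name, target_load=0.6):
        self.name = name
        self.target_load = target_load

    def step(self, env):
        # React solely to the observed environment state.
        delta = -0.05 if env["load"] > self.target_load else 0.02
        env["load"] = max(0.0, min(1.0, env["load"] + delta + random.uniform(-0.01, 0.01)))

managers = [AutonomicManager(f"AM{i}") for i in range(3)]
for tick in range(20):
    for manager in managers:
        manager.step(environment)    # each manager acts as if it were alone
print(f"load after indirect coordination: {environment['load']:.2f}")
```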

    Dynamic electricity pricing - Which programs do consumers prefer?


    Efficient Use of Human-robot Collaboration in Packaging through Systematic Task Assignment

    The ageing workforce in Germany is a major challenge for many companies in the assembly and packaging of high-quality products. Particularly when individual processes require increased force or precision, employees can be overstressed over long periods, depending on their physical constitution. One way of supporting employees in these processes is human-robot collaboration, because stressful process steps can be automated in a targeted manner; with conventional automation this is currently not economically feasible for many processes, as human capabilities are required. In order to achieve balanced, partnership-based cooperation, to exploit additional potential, and to respect restrictions such as process times, a good division of tasks between human and machine must be ensured. The methodical allocation procedure presented in this paper is based on the process planner recreating the process from basic process modules. These modules are then assigned according to the respective capabilities of human and robot and the underlying process requirements, while company-specific target parameters, such as an improvement in ergonomics, are taken into account. The assignment procedure is described in a practical use case in the packaging of high-quality electronic consumer goods, which also demonstrates the applicability of the approach. For this purpose, the parameters and requirements of the initial and resulting state of the workplace are described, and the procedure and the decisions of the approach are shown with regard to the achievable goals.
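    The following Python sketch illustrates, with assumed capability data and weights rather than the paper's actual procedure, how basic process modules could be assigned to human or robot by comparing capability limits and a company-specific ergonomics target parameter.

```python
# Illustrative sketch (assumed data and cost model): assign basic process
# modules to human or robot based on required force/precision and an
# ergonomics weighting chosen as a company-specific target parameter.
modules = [
    {"name": "pick component",    "force": 0.2, "precision": 0.4, "cycle_time": 3.0},
    {"name": "press-fit part",    "force": 0.9, "precision": 0.6, "cycle_time": 5.0},
    {"name": "visual inspection", "force": 0.1, "precision": 0.9, "cycle_time": 4.0},
]

ROBOT_FORCE_LIMIT = 1.0        # assumed robot capability bounds (normalized)
ROBOT_PRECISION_LIMIT = 0.7
ERGONOMICS_WEIGHT = 2.0        # penalty for physically straining the worker

def assign(module):
    # Human "cost" grows with physical strain; the robot is ruled out if the
    # task exceeds its capabilities, otherwise its cost is the cycle time.
    human_cost = ERGONOMICS_WEIGHT * module["force"] + module["cycle_time"]
    robot_feasible = (module["force"] <= ROBOT_FORCE_LIMIT
                      and module["precision"] <= ROBOT_PRECISION_LIMIT)
    robot_cost = module["cycle_time"] if robot_feasible else float("inf")
    return "robot" if robot_cost < human_cost else "human"

for m in modules:
    print(f'{m["name"]}: assigned to {assign(m)}')
```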

    Virtualization in the Private Cloud: State of the Practice

    Virtualization has become a mainstream technology that allows efficient and safe resource sharing in data centers. In this paper, we present a large-scale workload characterization study of 90K virtual machines hosted on 8K physical servers across several geographically distributed corporate data centers of a major service provider. The study covers 19 days of operation and focuses on the state of the practice, i.e., how virtual machines are deployed across different physical resources, with an emphasis on processors and memory: resource sharing and usage of physical resources, virtual machine life cycles, and migration patterns and their frequencies. This paper illustrates that there is indeed a strong tendency to over-provision CPU and memory resources, while certain virtualization features (e.g., migration and collocation) are used rather conservatively, showing that there is significant room for the development of policies that aim to reduce operational costs in data centers.
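    As a simple illustration of the over-provisioning finding (made-up numbers, not the study's data), the following Python sketch compares the CPU and memory allocated to a VM with what it actually used on average.

```python
# Illustrative sketch (assumed values): ratio of allocated to actually used
# CPU and memory per VM; ratios well above 1 indicate over-provisioning.
vms = [
    {"name": "vm-a", "vcpus": 8, "mem_gb": 32, "avg_cpus_used": 1.2, "avg_mem_used_gb": 6.0},
    {"name": "vm-b", "vcpus": 4, "mem_gb": 16, "avg_cpus_used": 0.5, "avg_mem_used_gb": 3.5},
]

for vm in vms:
    cpu_ratio = vm["vcpus"] / vm["avg_cpus_used"]
    mem_ratio = vm["mem_gb"] / vm["avg_mem_used_gb"]
    print(f'{vm["name"]}: CPU over-provisioned {cpu_ratio:.1f}x, '
          f'memory over-provisioned {mem_ratio:.1f}x')
```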