
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Toward sustainable data centers: a comprehensive energy management strategy

    Data centers are major contributors to the emission of carbon dioxide to the atmosphere, and this contribution is expected to increase in the following years. This has encouraged the development of techniques to reduce the energy consumption and the environmental footprint of data centers. Whereas some of these techniques have succeeded in reducing the energy consumption of the hardware equipment of data centers (including IT, cooling, and power supply systems), we claim that sustainable data centers will only be possible if the problem is faced by means of a holistic approach that includes not only the aforementioned techniques but also intelligent and unifying solutions that enable a synergistic and energy-aware management of data centers. In this paper, we propose a comprehensive strategy to reduce the carbon footprint of data centers that uses energy as a driver of their management procedures. In addition, we present a holistic management architecture for sustainable data centers that implements the aforementioned strategy, and we propose design guidelines to accomplish each step of the proposed strategy, referring to related achievements and enumerating the main challenges that still must be solved.

    ATM automation: guidance on human technology integration

    © Civil Aviation Authority 2016. Human interaction with technology and automation is a key area of interest to industry and safety regulators alike. In February 2014, a joint CAA/industry workshop considered perspectives on present and future implementation of advanced automated systems. The conclusion was that, whilst no additional regulation was necessary, guidance material for industry and regulators was required. Development of this guidance document was completed in 2015 by a working group consisting of the CAA, UK industry, academia and industry associations (see Appendix B). This enabled a collaborative approach to be taken, and for regulatory, industry, and workforce perspectives to be collectively considered and addressed. The processes used in developing this guidance included: review of the themes identified from the February 2014 CAA/industry workshop; review of academic papers, textbooks on automation, and incidents and accidents involving automation; identification of key safety issues associated with automated systems; analysis of current and emerging ATM regulatory requirements and guidance material; and presentation of emerging findings for critical review at UK and European aviation safety conferences. In December 2015, a workshop of senior management from project partner organisations reviewed the findings and proposals. EASA were briefed on the project before its commencement, and Eurocontrol contributed through membership of the Working Group.

    Space Station Human Factors Research Review. Volume 4: Inhouse Advanced Development and Research

    A variety of human factors studies related to space station design are presented. Subjects include proximity operations and window design, spatial perceptual issues regarding displays, image management, workload research, spatial cognition, virtual interfaces, fault diagnosis in orbital refueling, and error tolerance and procedure aids.

    DYVERSE: DYnamic VERtical Scaling in Multi-tenant Edge Environments

    Multi-tenancy in resource-constrained environments is a key challenge in Edge computing. In this paper, we develop DYVERSE (DYnamic VERtical Scaling in Edge environments), the first light-weight and dynamic vertical scaling mechanism for managing the resources allocated to applications in order to facilitate multi-tenancy in Edge environments. To enable dynamic vertical scaling, one static and three dynamic priority management approaches, which are workload-aware, community-aware and system-aware respectively, are proposed. This research advocates that dynamic vertical scaling and priority management approaches reduce Service Level Objective (SLO) violation rates. An online game and a face detection workload in a Cloud-Edge test-bed are used to validate the research. A merit of DYVERSE is that it incurs only a sub-second overhead per Edge server when 32 Edge servers are deployed on a single Edge node. When compared to executing applications on the Edge servers without dynamic vertical scaling, static priorities and dynamic priorities reduce the SLO violation rates of requests by up to 4% and 12% for the online game, respectively, and in both cases by 6% for the face detection workload. Moreover, for both workloads, the system-aware dynamic vertical scaling method effectively reduces the latency of non-violated requests when compared to the other methods.
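
    The abstract describes DYVERSE only at a high level. As a purely illustrative aid, the Python sketch below shows one way a priority-driven vertical scaler for multi-tenant Edge servers could be structured; the Tenant class, the scaling step, and the policy of reclaiming CPU from compliant low-priority tenants are assumptions made for demonstration, not the DYVERSE implementation.

```python
# Illustrative sketch only: a simplified priority-driven vertical scaler for
# multi-tenant Edge servers, loosely inspired by the ideas in the abstract.
# Class names, thresholds, and the scaling policy are assumptions.
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    priority: float      # higher value = scaled up first
    cpu_share: float     # currently allocated CPU fraction
    latency_ms: float    # observed response latency
    slo_ms: float        # latency target (Service Level Objective)

def rebalance(tenants, step=0.05, cpu_budget=1.0, floor=0.05):
    """One scaling round: scale down low-priority tenants that meet their SLO
    and scale up high-priority tenants that violate theirs, within the budget."""
    # Reclaim a small slice from tenants that comfortably meet their SLO.
    for t in sorted(tenants, key=lambda t: t.priority):
        if t.latency_ms <= t.slo_ms and t.cpu_share - step >= floor:
            t.cpu_share -= step
    # Hand out spare capacity to violating tenants, highest priority first.
    spare = cpu_budget - sum(t.cpu_share for t in tenants)
    for t in sorted(tenants, key=lambda t: t.priority, reverse=True):
        if t.latency_ms > t.slo_ms and spare >= step:
            t.cpu_share += step
            spare -= step
    return tenants
```

    Calling rebalance(tenants) once per monitoring interval would mimic the kind of dynamic, priority-aware adjustment of per-application resources that the abstract describes.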

    Energy Efficiency in the ICT - Profiling Power Consumption in Desktop Computer Systems

    Energy awareness in ICT has become an important issue. Focusing on software, recent work suggested the existence of a relationship between power consumption, software configuration and usage patterns in computer systems. The aim of this work was to collect and analyse power consumption data of general-purpose computer systems, simulating common usage scenarios, in order to extract a power consumption profile for each scenario. We selected two desktop systems of different generations as test machines. We then developed 11 usage scenarios and conducted several test runs of them, collecting power consumption data by means of a power meter. Our analysis resulted in an estimate of the power consumption for each scenario and software application used, showing that each single scenario introduced an overhead of 2 to 11 Watts, which corresponds to a percentage increase that can reach 20% on recent and more powerful systems. We determined that software and its usage patterns consistently impact the power consumption of computer systems. Further work will be devoted to evaluating how power consumption is affected by the usage of specific system resources.
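
    For readers who want to reproduce this kind of profiling, the short sketch below estimates the per-scenario power overhead over an idle baseline from power-meter samples; the sample readings and scenario name are invented for illustration and are not the paper's data.

```python
# Illustrative sketch: estimating the per-scenario power overhead relative to
# an idle baseline from power-meter samples (in watts). The readings below are
# made-up demonstration values, not measurements from the paper.
from statistics import mean

def scenario_overhead(idle_samples, scenario_samples):
    """Return (overhead in watts, percentage increase over idle)."""
    idle_w = mean(idle_samples)
    scenario_w = mean(scenario_samples)
    overhead_w = scenario_w - idle_w
    return overhead_w, 100.0 * overhead_w / idle_w

idle = [34.8, 35.1, 35.0, 34.9]          # meter readings at idle
web_browsing = [40.2, 41.0, 40.6, 40.8]  # readings during one usage scenario
watts, pct = scenario_overhead(idle, web_browsing)
print(f"overhead: {watts:.1f} W ({pct:.1f}% over idle)")
```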

    Adaptive microservice scaling for elastic applications


    Towards Identifying Performance Anomalies

    Large-scale software systems (LSSs) are composed of hundreds of subsystems that interact with each other in unforeseen and complex ways. The operators of these LSSs strictly monitor thousands of metrics (performance counters) to quickly identify performance anomalies before a catastrophe. Existing monitoring tools and methodologies have not kept pace with the rapid growth and inherent complexity of these LSSs, and hence are ineffective in assisting practitioners to effectively pinpoint performance anomalies. We propose a methodology that uses entropy analysis to assist practitioners/operators of LSSs in quickly detecting underlying anomalies in the system. Our performance tests conducted on an open source benchmark system reveal that the proposed methodology is robust in pinpointing anomalies, does not require any domain knowledge to operate, and avoids information overload on practitioners.
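
    As a rough illustration of entropy-based anomaly detection on a performance counter, the sketch below bins a metric's values per window, computes the Shannon entropy of each window, and flags windows whose entropy is a statistical outlier. This is a generic detector written for explanation, not the paper's methodology; the window size, bin count, and z-score threshold are assumptions.

```python
# Illustrative sketch: flag windows of a performance counter whose Shannon
# entropy deviates sharply from the norm. Parameters are assumptions.
import math
from collections import Counter
from statistics import mean, stdev

def shannon_entropy(values, bins=10):
    """Shannon entropy (bits) of the binned value distribution."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def anomalous_windows(metric, window=60, threshold=3.0):
    """Return indices of windows whose entropy is a >threshold-sigma outlier."""
    entropies = [shannon_entropy(metric[i:i + window])
                 for i in range(0, len(metric) - window + 1, window)]
    mu, sigma = mean(entropies), stdev(entropies)
    return [i for i, h in enumerate(entropies)
            if sigma > 0 and abs(h - mu) / sigma > threshold]
```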