Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rise in importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to the development (Dev). However, so far, the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
cross-community collaboration, and to set the path for long-lasting
collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Autonomic Cloud Computing: Open Challenges and Architectural Elements
As Clouds are complex, large-scale, and heterogeneous distributed systems,
management of their resources is a challenging task. They need automated and
integrated intelligent strategies for provisioning of resources to offer
services that are secure, reliable, and cost-efficient. Hence, effective
management of services becomes fundamental in software platforms that
constitute the fabric of computing Clouds. In this direction, this paper
identifies open issues in autonomic resource provisioning and presents
innovative management techniques for supporting SaaS applications hosted on
Clouds. We present a conceptual architecture and early results evidencing the
benefits of autonomic management of Clouds.
Comment: 8 pages, 6 figures, conference keynote paper
Fairness-aware scheduling on single-ISA heterogeneous multi-cores
Single-ISA heterogeneous multi-cores consisting of small (e.g., in-order) and big (e.g., out-of-order) cores dramatically improve energy- and power-efficiency by scheduling workloads on the most appropriate core type. A significant body of recent work has focused on improving system throughput through scheduling. However, none of the prior work has looked into fairness. Yet, guaranteeing that all threads make equal progress on heterogeneous multi-cores is of utmost importance for both multi-threaded and multi-program workloads to improve performance and quality-of-service. Furthermore, modern operating systems affinitize workloads to cores (pinned scheduling), which dramatically affects fairness on heterogeneous multi-cores. In this paper, we propose fairness-aware scheduling for single-ISA heterogeneous multi-cores, and explore two flavors for doing so. Equal-time scheduling runs each thread or workload on each core type for an equal fraction of the time, whereas equal-progress scheduling strives to get equal amounts of work done on each core type. Our experimental results demonstrate an average 14% (and up to 25%) performance improvement over pinned scheduling through fairness-aware scheduling for homogeneous multi-threaded workloads; equal-progress scheduling improves performance by 32% on average for heterogeneous multi-threaded workloads. Further, we report dramatic improvements in fairness over prior scheduling proposals for multi-program workloads, while achieving system throughput comparable to throughput-optimized scheduling, and an average 21% improvement in throughput over pinned scheduling.
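The two scheduling flavors described above can be illustrated with a toy simulation. The sketch below is a hypothetical illustration, not the paper's implementation: the core speedup values and quantum counts are invented. It shows equal-progress scheduling, where each quantum the thread with the least accumulated progress runs on the single big core, so all threads converge to equal progress.

```python
# Toy sketch of equal-progress scheduling on a single-ISA heterogeneous
# multi-core with one big core and several small cores. Each scheduling
# quantum, the thread with the least accumulated progress runs on the big
# core; all other threads run on small cores. Speedup values are invented.

def equal_progress_schedule(num_threads, quanta, big_speedup=2.0, small_speedup=1.0):
    """Return per-thread progress after `quanta` scheduling quanta."""
    progress = [0.0] * num_threads
    for _ in range(quanta):
        # The laggard (least progress so far) gets the big core this quantum.
        laggard = min(range(num_threads), key=lambda t: progress[t])
        for t in range(num_threads):
            progress[t] += big_speedup if t == laggard else small_speedup
    return progress

final = equal_progress_schedule(num_threads=4, quanta=100)
print(final)  # all four threads end up with identical progress
```

Equal-time scheduling would instead rotate the big core among threads in a fixed round-robin, regardless of how much progress each thread has actually made.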
Building information modeling for the retrofitting of existing buildings. A case study at the University of Cagliari.
Italy's very large building stock has become the major field for real estate investments and for the related projects and actions. The urgency of working on the built environment, however, faces some crucial issues. The first is the lack of documentation on the construction history and on the real constructive layout of existing buildings (in terms of components, installations, plants, etc.). The second is the poor activity in surveying their current status, with reference to use (energy behaviour, real consumption, etc.) and maintenance (conservation status, previous maintenance works, compliance with current regulations, etc.). These obstacles cause deep inefficiency in the planning, programming and controlling of requalification and/or refunctionalisation works. Starting from these assumptions, this paper shows the findings of a research project shared by the Politecnico di Milano and the Department of Civil and Environmental Engineering and Architecture of the University of Cagliari. It is aimed at testing the use of building information modeling (BIM) to structure the knowledge necessary to evaluate intervention scenarios. The research is focused on the Mandolesi Pavilion of the University of Cagliari, designed by Enrico Mandolesi. It is a highly stimulating architectural object because it incorporates values that require a conservative approach but, at the same time, like most contemporary buildings, it was designed and built for innovation and not for “long duration”. The work has led to the realization of a BIM model of the case study. It represents the first prefiguration of an approach that develops from construction history and continues with advanced diagnostics on the structural and energy performance of the building. The model formalizes knowledge and information on a significant building, aimed at its management. It also allows the definition of intervention scenarios that can be evaluated with real-time simulations of cost, time and ROI.
Optimizing CMS build infrastructure via Apache Mesos
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC)
at CERN consists of 6M lines of in-house code, developed over a decade by
nearly 1000 physicists, as well as a comparable amount of general use
open-source code. A critical ingredient to the success of the construction and
early operation of the WLCG was the convergence, around the year 2000, on the
use of a homogeneous environment of commodity x86-64 processors and Linux.
Apache Mesos is a cluster manager that provides efficient resource isolation
and sharing across distributed applications, or frameworks. It can run Hadoop,
Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of
nodes. We present how we migrated our continuous integration system to schedule
jobs on a relatively small Apache Mesos-enabled cluster and how this resulted
in better resource usage, higher peak performance and lower latency thanks to
the dynamic scheduling capabilities of Mesos.
Comment: Submitted to proceedings of the 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa, Japan
SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions
Cloud computing systems promise to offer subscription-oriented,
enterprise-quality computing services to users worldwide. With the increased
demand for delivering services to a large number of users, they need to offer
differentiated services to users and meet their quality expectations. Existing
resource management systems in data centers are yet to support Service Level
Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to
realize cloud computing and utility computing. In addition, no work has been
done to collectively incorporate customer-driven service management,
computational risk management, and autonomic resource management into a
market-based resource management system to target the rapidly changing
enterprise requirements of Cloud computing. This paper presents vision,
challenges, and architectural elements of SLA-oriented resource management. The
proposed architecture supports integration of market-based provisioning policies
and virtualisation technologies for flexible allocation of resources to
applications. The performance results obtained from our working prototype
system show the feasibility and effectiveness of SLA-based resource
provisioning in Clouds.
Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE International Conference on Cloud and Service Computing (CSC 2011, IEEE Press, USA), Hong Kong, China, December 12-14, 2011
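As a hypothetical illustration of SLA-oriented provisioning (not the paper's architecture; the request names, penalty model, and greedy policy below are invented for the example), consider a provisioner that, when capacity is scarce, admits requests in order of the SLA penalty that would be incurred per core if they were rejected:

```python
# Invented sketch: a minimal SLA-oriented provisioner. When core capacity
# is scarce, requests are admitted greedily by SLA penalty per core, so
# the most costly SLA violations are avoided first.
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    cores: int          # cores the request needs
    sla_penalty: float  # cost of violating this request's SLA

def provision(requests, capacity):
    """Greedily admit requests with the highest penalty-per-core first."""
    admitted = []
    for r in sorted(requests, key=lambda r: r.sla_penalty / r.cores, reverse=True):
        if r.cores <= capacity:
            admitted.append(r.name)
            capacity -= r.cores
    return admitted

reqs = [Request("gold", 4, 100.0),
        Request("silver", 4, 40.0),
        Request("bronze", 2, 5.0)]
print(provision(reqs, capacity=6))  # -> ['gold', 'bronze']; silver is deferred
```

A market-based system along the lines the abstract describes would derive the penalty values from negotiated SLAs rather than hard-coding them, but the admission trade-off is the same.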
Supporting decision-making in the building life-cycle using linked building data
Interoperability is a long-standing challenge in the domain of architecture, engineering and construction (AEC). Diverse approaches have already been presented for addressing this challenge. This article looks into the possibility of addressing the interoperability challenge in the building life-cycle with a linked data approach. An outline is given of how linked data technologies tend to be deployed, thereby working towards a “more holistic” perspective on the building, or towards a large-scale web of “linked building data”. From this overview, and the associated use case scenarios, we conclude that the interoperability challenge cannot be “solved” using linked data technologies, but that it can be addressed. In other words, information exchange and management can be improved, but a pragmatic usage of technologies is still required in practice. Finally, we give an initial outline of some anticipated use cases in the building life-cycle in which the usage of linked data technologies may generate advantages over existing technologies and methods.
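As a hypothetical illustration of the linked-data idea (all names here are invented, not taken from the article), building facts from different sources can be merged as subject-predicate-object triples and queried uniformly; the toy pattern matcher below stands in for a real RDF store and a SPARQL engine:

```python
# Invented sketch: building facts as subject-predicate-object triples,
# mimicking the RDF data model used by linked building data. A real
# deployment would use an RDF store and SPARQL instead of this matcher.

triples = {
    ("Room101", "partOf", "FloorOne"),
    ("Room101", "hasArea", 24.5),
    ("Room102", "partOf", "FloorOne"),
    ("Room102", "hasArea", 18.0),
    ("FloorOne", "partOf", "MainBuilding"),
}

def query(pattern):
    """Match a (subject, predicate, object) pattern; None is a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Rooms on FloorOne with area above 20 m^2:
rooms = {s for s, _, _ in query((None, "partOf", "FloorOne"))}
large = [s for s, _, a in query((None, "hasArea", None))
         if s in rooms and a > 20.0]
print(large)  # -> ['Room101']
```

The point of the linked-data approach is that triples produced independently (a geometry model, an energy audit, a maintenance log) share identifiers and can therefore be joined in one query, instead of being locked in separate file formats.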