A model-driven approach to broaden the detection of software performance antipatterns at runtime
Performance antipatterns document bad design practices that have a negative influence on system performance. In our previous work we formalized such antipatterns as logical predicates defined over four views: (i) the static view that captures the software elements (e.g. classes, components) and the static relationships among them; (ii) the dynamic view that represents the interactions (e.g. messages) that occur between the software entities
to provide the system functionalities; (iii) the deployment view that describes
the hardware elements (e.g. processing nodes) and the mapping of the software
entities onto the hardware platform; (iv) the performance view that collects
specific performance indices. In this paper we present a lightweight infrastructure that detects performance antipatterns at runtime through monitoring. The approach pre-evaluates the antipattern predicates: it identifies the antipatterns whose static, dynamic, and deployment sub-predicates are already satisfied by the current system configuration, and defers the verification of the performance sub-predicates to runtime. The infrastructure leverages model-driven techniques to generate probes that monitor the performance sub-predicates and detect antipatterns at runtime.
Comment: In Proceedings FESCA 2014, arXiv:1404.043
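A minimal Python sketch of this split evaluation, with all names, thresholds, and the example antipattern invented for illustration (none are taken from the paper): the static, dynamic, and deployment sub-predicates are checked once against the configuration, and only the performance sub-predicate is re-evaluated against monitored indices.

```python
# Hypothetical sketch (names and thresholds invented): an antipattern
# predicate is split into sub-predicates over the four views. The
# static, dynamic, and deployment parts are pre-evaluated once against
# the current configuration; only the performance part is re-checked
# at runtime from monitored performance indices.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AntipatternPredicate:
    name: str
    static_pred: Callable[[Dict], bool]       # over classes/components
    dynamic_pred: Callable[[Dict], bool]      # over message interactions
    deployment_pred: Callable[[Dict], bool]   # over node mapping
    performance_pred: Callable[[Dict], bool]  # over performance indices

def preselect(predicates: List[AntipatternPredicate], config: Dict):
    """Keep only the antipatterns whose non-performance sub-predicates
    hold for the current system configuration."""
    return [p for p in predicates
            if p.static_pred(config) and p.dynamic_pred(config)
            and p.deployment_pred(config)]

def detect_at_runtime(candidates, indices: Dict):
    """Verify the remaining performance sub-predicates against freshly
    monitored performance indices collected by generated probes."""
    return [p.name for p in candidates if p.performance_pred(indices)]

# Illustrative "Blob"-like antipattern with made-up thresholds.
blob = AntipatternPredicate(
    name="Blob",
    static_pred=lambda c: c["component_degree"] > 10,
    dynamic_pred=lambda c: c["messages_through_component"] > 100,
    deployment_pred=lambda c: c["deployed_on_shared_node"],
    performance_pred=lambda m: m["component_utilization"] > 0.9,
)
config = {"component_degree": 12, "messages_through_component": 150,
          "deployed_on_shared_node": True}
candidates = preselect([blob], config)
print(detect_at_runtime(candidates, {"component_utilization": 0.95}))
```

Pre-selecting candidates this way keeps the runtime monitoring overhead proportional to the few performance sub-predicates still in play, rather than to the full predicate set.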
Improving data center efficiency through smart grid integration and intelligent analytics
The ever-increasing growth of demand for IT computing, storage, and large-scale cloud services leads to the proliferation of data centers consisting of (tens of) thousands of servers. As a result, data centers are now among the largest electricity consumers worldwide. Data center energy and resource efficiency has started to receive significant attention due to its economic, environmental, and performance impacts. In tandem, power market operators, facing increasing challenges in stabilizing the power grid as intermittent renewable energy is integrated, have started to offer a number of demand response (DR) opportunities through which energy consumers (such as data centers) receive credits by dynamically modulating their power consumption to follow specific requirements.
This dissertation claims that data centers have strong potential to emerge as major enablers of substantial electricity integration from renewables. The participation of data centers in emerging DR programs, such as regulation service reserves (RSRs), enables data centers to grow in a sustainable, environmentally neutral, or even beneficial way, while also significantly reducing their electricity costs. In this dissertation, we first model data center participation in DR, and then propose runtime policies to dynamically modulate data center power in response to independent system operator (ISO) requests, leveraging advanced server power and workload management techniques. We also propose energy and reserve bidding strategies to minimize the data center energy cost. Our results demonstrate that a typical data center can achieve up to 44% monetary savings in its electricity cost with RSR provision, dramatically surpassing the savings achieved by traditional energy management strategies. In addition, we investigate the capabilities and benefits of various types of energy storage devices (ESDs) in DR. Finally, we demonstrate RSR provision in practice on a real server.
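As a rough illustration of the power-tracking side of RSR provision, here is a minimal Python sketch under one common formulation (the sign convention and every number are assumptions, not values from the dissertation): the data center bids an average power P_AVG and a reserve R, then follows the target P_AVG + s(t) * R as the ISO regulation signal s(t) varies in [-1, 1].

```python
# Hypothetical sketch of an RSR tracking loop (formulation and numbers
# assumed, not taken from the dissertation): the data center bids an
# average power P_AVG and a reserve R, then tracks the ISO target
# P_AVG + s(t) * R, where s(t) in [-1, 1] is the regulation signal.
P_AVG = 500.0                   # kW, energy bid (assumed)
R = 100.0                       # kW, reserve bid (assumed)
P_IDLE, P_PEAK = 200.0, 700.0   # feasible power range of the cluster

def target_power(iso_signal: float) -> float:
    """ISO target for the current regulation signal s(t) in [-1, 1];
    sign conventions vary across markets."""
    return P_AVG + iso_signal * R

def apply_power_cap(target_kw: float) -> float:
    """Clamp the target to the physically feasible range; stands in for
    server power capping, DVFS, and workload management policies."""
    return min(max(target_kw, P_IDLE), P_PEAK)

# Follow a few sample signal values (ISOs update every few seconds).
for s in [0.0, 0.5, -0.8, 1.0, -1.0]:
    t = target_power(s)
    print(f"s={s:+.1f}  target={t:6.1f} kW  served={apply_power_cap(t):6.1f} kW")
```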
In addition to its contributions to improving data center energy efficiency, this dissertation also proposes a novel method to address data center management efficiency. We propose an intelligent system analytics approach, "discovery by example", which leverages fingerprinting and machine learning methods to automatically discover software and system changes. Our approach eases runtime data center introspection and reduces the cost of system management.
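A toy Python sketch of the "discovery by example" idea, with all names, the fingerprint definition, and the similarity threshold invented for illustration: windows of system metrics are reduced to compact fingerprints, and a labeled example window is used to find similar windows, i.e. likely instances of the same software or system change.

```python
# Toy sketch of "discovery by example" (fingerprint definition and
# threshold invented): reduce metric windows to fingerprints, then use
# a labeled example window to find similar windows elsewhere.
import numpy as np

def fingerprint(window: np.ndarray) -> np.ndarray:
    """Reduce a (time x metric) window to a normalized feature vector
    of per-metric means and standard deviations."""
    feats = np.concatenate([window.mean(axis=0), window.std(axis=0)])
    norm = np.linalg.norm(feats)
    return feats / norm if norm > 0 else feats

def discover(example: np.ndarray, candidates, threshold: float = 0.95):
    """Return indices of candidate windows whose fingerprints are
    cosine-similar to the example's fingerprint."""
    f_ex = fingerprint(example)
    return [i for i, w in enumerate(candidates)
            if float(fingerprint(w) @ f_ex) >= threshold]

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(60, 4))        # normal behavior
changed = baseline + np.array([3.0, 0.0, 0.0, 0.0])  # e.g., a CPU shift
noisy_repeat = changed + rng.normal(0.0, 0.1, size=(60, 4))
print(discover(changed, [baseline, noisy_repeat]))   # -> [1]
```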
Elephant Flows Detection Using Deep Neural Network, Convolutional Neural Network, Long Short Term Memory and Autoencoder
Currently, the wide spread of real-time applications such as VoIP and video-based applications requires higher data rates and reduced latency to ensure better quality of service (QoS). A well-designed traffic classification mechanism plays a major role in good QoS provisioning and network security verification. Port-based approaches and deep packet inspection (DPI) techniques have been used to classify and analyze network traffic flows. However, none of these methods can cope with the rapid growth of network traffic driven by the increasing number of Internet users and the growth of real-time applications. As a result, these methods lead to network congestion, resulting in packet loss, delay, and inadequate QoS delivery. Recently, deep learning approaches have been explored to address the time consumption and impracticality of the above methods and to sustain existing and future real-time application traffic. The aim of this research is therefore to design a dynamic traffic classifier that can detect elephant flows to prevent network congestion. Thus, we are motivated to meet the bandwidth and fast-transmission requirements of many Internet users by combining SDN capabilities with the potential of deep learning. Specifically, DNN, CNN, LSTM, and deep autoencoder models are built to detect elephant flows; the first three achieve average accuracies of 99.12%, 98.17%, and 98.78%, respectively. The deep autoencoder is also promising because it does not require human-labeled classes; it achieves an accuracy of 97.95% with a loss of 0.13. Since the loss is close to zero, the model performs well. Therefore, the study is of great importance to Internet service providers, Internet subscribers, and future researchers in this area.
Comment: 27 pages
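As a hedged illustration of the unsupervised variant, the following PyTorch sketch trains a small autoencoder on (mostly mouse) flow features and flags flows with high reconstruction error as elephants; the feature count, architecture, and threshold are assumptions for the sketch, not the paper's configuration.

```python
# Hypothetical sketch: an autoencoder learns to reconstruct ordinary
# (mouse) flow features without labels; flows with high reconstruction
# error are flagged as elephants. Features and threshold are invented.
import torch
import torch.nn as nn

N_FEATURES = 6  # e.g., duration, packet/byte counts, rates, IAT stats

class FlowAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_FEATURES, 4), nn.ReLU(),
                                     nn.Linear(4, 2))
        self.decoder = nn.Sequential(nn.Linear(2, 4), nn.ReLU(),
                                     nn.Linear(4, N_FEATURES))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, flows, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(flows), flows)
        loss.backward()
        opt.step()
    return loss.item()

def is_elephant(model, flow, threshold=0.1):
    """Flag flows whose reconstruction error exceeds the (illustrative)
    threshold; tuned on a validation set in practice."""
    with torch.no_grad():
        err = torch.mean((model(flow) - flow) ** 2, dim=-1)
    return err > threshold

torch.manual_seed(0)
mice = torch.rand(512, N_FEATURES) * 0.2   # small, short flows
model = FlowAutoencoder()
print("final training loss:", train(model, mice))
elephant = torch.ones(1, N_FEATURES)       # large, long-lived flow
print("elephant flagged:", bool(is_elephant(model, elephant)[0]))
```

In an SDN setting, a controller would extract such features from switch flow statistics and reroute or rate-limit the flagged flows.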
Multi-Quality Auto-Tuning by Contract Negotiation
A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant has to be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of the self-adaptive systems research community. The basic principle is a control loop, as known from control theory: the system (and its environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple decision criteria, and scalability.
In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while incurring the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system, and the notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system. At runtime, the component model also covers the runtime state of the system; this runtime model is used in combination with the contracts to generate optimization problems in different formalisms (Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP)). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of all approaches: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
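A minimal sketch of the ILP formalism in Python with PuLP, with all component names, utilities, costs, and the resource budget invented for illustration: select exactly one variant per component to maximize utility minus cost under a memory budget; the resulting assignment is the candidate reconfiguration.

```python
# Hypothetical sketch of the ILP formalism (component names, utilities,
# costs, and the budget are invented): choose exactly one variant per
# component so that total utility minus cost is maximal under a memory
# budget; the chosen assignment is the reconfiguration decision.
import pulp

# component -> variant -> (utility, cost, memory demand in MB)
variants = {
    "sorter":  {"fast": (9, 6, 400), "frugal": (5, 2, 120)},
    "encoder": {"gpu":  (8, 4, 300), "cpu":    (4, 3, 100)},
}
MEM_BUDGET = 512  # MB available on the target node (assumed)

prob = pulp.LpProblem("mquat_reconfiguration", pulp.LpMaximize)
x = {(c, v): pulp.LpVariable(f"x_{c}_{v}", cat="Binary")
     for c, vs in variants.items() for v in vs}

# Objective: utility minus cost over the selected variants.
prob += pulp.lpSum(x[c, v] * (u - cost)
                   for c, vs in variants.items()
                   for v, (u, cost, _) in vs.items())

# Exactly one variant per component, and respect the memory budget.
for c, vs in variants.items():
    prob += pulp.lpSum(x[c, v] for v in vs) == 1
prob += pulp.lpSum(x[c, v] * mem
                   for c, vs in variants.items()
                   for v, (_, _, mem) in vs.items()) <= MEM_BUDGET

prob.solve(pulp.PULP_CBC_CMD(msg=0))
chosen = {c: v for (c, v) in x if pulp.value(x[c, v]) > 0.5}
print(chosen)  # reconfigure only if this differs from the current config
```

The same selection structure can be re-encoded for PBO or MOILP solvers; only the objective handling and the solver back end change.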