Towards Operator-less Data Centers Through Data-Driven, Predictive, Proactive Autonomics
Continued reliance on human operators for managing data centers is a major
impediment for them from ever reaching extreme dimensions. Large computer
systems in general, and data centers in particular, will ultimately be managed
using predictive computational and executable models obtained through
data-science tools, and at that point, the intervention of humans will be
limited to setting high-level goals and policies rather than performing
low-level operations. Data-driven autonomics, where management and control are
based on holistic predictive models that are built and updated using live data,
opens one possible path towards limiting the role of operators in data centers.
In this paper, we present a data-science study of a public Google dataset
collected in a 12K-node cluster with the goal of building and evaluating
predictive models for node failures. Our results support the practicality of a
data-driven approach by showing the effectiveness of predictive models based on
data found in typical data center logs. We use BigQuery, the big data SQL
platform from the Google Cloud suite, to process massive amounts of data and
generate a rich feature set characterizing node state over time. We describe
how an ensemble classifier can be built out of many Random Forest classifiers
each trained on these features, to predict if nodes will fail in a future
24-hour window. Our evaluation reveals that if we limit false positive rates to
5%, we can achieve true positive rates between 27% and 88%, with precision
varying between 50% and 72%. This level of performance allows us to recover a
large fraction of jobs' executions (by redirecting them to other nodes when a
failure of the present node is predicted) that would otherwise have been wasted
due to failures. [...]
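The abstract outlines an ensemble of Random Forest classifiers whose predictions are thresholded to cap the false positive rate at 5%. A minimal sketch of that idea is shown below, assuming scikit-learn and synthetic stand-in features; the feature set, data, and ensemble details here are illustrative, not the ones used in the paper.

```python
# Sketch: ensemble of Random Forests + score threshold chosen so that
# the false positive rate stays at or below a target cap (here 5%).
# All data and features are synthetic stand-ins for node-state features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 8))  # per-node feature vectors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

# Train several Random Forests on bootstrap resamples and average
# their predicted failure probabilities.
forests = []
for seed in range(5):
    idx = rng.integers(0, len(X_train), len(X_train))
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    clf.fit(X_train[idx], y_train[idx])
    forests.append(clf)

scores = np.mean([f.predict_proba(X_test)[:, 1] for f in forests], axis=0)

# Pick the most permissive score threshold whose FPR is still <= 5%.
fpr, tpr, thresholds = roc_curve(y_test, scores)
ok = fpr <= 0.05
threshold = thresholds[ok][-1]
predicted_failure = scores >= threshold
```

A predicted failure would then trigger proactive action, such as redirecting new jobs away from the flagged node before it actually fails.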
DCDB Wintermute: Enabling Online and Holistic Operational Data Analytics on HPC Systems
As we approach the exascale era, the size and complexity of HPC systems
continue to increase, raising concerns about their manageability and
sustainability. For this reason, more and more HPC centers are experimenting
with fine-grained monitoring coupled with Operational Data Analytics (ODA) to
optimize efficiency and effectiveness of system operations. However, while
monitoring is a common reality in HPC, there is no well-stated and
comprehensive list of requirements, nor matching frameworks, to support
holistic and online ODA. This leads to insular ad-hoc solutions, each
addressing only specific aspects of the problem.
In this paper we propose Wintermute, a novel generic framework to enable
online ODA on large-scale HPC installations. Its design is based on the results
of a literature survey of common operational requirements. We implement
Wintermute on top of the holistic DCDB monitoring system, offering a large
variety of configuration options to accommodate the varying requirements of ODA
applications. Moreover, Wintermute is based on a set of logical abstractions to
ease the configuration of models at a large scale and maximize code re-use. We
highlight Wintermute's flexibility through a series of practical case studies,
each targeting a different aspect of the management of HPC systems, and then
demonstrate the small resource footprint of our implementation.

Comment: Accepted for publication at the 29th ACM International Symposium on
High-Performance Parallel and Distributed Computing (HPDC 2020).