Anomaly Detection and Exploratory Causal Analysis for SAP HANA
Nowadays, the proper functioning of equipment, networks and systems is key to keeping a company's business operating, because in the era of big data companies cannot avoid relying on information technology to support their business. However, technology is never infallible: faults that sometimes give rise to critical situations may appear at any time. To detect and prevent failures, it is essential to have a good monitoring system that is responsible for overseeing the technology used by a company (hardware, networks and communications, operating systems or applications, among others) in order to analyze its operation and performance, and to detect and alert about possible errors. The aim of this thesis is thus to further advance the fields of anomaly detection and exploratory causal inference, two major research areas in monitoring systems, by providing algorithms that are efficient with regard to usability, maintainability and scalability. The analyzed results can be viewed as a starting point for root cause analysis of system performance issues, helping to avoid system failures or to minimize the time to resolve such issues in the future. Finally, the algorithms were run on historical data from the SAP HANA database, and the results obtained in this thesis indicate that the tools succeed in providing useful information for diagnosing performance issues of the system.
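The thesis does not specify its detection algorithm in the abstract; a minimal sketch of the kind of detector such a monitoring system typically runs is a rolling z-score over a metric history, where a point is flagged when it deviates strongly from its recent past. The window size and threshold below are illustrative choices, not values from the thesis.

```python
import statistics

# Generic rolling z-score anomaly detection on a univariate metric series.
# This illustrates the monitoring idea only; it is NOT the thesis's algorithm.

def detect_anomalies(series, window=30, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu = statistics.fmean(history)
        sigma = statistics.pstdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Example: a periodic CPU-load-like signal with one spike at index 39.
metrics = [10.0, 11.0, 12.0] * 13 + [50.0] + [10.0, 11.0, 12.0] * 3
print(detect_anomalies(metrics))  # -> [39]
```

Flagged indices like these would feed the exploratory causal analysis step as candidate starting points for root cause diagnosis.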
Monitoring and analysis system for performance troubleshooting in data centers
It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with the mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which went unnoticed at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. Within about six minutes, it escalated into a critical one in which EC2 customers were significantly affected. One example was Netflix, which was using hundreds of Amazon ELB services and experienced an extensive streaming service outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As this Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming.
To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers.
VScope provides primitive operations that data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By running these algorithms in VScope, data center operators are notified when performance anomalies occur. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to the performance issue.
VScope’s capabilities and performance are evaluated on a testbed with over 1,000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1,000 nodes. This demonstrates VScope’s ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a ‘thin layer’ that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found through application-level monitoring alone, and in one of the use cases explored in the dissertation it operates with more than four times less perturbation than brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus achieves a troubleshooting accuracy of 83% on average.
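The abstract describes VFocus as graph-based guidance that follows interactions among components to narrow down relevant ones. A toy sketch of that idea, with an invented graph shape and scoring rule (the dissertation's actual operations are more sophisticated), is to walk the interaction graph outward from the component showing the symptom and rank reachable components by the strongest interaction path:

```python
from collections import deque

# Toy graph-guided troubleshooting in the spirit of VFocus: starting from
# the symptomatic component, traverse the interaction graph and score
# neighbors by path strength. Graph and weights here are illustrative only.

def rank_relevant(graph, symptom, max_hops=2):
    """graph: {node: {neighbor: interaction_strength in (0, 1]}}.
    Return components within `max_hops` of `symptom`, sorted by the
    strongest (product-weighted) interaction path reaching them."""
    scores = {symptom: 1.0}
    frontier = deque([(symptom, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for nbr, strength in graph.get(node, {}).items():
            score = scores[node] * strength
            if score > scores.get(nbr, 0.0):
                scores[nbr] = score
                frontier.append((nbr, hops + 1))
    del scores[symptom]  # the symptom itself is not a candidate cause
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical interaction graph: an app VM talks to a DB VM and a cache VM;
# the DB VM depends on a disk.
interactions = {
    "app_vm": {"db_vm": 0.9, "cache_vm": 0.2},
    "db_vm": {"disk": 0.8},
}
print(rank_relevant(interactions, "app_vm"))
```

The ranking narrows the operator's attention to a few components instead of requiring brute-force monitoring of every VM.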
Knowledge Discovery and Data Mining for Shared Mobility and Connected and Automated Vehicle Applications
The rapid development of shared mobility and connected and automated vehicles (CAVs) has not only brought new intelligent transportation system (ITS) challenges with the new types of mobility, but also brought a huge opportunity to accelerate the connectivity and informatization of transportation systems, particularly when we consider all the new forms of data that are becoming available. The primary challenge is how to take advantage of the enormous amount of data to discover knowledge, build effective models, and develop impactful applications. With the theoretical and experimental progress made over the last two decades, data mining and machine learning technologies have become key approaches for parsing data, understanding information, and making informed decisions, especially as the rise of deep learning algorithms brings new levels of performance to the analysis of large datasets. The combination of data mining and ITS can greatly benefit research and advances in shared mobility and CAVs. This dissertation focuses on knowledge discovery and data mining for shared mobility and CAV applications. When considering big data associated with shared mobility operations and CAV research, data mining techniques can be customized with transportation knowledge to initially parse the data. Then machine learning methods can be used to model the parsed data and elicit hidden knowledge. Finally, the discovered knowledge and extracted information can help in the development of effective shared mobility and CAV applications to achieve the goal of safer, faster, and more eco-friendly transportation systems. In this dissertation, four main topics are addressed. First, new methodologies are introduced for extracting lane-level road features from rough crowdsourced GPS trajectories via data mining, which are subsequently used as the fundamental information for CAV applications. The proposed method achieves decimeter-level accuracy, which satisfies the positioning needs of many macroscopic and microscopic shared mobility and CAV applications. Second, macroscopic ride-hailing service big data has been analyzed for demand prediction, vehicle operation, and system efficiency monitoring. The proposed deep learning algorithms increase ride-hailing demand prediction accuracy to 80% and can help the fleet dispatching system reduce vacant travel distance by 30%. Third, microscopic automated vehicle perception data has been analyzed for a real-time computer vision system that can be used for lane change behavior detection. The proposed deep learning design combines residual neural network image input with time-series control data and achieves 95% lane change behavior prediction accuracy. Last but not least, new ride sharing and CAV applications have been simulated in a behavior modeling framework to analyze their impact on mobility and energy consumption, which addresses key barriers by quantifying the transportation system-wide mobility, energy and behavior impacts of new mobility technologies using real-world data.
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
Inferring functional connectivity from time-series of events in large scale network deployments
To respond rapidly and accurately to network and service outages, network operators must deal with a large number of events resulting from the interaction of various services operating on complex, heterogeneous and evolving networks. In this paper, we introduce the concept of functional connectivity as an alternative approach to monitoring those events. Commonly used in the study of brain dynamics, functional connectivity is defined in terms of the presence of statistical dependencies between nodes. Although a number of techniques exist to infer functional connectivity in brain networks, their straightforward application to commercial network deployments is severely challenged by: (a) non-stationarity of the functional connectivity, (b) sparsity of the time-series of events, and (c) absence of an explicit model describing how events propagate through the network or indeed whether they propagate at all. Thus, in this paper, we present a novel inference approach whereby two nodes are defined as forming a functional edge if they emit substantially more coincident or short-lagged events than would be expected if they were statistically independent. The output of the method is an undirected weighted graph, where the weight of an edge between two nodes denotes the strength of the statistical dependence between them. We develop a model of time-varying functional connectivity whose parameters are determined by maximising the model's predictive power from one time window to the next. We assess the accuracy, efficiency and scalability of our method on two real datasets of network events spanning multiple months and on synthetic data for which ground truth is available. We compare our method against both a general-purpose time-varying network inference method and a network-management-specific causal inference technique, and discuss its merits in terms of sensitivity, accuracy and, importantly, scalability.
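The core test described above — declaring a functional edge when two nodes emit substantially more coincident or short-lagged events than independence would predict — can be sketched as follows. The binning, lag window, independence model and z-score threshold are illustrative assumptions, not the paper's exact estimator.

```python
import math

# Sketch of a coincidence-based functional edge test: compare observed
# coincident/short-lagged events against the count expected if the two
# event trains were independent. All parameters below are illustrative.

def edge_weight(events_a, events_b, horizon, max_lag=2, threshold=3.0):
    """events_a, events_b: sorted lists of integer time bins containing an
    event; horizon: total number of observed time bins. Returns a
    z-score-style edge weight, or 0.0 if the dependence is not significant."""
    b_set = set(events_b)
    # Count events of A with an event of B within +/- max_lag bins.
    coincidences = sum(
        1 for t in events_a
        if any((t + d) in b_set for d in range(-max_lag, max_lag + 1))
    )
    # Under independence, model B as a Bernoulli train with rate p_b: an
    # A event sees a B event in its window with prob 1 - (1 - p_b)^(2L+1).
    p_b = len(events_b) / horizon
    p_window = 1.0 - (1.0 - p_b) ** (2 * max_lag + 1)
    expected = len(events_a) * p_window
    std = math.sqrt(max(expected * (1.0 - p_window), 1e-12))
    z = (coincidences - expected) / std
    return z if z > threshold else 0.0

# Two dependent trains (B fires one bin after A) versus a shifted,
# effectively independent pair.
a = list(range(0, 1000, 50))
b_dependent = [t + 1 for t in a]
b_far = [t + 25 for t in a]
print(edge_weight(a, b_dependent, 1000))  # large positive weight
print(edge_weight(a, b_far, 1000))        # -> 0.0
```

Running this pairwise over all nodes and thresholding yields exactly the kind of undirected weighted graph the abstract describes, with the z-score as the edge weight.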