SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis
In this paper, we propose a novel approach, called SENATUS, for joint traffic
anomaly detection and root-cause analysis. Inspired by the concept of a senate,
the proposed approach is divided into three stages: election, voting, and
decision. At the election stage, a small number of traffic flow sets, termed
senator flows, are chosen to approximately represent the total (usually huge)
set of traffic flows. In the voting stage, anomaly detection is applied to the
senator flows and the detected anomalies are correlated to identify the most
likely anomalous time bins. Finally, in the decision stage, a machine learning
technique is applied to the senator flows of each anomalous time bin to find
the root cause of the anomalies. We evaluate SENATUS using traffic traces
collected from the pan-European network GEANT, and compare against another
approach which detects anomalies using lossless compression of traffic
histograms. We show the effectiveness of SENATUS in diagnosing two anomaly
types: network scans and DoS/DDoS attacks.
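The election/voting/decision pipeline described above might be sketched as follows. The abstract does not specify the actual algorithms, so the senator-election heuristic (highest-volume flows), the z-score voting rule, and the stand-in root-cause classifier are all illustrative assumptions:

```python
import numpy as np

def elect_senators(flows, k):
    """Election: pick k representative 'senator' flows (here, by total volume;
    the real selection criterion is an assumption)."""
    return np.argsort(flows.sum(axis=1))[-k:]

def vote(flows, senators, z_thresh=3.0):
    """Voting: each senator flags time bins where it deviates strongly;
    bins flagged by a majority of senators are declared anomalous."""
    votes = np.zeros(flows.shape[1], dtype=int)
    for s in senators:
        series = flows[s]
        z = (series - series.mean()) / (series.std() + 1e-9)
        votes += (np.abs(z) > z_thresh).astype(int)
    return np.where(votes > len(senators) // 2)[0]

def decide(flows, senators, anomalous_bins):
    """Decision: a toy stand-in for the paper's ML classifier, labelling
    each anomalous bin by the magnitude of the senator-flow spike."""
    labels = {}
    for b in anomalous_bins:
        spike = flows[senators, b]
        labels[b] = "DoS/DDoS" if spike.max() > 10 * flows[senators].mean() else "scan"
    return labels

# Synthetic demo: 50 flows over 100 time bins, with a burst injected at bin 42.
rng = np.random.default_rng(0)
flows = rng.poisson(5, size=(50, 100)).astype(float)
flows[:10, 42] += 200  # inject a volume anomaly into ten flows

senators = elect_senators(flows, k=10)
bins = vote(flows, senators)
labels = decide(flows, senators, bins)
print(bins, labels)
```

The majority-vote correlation step is what distinguishes a one-off spike in a single flow from an anomaly visible across many senator flows at the same time bin.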
Highly comparative feature-based time-series classification
A highly comparative, feature-based approach to time series classification is
introduced that uses an extensive database of algorithms to extract thousands
of interpretable features from time series. These features are derived from
across the scientific time-series analysis literature, and include summaries of
time series in terms of their correlation structure, distribution, entropy,
stationarity, scaling properties, and fits to a range of time-series models.
After computing thousands of features for each time series in a training set,
those that are most informative of the class structure are selected using
greedy forward feature selection with a linear classifier. The resulting
feature-based classifiers automatically learn the differences between classes
using a reduced number of time-series properties, and circumvent the need to
calculate distances between time series. Representing time series in this way
reduces dimensionality by orders of magnitude, allowing the method to perform
well on very large datasets containing long time series or time series of
different lengths. For many of the datasets studied, classification performance
exceeded that of conventional instance-based classifiers, including
one-nearest-neighbor classifiers using Euclidean distance and dynamic time
warping. Most importantly, the selected features provide an understanding of
the properties of the dataset, insight that can guide further scientific
investigation.
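The pipeline of extracting interpretable features and then greedily selecting the most class-informative ones with a linear classifier can be sketched as follows. The four features and the nearest-centroid classifier are a tiny illustrative stand-in for the thousands of features and the classifier used in the actual study:

```python
import numpy as np

# A miniature feature library (the real method draws on thousands of features
# from across the time-series analysis literature).
FEATURES = {
    "mean": lambda x: x.mean(),
    "std": lambda x: x.std(),
    "ac1": lambda x: np.corrcoef(x[:-1], x[1:])[0, 1],  # lag-1 autocorrelation
    "range": lambda x: x.max() - x.min(),
}

def featurize(series_list):
    """Map each time series to a fixed-length vector of feature values."""
    return np.array([[f(x) for f in FEATURES.values()] for x in series_list])

def centroid_accuracy(X, y):
    """Training accuracy of a nearest-centroid (linear) two-class classifier."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def greedy_select(X, y, max_feats=2):
    """Greedy forward selection: repeatedly add the feature that most
    improves classification accuracy; stop when nothing improves."""
    chosen, remaining, best_acc = [], list(range(X.shape[1])), 0.0
    while remaining and len(chosen) < max_feats:
        acc, j = max((centroid_accuracy(X[:, chosen + [j]], y), j) for j in remaining)
        if acc <= best_acc:
            break
        chosen.append(j); remaining.remove(j); best_acc = acc
    return chosen, best_acc

# Two easily distinguishable classes: white noise vs. random walks.
rng = np.random.default_rng(1)
noise = [rng.normal(0, 1, 200) for _ in range(20)]
walks = [np.cumsum(rng.normal(0, 1, 200)) for _ in range(20)]
X = featurize(noise + walks)
y = np.array([0] * 20 + [1] * 20)

chosen, acc = greedy_select(X, y)
print([list(FEATURES)[j] for j in chosen], acc)
```

The interpretability claim of the abstract shows up directly here: the output names which properties (e.g. autocorrelation structure) separate the classes, rather than reporting opaque pairwise distances.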
Expertise and the interpretation of computerized physiological data: implications for the design of computerized monitoring in neonatal intensive care
This paper presents the outcomes of a cognitive engineering project addressing the design problems of computerized monitoring in neonatal intensive care. Cognitive engineering is viewed, in this project, as a symbiosis between cognitive science and design practice. A range of methodologies has been used: interviews with neonatal staff, ward observations, and experimental techniques. The results of these investigations are reported, focusing specifically on the differences between junior and senior physicians in their interpretation of monitored physiological data. It was found that the senior doctors made better use of the different knowledge sources available than the junior doctors: they identified more relevant physiological patterns and generated more and better inferences than did their junior colleagues. Expertise differences are discussed in the context of previous psychological research on medical expertise. Finally, the paper discusses the potential utility of these outcomes to inform the design of computerized decision support in neonatal intensive care.