SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis
In this paper, we propose a novel approach, called SENATUS, for joint traffic
anomaly detection and root-cause analysis. Inspired by the concept of a senate,
the proposed approach is divided into three stages: election, voting and
decision. At the election stage, a small number of traffic flow sets (termed
senator flows) are chosen to approximately represent the total (usually huge)
set of traffic flows. In the voting stage, anomaly detection is applied to the
senator flows and the detected anomalies are correlated to identify the most
likely anomalous time bins. Finally, in the decision stage, a machine learning
technique is applied to the senator flows of each anomalous time bin to find
the root cause of the anomalies. We evaluate SENATUS using traffic traces
collected from the pan-European network GEANT, and compare it against another
approach that detects anomalies using lossless compression of traffic
histograms. We show the effectiveness of SENATUS in diagnosing anomaly types:
network scans and DoS/DDoS attacks.
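The three stages described above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the flow representation, the volume-based election criterion, the per-bin detector and the quorum threshold are all invented for the example.

```python
# Hypothetical sketch of the election/voting/decision pipeline; all names,
# data structures and thresholds are illustrative, not from the paper.
from collections import Counter

def elect_senators(flows, k):
    """Election: pick the k highest-volume flows to represent all traffic."""
    return sorted(flows, key=lambda f: f["volume"], reverse=True)[:k]

def vote(senators, detect):
    """Voting: each senator flags the time bins its detector deems anomalous."""
    votes = Counter()
    for f in senators:
        for t in detect(f):
            votes[t] += 1
    return votes

def decide(votes, quorum):
    """Decision: keep only time bins flagged by at least `quorum` senators."""
    return sorted(t for t, n in votes.items() if n >= quorum)

# Toy usage: three flows; the detector flags bins whose byte count exceeds 100.
flows = [
    {"volume": 500, "bins": {0: 20, 1: 150, 2: 30}},
    {"volume": 400, "bins": {0: 10, 1: 120, 2: 200}},
    {"volume": 50,  "bins": {0: 5,  1: 5,   2: 5}},
]
detector = lambda f: [t for t, b in f["bins"].items() if b > 100]
senators = elect_senators(flows, k=2)
anomalous_bins = decide(vote(senators, detector), quorum=2)
# anomalous_bins == [1]: both senators flagged time bin 1
```

In the paper the decision stage then applies machine learning to the senator flows of each surviving bin; here the pipeline stops at bin selection.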
Increasing resilience of ATM networks using traffic monitoring and automated anomaly analysis
Systematic network monitoring can be the cornerstone for
the dependable operation of safety-critical distributed
systems. In this paper, we present our vision for informed
anomaly detection through network monitoring and
resilience measurements to increase the operators'
visibility of ATM communication networks. We raise the
question of how to determine the optimal level of
automation in this safety-critical context, and we present a
novel passive network monitoring system that can reveal
network utilisation trends and traffic patterns in diverse
timescales. Using network measurements, we derive
resilience metrics and visualisations to enhance the
operators' knowledge of the network and traffic behaviour,
and allow for network planning and provisioning based on
informed what-if analysis.
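The "utilisation trends in diverse timescales" idea can be illustrated by bucketing passively captured packet sizes at several aggregation widths. This is a minimal sketch under invented data, not the monitoring system described above.

```python
# Illustrative sketch: aggregate (timestamp, bytes) observations into
# utilisation series at different timescales. Data and names are invented.
from collections import defaultdict

def utilisation(packets, bucket_seconds):
    """Sum bytes per time bucket of the given width (in seconds)."""
    buckets = defaultdict(int)
    for ts, size in packets:
        buckets[int(ts // bucket_seconds)] += size
    return dict(buckets)

packets = [(0.2, 100), (0.9, 200), (1.5, 50), (65.0, 400)]
per_second = utilisation(packets, 1)    # fine-grained view
per_minute = utilisation(packets, 60)   # coarse trend view
```

Comparing the same traffic at several widths is what lets short bursts and long-term trends be distinguished.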
Detecting Flow Anomalies in Distributed Systems
Deep within the networks of distributed systems, one often finds anomalies
that affect their efficiency and performance. These anomalies are difficult to
detect because the distributed systems may not have sufficient sensors to
monitor the flow of traffic within the interconnected nodes of the networks.
Without early detection and correction, these anomalies may worsen over time
and could eventually cause disastrous outcomes in the system. Using only
coarse-grained information from the two end
points of network flows, we propose a network transmission model and a
localization algorithm to detect the locations of anomalies within distributed
systems and rank them using a proposed metric. We evaluate our approach on
passengers' records of an urbanized city's public transportation system and
correlate our findings with passengers' postings on social media microblogs.
Our experiments show that the metric derived using our localization algorithm
gives a better ranking of anomalies as compared to standard deviation measures
from statistical models. Our case studies also demonstrate that transportation
events reported in social media microblogs match the locations of our detected
anomalies, suggesting that our algorithm performs well in locating anomalies
within distributed systems.
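The ranking idea can be sketched as scoring each edge of the network by how far its observed flow deviates from the model's expectation. The relative-deviation metric below is illustrative, not the paper's metric, and the flow values are invented.

```python
# Hedged sketch: rank network edges by deviation of observed flow from an
# expected (modelled) flow. The scoring function is illustrative only.

def rank_anomalies(expected, observed):
    """Rank edges by relative deviation |observed - expected| / expected."""
    scores = {e: abs(observed[e] - expected[e]) / expected[e] for e in expected}
    return sorted(scores, key=scores.get, reverse=True)

# Toy usage: three edges with modelled vs. measured passenger flow.
expected = {("A", "B"): 100, ("B", "C"): 80, ("C", "D"): 120}
observed = {("A", "B"): 95, ("B", "C"): 160, ("C", "D"): 121}
ranking = rank_anomalies(expected, observed)
# ("B", "C") ranks first: its flow doubled relative to the model.
```

A standard-deviation baseline would score each edge against the global spread of flows; a model-relative metric like this one instead scores each edge against its own expectation, which is the contrast the abstract draws.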
Monitoring energy consumption with SIOX
In the face of the growing complexity of HPC systems, their growing energy costs, and the increasing difficulty of running applications efficiently, a number of monitoring tools have been developed in recent years. SIOX is one such endeavor, with a uniquely holistic approach: it aims not merely to record one kind of data, but to make all relevant data available for analysis and optimization. Among other sources, this encompasses data from hardware energy counters and trace data from different hardware/software layers. However, not all data that can be recorded should be recorded. As such, SIOX needs good heuristics to determine when and what data should be collected, and the energy consumption can provide an important signal about when the system is in a state that deserves closer attention. In this paper, we show that SIOX can use Likwid to collect and report the energy consumption of applications, and present how this data can be visualized using SIOX’s web interface. Furthermore, we outline how SIOX can use this information to intelligently adjust the amount of data it collects, allowing it to reduce the monitoring overhead while still providing complete information about critical situations.
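The adaptive-collection idea, using power draw as the trigger signal, can be sketched as follows. The class, the baseline model and the 20% tolerance are invented for illustration; they are not SIOX's actual heuristics.

```python
# Illustrative sketch (not SIOX code): switch trace detail to 'full' when
# measured power deviates from a known baseline by more than a tolerance.

class AdaptiveMonitor:
    def __init__(self, baseline_watts, tolerance=0.2):
        self.baseline = baseline_watts   # expected steady-state power draw
        self.tolerance = tolerance       # relative deviation that counts as anomalous

    def detail_level(self, watts):
        """Return 'full' tracing when power is anomalous, else 'coarse'."""
        if abs(watts - self.baseline) > self.tolerance * self.baseline:
            return "full"
        return "coarse"

mon = AdaptiveMonitor(baseline_watts=200.0)
mon.detail_level(205.0)  # 'coarse': within 20% of baseline, cheap monitoring suffices
mon.detail_level(300.0)  # 'full': an energy spike warrants detailed trace data
```

The point of such a gate is exactly the trade-off named above: low overhead in the common case, complete information in the critical one.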
Methodology of Acquiring Valid Data by Combining Oil Tankers’ Noon Report and Automatic Identification System Satellite Data
Fuel consumption of marine vessels plays an important role in both air pollution and ship operating expenses, at a time when global environmental concern about air pollution and attention to the economics of shipping operations are both increasing. To optimize a ship's fuel consumption, the fuel consumption for her envisaged voyage must be predicted. To predict the fuel consumption of a ship, noon report (NR) data are an available source that can be analysed with different techniques. Because of the possible human error attributed to the method of NR data collection, such data carry a risk of inaccuracy. Therefore, in this study, to acquire valid data, the raw NR data of two very large crude carriers (VLCCs) were combined with their respective Automatic Identification System (AIS) satellite data. Then, well-known models, i.e. the K-Means, Self-Organizing Map (SOM), Outlier Score Base (OSB) and Histogram of Outlier Score Base (HOSB) methods, were applied to the tankers' NR data collected over one year. The newly enriched data were compared to the raw NR data to determine the best-fitting methodology for acquiring valid data. Expected value and root mean square methods were applied to evaluate the accuracy of the methodologies. It is concluded that the measured expected value and root mean square for HOSB indicate high coherence with the primary NR data.
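Histogram-based outlier scoring, in the spirit of the HOSB method named above, can be sketched in a few lines: rare histogram bins yield high scores. The bin count, the scoring formula and the fuel-consumption figures below are invented for illustration and are not from the study.

```python
# Illustrative histogram-based outlier scoring on a fuel-consumption series.
# Score = -log(relative frequency of the value's bin); rarer bin -> higher score.
import math

def hist_outlier_scores(values, bins=3):
    """Score each value by -log of its histogram bin's relative frequency."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return [-math.log(counts[min(int((v - lo) / width), bins - 1)] / n)
            for v in values]

# Toy daily fuel figures (tonnes/day); the last reading is suspect.
daily_fuel = [60.0, 61.0, 59.5, 60.5, 95.0]
scores = hist_outlier_scores(daily_fuel)
# the 95.0 reading gets the highest score, as it falls in the rarest bin
```

In the study's setting, records flagged this way would be candidates for removal or correction before the cleaned NR data are compared against AIS-derived values.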
WK-FNN DESIGN FOR DETECTION OF ANOMALIES IN THE COMPUTER NETWORK TRAFFIC
Anomaly-based intrusion detection systems identify abnormal computer network traffic based on deviations from a derived statistical model that describes normal network behavior. The basic problem with anomaly detection is deciding what is considered normal. Supervised machine learning can be viewed as binary classification, since models are trained and tested on a data set containing a binary label to detect anomalies. Weighted k-Nearest Neighbor and Feedforward Neural Network are high-precision classifiers for decision-making. However, their decisions sometimes differ. In this paper, we present a WK-FNN hybrid model for detecting such opposite decisions. It is shown that results can be improved with the bitwise XOR operation. The sum of the binary “ones” is used to decide whether additional alerts are activated or not.
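The combination rule described above can be sketched directly: XOR the two classifiers' binary decision vectors and count the disagreements. The alert threshold below is an invented parameter, and the decision vectors are toy data, not the paper's classifiers.

```python
# Minimal sketch of the XOR disagreement rule; threshold and data are invented.

def disagreements(knn_preds, fnn_preds):
    """Bitwise XOR of the two decision vectors (1 where the models disagree)."""
    return [a ^ b for a, b in zip(knn_preds, fnn_preds)]

def extra_alert(knn_preds, fnn_preds, threshold=2):
    """Activate an additional alert when the sum of XOR 'ones' reaches the threshold."""
    return sum(disagreements(knn_preds, fnn_preds)) >= threshold

# Toy usage: five traffic samples classified by both models (1 = anomalous).
knn = [1, 0, 1, 1, 0]
fnn = [1, 1, 1, 0, 0]
disagreements(knn, fnn)   # [0, 1, 0, 1, 0] -- two opposite decisions
extra_alert(knn, fnn)     # True: disagreement count reaches the threshold
```

Samples where the XOR is 1 are exactly the cases where one high-precision classifier flags an anomaly and the other does not, which is what the hybrid model singles out for extra attention.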